Jan 26 12:43:41 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 26 12:43:42 crc restorecon[4539]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 12:43:42 crc restorecon[4539]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 12:43:42 crc restorecon[4539]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 12:43:42 crc restorecon[4539]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 12:43:42 crc restorecon[4539]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 12:43:42 crc restorecon[4539]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 12:43:42 crc restorecon[4539]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 12:43:42 crc restorecon[4539]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 12:43:42 crc restorecon[4539]: 
/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 12:43:42 crc restorecon[4539]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 12:43:42 crc restorecon[4539]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 12:43:42 crc restorecon[4539]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 12:43:42 crc restorecon[4539]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]:
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 12:43:42 crc restorecon[4539]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 12:43:42 crc restorecon[4539]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 12:43:43 crc restorecon[4539]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 12:43:43 crc restorecon[4539]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 12:43:43 crc restorecon[4539]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 12:43:43 crc restorecon[4539]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 12:43:43 crc restorecon[4539]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 12:43:43 crc restorecon[4539]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 12:43:43 crc restorecon[4539]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 26 12:43:43 crc kubenswrapper[4844]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 26 12:43:43 crc kubenswrapper[4844]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 26 12:43:43 crc kubenswrapper[4844]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 26 12:43:43 crc kubenswrapper[4844]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 26 12:43:43 crc kubenswrapper[4844]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 26 12:43:43 crc kubenswrapper[4844]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.180167 4844 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.183014 4844 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.183029 4844 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.183035 4844 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.183039 4844 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.183043 4844 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.183048 4844 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.183052 4844 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.183056 4844 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.183059 4844 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.183063 4844 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.183067 4844 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.183076 4844 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.183080 4844 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.183084 4844 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.183088 4844 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.183092 4844 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.183095 4844 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.183099 4844 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.183102 4844 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.183106 4844 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.183110 4844 feature_gate.go:330] 
unrecognized feature gate: GCPClusterHostedDNS Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.183113 4844 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.183117 4844 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.183120 4844 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.183124 4844 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.183135 4844 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.183139 4844 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.183143 4844 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.183146 4844 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.183150 4844 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.183153 4844 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.183157 4844 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.183162 4844 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.183167 4844 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.183171 4844 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.183175 4844 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.183178 4844 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.183182 4844 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.183186 4844 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.183191 4844 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.183195 4844 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.183198 4844 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.183202 4844 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.183206 4844 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.183209 4844 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.183213 4844 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 26 12:43:43 crc 
kubenswrapper[4844]: W0126 12:43:43.183216 4844 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.183220 4844 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.183224 4844 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.183227 4844 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.183231 4844 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.183234 4844 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.183237 4844 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.183241 4844 feature_gate.go:330] unrecognized feature gate: Example Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.183245 4844 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.183248 4844 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.183252 4844 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.183255 4844 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.183258 4844 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.183262 4844 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.183267 4844 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.183273 4844 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.183279 4844 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
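Annotation: the long runs of "unrecognized feature gate" warnings above (which continue below, and repeat several more times as the gates are apparently re-parsed on each configuration pass during startup) come from OpenShift-specific gate names being handed to the kubelet's upstream feature-gate registry, which only knows Kubernetes gates. Unknown names warn rather than fail; explicitly setting a deprecated gate (KMSv1) or a GA gate (CloudDualStackNodeIPs, DisableKubeletCloudCredentialProviders, ValidatingAdmissionPolicy) also warns that the setting will disappear. A sketch of that behavior, with an illustrative subset of gate names standing in for the real registry:

    // Sketch of feature_gate.go-style warnings: tolerate unknown gates,
    // warn on explicitly-set GA or deprecated ones.
    package main

    import (
        "fmt"
        "strconv"
        "strings"
    )

    type stage int

    const (
        alpha stage = iota
        deprecated
        ga
    )

    var known = map[string]stage{
        "KMSv1":                     deprecated,
        "CloudDualStackNodeIPs":     ga,
        "ValidatingAdmissionPolicy": ga,
        "NodeSwap":                  alpha,
    }

    func set(pairs string) map[string]bool {
        enabled := map[string]bool{}
        for _, kv := range strings.Split(pairs, ",") {
            name, val, _ := strings.Cut(kv, "=")
            on, _ := strconv.ParseBool(val)
            st, ok := known[name]
            switch {
            case !ok:
                fmt.Printf("W unrecognized feature gate: %s\n", name)
                continue // tolerated, not fatal
            case st == deprecated:
                fmt.Printf("W Setting deprecated feature gate %s=%v. It will be removed in a future release.\n", name, on)
            case st == ga:
                fmt.Printf("W Setting GA feature gate %s=%v. It will be removed in a future release.\n", name, on)
            }
            enabled[name] = on
        }
        return enabled
    }

    func main() {
        gates := set("KMSv1=true,CloudDualStackNodeIPs=true,NetworkSegmentation=true,NodeSwap=false")
        fmt.Printf("I feature gates: %v\n", gates)
    }
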
Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.183285 4844 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.183291 4844 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.183296 4844 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.183300 4844 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.183305 4844 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.183309 4844 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.183314 4844 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.183321 4844 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.183414 4844 flags.go:64] FLAG: --address="0.0.0.0" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.183423 4844 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.183429 4844 flags.go:64] FLAG: --anonymous-auth="true" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.183435 4844 flags.go:64] FLAG: --application-metrics-count-limit="100" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.183440 4844 flags.go:64] FLAG: --authentication-token-webhook="false" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.183444 4844 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.183449 4844 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.183455 4844 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.183459 4844 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.183463 4844 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.183467 4844 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.183479 4844 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.183485 4844 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.183489 4844 flags.go:64] FLAG: --cgroup-root="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.183493 4844 flags.go:64] FLAG: --cgroups-per-qos="true" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.183497 4844 flags.go:64] FLAG: --client-ca-file="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.183501 4844 flags.go:64] FLAG: --cloud-config="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.183505 4844 flags.go:64] FLAG: --cloud-provider="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.183509 4844 flags.go:64] FLAG: --cluster-dns="[]" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.183514 4844 flags.go:64] FLAG: --cluster-domain="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.183518 4844 flags.go:64] FLAG: 
--config="/etc/kubernetes/kubelet.conf" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.183522 4844 flags.go:64] FLAG: --config-dir="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.183526 4844 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.183531 4844 flags.go:64] FLAG: --container-log-max-files="5" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.183536 4844 flags.go:64] FLAG: --container-log-max-size="10Mi" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.183540 4844 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.183545 4844 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.183549 4844 flags.go:64] FLAG: --containerd-namespace="k8s.io" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.183553 4844 flags.go:64] FLAG: --contention-profiling="false" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.183558 4844 flags.go:64] FLAG: --cpu-cfs-quota="true" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.183563 4844 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.183572 4844 flags.go:64] FLAG: --cpu-manager-policy="none" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.183582 4844 flags.go:64] FLAG: --cpu-manager-policy-options="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.183590 4844 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.183612 4844 flags.go:64] FLAG: --enable-controller-attach-detach="true" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.183618 4844 flags.go:64] FLAG: --enable-debugging-handlers="true" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.183622 4844 flags.go:64] FLAG: --enable-load-reader="false" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.183627 4844 flags.go:64] FLAG: --enable-server="true" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.183631 4844 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.183638 4844 flags.go:64] FLAG: --event-burst="100" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.183643 4844 flags.go:64] FLAG: --event-qps="50" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.183648 4844 flags.go:64] FLAG: --event-storage-age-limit="default=0" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.183653 4844 flags.go:64] FLAG: --event-storage-event-limit="default=0" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.183658 4844 flags.go:64] FLAG: --eviction-hard="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.183665 4844 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.183670 4844 flags.go:64] FLAG: --eviction-minimum-reclaim="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.183675 4844 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.183680 4844 flags.go:64] FLAG: --eviction-soft="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.183687 4844 flags.go:64] FLAG: --eviction-soft-grace-period="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.183692 4844 flags.go:64] FLAG: --exit-on-lock-contention="false" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 
12:43:43.183696 4844 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.183701 4844 flags.go:64] FLAG: --experimental-mounter-path="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.183705 4844 flags.go:64] FLAG: --fail-cgroupv1="false" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.183709 4844 flags.go:64] FLAG: --fail-swap-on="true" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.183712 4844 flags.go:64] FLAG: --feature-gates="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.183717 4844 flags.go:64] FLAG: --file-check-frequency="20s" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.183722 4844 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.183726 4844 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.183730 4844 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.183735 4844 flags.go:64] FLAG: --healthz-port="10248" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.183739 4844 flags.go:64] FLAG: --help="false" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.183743 4844 flags.go:64] FLAG: --hostname-override="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.183747 4844 flags.go:64] FLAG: --housekeeping-interval="10s" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.183751 4844 flags.go:64] FLAG: --http-check-frequency="20s" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.183757 4844 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.183761 4844 flags.go:64] FLAG: --image-credential-provider-config="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.183765 4844 flags.go:64] FLAG: --image-gc-high-threshold="85" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.183769 4844 flags.go:64] FLAG: --image-gc-low-threshold="80" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.183773 4844 flags.go:64] FLAG: --image-service-endpoint="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.183778 4844 flags.go:64] FLAG: --kernel-memcg-notification="false" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.183782 4844 flags.go:64] FLAG: --kube-api-burst="100" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.183786 4844 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.183790 4844 flags.go:64] FLAG: --kube-api-qps="50" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.183794 4844 flags.go:64] FLAG: --kube-reserved="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.183798 4844 flags.go:64] FLAG: --kube-reserved-cgroup="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.183802 4844 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.183806 4844 flags.go:64] FLAG: --kubelet-cgroups="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.183810 4844 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.183814 4844 flags.go:64] FLAG: --lock-file="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.183818 4844 flags.go:64] FLAG: --log-cadvisor-usage="false" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.183822 4844 flags.go:64] 
FLAG: --log-flush-frequency="5s" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.183826 4844 flags.go:64] FLAG: --log-json-info-buffer-size="0" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.183832 4844 flags.go:64] FLAG: --log-json-split-stream="false" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.183836 4844 flags.go:64] FLAG: --log-text-info-buffer-size="0" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.183840 4844 flags.go:64] FLAG: --log-text-split-stream="false" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.183844 4844 flags.go:64] FLAG: --logging-format="text" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.183848 4844 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.183852 4844 flags.go:64] FLAG: --make-iptables-util-chains="true" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.183857 4844 flags.go:64] FLAG: --manifest-url="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.183860 4844 flags.go:64] FLAG: --manifest-url-header="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.183866 4844 flags.go:64] FLAG: --max-housekeeping-interval="15s" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.183870 4844 flags.go:64] FLAG: --max-open-files="1000000" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.183875 4844 flags.go:64] FLAG: --max-pods="110" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.183879 4844 flags.go:64] FLAG: --maximum-dead-containers="-1" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.183883 4844 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.183888 4844 flags.go:64] FLAG: --memory-manager-policy="None" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.183892 4844 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.183896 4844 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.183900 4844 flags.go:64] FLAG: --node-ip="192.168.126.11" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.183904 4844 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.183914 4844 flags.go:64] FLAG: --node-status-max-images="50" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.183918 4844 flags.go:64] FLAG: --node-status-update-frequency="10s" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.183922 4844 flags.go:64] FLAG: --oom-score-adj="-999" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.183926 4844 flags.go:64] FLAG: --pod-cidr="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.183930 4844 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.183936 4844 flags.go:64] FLAG: --pod-manifest-path="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.183940 4844 flags.go:64] FLAG: --pod-max-pids="-1" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.183944 4844 flags.go:64] FLAG: --pods-per-core="0" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.183948 4844 flags.go:64] FLAG: --port="10250" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 
12:43:43.183952 4844 flags.go:64] FLAG: --protect-kernel-defaults="false" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.183956 4844 flags.go:64] FLAG: --provider-id="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.183960 4844 flags.go:64] FLAG: --qos-reserved="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.183964 4844 flags.go:64] FLAG: --read-only-port="10255" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.183968 4844 flags.go:64] FLAG: --register-node="true" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.183972 4844 flags.go:64] FLAG: --register-schedulable="true" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.183976 4844 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.183983 4844 flags.go:64] FLAG: --registry-burst="10" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.183986 4844 flags.go:64] FLAG: --registry-qps="5" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.183990 4844 flags.go:64] FLAG: --reserved-cpus="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.183994 4844 flags.go:64] FLAG: --reserved-memory="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.184000 4844 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.184004 4844 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.184008 4844 flags.go:64] FLAG: --rotate-certificates="false" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.184013 4844 flags.go:64] FLAG: --rotate-server-certificates="false" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.184017 4844 flags.go:64] FLAG: --runonce="false" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.184021 4844 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.184025 4844 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.184029 4844 flags.go:64] FLAG: --seccomp-default="false" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.184034 4844 flags.go:64] FLAG: --serialize-image-pulls="true" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.184038 4844 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.184043 4844 flags.go:64] FLAG: --storage-driver-db="cadvisor" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.184047 4844 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.184051 4844 flags.go:64] FLAG: --storage-driver-password="root" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.184055 4844 flags.go:64] FLAG: --storage-driver-secure="false" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.184059 4844 flags.go:64] FLAG: --storage-driver-table="stats" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.184063 4844 flags.go:64] FLAG: --storage-driver-user="root" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.184067 4844 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.184071 4844 flags.go:64] FLAG: --sync-frequency="1m0s" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.184075 4844 flags.go:64] FLAG: --system-cgroups="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.184079 4844 flags.go:64] FLAG: 
--system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.184085 4844 flags.go:64] FLAG: --system-reserved-cgroup="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.184089 4844 flags.go:64] FLAG: --tls-cert-file="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.184093 4844 flags.go:64] FLAG: --tls-cipher-suites="[]" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.184098 4844 flags.go:64] FLAG: --tls-min-version="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.184103 4844 flags.go:64] FLAG: --tls-private-key-file="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.184107 4844 flags.go:64] FLAG: --topology-manager-policy="none" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.184111 4844 flags.go:64] FLAG: --topology-manager-policy-options="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.184115 4844 flags.go:64] FLAG: --topology-manager-scope="container" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.184119 4844 flags.go:64] FLAG: --v="2" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.184124 4844 flags.go:64] FLAG: --version="false" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.184129 4844 flags.go:64] FLAG: --vmodule="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.184134 4844 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.184138 4844 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.184236 4844 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.184240 4844 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.184245 4844 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.184249 4844 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.184253 4844 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.184257 4844 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.184260 4844 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.184270 4844 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.184274 4844 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.184277 4844 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.184281 4844 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.184284 4844 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.184288 4844 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.184292 4844 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 
12:43:43.184295 4844 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.184298 4844 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.184302 4844 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.184306 4844 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.184309 4844 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.184313 4844 feature_gate.go:330] unrecognized feature gate: Example Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.184316 4844 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.184320 4844 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.184324 4844 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.184328 4844 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.184331 4844 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.184335 4844 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.184338 4844 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.184342 4844 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.184345 4844 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.184350 4844 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.184355 4844 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.184358 4844 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.184362 4844 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.184367 4844 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
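Annotation: the long FLAG: dump above is the kubelet logging every command-line flag and its resolved value at verbosity -v=2, before config-file overrides are applied. The kubelet does this through spf13/pflag; a minimal standard-library sketch of the same pattern:

    // Sketch mirroring flags.go:64: one "FLAG:" line per flag, quoting
    // the value after parsing.
    package main

    import (
        "flag"
        "fmt"
    )

    func main() {
        fs := flag.NewFlagSet("kubelet-sketch", flag.ContinueOnError)
        fs.String("node-ip", "", "node IP address")
        fs.Int("max-pods", 110, "maximum pods per node")
        _ = fs.Parse([]string{"--node-ip=192.168.126.11"})

        fs.VisitAll(func(f *flag.Flag) {
            fmt.Printf("FLAG: --%s=%q\n", f.Name, f.Value.String())
        })
    }
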
Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.184371 4844 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.184375 4844 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.184379 4844 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.184383 4844 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.184388 4844 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.184396 4844 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.184399 4844 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.184405 4844 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.184410 4844 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.184414 4844 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.184418 4844 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.184422 4844 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.184425 4844 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.184428 4844 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.184432 4844 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.184438 4844 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.184442 4844 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.184445 4844 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.184449 4844 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.184452 4844 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.184456 4844 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.184461 4844 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.184466 4844 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.184470 4844 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.184473 4844 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.184478 4844 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.184482 4844 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.184485 4844 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.184489 4844 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.184492 4844 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.184496 4844 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.184499 4844 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.184503 4844 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.184507 4844 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.184510 4844 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.184514 4844 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.184517 4844 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.184530 4844 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.193017 4844 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.193043 4844 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.193131 4844 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.193140 4844 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.193147 4844 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.193154 4844 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 
12:43:43.193160 4844 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.193166 4844 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.193174 4844 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.193180 4844 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.193187 4844 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.193192 4844 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.193197 4844 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.193202 4844 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.193208 4844 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.193213 4844 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.193218 4844 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.193223 4844 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.193229 4844 feature_gate.go:330] unrecognized feature gate: Example Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.193234 4844 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.193241 4844 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
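Annotation: the "Golang settings" entry above prints GOGC, GOMAXPROCS and GOTRACEBACK as read from the environment; the empty strings mean none are set, so the runtime defaults apply (GOGC=100, GOMAXPROCS=NumCPU, GOTRACEBACK=single). A sketch of the same report plus the effective values:

    package main

    import (
        "fmt"
        "os"
        "runtime"
    )

    func main() {
        fmt.Printf("Golang settings GOGC=%q GOMAXPROCS=%q GOTRACEBACK=%q\n",
            os.Getenv("GOGC"), os.Getenv("GOMAXPROCS"), os.Getenv("GOTRACEBACK"))
        // Effective values regardless of the environment:
        fmt.Printf("effective GOMAXPROCS=%d NumCPU=%d\n",
            runtime.GOMAXPROCS(0), runtime.NumCPU())
    }
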
Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.193249 4844 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.193255 4844 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.193261 4844 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.193266 4844 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.193271 4844 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.193277 4844 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.193282 4844 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.193287 4844 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.193292 4844 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.193297 4844 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.193303 4844 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.193308 4844 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.193313 4844 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.193328 4844 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.193333 4844 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.193338 4844 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.193343 4844 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.193350 4844 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.193357 4844 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.193363 4844 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.193368 4844 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.193374 4844 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.193379 4844 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.193384 4844 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.193389 4844 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.193394 4844 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.193400 4844 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.193405 4844 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.193410 4844 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.193415 4844 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.193422 4844 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
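Annotation: the recurring "feature gates: {map[...]}" summaries (feature_gate.go:386, one per parsing pass) are simply Go's fmt rendering of the gate map inside a struct; fmt has printed map keys in sorted order since Go 1.12, which is why the gate list always appears alphabetized. A tiny sketch:

    package main

    import "fmt"

    func main() {
        gates := map[string]bool{
            "ValidatingAdmissionPolicy": true,
            "KMSv1":                     true,
            "NodeSwap":                  false,
            "CloudDualStackNodeIPs":     true,
        }
        fmt.Printf("feature gates: {%v}\n", gates)
        // Prints: feature gates: {map[CloudDualStackNodeIPs:true KMSv1:true NodeSwap:false ValidatingAdmissionPolicy:true]}
    }
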
Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.193429 4844 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.193435 4844 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.193441 4844 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.193447 4844 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.193452 4844 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.193457 4844 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.193462 4844 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.193468 4844 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.193473 4844 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.193478 4844 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.193483 4844 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.193489 4844 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.193494 4844 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.193499 4844 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.193505 4844 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.193510 4844 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.193515 4844 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.193520 4844 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.193525 4844 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.193531 4844 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.193536 4844 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.193545 4844 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.193744 4844 feature_gate.go:330] 
unrecognized feature gate: AdditionalRoutingCapabilities Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.193755 4844 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.193761 4844 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.193767 4844 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.193773 4844 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.193779 4844 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.193785 4844 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.193791 4844 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.193797 4844 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.193803 4844 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.193809 4844 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.193814 4844 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.193819 4844 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.193824 4844 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.193830 4844 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.193835 4844 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.193840 4844 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.193847 4844 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.193853 4844 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.193860 4844 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.193906 4844 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.193919 4844 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
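Annotation: a few entries below, "Client rotation is on, will bootstrap in background" is followed by "Current kubeconfig file contents are still valid, no bootstrap necessary": before re-bootstrapping against the API server, the kubelet checks that the existing client certificate is present and unexpired. A sketch of that check, assuming the certificate lives where the log says (/var/lib/kubelet/pki/kubelet-client-current.pem); the real client-go logic also validates the kubeconfig itself.

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func certStillValid(path string) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err // no cert at all: bootstrap needed
        }
        block, _ := pem.Decode(data) // first PEM block: the certificate
        if block == nil {
            return false, fmt.Errorf("no PEM data in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Before(cert.NotAfter), nil
    }

    func main() {
        ok, err := certStillValid("/var/lib/kubelet/pki/kubelet-client-current.pem")
        fmt.Println(ok, err)
    }
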
Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.193928 4844 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.193935 4844 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.193941 4844 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.193947 4844 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.193952 4844 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.193957 4844 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.193962 4844 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.193967 4844 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.193973 4844 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.193978 4844 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.193983 4844 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.193988 4844 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.193994 4844 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.194001 4844 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.194007 4844 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.194013 4844 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.194020 4844 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.194025 4844 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.194032 4844 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.194038 4844 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.194044 4844 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.194050 4844 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.194055 4844 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.194061 4844 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.194066 4844 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.194072 4844 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.194078 4844 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.194086 4844 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
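Annotation: the certificate_manager entries below report "Certificate expiration is 2026-02-24 05:52:08, rotation deadline is 2025-12-29 19:11:55". client-go's certificate manager picks the rotation deadline at a jittered point between 70% and 90% of the certificate's lifetime so nodes do not all rotate at once; the logged deadline sits at roughly 84% of a one-year lifetime, consistent with that. The immediate "Rotating certificates" attempt then fails with connection refused because, this early in boot, the API server behind api-int.crc.testing:6443 is not yet serving; the manager retries with backoff. A sketch of the deadline computation, where notBefore is an assumed issue time (only the expiration appears in the log):

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // rotationDeadline returns a random point in [70%, 90%] of the
    // certificate's lifetime, in the spirit of client-go's manager.
    func rotationDeadline(notBefore, notAfter time.Time) time.Time {
        lifetime := notAfter.Sub(notBefore)
        jittered := time.Duration(float64(lifetime) * (0.7 + 0.2*rand.Float64()))
        return notBefore.Add(jittered)
    }

    func main() {
        notBefore := time.Date(2025, 2, 24, 5, 52, 8, 0, time.UTC) // assumed issue time
        notAfter := time.Date(2026, 2, 24, 5, 52, 8, 0, time.UTC)  // expiration from the log
        fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter))
    }
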
Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.194092 4844 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.194099 4844 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.194105 4844 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.194110 4844 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.194115 4844 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.194120 4844 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.194126 4844 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.194131 4844 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.194136 4844 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.194141 4844 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.194146 4844 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.194151 4844 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.194156 4844 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.194161 4844 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.194166 4844 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.194172 4844 feature_gate.go:330] unrecognized feature gate: Example Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.194177 4844 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.194182 4844 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.194187 4844 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.194192 4844 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.194197 4844 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.194205 4844 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.194374 4844 server.go:940] "Client rotation is on, will bootstrap in 
background" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.198043 4844 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.198143 4844 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.198790 4844 server.go:997] "Starting client certificate rotation" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.198820 4844 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.198999 4844 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-29 19:11:55.512139976 +0000 UTC Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.199096 4844 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.204128 4844 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.205685 4844 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 26 12:43:43 crc kubenswrapper[4844]: E0126 12:43:43.206221 4844 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.142:6443: connect: connection refused" logger="UnhandledError" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.212121 4844 log.go:25] "Validated CRI v1 runtime API" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.224786 4844 log.go:25] "Validated CRI v1 image API" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.226294 4844 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.229207 4844 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-26-12-36-34-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.229259 4844 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:40 fsType:tmpfs blockSize:0}] Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.246535 4844 manager.go:217] Machine: {Timestamp:2026-01-26 12:43:43.245007229 +0000 UTC m=+0.178374871 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 
NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:4eb778d6-9226-440d-bd27-0b6f19659b0d BootID:8ec9310b-463d-4f1d-a480-c21f33e8b459 Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:40 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:99:46:f6 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:99:46:f6 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:17:76:1c Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:e2:ca:1b Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:f9:01:73 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:6e:8d:e3 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:0a:74:70:42:1f:12 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:76:9b:27:09:87:b5 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] 
UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.246773 4844 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.246927 4844 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.247342 4844 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.247575 4844 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.247637 4844 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.247886 4844 topology_manager.go:138] "Creating topology manager with none policy" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.247900 4844 container_manager_linux.go:303] "Creating device plugin manager" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.248184 4844 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.248236 4844 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.248662 4844 state_mem.go:36] "Initialized new in-memory state store" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.248775 4844 server.go:1245] "Using root directory" path="/var/lib/kubelet" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.249824 4844 kubelet.go:418] "Attempting to sync node with API server" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.249853 4844 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.249882 4844 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.249898 4844 kubelet.go:324] "Adding apiserver pod source" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.249910 4844 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.251810 4844 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.251936 4844 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.142:6443: connect: connection refused Jan 26 12:43:43 crc kubenswrapper[4844]: E0126 12:43:43.252019 4844 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.142:6443: connect: connection refused" logger="UnhandledError" Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.252029 4844 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.142:6443: connect: connection refused Jan 26 12:43:43 crc kubenswrapper[4844]: E0126 12:43:43.252123 4844 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.142:6443: connect: connection refused" logger="UnhandledError" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.252433 4844 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.253493 4844 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.254378 4844 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.254417 4844 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.254430 4844 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.254440 4844 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.254458 4844 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.254475 4844 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.254484 4844 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.254498 4844 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.254509 4844 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.254519 4844 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.254532 4844 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.254542 4844 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.254773 4844 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Jan 26 12:43:43 crc kubenswrapper[4844]: 
I0126 12:43:43.255360 4844 server.go:1280] "Started kubelet" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.255750 4844 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.142:6443: connect: connection refused Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.256111 4844 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.256109 4844 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.257147 4844 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 26 12:43:43 crc systemd[1]: Started Kubernetes Kubelet. Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.258625 4844 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.258667 4844 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 26 12:43:43 crc kubenswrapper[4844]: E0126 12:43:43.258185 4844 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.142:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188e48812d8c33b5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 12:43:43.255327669 +0000 UTC m=+0.188695301,LastTimestamp:2026-01-26 12:43:43.255327669 +0000 UTC m=+0.188695301,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.259133 4844 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 11:06:55.364601429 +0000 UTC Jan 26 12:43:43 crc kubenswrapper[4844]: E0126 12:43:43.260075 4844 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.260513 4844 volume_manager.go:287] "The desired_state_of_world populator starts" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.260561 4844 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.260721 4844 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 26 12:43:43 crc kubenswrapper[4844]: E0126 12:43:43.269481 4844 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.142:6443: connect: connection refused" interval="200ms" Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.269478 4844 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.142:6443: connect: connection refused Jan 26 12:43:43 
crc kubenswrapper[4844]: E0126 12:43:43.269902 4844 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.142:6443: connect: connection refused" logger="UnhandledError" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.269771 4844 factory.go:55] Registering systemd factory Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.270096 4844 factory.go:221] Registration of the systemd container factory successfully Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.270250 4844 server.go:460] "Adding debug handlers to kubelet server" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.271282 4844 factory.go:153] Registering CRI-O factory Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.271362 4844 factory.go:221] Registration of the crio container factory successfully Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.271489 4844 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.271572 4844 factory.go:103] Registering Raw factory Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.271699 4844 manager.go:1196] Started watching for new ooms in manager Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.273244 4844 manager.go:319] Starting recovery of all containers Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.279094 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.279169 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.279194 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.279212 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.279230 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.279247 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" 
seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.279265 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.279284 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.279305 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.279322 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.279342 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.279394 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.279415 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.279436 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.279451 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.279470 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.279485 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 26 
12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.279501 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.279518 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.279534 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.279550 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.279565 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.279582 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.279623 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.279643 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.279659 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.279683 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.279703 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" 
seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.279721 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.279739 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.279758 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.279778 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.279833 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.279852 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.279870 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.279910 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.279931 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.279950 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.279967 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Jan 26 
12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.279983 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.280001 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.280020 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.280038 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.280056 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.280073 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.280090 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.280107 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.280135 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.280152 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.280170 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.280190 4844 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.280212 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.280238 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.280259 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.280277 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.280297 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.280317 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.280334 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.280350 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.280368 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.280387 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.280405 4844 reconstruct.go:130] "Volume 
is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.280424 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.280445 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.280462 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.280480 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.280497 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.280514 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.280533 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.280551 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.280568 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.280586 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.280636 4844 reconstruct.go:130] "Volume is marked 
as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.280654 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.280909 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.280927 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.280940 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.280956 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.280968 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.280983 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.280997 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.281010 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.281023 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.281036 4844 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.281048 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.281060 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.281072 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.281127 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.281142 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.281155 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.281169 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.281183 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.281198 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.281212 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.281225 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.281240 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.281253 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.281266 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.281280 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.281293 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.281307 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.281320 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.281332 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.281346 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.281367 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.281381 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.281395 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.281409 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.281422 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.281438 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.281453 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.281469 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.281483 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.281497 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.281511 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.281524 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.281539 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" 
volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.281552 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.281567 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.281581 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.281633 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.281654 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.281671 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.281687 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.281704 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.281717 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.281729 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.281745 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" 
volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.281758 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.281771 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.281784 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.281797 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.281811 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.281822 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.281835 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.281847 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.281859 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.281871 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.281885 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" 
volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.281897 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.281910 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.281923 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.281935 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.281950 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.281989 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.282006 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.282019 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.282033 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.282047 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.282061 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" 
volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.282075 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.282088 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.282101 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.282115 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.282129 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.282142 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.282157 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.282172 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.282185 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.282199 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.282252 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" 
volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.282266 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.282283 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.282298 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.282310 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.282326 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.282341 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.282355 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.282368 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.282381 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.282394 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.282406 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" 
volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.282421 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.282436 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.282449 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.283401 4844 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.283432 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.283447 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.283463 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.283478 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.283492 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.283542 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.283559 4844 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.283572 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.283585 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.283618 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.283638 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.283654 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.283667 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.283680 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.283692 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.283707 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.283719 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.283731 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.283744 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.283759 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.283773 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.283787 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.283799 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.283813 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.283828 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.283841 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.283854 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.283867 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.283881 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.283894 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.283906 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.283920 4844 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.283932 4844 reconstruct.go:97] "Volume reconstruction finished" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.283942 4844 reconciler.go:26] "Reconciler: start to sync state" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.293142 4844 manager.go:324] Recovery completed Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.306915 4844 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.309576 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.309624 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.309635 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.310241 4844 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.310393 4844 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.310411 4844 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.310434 4844 state_mem.go:36] "Initialized new in-memory state store" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.311822 4844 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.311867 4844 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.311901 4844 kubelet.go:2335] "Starting kubelet main sync loop" Jan 26 12:43:43 crc kubenswrapper[4844]: E0126 12:43:43.311945 4844 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.318674 4844 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.142:6443: connect: connection refused Jan 26 12:43:43 crc kubenswrapper[4844]: E0126 12:43:43.318746 4844 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.142:6443: connect: connection refused" logger="UnhandledError" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.320104 4844 policy_none.go:49] "None policy: Start" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.320958 4844 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.320989 4844 state_mem.go:35] "Initializing new in-memory state store" Jan 26 12:43:43 crc kubenswrapper[4844]: E0126 12:43:43.361139 4844 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.364354 4844 manager.go:334] "Starting Device Plugin manager" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.364395 4844 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.364409 4844 server.go:79] "Starting device plugin registration server" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.365977 4844 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.366223 4844 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.366512 4844 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.366687 4844 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.366706 4844 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 26 12:43:43 crc kubenswrapper[4844]: E0126 12:43:43.375787 4844 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.412657 4844 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 26 12:43:43 crc kubenswrapper[4844]: 
I0126 12:43:43.412797 4844 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.414311 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.414358 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.414367 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.414519 4844 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.414915 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.414968 4844 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.415404 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.415439 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.415451 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.415552 4844 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.415757 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.415797 4844 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.416267 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.416299 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.416310 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.416446 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.416467 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.416477 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.416689 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.416705 4844 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.416716 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.416729 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.416785 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.416814 4844 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.417407 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.417438 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.417448 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.417559 4844 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.417715 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.417772 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.417779 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.417813 4844 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.417788 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.418198 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.418223 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.418233 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.418362 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.418383 4844 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.418722 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.418744 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.418753 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.419075 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.419095 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.419106 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.466644 4844 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.467921 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.467954 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.467965 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.467988 4844 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 26 12:43:43 crc kubenswrapper[4844]: E0126 12:43:43.468423 4844 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.142:6443: connect: connection refused" node="crc" Jan 26 12:43:43 crc kubenswrapper[4844]: E0126 12:43:43.470832 4844 controller.go:145] "Failed to ensure lease exists, will 
retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.142:6443: connect: connection refused" interval="400ms" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.486952 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.487011 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.487049 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.487112 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.487145 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.487179 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.487223 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.487286 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.487328 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.487369 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.487406 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.487435 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.487465 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.487493 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.487523 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.588430 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.588482 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.588506 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.588521 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.588541 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.588558 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.588573 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.588586 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.588635 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.588657 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.588680 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.588704 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.588724 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.588745 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.588738 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.588790 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.588801 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.588926 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.588989 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.588764 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.589114 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.589140 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.589254 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.589264 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.589309 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.589190 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.589308 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.589375 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.589333 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.589422 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.669011 4844 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.670341 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.670468 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.670488 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.670536 4844 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 26 
12:43:43 crc kubenswrapper[4844]: E0126 12:43:43.671418 4844 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.142:6443: connect: connection refused" node="crc" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.751966 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.761589 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.774800 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-7abf2fff8093dc27f11bd6e30dddb744bc81a7a6304e19930634517e38cfbaed WatchSource:0}: Error finding container 7abf2fff8093dc27f11bd6e30dddb744bc81a7a6304e19930634517e38cfbaed: Status 404 returned error can't find the container with id 7abf2fff8093dc27f11bd6e30dddb744bc81a7a6304e19930634517e38cfbaed Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.776881 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.777288 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-fd9be5279c78f385d97f4f1baf6b8efbd142640928de5c11475ddde896d13944 WatchSource:0}: Error finding container fd9be5279c78f385d97f4f1baf6b8efbd142640928de5c11475ddde896d13944: Status 404 returned error can't find the container with id fd9be5279c78f385d97f4f1baf6b8efbd142640928de5c11475ddde896d13944 Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.792322 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-827b067ebf9ed309d365011e314036be243404328cf1c70f8d2d8c8097ad54fa WatchSource:0}: Error finding container 827b067ebf9ed309d365011e314036be243404328cf1c70f8d2d8c8097ad54fa: Status 404 returned error can't find the container with id 827b067ebf9ed309d365011e314036be243404328cf1c70f8d2d8c8097ad54fa Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.792971 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 12:43:43 crc kubenswrapper[4844]: I0126 12:43:43.798688 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.805507 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-a5ad9f91e9d9ca04e68a908a047f43e973a79e6e9593aba18100d7c72eca8580 WatchSource:0}: Error finding container a5ad9f91e9d9ca04e68a908a047f43e973a79e6e9593aba18100d7c72eca8580: Status 404 returned error can't find the container with id a5ad9f91e9d9ca04e68a908a047f43e973a79e6e9593aba18100d7c72eca8580 Jan 26 12:43:43 crc kubenswrapper[4844]: W0126 12:43:43.821200 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-5def507d5212f626d73271f08fa7c4b5bd653bd919ac202f55432902d67968ba WatchSource:0}: Error finding container 5def507d5212f626d73271f08fa7c4b5bd653bd919ac202f55432902d67968ba: Status 404 returned error can't find the container with id 5def507d5212f626d73271f08fa7c4b5bd653bd919ac202f55432902d67968ba Jan 26 12:43:43 crc kubenswrapper[4844]: E0126 12:43:43.872586 4844 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.142:6443: connect: connection refused" interval="800ms" Jan 26 12:43:44 crc kubenswrapper[4844]: I0126 12:43:44.072009 4844 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 12:43:44 crc kubenswrapper[4844]: I0126 12:43:44.073236 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:43:44 crc kubenswrapper[4844]: I0126 12:43:44.073265 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:43:44 crc kubenswrapper[4844]: I0126 12:43:44.073276 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:43:44 crc kubenswrapper[4844]: I0126 12:43:44.073302 4844 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 26 12:43:44 crc kubenswrapper[4844]: E0126 12:43:44.073843 4844 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.142:6443: connect: connection refused" node="crc" Jan 26 12:43:44 crc kubenswrapper[4844]: W0126 12:43:44.144688 4844 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.142:6443: connect: connection refused Jan 26 12:43:44 crc kubenswrapper[4844]: E0126 12:43:44.144997 4844 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.142:6443: connect: connection refused" logger="UnhandledError" Jan 26 12:43:44 crc kubenswrapper[4844]: I0126 12:43:44.257554 4844 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 
38.102.83.142:6443: connect: connection refused Jan 26 12:43:44 crc kubenswrapper[4844]: I0126 12:43:44.259702 4844 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 13:08:13.904902741 +0000 UTC Jan 26 12:43:44 crc kubenswrapper[4844]: W0126 12:43:44.267613 4844 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.142:6443: connect: connection refused Jan 26 12:43:44 crc kubenswrapper[4844]: E0126 12:43:44.267688 4844 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.142:6443: connect: connection refused" logger="UnhandledError" Jan 26 12:43:44 crc kubenswrapper[4844]: I0126 12:43:44.321744 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"6c79791a1a5ceea7564b6466722f4fb48a6729184724efc0ca2498896def357a"} Jan 26 12:43:44 crc kubenswrapper[4844]: I0126 12:43:44.321845 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"a5ad9f91e9d9ca04e68a908a047f43e973a79e6e9593aba18100d7c72eca8580"} Jan 26 12:43:44 crc kubenswrapper[4844]: I0126 12:43:44.323698 4844 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="64a2d4054650382dca2f56688356aec5fa475b49958e86dd4597a64f77a82f81" exitCode=0 Jan 26 12:43:44 crc kubenswrapper[4844]: I0126 12:43:44.323749 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"64a2d4054650382dca2f56688356aec5fa475b49958e86dd4597a64f77a82f81"} Jan 26 12:43:44 crc kubenswrapper[4844]: I0126 12:43:44.323764 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"827b067ebf9ed309d365011e314036be243404328cf1c70f8d2d8c8097ad54fa"} Jan 26 12:43:44 crc kubenswrapper[4844]: I0126 12:43:44.323859 4844 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 12:43:44 crc kubenswrapper[4844]: I0126 12:43:44.326933 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:43:44 crc kubenswrapper[4844]: I0126 12:43:44.327010 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:43:44 crc kubenswrapper[4844]: I0126 12:43:44.327027 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:43:44 crc kubenswrapper[4844]: I0126 12:43:44.329700 4844 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 12:43:44 crc kubenswrapper[4844]: I0126 12:43:44.330034 4844 generic.go:334] "Generic (PLEG): container finished" 
podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="fa0e17b53055bd229c52c53b69e491e0789569e73004101378a34609888bbbc4" exitCode=0 Jan 26 12:43:44 crc kubenswrapper[4844]: I0126 12:43:44.330106 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"fa0e17b53055bd229c52c53b69e491e0789569e73004101378a34609888bbbc4"} Jan 26 12:43:44 crc kubenswrapper[4844]: I0126 12:43:44.330130 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"fd9be5279c78f385d97f4f1baf6b8efbd142640928de5c11475ddde896d13944"} Jan 26 12:43:44 crc kubenswrapper[4844]: I0126 12:43:44.330229 4844 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 12:43:44 crc kubenswrapper[4844]: I0126 12:43:44.330842 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:43:44 crc kubenswrapper[4844]: I0126 12:43:44.330871 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:43:44 crc kubenswrapper[4844]: I0126 12:43:44.330883 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:43:44 crc kubenswrapper[4844]: I0126 12:43:44.331133 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:43:44 crc kubenswrapper[4844]: I0126 12:43:44.331182 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:43:44 crc kubenswrapper[4844]: I0126 12:43:44.331195 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:43:44 crc kubenswrapper[4844]: I0126 12:43:44.332084 4844 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="79c1bc1eecc04502fc5b42134d0ce860de5998e1ea84234bc1720b18c9507786" exitCode=0 Jan 26 12:43:44 crc kubenswrapper[4844]: I0126 12:43:44.332140 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"79c1bc1eecc04502fc5b42134d0ce860de5998e1ea84234bc1720b18c9507786"} Jan 26 12:43:44 crc kubenswrapper[4844]: I0126 12:43:44.332183 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"7abf2fff8093dc27f11bd6e30dddb744bc81a7a6304e19930634517e38cfbaed"} Jan 26 12:43:44 crc kubenswrapper[4844]: I0126 12:43:44.332264 4844 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 12:43:44 crc kubenswrapper[4844]: I0126 12:43:44.333346 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:43:44 crc kubenswrapper[4844]: I0126 12:43:44.333370 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:43:44 crc kubenswrapper[4844]: I0126 12:43:44.333382 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:43:44 crc kubenswrapper[4844]: 
I0126 12:43:44.334444 4844 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="6a49b5118144d900070aa5f3fccea16c73f8e2451f3636fdab388c957d4822e5" exitCode=0 Jan 26 12:43:44 crc kubenswrapper[4844]: I0126 12:43:44.334473 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"6a49b5118144d900070aa5f3fccea16c73f8e2451f3636fdab388c957d4822e5"} Jan 26 12:43:44 crc kubenswrapper[4844]: I0126 12:43:44.334492 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"5def507d5212f626d73271f08fa7c4b5bd653bd919ac202f55432902d67968ba"} Jan 26 12:43:44 crc kubenswrapper[4844]: I0126 12:43:44.334566 4844 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 12:43:44 crc kubenswrapper[4844]: I0126 12:43:44.335387 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:43:44 crc kubenswrapper[4844]: I0126 12:43:44.335427 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:43:44 crc kubenswrapper[4844]: I0126 12:43:44.335441 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:43:44 crc kubenswrapper[4844]: E0126 12:43:44.673620 4844 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.142:6443: connect: connection refused" interval="1.6s" Jan 26 12:43:44 crc kubenswrapper[4844]: W0126 12:43:44.759357 4844 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.142:6443: connect: connection refused Jan 26 12:43:44 crc kubenswrapper[4844]: E0126 12:43:44.759434 4844 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.142:6443: connect: connection refused" logger="UnhandledError" Jan 26 12:43:44 crc kubenswrapper[4844]: W0126 12:43:44.770194 4844 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.142:6443: connect: connection refused Jan 26 12:43:44 crc kubenswrapper[4844]: E0126 12:43:44.770288 4844 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.142:6443: connect: connection refused" logger="UnhandledError" Jan 26 12:43:44 crc kubenswrapper[4844]: I0126 12:43:44.874839 4844 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 12:43:44 crc 
kubenswrapper[4844]: I0126 12:43:44.878179 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:43:44 crc kubenswrapper[4844]: I0126 12:43:44.878239 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:43:44 crc kubenswrapper[4844]: I0126 12:43:44.878252 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:43:44 crc kubenswrapper[4844]: I0126 12:43:44.878308 4844 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 26 12:43:44 crc kubenswrapper[4844]: E0126 12:43:44.878896 4844 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.142:6443: connect: connection refused" node="crc" Jan 26 12:43:45 crc kubenswrapper[4844]: I0126 12:43:45.260013 4844 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 22:52:24.031490839 +0000 UTC Jan 26 12:43:45 crc kubenswrapper[4844]: I0126 12:43:45.295403 4844 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 26 12:43:45 crc kubenswrapper[4844]: I0126 12:43:45.340230 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"e33100e90b0d5a23baf7c27076a19dbca18d3e067be6b5a867e122fac37c1136"} Jan 26 12:43:45 crc kubenswrapper[4844]: I0126 12:43:45.340308 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"64ae899d3a38090964f3d951fd185537ce4ef75f7512342da01f67767d00417b"} Jan 26 12:43:45 crc kubenswrapper[4844]: I0126 12:43:45.340323 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"64c5d1e4b1fda825b451c8acde0411694fbf7345723d6c3927905a882a79d62e"} Jan 26 12:43:45 crc kubenswrapper[4844]: I0126 12:43:45.342153 4844 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="96a99b81e3524dfff792c3e9a274f2460edfe7c29fb27c556fa1c7e427178ccd" exitCode=0 Jan 26 12:43:45 crc kubenswrapper[4844]: I0126 12:43:45.342237 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"96a99b81e3524dfff792c3e9a274f2460edfe7c29fb27c556fa1c7e427178ccd"} Jan 26 12:43:45 crc kubenswrapper[4844]: I0126 12:43:45.342389 4844 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 12:43:45 crc kubenswrapper[4844]: I0126 12:43:45.343238 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:43:45 crc kubenswrapper[4844]: I0126 12:43:45.343272 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:43:45 crc kubenswrapper[4844]: I0126 12:43:45.343287 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:43:45 crc kubenswrapper[4844]: I0126 12:43:45.343736 4844 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"8b15b21f6c49117b7ab33013296dbf71ea8dd0556818a8a4da0a48fcdcbf9094"} Jan 26 12:43:45 crc kubenswrapper[4844]: I0126 12:43:45.343866 4844 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 12:43:45 crc kubenswrapper[4844]: I0126 12:43:45.344847 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:43:45 crc kubenswrapper[4844]: I0126 12:43:45.344870 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:43:45 crc kubenswrapper[4844]: I0126 12:43:45.344880 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:43:45 crc kubenswrapper[4844]: I0126 12:43:45.346193 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"11638189cf49baf0798a3c7a229b67e05eedf2292d79f884a990a091f21a61c0"} Jan 26 12:43:45 crc kubenswrapper[4844]: I0126 12:43:45.346236 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"c0a80313396de8bb91760bdf2477da9d233e2387d1ac6addcce62acc4578772c"} Jan 26 12:43:45 crc kubenswrapper[4844]: I0126 12:43:45.349869 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"4b1fb5af67b83d9ecc627cbfcc14d011b4b11a5c4652a35c1bd1df6f03ab71f3"} Jan 26 12:43:45 crc kubenswrapper[4844]: I0126 12:43:45.349901 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"e62359ee71a87d75c73a16de09de5ea48d09e4cf2041f434f38f97002a5f8eee"} Jan 26 12:43:45 crc kubenswrapper[4844]: I0126 12:43:45.349913 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"943771df5941e9c5a48c2d8ee946cc9ac1ee7b00855dfe531fbce9f76450dd36"} Jan 26 12:43:45 crc kubenswrapper[4844]: I0126 12:43:45.350196 4844 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 12:43:45 crc kubenswrapper[4844]: I0126 12:43:45.351180 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:43:45 crc kubenswrapper[4844]: I0126 12:43:45.351219 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:43:45 crc kubenswrapper[4844]: I0126 12:43:45.351229 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:43:46 crc kubenswrapper[4844]: I0126 12:43:46.260540 4844 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 19:41:45.60101184 +0000 UTC Jan 26 12:43:46 crc 
kubenswrapper[4844]: I0126 12:43:46.356142 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"fb28b05d43134d8c4f89d83cd620973c937fd16347910ebf056026f0a3708a92"} Jan 26 12:43:46 crc kubenswrapper[4844]: I0126 12:43:46.356180 4844 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 12:43:46 crc kubenswrapper[4844]: I0126 12:43:46.357651 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:43:46 crc kubenswrapper[4844]: I0126 12:43:46.357681 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:43:46 crc kubenswrapper[4844]: I0126 12:43:46.357693 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:43:46 crc kubenswrapper[4844]: I0126 12:43:46.359922 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"9207454bb697abf7b64a37e402b50734ae419605cd0938b514ac4f2a74561ce2"} Jan 26 12:43:46 crc kubenswrapper[4844]: I0126 12:43:46.359948 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b40515c0e8056b01e4d48e259ece9389fbb5db6c0d403f2846f28948ce78b0b7"} Jan 26 12:43:46 crc kubenswrapper[4844]: I0126 12:43:46.359961 4844 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 12:43:46 crc kubenswrapper[4844]: I0126 12:43:46.360627 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:43:46 crc kubenswrapper[4844]: I0126 12:43:46.360651 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:43:46 crc kubenswrapper[4844]: I0126 12:43:46.360660 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:43:46 crc kubenswrapper[4844]: I0126 12:43:46.362401 4844 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="1cd347949870be709220badbf116b4ed8ff9db3962a03361e42f5ff1c61eb5ea" exitCode=0 Jan 26 12:43:46 crc kubenswrapper[4844]: I0126 12:43:46.362460 4844 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 12:43:46 crc kubenswrapper[4844]: I0126 12:43:46.362474 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"1cd347949870be709220badbf116b4ed8ff9db3962a03361e42f5ff1c61eb5ea"} Jan 26 12:43:46 crc kubenswrapper[4844]: I0126 12:43:46.362572 4844 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 12:43:46 crc kubenswrapper[4844]: I0126 12:43:46.363170 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:43:46 crc kubenswrapper[4844]: I0126 12:43:46.363198 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:43:46 crc 
kubenswrapper[4844]: I0126 12:43:46.363209 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:43:46 crc kubenswrapper[4844]: I0126 12:43:46.363432 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:43:46 crc kubenswrapper[4844]: I0126 12:43:46.363463 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:43:46 crc kubenswrapper[4844]: I0126 12:43:46.363474 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:43:46 crc kubenswrapper[4844]: I0126 12:43:46.417752 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 12:43:46 crc kubenswrapper[4844]: I0126 12:43:46.479466 4844 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 12:43:46 crc kubenswrapper[4844]: I0126 12:43:46.480540 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:43:46 crc kubenswrapper[4844]: I0126 12:43:46.480581 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:43:46 crc kubenswrapper[4844]: I0126 12:43:46.480592 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:43:46 crc kubenswrapper[4844]: I0126 12:43:46.480644 4844 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 26 12:43:47 crc kubenswrapper[4844]: I0126 12:43:47.260737 4844 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 22:43:18.671933373 +0000 UTC Jan 26 12:43:47 crc kubenswrapper[4844]: I0126 12:43:47.349392 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 12:43:47 crc kubenswrapper[4844]: I0126 12:43:47.370886 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"87b51b34a870f5c4045729aa5ee5d5fb85ebd58fd55a25c5b6effe0ea75ea8e2"} Jan 26 12:43:47 crc kubenswrapper[4844]: I0126 12:43:47.370956 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"9f49e7dd36dc88342bc2e3bc43fc5770fb43572151945cba3867f9ffd7fa451e"} Jan 26 12:43:47 crc kubenswrapper[4844]: I0126 12:43:47.370972 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"66ffcb99cad7e5aff2bbe4bdf3c0a2365dd91fb40b2382e05ab1f422c6ea2b26"} Jan 26 12:43:47 crc kubenswrapper[4844]: I0126 12:43:47.370994 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"3b82658e4e6ddbf4e3a16f9d569c5cef6a683d401d79b7fa7f55aad8c70e8254"} Jan 26 12:43:47 crc kubenswrapper[4844]: I0126 12:43:47.371008 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"9118772906fa09f9868a34942401b399a842e6718caa3d82b78b2974e1011cca"} Jan 26 12:43:47 crc kubenswrapper[4844]: I0126 12:43:47.371035 4844 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 12:43:47 crc kubenswrapper[4844]: I0126 12:43:47.371078 4844 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 12:43:47 crc kubenswrapper[4844]: I0126 12:43:47.371209 4844 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 12:43:47 crc kubenswrapper[4844]: I0126 12:43:47.371127 4844 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 12:43:47 crc kubenswrapper[4844]: I0126 12:43:47.371199 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 12:43:47 crc kubenswrapper[4844]: I0126 12:43:47.371121 4844 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 12:43:47 crc kubenswrapper[4844]: I0126 12:43:47.372992 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:43:47 crc kubenswrapper[4844]: I0126 12:43:47.373037 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:43:47 crc kubenswrapper[4844]: I0126 12:43:47.373051 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:43:47 crc kubenswrapper[4844]: I0126 12:43:47.373050 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:43:47 crc kubenswrapper[4844]: I0126 12:43:47.373149 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:43:47 crc kubenswrapper[4844]: I0126 12:43:47.373171 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:43:47 crc kubenswrapper[4844]: I0126 12:43:47.373180 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:43:47 crc kubenswrapper[4844]: I0126 12:43:47.373180 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:43:47 crc kubenswrapper[4844]: I0126 12:43:47.373298 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:43:47 crc kubenswrapper[4844]: I0126 12:43:47.374180 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:43:47 crc kubenswrapper[4844]: I0126 12:43:47.374217 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:43:47 crc kubenswrapper[4844]: I0126 12:43:47.374228 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:43:47 crc kubenswrapper[4844]: I0126 12:43:47.775979 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 12:43:48 crc kubenswrapper[4844]: I0126 12:43:48.149748 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-etcd/etcd-crc" Jan 26 12:43:48 crc kubenswrapper[4844]: I0126 12:43:48.261679 4844 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 23:05:54.137091841 +0000 UTC Jan 26 12:43:48 crc kubenswrapper[4844]: I0126 12:43:48.374964 4844 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 12:43:48 crc kubenswrapper[4844]: I0126 12:43:48.375011 4844 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 12:43:48 crc kubenswrapper[4844]: I0126 12:43:48.374964 4844 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 12:43:48 crc kubenswrapper[4844]: I0126 12:43:48.376395 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:43:48 crc kubenswrapper[4844]: I0126 12:43:48.376443 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:43:48 crc kubenswrapper[4844]: I0126 12:43:48.376458 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:43:48 crc kubenswrapper[4844]: I0126 12:43:48.376400 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:43:48 crc kubenswrapper[4844]: I0126 12:43:48.376625 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:43:48 crc kubenswrapper[4844]: I0126 12:43:48.376649 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:43:48 crc kubenswrapper[4844]: I0126 12:43:48.377028 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:43:48 crc kubenswrapper[4844]: I0126 12:43:48.377083 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:43:48 crc kubenswrapper[4844]: I0126 12:43:48.377107 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:43:48 crc kubenswrapper[4844]: I0126 12:43:48.385895 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 12:43:49 crc kubenswrapper[4844]: I0126 12:43:49.262505 4844 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 20:19:08.7813918 +0000 UTC Jan 26 12:43:49 crc kubenswrapper[4844]: I0126 12:43:49.377635 4844 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 12:43:49 crc kubenswrapper[4844]: I0126 12:43:49.378880 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:43:49 crc kubenswrapper[4844]: I0126 12:43:49.378921 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:43:49 crc kubenswrapper[4844]: I0126 12:43:49.378931 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:43:49 crc kubenswrapper[4844]: I0126 12:43:49.481041 4844 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 12:43:49 crc kubenswrapper[4844]: I0126 12:43:49.481546 4844 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 12:43:49 crc kubenswrapper[4844]: I0126 12:43:49.483364 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:43:49 crc kubenswrapper[4844]: I0126 12:43:49.483397 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:43:49 crc kubenswrapper[4844]: I0126 12:43:49.483406 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:43:50 crc kubenswrapper[4844]: I0126 12:43:50.263675 4844 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 03:54:44.022061372 +0000 UTC Jan 26 12:43:50 crc kubenswrapper[4844]: I0126 12:43:50.599838 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 12:43:50 crc kubenswrapper[4844]: I0126 12:43:50.600176 4844 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 12:43:50 crc kubenswrapper[4844]: I0126 12:43:50.603198 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:43:50 crc kubenswrapper[4844]: I0126 12:43:50.603282 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:43:50 crc kubenswrapper[4844]: I0126 12:43:50.603308 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:43:50 crc kubenswrapper[4844]: I0126 12:43:50.609470 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 12:43:51 crc kubenswrapper[4844]: I0126 12:43:51.007074 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 12:43:51 crc kubenswrapper[4844]: I0126 12:43:51.007410 4844 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 12:43:51 crc kubenswrapper[4844]: I0126 12:43:51.009741 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:43:51 crc kubenswrapper[4844]: I0126 12:43:51.009828 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:43:51 crc kubenswrapper[4844]: I0126 12:43:51.009849 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:43:51 crc kubenswrapper[4844]: I0126 12:43:51.265792 4844 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 12:12:39.434136203 +0000 UTC Jan 26 12:43:51 crc kubenswrapper[4844]: I0126 12:43:51.304071 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 26 12:43:51 crc kubenswrapper[4844]: I0126 12:43:51.304302 4844 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 
12:43:51 crc kubenswrapper[4844]: I0126 12:43:51.306220 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:43:51 crc kubenswrapper[4844]: I0126 12:43:51.306290 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:43:51 crc kubenswrapper[4844]: I0126 12:43:51.306310 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:43:51 crc kubenswrapper[4844]: I0126 12:43:51.385981 4844 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 12:43:51 crc kubenswrapper[4844]: I0126 12:43:51.386164 4844 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 26 12:43:51 crc kubenswrapper[4844]: I0126 12:43:51.389125 4844 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 12:43:51 crc kubenswrapper[4844]: I0126 12:43:51.390492 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:43:51 crc kubenswrapper[4844]: I0126 12:43:51.390550 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:43:51 crc kubenswrapper[4844]: I0126 12:43:51.390562 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:43:52 crc kubenswrapper[4844]: I0126 12:43:52.266504 4844 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 05:17:02.482268549 +0000 UTC Jan 26 12:43:53 crc kubenswrapper[4844]: I0126 12:43:53.266646 4844 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 03:50:15.173735368 +0000 UTC Jan 26 12:43:53 crc kubenswrapper[4844]: E0126 12:43:53.376562 4844 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 26 12:43:54 crc kubenswrapper[4844]: I0126 12:43:54.267724 4844 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 06:32:07.321240966 +0000 UTC Jan 26 12:43:55 crc kubenswrapper[4844]: I0126 12:43:55.258882 4844 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Jan 26 12:43:55 crc kubenswrapper[4844]: I0126 12:43:55.268353 4844 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 19:33:40.619999609 +0000 UTC Jan 26 12:43:55 crc kubenswrapper[4844]: 
E0126 12:43:55.297912 4844 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 26 12:43:56 crc kubenswrapper[4844]: W0126 12:43:56.046742 4844 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 26 12:43:56 crc kubenswrapper[4844]: I0126 12:43:56.047227 4844 trace.go:236] Trace[1368923906]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (26-Jan-2026 12:43:46.044) (total time: 10002ms): Jan 26 12:43:56 crc kubenswrapper[4844]: Trace[1368923906]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10002ms (12:43:56.046) Jan 26 12:43:56 crc kubenswrapper[4844]: Trace[1368923906]: [10.00275231s] [10.00275231s] END Jan 26 12:43:56 crc kubenswrapper[4844]: E0126 12:43:56.047475 4844 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 26 12:43:56 crc kubenswrapper[4844]: I0126 12:43:56.220674 4844 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 26 12:43:56 crc kubenswrapper[4844]: I0126 12:43:56.220779 4844 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 26 12:43:56 crc kubenswrapper[4844]: I0126 12:43:56.228261 4844 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 26 12:43:56 crc kubenswrapper[4844]: I0126 12:43:56.228316 4844 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 26 12:43:56 crc kubenswrapper[4844]: I0126 12:43:56.269334 4844 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 21:45:26.990961241 +0000 UTC Jan 26 12:43:56 crc kubenswrapper[4844]: I0126 12:43:56.423589 4844 patch_prober.go:28] interesting pod/kube-apiserver-crc 
container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 26 12:43:56 crc kubenswrapper[4844]: [+]log ok Jan 26 12:43:56 crc kubenswrapper[4844]: [+]etcd ok Jan 26 12:43:56 crc kubenswrapper[4844]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Jan 26 12:43:56 crc kubenswrapper[4844]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 26 12:43:56 crc kubenswrapper[4844]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 26 12:43:56 crc kubenswrapper[4844]: [+]poststarthook/openshift.io-api-request-count-filter ok Jan 26 12:43:56 crc kubenswrapper[4844]: [+]poststarthook/openshift.io-startkubeinformers ok Jan 26 12:43:56 crc kubenswrapper[4844]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Jan 26 12:43:56 crc kubenswrapper[4844]: [+]poststarthook/generic-apiserver-start-informers ok Jan 26 12:43:56 crc kubenswrapper[4844]: [+]poststarthook/priority-and-fairness-config-consumer ok Jan 26 12:43:56 crc kubenswrapper[4844]: [+]poststarthook/priority-and-fairness-filter ok Jan 26 12:43:56 crc kubenswrapper[4844]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 26 12:43:56 crc kubenswrapper[4844]: [+]poststarthook/start-apiextensions-informers ok Jan 26 12:43:56 crc kubenswrapper[4844]: [+]poststarthook/start-apiextensions-controllers ok Jan 26 12:43:56 crc kubenswrapper[4844]: [+]poststarthook/crd-informer-synced ok Jan 26 12:43:56 crc kubenswrapper[4844]: [+]poststarthook/start-system-namespaces-controller ok Jan 26 12:43:56 crc kubenswrapper[4844]: [+]poststarthook/start-cluster-authentication-info-controller ok Jan 26 12:43:56 crc kubenswrapper[4844]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Jan 26 12:43:56 crc kubenswrapper[4844]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Jan 26 12:43:56 crc kubenswrapper[4844]: [+]poststarthook/start-legacy-token-tracking-controller ok Jan 26 12:43:56 crc kubenswrapper[4844]: [+]poststarthook/start-service-ip-repair-controllers ok Jan 26 12:43:56 crc kubenswrapper[4844]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Jan 26 12:43:56 crc kubenswrapper[4844]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld Jan 26 12:43:56 crc kubenswrapper[4844]: [+]poststarthook/priority-and-fairness-config-producer ok Jan 26 12:43:56 crc kubenswrapper[4844]: [+]poststarthook/bootstrap-controller ok Jan 26 12:43:56 crc kubenswrapper[4844]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Jan 26 12:43:56 crc kubenswrapper[4844]: [+]poststarthook/start-kube-aggregator-informers ok Jan 26 12:43:56 crc kubenswrapper[4844]: [+]poststarthook/apiservice-status-local-available-controller ok Jan 26 12:43:56 crc kubenswrapper[4844]: [+]poststarthook/apiservice-status-remote-available-controller ok Jan 26 12:43:56 crc kubenswrapper[4844]: [+]poststarthook/apiservice-registration-controller ok Jan 26 12:43:56 crc kubenswrapper[4844]: [+]poststarthook/apiservice-wait-for-first-sync ok Jan 26 12:43:56 crc kubenswrapper[4844]: [+]poststarthook/apiservice-discovery-controller ok Jan 26 12:43:56 crc kubenswrapper[4844]: [+]poststarthook/kube-apiserver-autoregistration ok Jan 26 12:43:56 crc kubenswrapper[4844]: [+]autoregister-completion ok Jan 26 12:43:56 crc kubenswrapper[4844]: [+]poststarthook/apiservice-openapi-controller ok Jan 26 12:43:56 crc kubenswrapper[4844]: 
[+]poststarthook/apiservice-openapiv3-controller ok Jan 26 12:43:56 crc kubenswrapper[4844]: livez check failed Jan 26 12:43:56 crc kubenswrapper[4844]: I0126 12:43:56.423748 4844 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 12:43:57 crc kubenswrapper[4844]: I0126 12:43:57.270119 4844 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 03:12:14.835059636 +0000 UTC Jan 26 12:43:58 crc kubenswrapper[4844]: I0126 12:43:58.198023 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 12:43:58 crc kubenswrapper[4844]: I0126 12:43:58.198691 4844 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 12:43:58 crc kubenswrapper[4844]: I0126 12:43:58.200358 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:43:58 crc kubenswrapper[4844]: I0126 12:43:58.200415 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:43:58 crc kubenswrapper[4844]: I0126 12:43:58.200427 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:43:58 crc kubenswrapper[4844]: I0126 12:43:58.270287 4844 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 10:23:03.928703693 +0000 UTC Jan 26 12:43:59 crc kubenswrapper[4844]: I0126 12:43:59.271382 4844 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 17:57:20.055224949 +0000 UTC Jan 26 12:43:59 crc kubenswrapper[4844]: I0126 12:43:59.669448 4844 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 26 12:43:59 crc kubenswrapper[4844]: I0126 12:43:59.687509 4844 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 26 12:44:00 crc kubenswrapper[4844]: I0126 12:44:00.271766 4844 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 09:32:34.886672929 +0000 UTC Jan 26 12:44:01 crc kubenswrapper[4844]: E0126 12:44:01.228050 4844 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="3.2s" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.232591 4844 trace.go:236] Trace[1273170685]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (26-Jan-2026 12:43:46.643) (total time: 14589ms): Jan 26 12:44:01 crc kubenswrapper[4844]: Trace[1273170685]: ---"Objects listed" error: 14589ms (12:44:01.232) Jan 26 12:44:01 crc kubenswrapper[4844]: Trace[1273170685]: [14.589105458s] [14.589105458s] END Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.232677 4844 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 26 
12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.234098 4844 trace.go:236] Trace[1936916496]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (26-Jan-2026 12:43:47.773) (total time: 13460ms): Jan 26 12:44:01 crc kubenswrapper[4844]: Trace[1936916496]: ---"Objects listed" error: 13460ms (12:44:01.233) Jan 26 12:44:01 crc kubenswrapper[4844]: Trace[1936916496]: [13.460945956s] [13.460945956s] END Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.234139 4844 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.234284 4844 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 26 12:44:01 crc kubenswrapper[4844]: E0126 12:44:01.238152 4844 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.239902 4844 trace.go:236] Trace[1763139549]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (26-Jan-2026 12:43:47.477) (total time: 13762ms): Jan 26 12:44:01 crc kubenswrapper[4844]: Trace[1763139549]: ---"Objects listed" error: 13762ms (12:44:01.239) Jan 26 12:44:01 crc kubenswrapper[4844]: Trace[1763139549]: [13.762725569s] [13.762725569s] END Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.239949 4844 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.261900 4844 apiserver.go:52] "Watching apiserver" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.265712 4844 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.266025 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h"] Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.266445 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.266478 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 12:44:01 crc kubenswrapper[4844]: E0126 12:44:01.266567 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.266967 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.267079 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 12:44:01 crc kubenswrapper[4844]: E0126 12:44:01.267189 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.267206 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.267228 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 12:44:01 crc kubenswrapper[4844]: E0126 12:44:01.275029 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.275051 4844 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 02:27:18.68701822 +0000 UTC Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.275951 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.276328 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.276873 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.276880 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.276909 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.277132 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.280690 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.280952 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.280992 4844 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-network-node-identity"/"env-overrides" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.309385 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.327770 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.329550 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.340471 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.346438 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.346976 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.359270 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.361930 4844 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.374936 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.385511 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.386838 4844 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.386879 4844 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.389951 4844 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.399436 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.409644 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.419084 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.422046 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.426308 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.429304 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.434904 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.434938 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.434956 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.434974 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 
12:44:01.434998 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.435015 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.435029 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.435047 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.435064 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.435079 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.435114 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.435131 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.435149 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.435165 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod 
\"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.435180 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.435195 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.435209 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.435241 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.435258 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.435285 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.435283 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.435301 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.435317 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.435332 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.435347 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.435362 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.435378 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.435395 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.435410 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.435442 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.435461 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod 
\"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.435475 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.435492 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.435506 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.435520 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.435537 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.435555 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.435608 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.435627 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.435666 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.435683 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.435698 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.435713 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.435730 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.435748 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.435765 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.435782 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.435798 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.435815 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.436149 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.436187 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: 
\"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.436208 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.436230 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.436252 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.436269 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.436290 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.436310 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.436317 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.436331 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.436351 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.436373 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.436463 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.437140 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.437159 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.437375 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.437584 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.437746 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.437858 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.437892 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.438016 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.438081 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.438287 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.438339 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.438527 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.438612 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.438623 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.438921 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.438979 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.438992 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.439025 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.439267 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.439251 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.439258 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.439467 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.439505 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.436854 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.439777 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.439809 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.439878 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.440060 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.440383 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.439019 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.440482 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.440505 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.440533 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.440558 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.440585 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.440622 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 26 
12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.440644 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.440664 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.440688 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.440713 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.440734 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.440751 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.440771 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.440792 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.440816 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.440834 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 26 12:44:01 crc 
kubenswrapper[4844]: I0126 12:44:01.440864 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.440897 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.440916 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.440949 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.440978 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.441001 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.441019 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.441040 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.441067 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.441085 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: 
\"43509403-f426-496e-be36-56cef71462f5\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.441111 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.441135 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.441154 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.441175 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.441197 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.441222 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.441243 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.441264 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.441284 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.441306 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" 
(UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.441327 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.441349 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.441370 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.441387 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.441424 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.441447 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.441470 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.441495 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.441520 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.441540 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod 
\"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.441561 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.441581 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.441609 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.437684 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.441620 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.442408 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.442434 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.441740 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.442976 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.442796 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.442979 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.443013 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.443335 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.443370 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.443403 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.443451 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.443502 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.443545 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.443585 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.443626 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.443867 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.443902 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.443930 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.443955 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.443985 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.444013 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.444040 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.444068 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.444098 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.444129 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod 
\"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.444159 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.444195 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.444224 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.444253 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.444283 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.444323 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.444356 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.444488 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.444525 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.444728 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.445251 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.445393 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.445396 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.445441 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.445959 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.446009 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.446051 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.446081 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.446124 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.446160 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.446183 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.446212 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.446237 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod 
\"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.446259 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.446281 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.446310 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.446338 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.446366 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.446401 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.446437 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.446464 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.446492 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.446527 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod 
\"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.446553 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.446574 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.446626 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.446654 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.446682 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.446717 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.446750 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.446784 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.446814 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.446929 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod 
"0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.446975 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.447017 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: E0126 12:44:01.447053 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:44:01.947018776 +0000 UTC m=+18.880386388 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.447045 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.447078 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.447125 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.447460 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.447484 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.447489 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.447493 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.447568 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.447829 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.447338 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.448363 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.449661 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.449996 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.450013 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.450043 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.450291 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.450429 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.450467 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.450527 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.450924 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.451299 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.451053 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.450903 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.451222 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.451434 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.451496 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.451529 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.451775 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.451162 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.451870 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.452071 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.452245 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.452203 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.452949 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.453247 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.453509 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.453739 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.453827 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.453915 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.453995 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.454047 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.454096 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.454096 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.454335 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.454068 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.454556 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.454590 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.454669 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.454697 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.454723 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.454750 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.454778 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.454806 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.454998 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: 
"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.455011 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.455053 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.455094 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.455130 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.455159 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.455184 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.455274 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.455307 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.455309 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.455361 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.455395 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.455431 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.455463 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.455497 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.455526 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.455638 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.455698 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.455509 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod 
"5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.455725 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.455772 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.455759 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.455789 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.455834 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.455894 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.455932 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.455954 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.455985 4844 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.456006 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.456026 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.456048 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.456066 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.456086 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.456105 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.456127 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.456166 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod 
\"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.456186 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.456233 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.456327 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.456343 4844 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.456357 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.456370 4844 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.456387 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.456405 4844 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.456416 4844 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.456426 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.456436 4844 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 26 
12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.456446 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.456458 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.456471 4844 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.456482 4844 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.456494 4844 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.456506 4844 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.456515 4844 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.456524 4844 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.456535 4844 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.456545 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.456555 4844 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.456567 4844 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.456577 4844 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc 
kubenswrapper[4844]: I0126 12:44:01.456587 4844 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.456612 4844 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.456623 4844 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.456632 4844 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.456643 4844 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.456655 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.456666 4844 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.456676 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.456686 4844 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.456695 4844 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.456704 4844 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.456713 4844 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.456722 4844 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.456732 4844 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.456742 4844 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.456752 4844 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.456763 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.456774 4844 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.456787 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.456800 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.456813 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.456825 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.456838 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.456850 4844 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.456866 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.456939 4844 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" 
DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.456951 4844 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.456966 4844 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.456979 4844 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.456994 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.457006 4844 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.457018 4844 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.457027 4844 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.457037 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.457048 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.457058 4844 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.457068 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.457077 4844 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.457086 4844 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 
26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.457097 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.457107 4844 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.457116 4844 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.457125 4844 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.457134 4844 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.457143 4844 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.457154 4844 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.457164 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.457173 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.457184 4844 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.457199 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.457219 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.457232 4844 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" 
Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.457244 4844 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.457256 4844 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.457268 4844 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.457280 4844 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.457291 4844 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.457303 4844 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.457315 4844 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.457327 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.457341 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.457355 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.457367 4844 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.457379 4844 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.457392 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.457404 
4844 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.457452 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.457465 4844 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.457478 4844 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.457494 4844 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.457504 4844 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.457516 4844 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.457528 4844 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.455893 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.458553 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.456088 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.458654 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: E0126 12:44:01.458665 4844 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.458708 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.458709 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 12:44:01 crc kubenswrapper[4844]: E0126 12:44:01.458838 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 12:44:01.958813821 +0000 UTC m=+18.892181653 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.456214 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.456303 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.456652 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). 
InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.456670 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.456755 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.456841 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.456904 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.456965 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.457199 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.457227 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.457386 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.457401 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.457426 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.457457 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.457508 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.457742 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.457778 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.459027 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.457918 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.458124 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.458123 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.458184 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.458235 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: E0126 12:44:01.458514 4844 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.459071 4844 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 26 12:44:01 crc kubenswrapper[4844]: E0126 12:44:01.459163 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 12:44:01.959148739 +0000 UTC m=+18.892516351 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.459160 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.456116 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.458658 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.458542 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.459389 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.459467 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.459454 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.459507 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.459702 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.459737 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.459616 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.459972 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.459999 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.460175 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.460255 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.460388 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.460513 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.460556 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.461056 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.461385 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.461402 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.461516 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.461897 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.462250 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.462385 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.462763 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.462952 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.463099 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.463362 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.463439 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.463492 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.463714 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.464813 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.466208 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.466284 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.466294 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.466708 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.466904 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.467094 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.467577 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.467853 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.468146 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.468471 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.468773 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.468855 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.469228 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.472296 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.474717 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.475481 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.475736 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.476877 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.477519 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.477707 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: E0126 12:44:01.478049 4844 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 12:44:01 crc kubenswrapper[4844]: E0126 12:44:01.478081 4844 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 12:44:01 crc kubenswrapper[4844]: E0126 12:44:01.478099 4844 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 12:44:01 crc kubenswrapper[4844]: E0126 12:44:01.478183 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2026-01-26 12:44:01.978161137 +0000 UTC m=+18.911528989 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.481133 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.481495 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.483870 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.483693 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.484476 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.486103 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 12:44:01 crc kubenswrapper[4844]: E0126 12:44:01.490657 4844 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 12:44:01 crc kubenswrapper[4844]: E0126 12:44:01.490687 4844 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 12:44:01 crc kubenswrapper[4844]: E0126 12:44:01.490704 4844 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 12:44:01 crc kubenswrapper[4844]: E0126 12:44:01.490774 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 12:44:01.99075176 +0000 UTC m=+18.924119372 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.491184 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.495157 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.496900 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.499458 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.500785 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.504232 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.504257 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.505028 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.505785 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.505991 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.506038 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.506242 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.506293 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.506506 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.510107 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2c6313f-dd44-4db9-a53a-63c8a42efe6c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b82658e4e6ddbf4e3a16f9d569c5cef6a683d401d79b7fa7f55aad8c70e8254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66ffcb99cad7e5aff2bbe4bdf3c0a2365dd91fb40b2382e05ab1f422c6ea2b26\\\",\\\"image\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f49e7dd36dc88342bc2e3bc43fc5770fb43572151945cba3867f9ffd7fa451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b51b34a870f5c4045729aa5ee5d5fb85ebd58fd55a25c5b6effe0ea75ea8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9118772906fa09f9868a34942401b399a842e6718caa3d82b78b2974e1011cca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa0e17b53055bd229c52c53b69e491e0789569e73004101378a34609888bbbc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e3
3e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa0e17b53055bd229c52c53b69e491e0789569e73004101378a34609888bbbc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a99b81e3524dfff792c3e9a274f2460edfe7c29fb27c556fa1c7e427178ccd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a99b81e3524dfff792c3e9a274f2460edfe7c29fb27c556fa1c7e427178ccd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1cd347949870be709220badbf116b4ed8ff9db3962a03361e42f5ff1c61eb5ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cd347949870be709220badbf116b4ed8ff9db3962a03361e42f5ff1c61eb5ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.513432 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.513993 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.516149 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.517667 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.521646 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.526119 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.529844 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.540030 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.548872 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.558325 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.560851 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.560909 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.560944 4844 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.560955 4844 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.560964 4844 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.560973 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.560982 4844 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.560990 4844 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.560999 4844 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.561007 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.561015 4844 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.561023 4844 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.561032 4844 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.561041 4844 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.561051 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.561054 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.561060 4844 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.561109 4844 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.561124 4844 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.561106 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.561137 4844 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.561200 4844 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.561213 4844 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.561225 4844 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.561238 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.561248 4844 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.561259 4844 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.561272 4844 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.561284 4844 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.561303 4844 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.561315 4844 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.561328 4844 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.561341 4844 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.561354 4844 reconciler_common.go:293] "Volume detached 
for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.561366 4844 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.561377 4844 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.561389 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.561400 4844 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.561411 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.561425 4844 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.561436 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.561448 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.561459 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.561478 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.561490 4844 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.561501 4844 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.561513 4844 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.561525 4844 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.561536 4844 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.561552 4844 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.561563 4844 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.561575 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.561588 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.561618 4844 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.561630 4844 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.561643 4844 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.561654 4844 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.561665 4844 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.561677 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.561688 4844 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.561700 4844 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.561711 4844 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.561722 4844 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.561733 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.561745 4844 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.561757 4844 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.561768 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.561779 4844 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.561790 4844 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.561803 4844 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.561815 4844 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.561827 4844 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.561837 4844 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.561849 4844 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.561860 4844 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.561871 4844 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.561882 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.561894 4844 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.561905 4844 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.561915 4844 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.561927 4844 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.561938 4844 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.561949 4844 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.561963 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.561974 4844 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.561987 4844 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.561999 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.562012 4844 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.562026 4844 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.562037 4844 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.562050 4844 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.562062 4844 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.562074 4844 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.562086 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.562100 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.562111 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.562123 4844 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.562134 4844 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.562145 4844 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.562157 4844 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.562170 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.562183 4844 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.562196 4844 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.596798 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2c6313f-dd44-4db9-a53a-63c8a42efe6c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b82658e4e6ddbf4e3a16f9d569c5cef6a683d401d79b7fa7f55aad8c70e8254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66ffcb99cad7e5aff2bbe4bdf3c0a2365dd91fb40b2382e05ab1f422c6ea2b26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f49e7dd36dc88342bc2e3bc43fc5770fb43572151945cba3867f9ffd7fa451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b51b34a870f5c4045729aa5ee5d5fb85ebd58fd55a25c5b6effe0ea75ea8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9118772906fa09f9868a34942401b399a842e6718caa3d82b78b2974e1011cca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa0e17b53055bd229c52c53b69e491e0789569e73004101378a34609888bbbc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49
117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa0e17b53055bd229c52c53b69e491e0789569e73004101378a34609888bbbc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a99b81e3524dfff792c3e9a274f2460edfe7c29fb27c556fa1c7e427178ccd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a99b81e3524dfff792c3e9a274f2460edfe7c29fb27c556fa1c7e427178ccd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1cd347949870be709220badbf116b4ed8ff9db3962a03361e42f5ff1c61eb5ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cd347949870be709220badbf116b4ed8ff9db3962a03361e42f5ff1c61eb5ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.605238 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.610221 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.616164 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.624871 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 12:44:01 crc kubenswrapper[4844]: W0126 12:44:01.631187 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-a177a57c143b77eec5a27038f05fb4fe165a4526b2fbde6bf79c465d8f27e86b WatchSource:0}: Error finding container a177a57c143b77eec5a27038f05fb4fe165a4526b2fbde6bf79c465d8f27e86b: Status 404 returned error can't find the container with id a177a57c143b77eec5a27038f05fb4fe165a4526b2fbde6bf79c465d8f27e86b Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.644936 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.648176 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.668994 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.680862 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.691443 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.706177 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.719004 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aecfc1fc-7b8c-42f4-9a7b-058a0acc9534\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64c5d1e4b1fda825b451c8acde0411694fbf7345723d6c3927905a882a79d62e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e33100e90b0d5a23baf7c27076a19dbca18d3e067be6b5a867e122fac37c1136\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64ae899d3a38090964f3d951fd185537ce4ef75f7512342da01f67767d00417b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9207454bb697abf7b64a37e402b50734ae419605cd0938b514ac4f2a74561ce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resourc
e-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b40515c0e8056b01e4d48e259ece9389fbb5db6c0d403f2846f28948ce78b0b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64a2d4054650382dca2f56688356aec5fa475b49958e86dd4597a64f77a82f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64a2d4054650382dca2f56688356aec5fa475b49958e86dd4597a64f77a82f81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.738881 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2c6313f-dd44-4db9-a53a-63c8a42efe6c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b82658e4e6ddbf4e3a16f9d569c5cef6a683d401d79b7fa7f55aad8c70e8254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66ffcb99cad7e5aff2bbe4bdf3c0a2365dd91fb40b2382e05ab1f422c6ea2b26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f49e7dd36dc88342bc2e3bc43fc5770fb43572151945cba3867f9ffd7fa451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b51b34a870f5c4045729aa5ee5d5fb85ebd58
fd55a25c5b6effe0ea75ea8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9118772906fa09f9868a34942401b399a842e6718caa3d82b78b2974e1011cca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa0e17b53055bd229c52c53b69e491e0789569e73004101378a34609888bbbc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa0e17b53055bd229c52c53b69e491e0789569e73004101378a34609888bbbc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a99b81e3524dfff792c3e9a274f2460edfe7c29fb27c556fa1c7e427178ccd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a99b81e3524dfff792c3e9a274f2460edfe7c29fb27c556fa1c7e427178ccd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1cd347949870be709220badbf116b4ed8ff9db3962a03361e42f5ff1c61eb5ea\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cd347949870be709220badbf116b4ed8ff9db3962a03361e42f5ff1c61eb5ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.754134 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.765683 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.782884 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.798056 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.812734 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.822616 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.833176 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aecfc1fc-7b8c-42f4-9a7b-058a0acc9534\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64c5d1e4b1fda825b451c8acde0411694fbf7345723d6c3927905a882a79d62e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e33100e90b0d5a23baf7c27076a19dbca18d3e067be6b5a867e122fac37c1136\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\
\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64ae899d3a38090964f3d951fd185537ce4ef75f7512342da01f67767d00417b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9207454bb697abf7b64a37e402b50734ae419605cd0938b514ac4f2a74561ce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b40515c0e8056b01e4d48e259ece9389fbb5db6c0d403f2846f28948ce78b0b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64a2d4054650382dca2f56688356aec5fa475b49958e86dd4597a64f77a82f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64a2d4054650382dca2f56688356aec5fa475b49958e86dd4597a64f77a82f81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.965909 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:44:01 crc kubenswrapper[4844]: E0126 12:44:01.966027 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:44:02.966008709 +0000 UTC m=+19.899376321 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.966402 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:44:01 crc kubenswrapper[4844]: E0126 12:44:01.966452 4844 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 12:44:01 crc kubenswrapper[4844]: E0126 12:44:01.966497 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 12:44:02.966488861 +0000 UTC m=+19.899856473 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 12:44:01 crc kubenswrapper[4844]: E0126 12:44:01.966518 4844 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 12:44:01 crc kubenswrapper[4844]: I0126 12:44:01.966460 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:44:01 crc kubenswrapper[4844]: E0126 12:44:01.966567 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 12:44:02.966559032 +0000 UTC m=+19.899926644 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 12:44:02 crc kubenswrapper[4844]: I0126 12:44:02.067869 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:44:02 crc kubenswrapper[4844]: I0126 12:44:02.067931 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:44:02 crc kubenswrapper[4844]: E0126 12:44:02.068084 4844 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 12:44:02 crc kubenswrapper[4844]: E0126 12:44:02.068099 4844 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 12:44:02 crc kubenswrapper[4844]: E0126 12:44:02.068110 4844 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 12:44:02 crc kubenswrapper[4844]: E0126 
12:44:02.068138 4844 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 12:44:02 crc kubenswrapper[4844]: E0126 12:44:02.068181 4844 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 12:44:02 crc kubenswrapper[4844]: E0126 12:44:02.068196 4844 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 12:44:02 crc kubenswrapper[4844]: E0126 12:44:02.068161 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 12:44:03.068146742 +0000 UTC m=+20.001514344 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 12:44:02 crc kubenswrapper[4844]: E0126 12:44:02.068285 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 12:44:03.068255315 +0000 UTC m=+20.001622927 (durationBeforeRetry 1s). 
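Much of the bulk above and below is the same "Failed to update status for pod" entry repeated per pod: every status patch is rejected because the pod.network-node-identity.openshift.io admission webhook at 127.0.0.1:9743 is unreachable. The payloads are ordinary JSON strategic-merge patches, quoted twice (once inside the kubelet's err string, once by the log encoder), which produces the runs of \\\". A minimal Python sketch to peel that escaping and pretty-print one payload; the helper name `unwrap` and the stdin usage are illustrative choices, not anything from the kubelet:

import json
import sys

def unwrap(payload: str):
    """Repeatedly undo string quoting until a JSON object emerges."""
    value = payload
    while isinstance(value, str):
        try:
            value = json.loads(value)         # fully unescaped JSON parses here
        except json.JSONDecodeError:
            value = json.loads(f'"{value}"')  # peel one layer of \" escaping
    return value

if __name__ == "__main__":
    # Paste the text between `failed to patch status \"` and `\" for pod`.
    print(json.dumps(unwrap(sys.stdin.read().strip()), indent=2))

Decoded this way, each patch merely updates conditions and containerStatuses; the real signal is the trailing webhook error, which shifts at 12:44:02 from "connection refused" to a TLS failure — the webhook's certificate expired 2025-08-24T17:21:41Z — as the entries further below show.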
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 12:44:02 crc kubenswrapper[4844]: I0126 12:44:02.276284 4844 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 04:51:20.439369081 +0000 UTC Jan 26 12:44:02 crc kubenswrapper[4844]: I0126 12:44:02.418256 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"ad3b28c132298f09e0d10d0ae3e951f2899ad81c7a4ad33f47b88dfa662886a7"} Jan 26 12:44:02 crc kubenswrapper[4844]: I0126 12:44:02.420665 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"708e5cab2c0d22cab1f1e5c02d0f8afbdc0fcdccc9a56b663f365bfa4c7f8709"} Jan 26 12:44:02 crc kubenswrapper[4844]: I0126 12:44:02.420703 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"f61077567553b104f052a133a6227c3a887430e971721e7998259e7e7ea7edf7"} Jan 26 12:44:02 crc kubenswrapper[4844]: I0126 12:44:02.420718 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"1d2f3341a888dd70e4d34b0f0117b7f2ccff1cb15dcb6d2f2bbd1be875d10ebb"} Jan 26 12:44:02 crc kubenswrapper[4844]: I0126 12:44:02.421929 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"cdd60ce73390532e974f85d610708534f2ab6bcc38e93c54516cb783aca1bab0"} Jan 26 12:44:02 crc kubenswrapper[4844]: I0126 12:44:02.421985 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"a177a57c143b77eec5a27038f05fb4fe165a4526b2fbde6bf79c465d8f27e86b"} Jan 26 12:44:02 crc kubenswrapper[4844]: I0126 12:44:02.439147 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:02Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:02 crc kubenswrapper[4844]: I0126 12:44:02.459819 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2c6313f-dd44-4db9-a53a-63c8a42efe6c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b82658e4e6ddbf4e3a16f9d569c5cef6a683d401d79b7fa7f55aad8c70e8254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66ffcb99cad7e5aff2bbe4bdf3c0a2365dd91fb40b2382e05ab1f422c6ea2b26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f49e7dd36dc88342bc2e3bc43fc5770fb43572151945cba3867f9ffd7fa451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b51b34a870f5c4045729aa5ee5d5fb85ebd58
fd55a25c5b6effe0ea75ea8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9118772906fa09f9868a34942401b399a842e6718caa3d82b78b2974e1011cca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa0e17b53055bd229c52c53b69e491e0789569e73004101378a34609888bbbc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa0e17b53055bd229c52c53b69e491e0789569e73004101378a34609888bbbc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a99b81e3524dfff792c3e9a274f2460edfe7c29fb27c556fa1c7e427178ccd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a99b81e3524dfff792c3e9a274f2460edfe7c29fb27c556fa1c7e427178ccd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1cd347949870be709220badbf116b4ed8ff9db3962a03361e42f5ff1c61eb5ea\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cd347949870be709220badbf116b4ed8ff9db3962a03361e42f5ff1c61eb5ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:02Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:02 crc kubenswrapper[4844]: I0126 12:44:02.471418 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:02Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:02 crc kubenswrapper[4844]: I0126 12:44:02.487287 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:02Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:02 crc kubenswrapper[4844]: I0126 12:44:02.503642 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:02Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:02 crc kubenswrapper[4844]: I0126 12:44:02.519553 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://708e5cab2c0d22cab1f1e5c02d0f8afbdc0fcdccc9a56b663f365bfa4c7f8709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f61077567553b104f052a133a6227c3a887430e971721e7998259e7e7ea7edf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:02Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:02 crc kubenswrapper[4844]: I0126 12:44:02.547928 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:02Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:02 crc kubenswrapper[4844]: I0126 12:44:02.566481 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aecfc1fc-7b8c-42f4-9a7b-058a0acc9534\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64c5d1e4b1fda825b451c8acde0411694fbf7345723d6c3927905a882a79d62e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e33100e90b0d5a23baf7c27076a19dbca18d3e067be6b5a867e122fac37c1136\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64ae899d3a38090964f3d951fd185537ce4ef75f7512342da01f67767d00417b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9207454bb697abf7b64a37e402b50734ae419605cd0938b514ac4f2a74561ce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b40515c0e8056b01e4d48e259ece9389fbb5db6c0d403f2846f28948ce78b0b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64a2d4054650382dca2f56688356aec5fa475b49958e86dd4597a64f77a82f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64a2d4054650382dca2f56688356aec5fa475b49958e86dd4597a64f77a82f81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\
\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:02Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:02 crc kubenswrapper[4844]: I0126 12:44:02.583930 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:02Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:02 crc kubenswrapper[4844]: I0126 12:44:02.599637 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aecfc1fc-7b8c-42f4-9a7b-058a0acc9534\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64c5d1e4b1fda825b451c8acde0411694fbf7345723d6c3927905a882a79d62e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e33100e90b0d5a23baf7c27076a19dbca18d3e067be6b5a867e122fac37c1136\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64ae899d3a38090964f3d951fd185537ce4ef75f7512342da01f67767d00417b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9207454bb697abf7b64a37e402b50734ae419605cd0938b514ac4f2a74561ce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b40515c0e8056b01e4d48e259ece9389fbb5db6c0d403f2846f28948ce78b0b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64a2d4054650382dca2f56688356aec5fa475b49958e86dd4597a64f77a82f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64a2d4054650382dca2f56688356aec5fa475b49958e86dd4597a64f77a82f81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\
\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:02Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:02 crc kubenswrapper[4844]: I0126 12:44:02.620826 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:02Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:02 crc kubenswrapper[4844]: I0126 12:44:02.632782 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://708e5cab2c0d22cab1f1e5c02d0f8afbdc0fcdccc9a56b663f365bfa4c7f8709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f61077567553b104f052a133a6227c3a887430e971721e7998259e7e7ea7edf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:02Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:02 crc kubenswrapper[4844]: I0126 12:44:02.656416 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2c6313f-dd44-4db9-a53a-63c8a42efe6c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b82658e4e6ddbf4e3a16f9d569c5cef6a683d401d79b7fa7f55aad8c70e8254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66ffcb99cad7e5aff2bbe4bdf3c0a2365dd91fb40b2382e05ab1f422c6ea2b26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"da
ta-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f49e7dd36dc88342bc2e3bc43fc5770fb43572151945cba3867f9ffd7fa451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b51b34a870f5c4045729aa5ee5d5fb85ebd58fd55a25c5b6effe0ea75ea8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9118772906fa09f9868a34942401b399a842e6718caa3d82b78b2974e1011cca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa0e17b53055bd229c52c53b69e491e0789569e73004101378a34609888bbbc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa0e17b53055bd229c52c53b69e491e0789569e73004101378a34609888bbbc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a99b81e3
524dfff792c3e9a274f2460edfe7c29fb27c556fa1c7e427178ccd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a99b81e3524dfff792c3e9a274f2460edfe7c29fb27c556fa1c7e427178ccd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1cd347949870be709220badbf116b4ed8ff9db3962a03361e42f5ff1c61eb5ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cd347949870be709220badbf116b4ed8ff9db3962a03361e42f5ff1c61eb5ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:02Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:02 crc kubenswrapper[4844]: I0126 12:44:02.671621 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd60ce73390532e974f85d610708534f2ab6bcc38e93c54516cb783aca1bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:02Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:02 crc kubenswrapper[4844]: I0126 12:44:02.687731 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:02Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:02 crc kubenswrapper[4844]: I0126 12:44:02.713484 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:02Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:02 crc kubenswrapper[4844]: I0126 12:44:02.974510 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:44:02 crc kubenswrapper[4844]: I0126 12:44:02.974642 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:44:02 crc kubenswrapper[4844]: I0126 12:44:02.974679 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:44:02 crc kubenswrapper[4844]: E0126 12:44:02.974828 4844 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 12:44:02 crc kubenswrapper[4844]: E0126 12:44:02.974853 4844 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 12:44:02 crc kubenswrapper[4844]: E0126 12:44:02.974910 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:44:04.974860463 +0000 UTC m=+21.908228115 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:44:02 crc kubenswrapper[4844]: E0126 12:44:02.974973 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 12:44:04.974953205 +0000 UTC m=+21.908320927 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 12:44:02 crc kubenswrapper[4844]: E0126 12:44:02.975006 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 12:44:04.974987066 +0000 UTC m=+21.908354778 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 12:44:03 crc kubenswrapper[4844]: I0126 12:44:03.075791 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:44:03 crc kubenswrapper[4844]: I0126 12:44:03.075858 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:44:03 crc kubenswrapper[4844]: E0126 12:44:03.076037 4844 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 12:44:03 crc kubenswrapper[4844]: E0126 12:44:03.076066 4844 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 12:44:03 crc kubenswrapper[4844]: E0126 12:44:03.076078 4844 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 12:44:03 crc kubenswrapper[4844]: E0126 12:44:03.076123 4844 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 12:44:03 crc kubenswrapper[4844]: E0126 12:44:03.076141 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 12:44:05.076122825 +0000 UTC m=+22.009490437 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 12:44:03 crc kubenswrapper[4844]: E0126 12:44:03.076150 4844 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 12:44:03 crc kubenswrapper[4844]: E0126 12:44:03.076169 4844 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 12:44:03 crc kubenswrapper[4844]: E0126 12:44:03.076231 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 12:44:05.076210287 +0000 UTC m=+22.009577979 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 12:44:03 crc kubenswrapper[4844]: I0126 12:44:03.276874 4844 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 16:55:35.656529295 +0000 UTC Jan 26 12:44:03 crc kubenswrapper[4844]: I0126 12:44:03.312404 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:44:03 crc kubenswrapper[4844]: I0126 12:44:03.312491 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:44:03 crc kubenswrapper[4844]: E0126 12:44:03.312559 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 12:44:03 crc kubenswrapper[4844]: I0126 12:44:03.312589 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:44:03 crc kubenswrapper[4844]: E0126 12:44:03.312693 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 12:44:03 crc kubenswrapper[4844]: E0126 12:44:03.312745 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 12:44:03 crc kubenswrapper[4844]: I0126 12:44:03.318457 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 26 12:44:03 crc kubenswrapper[4844]: I0126 12:44:03.319632 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 26 12:44:03 crc kubenswrapper[4844]: I0126 12:44:03.321447 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 26 12:44:03 crc kubenswrapper[4844]: I0126 12:44:03.322480 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 26 12:44:03 crc kubenswrapper[4844]: I0126 12:44:03.323135 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 26 12:44:03 crc kubenswrapper[4844]: I0126 12:44:03.323846 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 26 12:44:03 crc kubenswrapper[4844]: I0126 12:44:03.325615 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 26 12:44:03 crc kubenswrapper[4844]: I0126 12:44:03.326327 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 26 12:44:03 crc kubenswrapper[4844]: I0126 12:44:03.327480 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 26 12:44:03 
crc kubenswrapper[4844]: I0126 12:44:03.328085 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 26 12:44:03 crc kubenswrapper[4844]: I0126 12:44:03.328713 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 26 12:44:03 crc kubenswrapper[4844]: I0126 12:44:03.328979 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aecfc1fc-7b8c-42f4-9a7b-058a0acc9534\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64c5d1e4b1fda825b451c8acde0411694fbf7345723d6c3927905a882a79d62e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e33100e90b0d5a23baf7c27076a19dbca18d3e067be6b5a867e122fac37c1136\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64ae899d3a38090964f3d951fd185537ce4ef75f7512342da01f67767d00417b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-op
erator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9207454bb697abf7b64a37e402b50734ae419605cd0938b514ac4f2a74561ce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b40515c0e8056b01e4d48e259ece9389fbb5db6c0d403f2846f28948ce78b0b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64a2d4054650382dca2f56688356aec5fa475b49958e86dd4597a64f77a82f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64a2d4054650382dca2f56688356aec5fa475b49958e86dd4597a64f77a82f81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:03Z is after 2025-08-24T17:21:41Z" Jan 
26 12:44:03 crc kubenswrapper[4844]: I0126 12:44:03.329817 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 26 12:44:03 crc kubenswrapper[4844]: I0126 12:44:03.330288 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 26 12:44:03 crc kubenswrapper[4844]: I0126 12:44:03.331179 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 26 12:44:03 crc kubenswrapper[4844]: I0126 12:44:03.331714 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 26 12:44:03 crc kubenswrapper[4844]: I0126 12:44:03.332582 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 26 12:44:03 crc kubenswrapper[4844]: I0126 12:44:03.333155 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 26 12:44:03 crc kubenswrapper[4844]: I0126 12:44:03.333530 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 26 12:44:03 crc kubenswrapper[4844]: I0126 12:44:03.334469 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 26 12:44:03 crc kubenswrapper[4844]: I0126 12:44:03.335028 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 26 12:44:03 crc kubenswrapper[4844]: I0126 12:44:03.335448 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 26 12:44:03 crc kubenswrapper[4844]: I0126 12:44:03.336557 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 26 12:44:03 crc kubenswrapper[4844]: I0126 12:44:03.337161 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 26 12:44:03 crc kubenswrapper[4844]: I0126 12:44:03.338412 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 26 12:44:03 crc kubenswrapper[4844]: I0126 12:44:03.339068 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 26 12:44:03 crc kubenswrapper[4844]: I0126 
12:44:03.340471 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 26 12:44:03 crc kubenswrapper[4844]: I0126 12:44:03.341317 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 26 12:44:03 crc kubenswrapper[4844]: I0126 12:44:03.342264 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 26 12:44:03 crc kubenswrapper[4844]: I0126 12:44:03.342876 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 26 12:44:03 crc kubenswrapper[4844]: I0126 12:44:03.343388 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 26 12:44:03 crc kubenswrapper[4844]: I0126 12:44:03.344258 4844 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 26 12:44:03 crc kubenswrapper[4844]: I0126 12:44:03.344381 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 26 12:44:03 crc kubenswrapper[4844]: I0126 12:44:03.345499 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:03Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:03 crc kubenswrapper[4844]: I0126 12:44:03.346673 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 26 12:44:03 crc kubenswrapper[4844]: I0126 12:44:03.347337 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 26 12:44:03 crc kubenswrapper[4844]: I0126 12:44:03.347896 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 26 12:44:03 crc kubenswrapper[4844]: I0126 12:44:03.349214 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 26 12:44:03 crc kubenswrapper[4844]: I0126 12:44:03.350041 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 26 12:44:03 crc kubenswrapper[4844]: I0126 12:44:03.350725 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 26 12:44:03 crc kubenswrapper[4844]: I0126 12:44:03.351488 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 26 12:44:03 crc kubenswrapper[4844]: I0126 12:44:03.352280 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 26 12:44:03 crc kubenswrapper[4844]: I0126 12:44:03.352911 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 26 12:44:03 crc kubenswrapper[4844]: I0126 12:44:03.353643 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 26 12:44:03 crc kubenswrapper[4844]: I0126 12:44:03.354430 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 26 12:44:03 crc kubenswrapper[4844]: I0126 12:44:03.355212 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 26 12:44:03 crc kubenswrapper[4844]: I0126 12:44:03.356882 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 26 12:44:03 crc kubenswrapper[4844]: I0126 12:44:03.357623 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 26 12:44:03 crc kubenswrapper[4844]: I0126 12:44:03.358790 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 26 12:44:03 crc kubenswrapper[4844]: I0126 12:44:03.359221 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://708e5cab2c0d22cab1f1e5c02d0f8afbdc0fcdccc9a56b663f365bfa4c7f8709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f61077567553b104f052a133a6227c3a887430e971721e7998259e7e7ea7edf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T
12:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:03Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:03 crc kubenswrapper[4844]: I0126 12:44:03.359748 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 26 12:44:03 crc kubenswrapper[4844]: I0126 12:44:03.360774 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 26 12:44:03 crc kubenswrapper[4844]: I0126 12:44:03.361473 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 26 12:44:03 crc kubenswrapper[4844]: I0126 12:44:03.362084 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 26 12:44:03 crc kubenswrapper[4844]: I0126 12:44:03.363210 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 26 12:44:03 crc kubenswrapper[4844]: I0126 12:44:03.363923 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 26 12:44:03 crc kubenswrapper[4844]: I0126 12:44:03.364986 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 26 12:44:03 crc kubenswrapper[4844]: I0126 12:44:03.370739 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:03Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:03 crc kubenswrapper[4844]: I0126 12:44:03.394698 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2c6313f-dd44-4db9-a53a-63c8a42efe6c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b82658e4e6ddbf4e3a16f9d569c5cef6a683d401d79b7fa7f55aad8c70e8254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66ffcb99cad7e5aff2bbe4bdf3c0a2365dd91fb40b2382e05ab1f422c6ea2b26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f49e7dd36dc88342bc2e3bc43fc5770fb43572151945cba3867f9ffd7fa451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b51b34a870f5c4045729aa5ee5d5fb85ebd58
fd55a25c5b6effe0ea75ea8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9118772906fa09f9868a34942401b399a842e6718caa3d82b78b2974e1011cca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa0e17b53055bd229c52c53b69e491e0789569e73004101378a34609888bbbc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa0e17b53055bd229c52c53b69e491e0789569e73004101378a34609888bbbc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a99b81e3524dfff792c3e9a274f2460edfe7c29fb27c556fa1c7e427178ccd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a99b81e3524dfff792c3e9a274f2460edfe7c29fb27c556fa1c7e427178ccd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1cd347949870be709220badbf116b4ed8ff9db3962a03361e42f5ff1c61eb5ea\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cd347949870be709220badbf116b4ed8ff9db3962a03361e42f5ff1c61eb5ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:03Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:03 crc kubenswrapper[4844]: I0126 12:44:03.408655 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd60ce73390532e974f85d610708534f2ab6bcc38e93c54516cb783aca1bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:03Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:03 crc kubenswrapper[4844]: I0126 12:44:03.422558 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:03Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:03 crc kubenswrapper[4844]: I0126 12:44:03.433387 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers 
with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:03Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:04 crc kubenswrapper[4844]: I0126 12:44:04.277417 4844 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 18:59:01.95508402 +0000 UTC Jan 26 12:44:04 crc kubenswrapper[4844]: I0126 12:44:04.427947 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"6c6cdfcbb27c9a8c515f38e43d4813f25d2d3781a322e424e09ab047259bd0e6"} Jan 26 12:44:04 crc kubenswrapper[4844]: I0126 12:44:04.438972 4844 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 12:44:04 crc kubenswrapper[4844]: I0126 12:44:04.440719 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:04 crc kubenswrapper[4844]: I0126 12:44:04.440770 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:04 crc kubenswrapper[4844]: I0126 12:44:04.440791 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:04 crc kubenswrapper[4844]: I0126 12:44:04.440861 4844 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 26 12:44:04 crc kubenswrapper[4844]: I0126 12:44:04.456289 4844 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 26 12:44:04 crc kubenswrapper[4844]: I0126 12:44:04.456449 4844 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 26 12:44:04 crc kubenswrapper[4844]: I0126 12:44:04.457895 4844 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:04 crc kubenswrapper[4844]: I0126 12:44:04.457975 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:04 crc kubenswrapper[4844]: I0126 12:44:04.457992 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:04 crc kubenswrapper[4844]: I0126 12:44:04.458013 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:04 crc kubenswrapper[4844]: I0126 12:44:04.458054 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:04Z","lastTransitionTime":"2026-01-26T12:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:04 crc kubenswrapper[4844]: I0126 12:44:04.471359 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aecfc1fc-7b8c-42f4-9a7b-058a0acc9534\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64c5d1e4b1fda825b451c8acde0411694fbf7345723d6c3927905a882a79d62e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e33100e90b0d5a23baf7c27076a19dbca18d3e067be6b5a867e122fac37c1136\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver
-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64ae899d3a38090964f3d951fd185537ce4ef75f7512342da01f67767d00417b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9207454bb697abf7b64a37e402b50734ae419605cd0938b514ac4f2a74561ce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b40515c0e8056b01e4d48e259ece9389fbb5db6c0d403f2846f28948ce78b0b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64a2d4054650382dca2f56688356aec5fa475b49958e86dd4597a64f77a82f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64a2d4054650382dca2f56688356aec5fa475b49958e86dd4597a64f77a82f81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:04Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:04 crc kubenswrapper[4844]: I0126 12:44:04.504374 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:04Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:04 crc kubenswrapper[4844]: E0126 12:44:04.520206 4844 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8ec9310b-463d-4f1d-a480-c21f33e8b459\\\",\\\"systemUUID\\\":\\\"4eb778d6-9226-440d-bd27-0b6f19659b0d\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:04Z is after 
2025-08-24T17:21:41Z" Jan 26 12:44:04 crc kubenswrapper[4844]: I0126 12:44:04.525826 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:04 crc kubenswrapper[4844]: I0126 12:44:04.525878 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:04 crc kubenswrapper[4844]: I0126 12:44:04.525890 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:04 crc kubenswrapper[4844]: I0126 12:44:04.525907 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:04 crc kubenswrapper[4844]: I0126 12:44:04.525919 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:04Z","lastTransitionTime":"2026-01-26T12:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:04 crc kubenswrapper[4844]: I0126 12:44:04.528351 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://708e5cab2c0d22cab1f1e5c02d0f8afbdc0fcdccc9a56b663f365bfa4c7f8709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f61077567553b104f052a133a6227c3a887430e971721e7998259e7e7ea7edf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\
\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:04Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:04 crc kubenswrapper[4844]: E0126 12:44:04.543393 4844 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8ec9310b-463d-4f1d-a480-c21f33e8b459\\\",\\\"systemUUID\\\":\\\"4eb778d6-9226-440d-bd27-0b6f19659b0d\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:04Z is after 
2025-08-24T17:21:41Z" Jan 26 12:44:04 crc kubenswrapper[4844]: I0126 12:44:04.544705 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:04Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:04 crc kubenswrapper[4844]: I0126 12:44:04.547017 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:04 crc kubenswrapper[4844]: I0126 12:44:04.547063 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:04 crc kubenswrapper[4844]: I0126 12:44:04.547078 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:04 crc kubenswrapper[4844]: I0126 12:44:04.547097 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:04 crc kubenswrapper[4844]: I0126 12:44:04.547110 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:04Z","lastTransitionTime":"2026-01-26T12:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:04 crc kubenswrapper[4844]: I0126 12:44:04.558169 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd60ce73390532e974f85d610708534f2ab6bcc38e93c54516cb783aca1bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:04Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:04 crc kubenswrapper[4844]: E0126 12:44:04.558943 4844 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeByt
es\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8ec9310b-463d-4f1d-a480-c21f33e8b459\\\",\\\"systemUUID\\\":\\\"4
eb778d6-9226-440d-bd27-0b6f19659b0d\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:04Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:04 crc kubenswrapper[4844]: I0126 12:44:04.565215 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:04 crc kubenswrapper[4844]: I0126 12:44:04.565255 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:04 crc kubenswrapper[4844]: I0126 12:44:04.565267 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:04 crc kubenswrapper[4844]: I0126 12:44:04.565283 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:04 crc kubenswrapper[4844]: I0126 12:44:04.565295 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:04Z","lastTransitionTime":"2026-01-26T12:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:04 crc kubenswrapper[4844]: I0126 12:44:04.572263 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:04Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:04 crc kubenswrapper[4844]: E0126 12:44:04.577458 4844 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8ec9310b-463d-4f1d-a480-c21f33e8b459\\\",\\\"systemUUID\\\":\\\"4eb778d6-9226-440d-bd27-0b6f19659b0d\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:04Z is after 
2025-08-24T17:21:41Z" Jan 26 12:44:04 crc kubenswrapper[4844]: I0126 12:44:04.580775 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:04 crc kubenswrapper[4844]: I0126 12:44:04.580822 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:04 crc kubenswrapper[4844]: I0126 12:44:04.580837 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:04 crc kubenswrapper[4844]: I0126 12:44:04.580853 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:04 crc kubenswrapper[4844]: I0126 12:44:04.580866 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:04Z","lastTransitionTime":"2026-01-26T12:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:04 crc kubenswrapper[4844]: I0126 12:44:04.582801 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c6cdfcbb27c9a8c515f38e43d4813f25d2d3781a322e424e09ab047259bd0e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:04Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:04 crc kubenswrapper[4844]: E0126 
12:44:04.592139 4844 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"r
egistry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478
274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305
be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8ec9310b-463d-4f1d-a480-c21f33e8b459\\\",\\\"systemUUID\\\":\\\"4eb778d6-9226-440d-bd27-0b6f19659b0d\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:04Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:04 crc kubenswrapper[4844]: E0126 12:44:04.592266 4844 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 12:44:04 crc kubenswrapper[4844]: I0126 12:44:04.593877 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:04 crc kubenswrapper[4844]: I0126 12:44:04.593911 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:04 crc kubenswrapper[4844]: I0126 12:44:04.593921 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:04 crc kubenswrapper[4844]: I0126 12:44:04.593934 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:04 crc kubenswrapper[4844]: I0126 12:44:04.593944 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:04Z","lastTransitionTime":"2026-01-26T12:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:04 crc kubenswrapper[4844]: I0126 12:44:04.604218 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2c6313f-dd44-4db9-a53a-63c8a42efe6c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b82658e4e6ddbf4e3a16f9d569c5cef6a683d401d79b7fa7f55aad8c70e8254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66ffcb99cad7e5aff2bbe4bdf3c0a2365dd91fb40b2382e05ab1f422c6ea2b26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f49e7dd36dc88342bc2e3bc43fc5770fb43572151945cba3867f9ffd7fa451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b51b34a870f5c4045729aa5ee5d5fb85ebd58fd55a25c5b6effe0ea75ea8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9118772906fa09f9868a34942401b399a842e6718caa3d82b78b2974e1011cca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa0e17b53055bd229c52c53b69e491e0789569e73004101378a34609888bbbc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa0e17b53055bd229c52c53b69e491e0789569e73004101378a34609888bbbc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a99b81e3524dfff792c3e9a274f2460edfe7c29fb27c556fa1c7e427178ccd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a99b81e3524dfff792c3e9a274f2460edfe7c29fb27c556fa1c7e427178ccd\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1cd347949870be709220badbf116b4ed8ff9db3962a03361e42f5ff1c61eb5ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cd347949870be709220badbf116b4ed8ff9db3962a03361e42f5ff1c61eb5ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:04Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:04 crc kubenswrapper[4844]: I0126 12:44:04.696234 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:04 crc kubenswrapper[4844]: I0126 12:44:04.696320 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:04 crc kubenswrapper[4844]: I0126 12:44:04.696340 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:04 crc kubenswrapper[4844]: I0126 12:44:04.696367 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:04 crc kubenswrapper[4844]: I0126 12:44:04.696384 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:04Z","lastTransitionTime":"2026-01-26T12:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:04 crc kubenswrapper[4844]: I0126 12:44:04.798745 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:04 crc kubenswrapper[4844]: I0126 12:44:04.798806 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:04 crc kubenswrapper[4844]: I0126 12:44:04.798824 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:04 crc kubenswrapper[4844]: I0126 12:44:04.798849 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:04 crc kubenswrapper[4844]: I0126 12:44:04.798867 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:04Z","lastTransitionTime":"2026-01-26T12:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:04 crc kubenswrapper[4844]: I0126 12:44:04.901777 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:04 crc kubenswrapper[4844]: I0126 12:44:04.901823 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:04 crc kubenswrapper[4844]: I0126 12:44:04.901838 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:04 crc kubenswrapper[4844]: I0126 12:44:04.901863 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:04 crc kubenswrapper[4844]: I0126 12:44:04.901879 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:04Z","lastTransitionTime":"2026-01-26T12:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:04 crc kubenswrapper[4844]: I0126 12:44:04.998692 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:44:04 crc kubenswrapper[4844]: E0126 12:44:04.998848 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:44:08.998817791 +0000 UTC m=+25.932185433 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:44:04 crc kubenswrapper[4844]: I0126 12:44:04.998908 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:44:04 crc kubenswrapper[4844]: I0126 12:44:04.998962 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:44:04 crc kubenswrapper[4844]: E0126 12:44:04.999098 4844 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 12:44:04 crc kubenswrapper[4844]: E0126 12:44:04.999155 4844 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 12:44:04 crc kubenswrapper[4844]: E0126 12:44:04.999178 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 12:44:08.999160799 +0000 UTC m=+25.932528441 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 12:44:04 crc kubenswrapper[4844]: E0126 12:44:04.999281 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 12:44:08.999255331 +0000 UTC m=+25.932622983 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 12:44:05 crc kubenswrapper[4844]: I0126 12:44:05.004829 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:05 crc kubenswrapper[4844]: I0126 12:44:05.004888 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:05 crc kubenswrapper[4844]: I0126 12:44:05.004911 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:05 crc kubenswrapper[4844]: I0126 12:44:05.004940 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:05 crc kubenswrapper[4844]: I0126 12:44:05.004961 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:05Z","lastTransitionTime":"2026-01-26T12:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:05 crc kubenswrapper[4844]: I0126 12:44:05.100147 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:44:05 crc kubenswrapper[4844]: I0126 12:44:05.100195 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:44:05 crc kubenswrapper[4844]: E0126 12:44:05.100325 4844 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 12:44:05 crc kubenswrapper[4844]: E0126 12:44:05.100326 4844 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 12:44:05 crc kubenswrapper[4844]: E0126 12:44:05.100376 4844 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 12:44:05 crc kubenswrapper[4844]: E0126 12:44:05.100343 4844 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 12:44:05 crc kubenswrapper[4844]: E0126 12:44:05.100396 4844 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 12:44:05 crc kubenswrapper[4844]: E0126 12:44:05.100398 4844 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 12:44:05 crc kubenswrapper[4844]: E0126 12:44:05.100463 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 12:44:09.100445101 +0000 UTC m=+26.033812733 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 12:44:05 crc kubenswrapper[4844]: E0126 12:44:05.100481 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 12:44:09.100472551 +0000 UTC m=+26.033840183 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 12:44:05 crc kubenswrapper[4844]: I0126 12:44:05.107939 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:05 crc kubenswrapper[4844]: I0126 12:44:05.107981 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:05 crc kubenswrapper[4844]: I0126 12:44:05.107997 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:05 crc kubenswrapper[4844]: I0126 12:44:05.108015 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:05 crc kubenswrapper[4844]: I0126 12:44:05.108028 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:05Z","lastTransitionTime":"2026-01-26T12:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 26 12:44:05 crc kubenswrapper[4844]: I0126 12:44:05.278656 4844 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 20:06:01.546970713 +0000 UTC
Jan 26 12:44:05 crc kubenswrapper[4844]: I0126 12:44:05.312315 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 26 12:44:05 crc kubenswrapper[4844]: I0126 12:44:05.312352 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 26 12:44:05 crc kubenswrapper[4844]: E0126 12:44:05.312535 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 26 12:44:05 crc kubenswrapper[4844]: I0126 12:44:05.312583 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 26 12:44:05 crc kubenswrapper[4844]: E0126 12:44:05.312707 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 26 12:44:05 crc kubenswrapper[4844]: E0126 12:44:05.312841 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
[... NodeNotReady status block recurs every ~100 ms from 12:44:05.314 through 12:44:06.242; identical entries elided ...]
Jan 26 12:44:06 crc kubenswrapper[4844]: I0126 12:44:06.279113 4844 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 05:01:31.853283053 +0000 UTC
[... NodeNotReady status block recurs every ~100 ms from 12:44:06.346 through 12:44:07.068; identical entries elided ...]
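
Each elided Ready=False heartbeat repeats one root cause: no CNI configuration file in /etc/kubernetes/cni/net.d/. A self-contained sketch of that same check; the directory comes from the log, while the extension list follows the usual CNI convention and is an assumption here:

package main

import (
    "fmt"
    "os"
    "path/filepath"
    "strings"
)

func main() {
    // Directory named in the kubelet errors above.
    dir := "/etc/kubernetes/cni/net.d"
    entries, err := os.ReadDir(dir)
    if err != nil {
        fmt.Printf("cannot read %s: %v\n", dir, err)
        return
    }
    found := false
    for _, e := range entries {
        switch strings.ToLower(filepath.Ext(e.Name())) {
        case ".conf", ".conflist", ".json":
            fmt.Println("CNI config present:", filepath.Join(dir, e.Name()))
            found = true
        }
    }
    if !found {
        // Matches the kubelet's complaint: the network plugin has not written its config yet.
        fmt.Println("no CNI configuration files in", dir)
    }
}

With the multus and OVN pods only being scheduled later in this log, an empty directory at this point is expected; the kubelet flips Ready back to True once a config file appears.
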
[... NodeNotReady status block recurs at 12:44:07.174; identical entries elided ...]
Jan 26 12:44:07 crc kubenswrapper[4844]: I0126 12:44:07.281412 4844 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 21:41:25.969404449 +0000 UTC
[... NodeNotReady status block recurs at 12:44:07.294; identical entries elided ...]
Jan 26 12:44:07 crc kubenswrapper[4844]: I0126 12:44:07.316805 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 26 12:44:07 crc kubenswrapper[4844]: I0126 12:44:07.316932 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 26 12:44:07 crc kubenswrapper[4844]: E0126 12:44:07.317062 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 26 12:44:07 crc kubenswrapper[4844]: I0126 12:44:07.316830 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 26 12:44:07 crc kubenswrapper[4844]: E0126 12:44:07.317205 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 26 12:44:07 crc kubenswrapper[4844]: E0126 12:44:07.317383 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
[... NodeNotReady status block recurs at 12:44:07.396; identical entries elided ...]
Jan 26 12:44:07 crc kubenswrapper[4844]: I0126 12:44:07.411105 4844 csr.go:261] certificate signing request csr-r6crb is approved, waiting to be issued
Jan 26 12:44:07 crc kubenswrapper[4844]: I0126 12:44:07.438923 4844 csr.go:257] certificate signing request csr-r6crb is issued
[... NodeNotReady status block recurs every ~100 ms from 12:44:07.499 through 12:44:08.217; identical entries elided ...]
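
csr-r6crb above is the kubelet-serving certificate request whose rotation the certificate_manager lines have been tracking. A sketch that reads a CSR's approval conditions and issued certificate through the certificates.k8s.io/v1 client; the CSR name comes from the log, everything else is illustrative:

package main

import (
    "context"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // illustrative path
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    csr, err := cs.CertificatesV1().CertificateSigningRequests().Get(context.TODO(), "csr-r6crb", metav1.GetOptions{})
    if err != nil {
        panic(err)
    }
    for _, cond := range csr.Status.Conditions {
        fmt.Printf("%s: %s (%s)\n", cond.Type, cond.Status, cond.Message)
    }
    // Status.Certificate stays empty between "approved, waiting to be issued" and "issued".
    fmt.Printf("issued certificate: %d bytes\n", len(csr.Status.Certificate))
}
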
Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.282218 4844 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 13:22:58.114065699 +0000 UTC
Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.309305 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-zb9kx"]
Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.309712 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-zb9kx"
Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.310203 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-94bpf"]
Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.310477 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-j7r9j"]
Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.310637 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-94bpf"
Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.310821 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j"
Jan 26 12:44:08 crc kubenswrapper[4844]: W0126 12:44:08.311969 4844 reflector.go:561] object-"openshift-multus"/"cni-copy-resources": failed to list *v1.ConfigMap: configmaps "cni-copy-resources" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-multus": no relationship found between node 'crc' and this object
Jan 26 12:44:08 crc kubenswrapper[4844]: E0126 12:44:08.312022 4844 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"cni-copy-resources\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cni-copy-resources\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-multus\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.312052 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-f6ttt"]
Jan 26 12:44:08 crc kubenswrapper[4844]: W0126 12:44:08.312464 4844 reflector.go:561] object-"openshift-multus"/"default-dockercfg-2q5b6": failed to list *v1.Secret: secrets "default-dockercfg-2q5b6" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-multus": no relationship found between node 'crc' and this object
Jan 26 12:44:08 crc kubenswrapper[4844]: E0126 12:44:08.312488 4844 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"default-dockercfg-2q5b6\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"default-dockercfg-2q5b6\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-multus\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.312678 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-f6ttt" Jan 26 12:44:08 crc kubenswrapper[4844]: W0126 12:44:08.313037 4844 reflector.go:561] object-"openshift-multus"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-multus": no relationship found between node 'crc' and this object Jan 26 12:44:08 crc kubenswrapper[4844]: W0126 12:44:08.313056 4844 reflector.go:561] object-"openshift-dns"/"node-resolver-dockercfg-kz9s7": failed to list *v1.Secret: secrets "node-resolver-dockercfg-kz9s7" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-dns": no relationship found between node 'crc' and this object Jan 26 12:44:08 crc kubenswrapper[4844]: E0126 12:44:08.313079 4844 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-multus\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 26 12:44:08 crc kubenswrapper[4844]: E0126 12:44:08.313083 4844 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns\"/\"node-resolver-dockercfg-kz9s7\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"node-resolver-dockercfg-kz9s7\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-dns\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 26 12:44:08 crc kubenswrapper[4844]: W0126 12:44:08.313031 4844 reflector.go:561] object-"openshift-machine-config-operator"/"kube-rbac-proxy": failed to list *v1.ConfigMap: configmaps "kube-rbac-proxy" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-machine-config-operator": no relationship found between node 'crc' and this object Jan 26 12:44:08 crc kubenswrapper[4844]: E0126 12:44:08.313114 4844 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-rbac-proxy\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-machine-config-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 26 12:44:08 crc kubenswrapper[4844]: W0126 12:44:08.314024 4844 reflector.go:561] object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz": failed to list *v1.Secret: secrets "multus-ancillary-tools-dockercfg-vnmsz" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-multus": no relationship found between node 'crc' and this object Jan 26 12:44:08 crc kubenswrapper[4844]: E0126 12:44:08.314079 4844 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-vnmsz\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"multus-ancillary-tools-dockercfg-vnmsz\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-multus\": no 
relationship found between node 'crc' and this object" logger="UnhandledError" Jan 26 12:44:08 crc kubenswrapper[4844]: W0126 12:44:08.314162 4844 reflector.go:561] object-"openshift-multus"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-multus": no relationship found between node 'crc' and this object Jan 26 12:44:08 crc kubenswrapper[4844]: E0126 12:44:08.314180 4844 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-multus\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 26 12:44:08 crc kubenswrapper[4844]: W0126 12:44:08.314233 4844 reflector.go:561] object-"openshift-dns"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-dns": no relationship found between node 'crc' and this object Jan 26 12:44:08 crc kubenswrapper[4844]: E0126 12:44:08.314246 4844 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-dns\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 26 12:44:08 crc kubenswrapper[4844]: W0126 12:44:08.314360 4844 reflector.go:561] object-"openshift-dns"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-dns": no relationship found between node 'crc' and this object Jan 26 12:44:08 crc kubenswrapper[4844]: E0126 12:44:08.314377 4844 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-dns\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 26 12:44:08 crc kubenswrapper[4844]: W0126 12:44:08.315428 4844 reflector.go:561] object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq": failed to list *v1.Secret: secrets "machine-config-daemon-dockercfg-r5tcq" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-machine-config-operator": no relationship found between node 'crc' and this object Jan 26 12:44:08 crc kubenswrapper[4844]: E0126 12:44:08.315473 4844 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-r5tcq\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"machine-config-daemon-dockercfg-r5tcq\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-machine-config-operator\": no relationship found between node 'crc' and this object" 
logger="UnhandledError" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.315567 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.315723 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.316857 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.316899 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.317648 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.319567 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.319618 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.319630 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.319647 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.319689 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:08Z","lastTransitionTime":"2026-01-26T12:44:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.343781 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2c6313f-dd44-4db9-a53a-63c8a42efe6c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b82658e4e6ddbf4e3a16f9d569c5cef6a683d401d79b7fa7f55aad8c70e8254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66ffcb99cad7e5aff2bbe4bdf3c0a2365dd91fb40b2382e05ab1f422c6ea2b26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f49e7dd36dc88342bc2e3bc43fc5770fb43572151945cba3867f9ffd7fa451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b51b34a870f5c4045729aa5ee5d5fb85ebd58fd55a25c5b6effe0ea75ea8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9118772906fa09f9868a34942401b399a842e6718caa3d82b78b2974e1011cca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa0e17b53055bd229c52c53b69e491e0789569e73004101378a34609888bbbc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa0e17b53055bd229c52c53b69e491e0789569e73004101378a34609888bbbc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a99b81e3524dfff792c3e9a274f2460edfe7c29fb27c556fa1c7e427178ccd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a99b81e3524dfff792c3e9a274f2460edfe7c29fb27c556fa1c7e427178ccd\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1cd347949870be709220badbf116b4ed8ff9db3962a03361e42f5ff1c61eb5ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cd347949870be709220badbf116b4ed8ff9db3962a03361e42f5ff1c61eb5ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:08Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.363672 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd60ce73390532e974f85d610708534f2ab6bcc38e93c54516cb783aca1bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:08Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.386592 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:08Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.393531 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.398404 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.403650 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.411198 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c6cdfcbb27c9a8c515f38e43d4813f25d2d3781a322e424e09ab047259bd0e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:08Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.421874 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.421909 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.421920 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.421937 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.421949 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:08Z","lastTransitionTime":"2026-01-26T12:44:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.427761 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zb9kx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"467433a4-64be-4a14-beb2-657370e9865f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v76sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\
\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zb9kx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:08Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.430410 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e0ad2def-b040-48db-be8a-19f66df2c0f2-os-release\") pod \"multus-additional-cni-plugins-f6ttt\" (UID: \"e0ad2def-b040-48db-be8a-19f66df2c0f2\") " pod="openshift-multus/multus-additional-cni-plugins-f6ttt" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.430462 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qn7g\" (UniqueName: \"kubernetes.io/projected/e0ad2def-b040-48db-be8a-19f66df2c0f2-kube-api-access-7qn7g\") pod \"multus-additional-cni-plugins-f6ttt\" (UID: \"e0ad2def-b040-48db-be8a-19f66df2c0f2\") " pod="openshift-multus/multus-additional-cni-plugins-f6ttt" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.430484 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/467433a4-64be-4a14-beb2-657370e9865f-system-cni-dir\") pod \"multus-zb9kx\" (UID: \"467433a4-64be-4a14-beb2-657370e9865f\") " pod="openshift-multus/multus-zb9kx" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.430504 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/467433a4-64be-4a14-beb2-657370e9865f-multus-daemon-config\") pod \"multus-zb9kx\" (UID: \"467433a4-64be-4a14-beb2-657370e9865f\") " pod="openshift-multus/multus-zb9kx" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.430523 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e0ad2def-b040-48db-be8a-19f66df2c0f2-cnibin\") pod \"multus-additional-cni-plugins-f6ttt\" (UID: \"e0ad2def-b040-48db-be8a-19f66df2c0f2\") " pod="openshift-multus/multus-additional-cni-plugins-f6ttt" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.430540 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/e0ad2def-b040-48db-be8a-19f66df2c0f2-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-f6ttt\" (UID: \"e0ad2def-b040-48db-be8a-19f66df2c0f2\") " pod="openshift-multus/multus-additional-cni-plugins-f6ttt" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.430558 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/467433a4-64be-4a14-beb2-657370e9865f-multus-conf-dir\") pod \"multus-zb9kx\" (UID: \"467433a4-64be-4a14-beb2-657370e9865f\") " pod="openshift-multus/multus-zb9kx" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.430574 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/e3602fc7-397b-4d73-ab0c-45acc047397b-proxy-tls\") pod \"machine-config-daemon-j7r9j\" (UID: \"e3602fc7-397b-4d73-ab0c-45acc047397b\") " pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.430593 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e0ad2def-b040-48db-be8a-19f66df2c0f2-cni-binary-copy\") pod \"multus-additional-cni-plugins-f6ttt\" (UID: \"e0ad2def-b040-48db-be8a-19f66df2c0f2\") " pod="openshift-multus/multus-additional-cni-plugins-f6ttt" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.430627 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/467433a4-64be-4a14-beb2-657370e9865f-host-var-lib-cni-multus\") pod \"multus-zb9kx\" (UID: \"467433a4-64be-4a14-beb2-657370e9865f\") " pod="openshift-multus/multus-zb9kx" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.430643 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v76sw\" (UniqueName: \"kubernetes.io/projected/467433a4-64be-4a14-beb2-657370e9865f-kube-api-access-v76sw\") pod \"multus-zb9kx\" (UID: \"467433a4-64be-4a14-beb2-657370e9865f\") " pod="openshift-multus/multus-zb9kx" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.430660 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e0ad2def-b040-48db-be8a-19f66df2c0f2-tuning-conf-dir\") pod \"multus-additional-cni-plugins-f6ttt\" (UID: \"e0ad2def-b040-48db-be8a-19f66df2c0f2\") " pod="openshift-multus/multus-additional-cni-plugins-f6ttt" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.430677 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/467433a4-64be-4a14-beb2-657370e9865f-cnibin\") pod \"multus-zb9kx\" (UID: \"467433a4-64be-4a14-beb2-657370e9865f\") " pod="openshift-multus/multus-zb9kx" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.430691 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/467433a4-64be-4a14-beb2-657370e9865f-host-var-lib-cni-bin\") pod \"multus-zb9kx\" (UID: \"467433a4-64be-4a14-beb2-657370e9865f\") " pod="openshift-multus/multus-zb9kx" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.430747 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29xcb\" (UniqueName: \"kubernetes.io/projected/e3602fc7-397b-4d73-ab0c-45acc047397b-kube-api-access-29xcb\") pod \"machine-config-daemon-j7r9j\" (UID: \"e3602fc7-397b-4d73-ab0c-45acc047397b\") " pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.430806 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/467433a4-64be-4a14-beb2-657370e9865f-host-run-k8s-cni-cncf-io\") pod \"multus-zb9kx\" (UID: \"467433a4-64be-4a14-beb2-657370e9865f\") " pod="openshift-multus/multus-zb9kx" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 
12:44:08.430827 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/467433a4-64be-4a14-beb2-657370e9865f-host-run-netns\") pod \"multus-zb9kx\" (UID: \"467433a4-64be-4a14-beb2-657370e9865f\") " pod="openshift-multus/multus-zb9kx" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.430843 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/467433a4-64be-4a14-beb2-657370e9865f-host-run-multus-certs\") pod \"multus-zb9kx\" (UID: \"467433a4-64be-4a14-beb2-657370e9865f\") " pod="openshift-multus/multus-zb9kx" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.430863 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/467433a4-64be-4a14-beb2-657370e9865f-multus-socket-dir-parent\") pod \"multus-zb9kx\" (UID: \"467433a4-64be-4a14-beb2-657370e9865f\") " pod="openshift-multus/multus-zb9kx" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.430892 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/e3602fc7-397b-4d73-ab0c-45acc047397b-rootfs\") pod \"machine-config-daemon-j7r9j\" (UID: \"e3602fc7-397b-4d73-ab0c-45acc047397b\") " pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.430959 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e0ad2def-b040-48db-be8a-19f66df2c0f2-system-cni-dir\") pod \"multus-additional-cni-plugins-f6ttt\" (UID: \"e0ad2def-b040-48db-be8a-19f66df2c0f2\") " pod="openshift-multus/multus-additional-cni-plugins-f6ttt" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.431034 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-456bf\" (UniqueName: \"kubernetes.io/projected/14600b66-6352-4f5e-9c09-eb2548503555-kube-api-access-456bf\") pod \"node-resolver-94bpf\" (UID: \"14600b66-6352-4f5e-9c09-eb2548503555\") " pod="openshift-dns/node-resolver-94bpf" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.431056 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/467433a4-64be-4a14-beb2-657370e9865f-hostroot\") pod \"multus-zb9kx\" (UID: \"467433a4-64be-4a14-beb2-657370e9865f\") " pod="openshift-multus/multus-zb9kx" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.431073 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/467433a4-64be-4a14-beb2-657370e9865f-etc-kubernetes\") pod \"multus-zb9kx\" (UID: \"467433a4-64be-4a14-beb2-657370e9865f\") " pod="openshift-multus/multus-zb9kx" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.431089 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e3602fc7-397b-4d73-ab0c-45acc047397b-mcd-auth-proxy-config\") pod \"machine-config-daemon-j7r9j\" (UID: \"e3602fc7-397b-4d73-ab0c-45acc047397b\") " 
pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.431121 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/467433a4-64be-4a14-beb2-657370e9865f-multus-cni-dir\") pod \"multus-zb9kx\" (UID: \"467433a4-64be-4a14-beb2-657370e9865f\") " pod="openshift-multus/multus-zb9kx" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.431145 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/14600b66-6352-4f5e-9c09-eb2548503555-hosts-file\") pod \"node-resolver-94bpf\" (UID: \"14600b66-6352-4f5e-9c09-eb2548503555\") " pod="openshift-dns/node-resolver-94bpf" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.431167 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/467433a4-64be-4a14-beb2-657370e9865f-os-release\") pod \"multus-zb9kx\" (UID: \"467433a4-64be-4a14-beb2-657370e9865f\") " pod="openshift-multus/multus-zb9kx" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.431183 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/467433a4-64be-4a14-beb2-657370e9865f-cni-binary-copy\") pod \"multus-zb9kx\" (UID: \"467433a4-64be-4a14-beb2-657370e9865f\") " pod="openshift-multus/multus-zb9kx" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.431202 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/467433a4-64be-4a14-beb2-657370e9865f-host-var-lib-kubelet\") pod \"multus-zb9kx\" (UID: \"467433a4-64be-4a14-beb2-657370e9865f\") " pod="openshift-multus/multus-zb9kx" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.439896 4844 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-26 12:39:07 +0000 UTC, rotation deadline is 2026-12-03 07:26:25.927084944 +0000 UTC Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.439942 4844 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7458h42m17.487145484s for next certificate rotation Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.446702 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aecfc1fc-7b8c-42f4-9a7b-058a0acc9534\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64c5d1e4b1fda825b451c8acde0411694fbf7345723d6c3927905a882a79d62e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e33100e90b0d5a23baf7c27076a19dbca18d3e067be6b5a867e122fac37c1136\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64ae899d3a38090964f3d951fd185537ce4ef75f7512342da01f67767d00417b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9207454bb697abf7b64a37e402b50734ae419605cd0938b514ac4f2a74561ce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b40515c0e8056b01e4d48e259ece9389fbb5db6c0d403f2846f28948ce78b0b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64a2d4054650382dca2f56688356aec5fa475b49958e86dd4597a64f77a82f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64a2d4054650382dca2f56688356aec5fa475b49958e86dd4597a64f77a82f81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:08Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.463704 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:08Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.474125 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://708e5cab2c0d22cab1f1e5c02d0f8afbdc0fcdccc9a56b663f365bfa4c7f8709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f61077567553b104f052a133a6227c3a887430e971721e7998259e7e7ea7edf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:08Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.487622 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:08Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.498753 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zb9kx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"467433a4-64be-4a14-beb2-657370e9865f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v76sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zb9kx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:08Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.512657 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3602fc7-397b-4d73-ab0c-45acc047397b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29xcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29xcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j7r9j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:08Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.523909 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.523941 4844 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.523952 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.523968 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.523978 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:08Z","lastTransitionTime":"2026-01-26T12:44:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.527446 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f6ttt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0ad2def-b040-48db-be8a-19f66df2c0f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f6ttt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:08Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.532003 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/467433a4-64be-4a14-beb2-657370e9865f-os-release\") pod \"multus-zb9kx\" (UID: \"467433a4-64be-4a14-beb2-657370e9865f\") " pod="openshift-multus/multus-zb9kx" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.532049 4844 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/467433a4-64be-4a14-beb2-657370e9865f-cni-binary-copy\") pod \"multus-zb9kx\" (UID: \"467433a4-64be-4a14-beb2-657370e9865f\") " pod="openshift-multus/multus-zb9kx" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.532080 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/467433a4-64be-4a14-beb2-657370e9865f-host-var-lib-kubelet\") pod \"multus-zb9kx\" (UID: \"467433a4-64be-4a14-beb2-657370e9865f\") " pod="openshift-multus/multus-zb9kx" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.532103 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/467433a4-64be-4a14-beb2-657370e9865f-system-cni-dir\") pod \"multus-zb9kx\" (UID: \"467433a4-64be-4a14-beb2-657370e9865f\") " pod="openshift-multus/multus-zb9kx" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.532112 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/467433a4-64be-4a14-beb2-657370e9865f-os-release\") pod \"multus-zb9kx\" (UID: \"467433a4-64be-4a14-beb2-657370e9865f\") " pod="openshift-multus/multus-zb9kx" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.532123 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/467433a4-64be-4a14-beb2-657370e9865f-multus-daemon-config\") pod \"multus-zb9kx\" (UID: \"467433a4-64be-4a14-beb2-657370e9865f\") " pod="openshift-multus/multus-zb9kx" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.532145 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e0ad2def-b040-48db-be8a-19f66df2c0f2-os-release\") pod \"multus-additional-cni-plugins-f6ttt\" (UID: \"e0ad2def-b040-48db-be8a-19f66df2c0f2\") " pod="openshift-multus/multus-additional-cni-plugins-f6ttt" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.532167 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7qn7g\" (UniqueName: \"kubernetes.io/projected/e0ad2def-b040-48db-be8a-19f66df2c0f2-kube-api-access-7qn7g\") pod \"multus-additional-cni-plugins-f6ttt\" (UID: \"e0ad2def-b040-48db-be8a-19f66df2c0f2\") " pod="openshift-multus/multus-additional-cni-plugins-f6ttt" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.532171 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/467433a4-64be-4a14-beb2-657370e9865f-host-var-lib-kubelet\") pod \"multus-zb9kx\" (UID: \"467433a4-64be-4a14-beb2-657370e9865f\") " pod="openshift-multus/multus-zb9kx" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.532191 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e0ad2def-b040-48db-be8a-19f66df2c0f2-cnibin\") pod \"multus-additional-cni-plugins-f6ttt\" (UID: \"e0ad2def-b040-48db-be8a-19f66df2c0f2\") " pod="openshift-multus/multus-additional-cni-plugins-f6ttt" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.532213 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: 
\"kubernetes.io/configmap/e0ad2def-b040-48db-be8a-19f66df2c0f2-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-f6ttt\" (UID: \"e0ad2def-b040-48db-be8a-19f66df2c0f2\") " pod="openshift-multus/multus-additional-cni-plugins-f6ttt" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.532234 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/467433a4-64be-4a14-beb2-657370e9865f-multus-conf-dir\") pod \"multus-zb9kx\" (UID: \"467433a4-64be-4a14-beb2-657370e9865f\") " pod="openshift-multus/multus-zb9kx" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.532237 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e0ad2def-b040-48db-be8a-19f66df2c0f2-os-release\") pod \"multus-additional-cni-plugins-f6ttt\" (UID: \"e0ad2def-b040-48db-be8a-19f66df2c0f2\") " pod="openshift-multus/multus-additional-cni-plugins-f6ttt" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.532250 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/467433a4-64be-4a14-beb2-657370e9865f-system-cni-dir\") pod \"multus-zb9kx\" (UID: \"467433a4-64be-4a14-beb2-657370e9865f\") " pod="openshift-multus/multus-zb9kx" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.532285 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/467433a4-64be-4a14-beb2-657370e9865f-multus-conf-dir\") pod \"multus-zb9kx\" (UID: \"467433a4-64be-4a14-beb2-657370e9865f\") " pod="openshift-multus/multus-zb9kx" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.532257 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/467433a4-64be-4a14-beb2-657370e9865f-host-var-lib-cni-multus\") pod \"multus-zb9kx\" (UID: \"467433a4-64be-4a14-beb2-657370e9865f\") " pod="openshift-multus/multus-zb9kx" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.532290 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/467433a4-64be-4a14-beb2-657370e9865f-host-var-lib-cni-multus\") pod \"multus-zb9kx\" (UID: \"467433a4-64be-4a14-beb2-657370e9865f\") " pod="openshift-multus/multus-zb9kx" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.532350 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e0ad2def-b040-48db-be8a-19f66df2c0f2-cnibin\") pod \"multus-additional-cni-plugins-f6ttt\" (UID: \"e0ad2def-b040-48db-be8a-19f66df2c0f2\") " pod="openshift-multus/multus-additional-cni-plugins-f6ttt" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.532427 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v76sw\" (UniqueName: \"kubernetes.io/projected/467433a4-64be-4a14-beb2-657370e9865f-kube-api-access-v76sw\") pod \"multus-zb9kx\" (UID: \"467433a4-64be-4a14-beb2-657370e9865f\") " pod="openshift-multus/multus-zb9kx" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.532457 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e3602fc7-397b-4d73-ab0c-45acc047397b-proxy-tls\") pod \"machine-config-daemon-j7r9j\" (UID: 
\"e3602fc7-397b-4d73-ab0c-45acc047397b\") " pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.532477 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e0ad2def-b040-48db-be8a-19f66df2c0f2-cni-binary-copy\") pod \"multus-additional-cni-plugins-f6ttt\" (UID: \"e0ad2def-b040-48db-be8a-19f66df2c0f2\") " pod="openshift-multus/multus-additional-cni-plugins-f6ttt" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.532493 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e0ad2def-b040-48db-be8a-19f66df2c0f2-tuning-conf-dir\") pod \"multus-additional-cni-plugins-f6ttt\" (UID: \"e0ad2def-b040-48db-be8a-19f66df2c0f2\") " pod="openshift-multus/multus-additional-cni-plugins-f6ttt" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.532513 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/467433a4-64be-4a14-beb2-657370e9865f-cnibin\") pod \"multus-zb9kx\" (UID: \"467433a4-64be-4a14-beb2-657370e9865f\") " pod="openshift-multus/multus-zb9kx" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.532528 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/467433a4-64be-4a14-beb2-657370e9865f-host-var-lib-cni-bin\") pod \"multus-zb9kx\" (UID: \"467433a4-64be-4a14-beb2-657370e9865f\") " pod="openshift-multus/multus-zb9kx" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.532544 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29xcb\" (UniqueName: \"kubernetes.io/projected/e3602fc7-397b-4d73-ab0c-45acc047397b-kube-api-access-29xcb\") pod \"machine-config-daemon-j7r9j\" (UID: \"e3602fc7-397b-4d73-ab0c-45acc047397b\") " pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.532563 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/467433a4-64be-4a14-beb2-657370e9865f-host-run-k8s-cni-cncf-io\") pod \"multus-zb9kx\" (UID: \"467433a4-64be-4a14-beb2-657370e9865f\") " pod="openshift-multus/multus-zb9kx" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.532580 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/467433a4-64be-4a14-beb2-657370e9865f-host-run-netns\") pod \"multus-zb9kx\" (UID: \"467433a4-64be-4a14-beb2-657370e9865f\") " pod="openshift-multus/multus-zb9kx" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.532611 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/467433a4-64be-4a14-beb2-657370e9865f-host-run-multus-certs\") pod \"multus-zb9kx\" (UID: \"467433a4-64be-4a14-beb2-657370e9865f\") " pod="openshift-multus/multus-zb9kx" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.532618 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/467433a4-64be-4a14-beb2-657370e9865f-host-var-lib-cni-bin\") pod \"multus-zb9kx\" (UID: \"467433a4-64be-4a14-beb2-657370e9865f\") " 
pod="openshift-multus/multus-zb9kx" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.532632 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/467433a4-64be-4a14-beb2-657370e9865f-multus-socket-dir-parent\") pod \"multus-zb9kx\" (UID: \"467433a4-64be-4a14-beb2-657370e9865f\") " pod="openshift-multus/multus-zb9kx" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.532655 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/467433a4-64be-4a14-beb2-657370e9865f-cnibin\") pod \"multus-zb9kx\" (UID: \"467433a4-64be-4a14-beb2-657370e9865f\") " pod="openshift-multus/multus-zb9kx" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.532672 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/467433a4-64be-4a14-beb2-657370e9865f-multus-socket-dir-parent\") pod \"multus-zb9kx\" (UID: \"467433a4-64be-4a14-beb2-657370e9865f\") " pod="openshift-multus/multus-zb9kx" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.532673 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/467433a4-64be-4a14-beb2-657370e9865f-host-run-k8s-cni-cncf-io\") pod \"multus-zb9kx\" (UID: \"467433a4-64be-4a14-beb2-657370e9865f\") " pod="openshift-multus/multus-zb9kx" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.532665 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e0ad2def-b040-48db-be8a-19f66df2c0f2-system-cni-dir\") pod \"multus-additional-cni-plugins-f6ttt\" (UID: \"e0ad2def-b040-48db-be8a-19f66df2c0f2\") " pod="openshift-multus/multus-additional-cni-plugins-f6ttt" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.532701 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e0ad2def-b040-48db-be8a-19f66df2c0f2-system-cni-dir\") pod \"multus-additional-cni-plugins-f6ttt\" (UID: \"e0ad2def-b040-48db-be8a-19f66df2c0f2\") " pod="openshift-multus/multus-additional-cni-plugins-f6ttt" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.532737 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/467433a4-64be-4a14-beb2-657370e9865f-host-run-multus-certs\") pod \"multus-zb9kx\" (UID: \"467433a4-64be-4a14-beb2-657370e9865f\") " pod="openshift-multus/multus-zb9kx" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.532751 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/467433a4-64be-4a14-beb2-657370e9865f-host-run-netns\") pod \"multus-zb9kx\" (UID: \"467433a4-64be-4a14-beb2-657370e9865f\") " pod="openshift-multus/multus-zb9kx" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.532825 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/e3602fc7-397b-4d73-ab0c-45acc047397b-rootfs\") pod \"machine-config-daemon-j7r9j\" (UID: \"e3602fc7-397b-4d73-ab0c-45acc047397b\") " pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.532858 4844 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-456bf\" (UniqueName: \"kubernetes.io/projected/14600b66-6352-4f5e-9c09-eb2548503555-kube-api-access-456bf\") pod \"node-resolver-94bpf\" (UID: \"14600b66-6352-4f5e-9c09-eb2548503555\") " pod="openshift-dns/node-resolver-94bpf" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.532881 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e3602fc7-397b-4d73-ab0c-45acc047397b-mcd-auth-proxy-config\") pod \"machine-config-daemon-j7r9j\" (UID: \"e3602fc7-397b-4d73-ab0c-45acc047397b\") " pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.532862 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/e3602fc7-397b-4d73-ab0c-45acc047397b-rootfs\") pod \"machine-config-daemon-j7r9j\" (UID: \"e3602fc7-397b-4d73-ab0c-45acc047397b\") " pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.532910 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/467433a4-64be-4a14-beb2-657370e9865f-multus-cni-dir\") pod \"multus-zb9kx\" (UID: \"467433a4-64be-4a14-beb2-657370e9865f\") " pod="openshift-multus/multus-zb9kx" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.532929 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/467433a4-64be-4a14-beb2-657370e9865f-hostroot\") pod \"multus-zb9kx\" (UID: \"467433a4-64be-4a14-beb2-657370e9865f\") " pod="openshift-multus/multus-zb9kx" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.532931 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e0ad2def-b040-48db-be8a-19f66df2c0f2-tuning-conf-dir\") pod \"multus-additional-cni-plugins-f6ttt\" (UID: \"e0ad2def-b040-48db-be8a-19f66df2c0f2\") " pod="openshift-multus/multus-additional-cni-plugins-f6ttt" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.532944 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/467433a4-64be-4a14-beb2-657370e9865f-etc-kubernetes\") pod \"multus-zb9kx\" (UID: \"467433a4-64be-4a14-beb2-657370e9865f\") " pod="openshift-multus/multus-zb9kx" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.532969 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/467433a4-64be-4a14-beb2-657370e9865f-etc-kubernetes\") pod \"multus-zb9kx\" (UID: \"467433a4-64be-4a14-beb2-657370e9865f\") " pod="openshift-multus/multus-zb9kx" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.532996 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/14600b66-6352-4f5e-9c09-eb2548503555-hosts-file\") pod \"node-resolver-94bpf\" (UID: \"14600b66-6352-4f5e-9c09-eb2548503555\") " pod="openshift-dns/node-resolver-94bpf" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.533016 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/467433a4-64be-4a14-beb2-657370e9865f-hostroot\") pod 
\"multus-zb9kx\" (UID: \"467433a4-64be-4a14-beb2-657370e9865f\") " pod="openshift-multus/multus-zb9kx" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.533060 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/467433a4-64be-4a14-beb2-657370e9865f-multus-daemon-config\") pod \"multus-zb9kx\" (UID: \"467433a4-64be-4a14-beb2-657370e9865f\") " pod="openshift-multus/multus-zb9kx" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.533096 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/14600b66-6352-4f5e-9c09-eb2548503555-hosts-file\") pod \"node-resolver-94bpf\" (UID: \"14600b66-6352-4f5e-9c09-eb2548503555\") " pod="openshift-dns/node-resolver-94bpf" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.533159 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/467433a4-64be-4a14-beb2-657370e9865f-multus-cni-dir\") pod \"multus-zb9kx\" (UID: \"467433a4-64be-4a14-beb2-657370e9865f\") " pod="openshift-multus/multus-zb9kx" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.533188 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/e0ad2def-b040-48db-be8a-19f66df2c0f2-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-f6ttt\" (UID: \"e0ad2def-b040-48db-be8a-19f66df2c0f2\") " pod="openshift-multus/multus-additional-cni-plugins-f6ttt" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.537573 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e3602fc7-397b-4d73-ab0c-45acc047397b-proxy-tls\") pod \"machine-config-daemon-j7r9j\" (UID: \"e3602fc7-397b-4d73-ab0c-45acc047397b\") " pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.543130 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd60ce73390532e974f85d610708534f2ab6bcc38e93c54516cb783aca1bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:08Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.549263 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29xcb\" (UniqueName: \"kubernetes.io/projected/e3602fc7-397b-4d73-ab0c-45acc047397b-kube-api-access-29xcb\") pod \"machine-config-daemon-j7r9j\" (UID: \"e3602fc7-397b-4d73-ab0c-45acc047397b\") " pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.554489 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-94bpf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14600b66-6352-4f5e-9c09-eb2548503555\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-456bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-94bpf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:08Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.572807 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2c6313f-dd44-4db9-a53a-63c8a42efe6c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b82658e4e6ddbf4e3a16f9d569c5cef6a683d401d79b7fa7f55aad8c70e8254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66ffcb99cad7e5aff2bbe4bdf3c0a2365dd91fb40b2382e05ab1f422c6ea2b26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f49e7dd36dc88342bc2e3bc43fc5770fb43572151945cba3867f9ffd7fa451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b51b34a870f5c4045729aa5ee5d5fb85ebd58
fd55a25c5b6effe0ea75ea8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9118772906fa09f9868a34942401b399a842e6718caa3d82b78b2974e1011cca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa0e17b53055bd229c52c53b69e491e0789569e73004101378a34609888bbbc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa0e17b53055bd229c52c53b69e491e0789569e73004101378a34609888bbbc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a99b81e3524dfff792c3e9a274f2460edfe7c29fb27c556fa1c7e427178ccd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a99b81e3524dfff792c3e9a274f2460edfe7c29fb27c556fa1c7e427178ccd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1cd347949870be709220badbf116b4ed8ff9db3962a03361e42f5ff1c61eb5ea\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cd347949870be709220badbf116b4ed8ff9db3962a03361e42f5ff1c61eb5ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:08Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.585636 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ea4f5b3-1307-4259-8ee3-1de62370d8ef\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://943771df5941e9c5a48c2d8ee946cc9ac1ee7b00855dfe531fbce9f76450dd36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c79791a1a5ceea7564b6466722f4fb48a6729184724efc0ca2498896def357a\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e62359ee71a87d75c73a16de09de5ea48d09e4cf2041f434f38f97002a5f8eee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b1fb5af67b83d9ecc627cbfcc14d011b4b11a5c4652a35c1bd1df6f03ab71f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:08Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.596484 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:08Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.608332 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://708e5cab2c0d22cab1f1e5c02d0f8afbdc0fcdccc9a56b663f365bfa4c7f8709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f61077567553b104f052a133a6227c3a887430e971721e7998259e7e7ea7edf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:08Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.619761 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:08Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.626643 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.626678 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.626687 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.626700 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.626710 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:08Z","lastTransitionTime":"2026-01-26T12:44:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.633110 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aecfc1fc-7b8c-42f4-9a7b-058a0acc9534\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64c5d1e4b1fda825b451c8acde0411694fbf7345723d6c3927905a882a79d62e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e33100e90b0d5a23baf7c27076a19dbca18d3e067be6b5a867e122fac37c1136\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64ae899d3a38090964f3d951fd185537ce4ef75f7512342da01f67767d00417b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9207454bb697abf7b64a37e402b50734ae419605cd0938b514ac4f2a74561ce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b40515c0e8056b01e4d48e259ece9389fbb5db6c0d403f2846f28948ce78b0b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64a2d4054650382dca2f56688356aec5fa475b49958e86dd4597a64f77a82f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64a2d4054650382dca2f56688356aec5fa475b49958e86dd4597a64f77a82f81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:08Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.646858 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c6cdfcbb27c9a8c515f38e43d4813f25d2d3781a322e424e09ab047259bd0e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:08Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.658315 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:08Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.685751 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-rlvx4"] Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.686695 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.688864 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.689208 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.689875 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.693047 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.693331 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.693397 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.693522 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.704551 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aecfc1fc-7b8c-42f4-9a7b-058a0acc9534\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64c5d1e4b1fda825b451c8acde0411694fbf7345723d6c3927905a882a79d62e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e33100e90b0d5a23baf7c27076a19dbca18d3e067be6b5a867e122fac37c1136\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64ae899d3a38090964f3d951fd185537ce4ef75f7512342da01f67767d00417b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9207454bb697abf7b64a37e402b50734ae419605cd0938b514ac4f2a74561ce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b40515c0e8056b01e4d48e259ece9389fbb5db6c0d403f2846f28948ce78b0b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64a2d4054650382dca2f56688356aec5fa475b49958e86dd4597a64f77a82f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64a2d4054650382dca2f56688356aec5fa475b49958e86dd4597a64f77a82f81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:08Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.716760 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ea4f5b3-1307-4259-8ee3-1de62370d8ef\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://943771df5941e9c5a48c2d8ee946cc9ac1ee7b00855dfe531fbce9f76450dd36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c79791a1a5ceea7564b6466722f4fb48a6729184724efc0ca2498896def357a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e62359ee71a87d75c73a16de09de5ea48d09e4cf2041f434f38f97002a5f8eee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b1fb5af67b83d9ecc627cbfcc14d011b4b11a5c4652a35c1bd1df6f03ab71f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:08Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.729297 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.729357 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.729370 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.729389 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.729401 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:08Z","lastTransitionTime":"2026-01-26T12:44:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.729467 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:08Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.734186 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/348a2956-fe61-43b9-858f-ab9c97a2985b-ovn-node-metrics-cert\") pod \"ovnkube-node-rlvx4\" (UID: \"348a2956-fe61-43b9-858f-ab9c97a2985b\") " pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.734235 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/348a2956-fe61-43b9-858f-ab9c97a2985b-ovnkube-script-lib\") pod \"ovnkube-node-rlvx4\" (UID: \"348a2956-fe61-43b9-858f-ab9c97a2985b\") " pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.734258 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/348a2956-fe61-43b9-858f-ab9c97a2985b-host-slash\") pod \"ovnkube-node-rlvx4\" (UID: 
\"348a2956-fe61-43b9-858f-ab9c97a2985b\") " pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.734278 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/348a2956-fe61-43b9-858f-ab9c97a2985b-etc-openvswitch\") pod \"ovnkube-node-rlvx4\" (UID: \"348a2956-fe61-43b9-858f-ab9c97a2985b\") " pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.734311 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/348a2956-fe61-43b9-858f-ab9c97a2985b-var-lib-openvswitch\") pod \"ovnkube-node-rlvx4\" (UID: \"348a2956-fe61-43b9-858f-ab9c97a2985b\") " pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.734331 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/348a2956-fe61-43b9-858f-ab9c97a2985b-host-cni-bin\") pod \"ovnkube-node-rlvx4\" (UID: \"348a2956-fe61-43b9-858f-ab9c97a2985b\") " pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.734376 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/348a2956-fe61-43b9-858f-ab9c97a2985b-run-ovn\") pod \"ovnkube-node-rlvx4\" (UID: \"348a2956-fe61-43b9-858f-ab9c97a2985b\") " pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.734401 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/348a2956-fe61-43b9-858f-ab9c97a2985b-run-systemd\") pod \"ovnkube-node-rlvx4\" (UID: \"348a2956-fe61-43b9-858f-ab9c97a2985b\") " pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.734422 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/348a2956-fe61-43b9-858f-ab9c97a2985b-host-run-ovn-kubernetes\") pod \"ovnkube-node-rlvx4\" (UID: \"348a2956-fe61-43b9-858f-ab9c97a2985b\") " pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.734446 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/348a2956-fe61-43b9-858f-ab9c97a2985b-log-socket\") pod \"ovnkube-node-rlvx4\" (UID: \"348a2956-fe61-43b9-858f-ab9c97a2985b\") " pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.734466 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/348a2956-fe61-43b9-858f-ab9c97a2985b-env-overrides\") pod \"ovnkube-node-rlvx4\" (UID: \"348a2956-fe61-43b9-858f-ab9c97a2985b\") " pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.734485 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/348a2956-fe61-43b9-858f-ab9c97a2985b-run-openvswitch\") pod \"ovnkube-node-rlvx4\" (UID: \"348a2956-fe61-43b9-858f-ab9c97a2985b\") " pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.734505 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvtf5\" (UniqueName: \"kubernetes.io/projected/348a2956-fe61-43b9-858f-ab9c97a2985b-kube-api-access-cvtf5\") pod \"ovnkube-node-rlvx4\" (UID: \"348a2956-fe61-43b9-858f-ab9c97a2985b\") " pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.734558 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/348a2956-fe61-43b9-858f-ab9c97a2985b-host-kubelet\") pod \"ovnkube-node-rlvx4\" (UID: \"348a2956-fe61-43b9-858f-ab9c97a2985b\") " pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.734574 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/348a2956-fe61-43b9-858f-ab9c97a2985b-ovnkube-config\") pod \"ovnkube-node-rlvx4\" (UID: \"348a2956-fe61-43b9-858f-ab9c97a2985b\") " pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.734665 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/348a2956-fe61-43b9-858f-ab9c97a2985b-systemd-units\") pod \"ovnkube-node-rlvx4\" (UID: \"348a2956-fe61-43b9-858f-ab9c97a2985b\") " pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.734691 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/348a2956-fe61-43b9-858f-ab9c97a2985b-host-run-netns\") pod \"ovnkube-node-rlvx4\" (UID: \"348a2956-fe61-43b9-858f-ab9c97a2985b\") " pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.734709 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/348a2956-fe61-43b9-858f-ab9c97a2985b-node-log\") pod \"ovnkube-node-rlvx4\" (UID: \"348a2956-fe61-43b9-858f-ab9c97a2985b\") " pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.734733 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/348a2956-fe61-43b9-858f-ab9c97a2985b-host-cni-netd\") pod \"ovnkube-node-rlvx4\" (UID: \"348a2956-fe61-43b9-858f-ab9c97a2985b\") " pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.734775 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/348a2956-fe61-43b9-858f-ab9c97a2985b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-rlvx4\" (UID: \"348a2956-fe61-43b9-858f-ab9c97a2985b\") " pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.742512 4844 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://708e5cab2c0d22cab1f1e5c02d0f8afbdc0fcdccc9a56b663f365bfa4c7f8709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f61077567553b104f052a133a6227c3a887430e971721e7998259e7e7ea7edf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:08Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.756272 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:08Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.767404 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:08Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.779858 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c6cdfcbb27c9a8c515f38e43d4813f25d2d3781a322e424e09ab047259bd0e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:08Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.791552 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zb9kx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"467433a4-64be-4a14-beb2-657370e9865f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v76sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zb9kx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:08Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.802822 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3602fc7-397b-4d73-ab0c-45acc047397b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29xcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29xcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j7r9j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:08Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.816902 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f6ttt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0ad2def-b040-48db-be8a-19f66df2c0f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"
/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f6ttt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:08Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.832414 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" 
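The repeated "Failed to update status for pod" entries above share a single root cause: every status patch the kubelet sends is intercepted by the pod.network-node-identity.openshift.io admission webhook at https://127.0.0.1:9743, and that webhook's serving certificate expired at 2025-08-24T17:21:41Z while the node clock reads 2026-01-26. The following is a minimal Go sketch that reproduces this exact x509 failure mode; the self-signed certificate is a hypothetical stand-in for the real webhook certificate (only the NotAfter timestamp and the verification time are taken from the log entries, and the NotBefore date is assumed).

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"time"
)

func main() {
	// Hypothetical stand-in for the webhook serving certificate, self-signed,
	// reusing the NotAfter timestamp reported in the kubelet errors.
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "network-node-identity.openshift.io"},
		NotBefore:             time.Date(2025, 5, 24, 17, 21, 41, 0, time.UTC), // assumed issue date
		NotAfter:              time.Date(2025, 8, 24, 17, 21, 41, 0, time.UTC), // expiry from the log
		IsCA:                  true,
		BasicConstraintsValid: true,
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	cert, err := x509.ParseCertificate(der)
	if err != nil {
		panic(err)
	}

	roots := x509.NewCertPool()
	roots.AddCert(cert)

	// Verify as of the node's clock at the time of these entries. crypto/x509
	// checks the validity window before building a chain, so this fails
	// immediately with the same message the kubelet is logging.
	_, err = cert.Verify(x509.VerifyOptions{
		Roots:       roots,
		CurrentTime: time.Date(2026, 1, 26, 12, 44, 8, 0, time.UTC),
	})
	fmt.Println(err)
	// x509: certificate has expired or is not yet valid:
	// current time 2026-01-26T12:44:08Z is after 2025-08-24T17:21:41Z
}

Until that certificate is rotated or the node clock is corrected, every status patch the kubelet posts through this webhook will keep failing the TLS handshake, which is why the identical error recurs for pod after pod throughout this section.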
Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.832473 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.832486 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.832504 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.832518 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:08Z","lastTransitionTime":"2026-01-26T12:44:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.836242 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/348a2956-fe61-43b9-858f-ab9c97a2985b-run-openvswitch\") pod \"ovnkube-node-rlvx4\" (UID: \"348a2956-fe61-43b9-858f-ab9c97a2985b\") " pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.836294 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/348a2956-fe61-43b9-858f-ab9c97a2985b-log-socket\") pod \"ovnkube-node-rlvx4\" (UID: \"348a2956-fe61-43b9-858f-ab9c97a2985b\") " pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.836320 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/348a2956-fe61-43b9-858f-ab9c97a2985b-env-overrides\") pod \"ovnkube-node-rlvx4\" (UID: \"348a2956-fe61-43b9-858f-ab9c97a2985b\") " pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.836347 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvtf5\" (UniqueName: \"kubernetes.io/projected/348a2956-fe61-43b9-858f-ab9c97a2985b-kube-api-access-cvtf5\") pod \"ovnkube-node-rlvx4\" (UID: \"348a2956-fe61-43b9-858f-ab9c97a2985b\") " pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.836382 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/348a2956-fe61-43b9-858f-ab9c97a2985b-run-openvswitch\") pod \"ovnkube-node-rlvx4\" (UID: \"348a2956-fe61-43b9-858f-ab9c97a2985b\") " pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.836421 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/348a2956-fe61-43b9-858f-ab9c97a2985b-ovnkube-config\") pod \"ovnkube-node-rlvx4\" (UID: \"348a2956-fe61-43b9-858f-ab9c97a2985b\") " pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.836429 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/348a2956-fe61-43b9-858f-ab9c97a2985b-log-socket\") 
pod \"ovnkube-node-rlvx4\" (UID: \"348a2956-fe61-43b9-858f-ab9c97a2985b\") " pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.836461 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/348a2956-fe61-43b9-858f-ab9c97a2985b-host-kubelet\") pod \"ovnkube-node-rlvx4\" (UID: \"348a2956-fe61-43b9-858f-ab9c97a2985b\") " pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.836516 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/348a2956-fe61-43b9-858f-ab9c97a2985b-systemd-units\") pod \"ovnkube-node-rlvx4\" (UID: \"348a2956-fe61-43b9-858f-ab9c97a2985b\") " pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.836540 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/348a2956-fe61-43b9-858f-ab9c97a2985b-host-run-netns\") pod \"ovnkube-node-rlvx4\" (UID: \"348a2956-fe61-43b9-858f-ab9c97a2985b\") " pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.836562 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/348a2956-fe61-43b9-858f-ab9c97a2985b-node-log\") pod \"ovnkube-node-rlvx4\" (UID: \"348a2956-fe61-43b9-858f-ab9c97a2985b\") " pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.836584 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/348a2956-fe61-43b9-858f-ab9c97a2985b-host-cni-netd\") pod \"ovnkube-node-rlvx4\" (UID: \"348a2956-fe61-43b9-858f-ab9c97a2985b\") " pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.836632 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/348a2956-fe61-43b9-858f-ab9c97a2985b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-rlvx4\" (UID: \"348a2956-fe61-43b9-858f-ab9c97a2985b\") " pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.836658 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/348a2956-fe61-43b9-858f-ab9c97a2985b-host-slash\") pod \"ovnkube-node-rlvx4\" (UID: \"348a2956-fe61-43b9-858f-ab9c97a2985b\") " pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.836681 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/348a2956-fe61-43b9-858f-ab9c97a2985b-etc-openvswitch\") pod \"ovnkube-node-rlvx4\" (UID: \"348a2956-fe61-43b9-858f-ab9c97a2985b\") " pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.836703 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/348a2956-fe61-43b9-858f-ab9c97a2985b-ovn-node-metrics-cert\") pod \"ovnkube-node-rlvx4\" (UID: 
\"348a2956-fe61-43b9-858f-ab9c97a2985b\") " pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.836725 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/348a2956-fe61-43b9-858f-ab9c97a2985b-ovnkube-script-lib\") pod \"ovnkube-node-rlvx4\" (UID: \"348a2956-fe61-43b9-858f-ab9c97a2985b\") " pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.836759 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/348a2956-fe61-43b9-858f-ab9c97a2985b-var-lib-openvswitch\") pod \"ovnkube-node-rlvx4\" (UID: \"348a2956-fe61-43b9-858f-ab9c97a2985b\") " pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.836768 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/348a2956-fe61-43b9-858f-ab9c97a2985b-host-cni-netd\") pod \"ovnkube-node-rlvx4\" (UID: \"348a2956-fe61-43b9-858f-ab9c97a2985b\") " pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.836783 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/348a2956-fe61-43b9-858f-ab9c97a2985b-host-cni-bin\") pod \"ovnkube-node-rlvx4\" (UID: \"348a2956-fe61-43b9-858f-ab9c97a2985b\") " pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.836802 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/348a2956-fe61-43b9-858f-ab9c97a2985b-host-kubelet\") pod \"ovnkube-node-rlvx4\" (UID: \"348a2956-fe61-43b9-858f-ab9c97a2985b\") " pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.836822 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/348a2956-fe61-43b9-858f-ab9c97a2985b-run-ovn\") pod \"ovnkube-node-rlvx4\" (UID: \"348a2956-fe61-43b9-858f-ab9c97a2985b\") " pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.836828 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/348a2956-fe61-43b9-858f-ab9c97a2985b-systemd-units\") pod \"ovnkube-node-rlvx4\" (UID: \"348a2956-fe61-43b9-858f-ab9c97a2985b\") " pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.836844 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/348a2956-fe61-43b9-858f-ab9c97a2985b-run-systemd\") pod \"ovnkube-node-rlvx4\" (UID: \"348a2956-fe61-43b9-858f-ab9c97a2985b\") " pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.836852 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/348a2956-fe61-43b9-858f-ab9c97a2985b-host-run-netns\") pod \"ovnkube-node-rlvx4\" (UID: \"348a2956-fe61-43b9-858f-ab9c97a2985b\") " pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 
12:44:08.836865 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/348a2956-fe61-43b9-858f-ab9c97a2985b-host-run-ovn-kubernetes\") pod \"ovnkube-node-rlvx4\" (UID: \"348a2956-fe61-43b9-858f-ab9c97a2985b\") " pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.836875 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/348a2956-fe61-43b9-858f-ab9c97a2985b-node-log\") pod \"ovnkube-node-rlvx4\" (UID: \"348a2956-fe61-43b9-858f-ab9c97a2985b\") " pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.836946 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/348a2956-fe61-43b9-858f-ab9c97a2985b-run-systemd\") pod \"ovnkube-node-rlvx4\" (UID: \"348a2956-fe61-43b9-858f-ab9c97a2985b\") " pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.836958 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/348a2956-fe61-43b9-858f-ab9c97a2985b-host-run-ovn-kubernetes\") pod \"ovnkube-node-rlvx4\" (UID: \"348a2956-fe61-43b9-858f-ab9c97a2985b\") " pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.836962 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/348a2956-fe61-43b9-858f-ab9c97a2985b-etc-openvswitch\") pod \"ovnkube-node-rlvx4\" (UID: \"348a2956-fe61-43b9-858f-ab9c97a2985b\") " pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.836990 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/348a2956-fe61-43b9-858f-ab9c97a2985b-var-lib-openvswitch\") pod \"ovnkube-node-rlvx4\" (UID: \"348a2956-fe61-43b9-858f-ab9c97a2985b\") " pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.837417 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/348a2956-fe61-43b9-858f-ab9c97a2985b-host-cni-bin\") pod \"ovnkube-node-rlvx4\" (UID: \"348a2956-fe61-43b9-858f-ab9c97a2985b\") " pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.837460 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/348a2956-fe61-43b9-858f-ab9c97a2985b-env-overrides\") pod \"ovnkube-node-rlvx4\" (UID: \"348a2956-fe61-43b9-858f-ab9c97a2985b\") " pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.837493 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/348a2956-fe61-43b9-858f-ab9c97a2985b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-rlvx4\" (UID: \"348a2956-fe61-43b9-858f-ab9c97a2985b\") " pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.837516 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/348a2956-fe61-43b9-858f-ab9c97a2985b-run-ovn\") pod \"ovnkube-node-rlvx4\" (UID: \"348a2956-fe61-43b9-858f-ab9c97a2985b\") " pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.837546 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/348a2956-fe61-43b9-858f-ab9c97a2985b-host-slash\") pod \"ovnkube-node-rlvx4\" (UID: \"348a2956-fe61-43b9-858f-ab9c97a2985b\") " pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.837568 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/348a2956-fe61-43b9-858f-ab9c97a2985b-ovnkube-script-lib\") pod \"ovnkube-node-rlvx4\" (UID: \"348a2956-fe61-43b9-858f-ab9c97a2985b\") " pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.837747 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/348a2956-fe61-43b9-858f-ab9c97a2985b-ovnkube-config\") pod \"ovnkube-node-rlvx4\" (UID: \"348a2956-fe61-43b9-858f-ab9c97a2985b\") " pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.839912 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"348a2956-fe61-43b9-858f-ab9c97a2985b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rlvx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:08Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.840791 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/348a2956-fe61-43b9-858f-ab9c97a2985b-ovn-node-metrics-cert\") pod \"ovnkube-node-rlvx4\" (UID: \"348a2956-fe61-43b9-858f-ab9c97a2985b\") " pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" Jan 26 
12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.857374 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvtf5\" (UniqueName: \"kubernetes.io/projected/348a2956-fe61-43b9-858f-ab9c97a2985b-kube-api-access-cvtf5\") pod \"ovnkube-node-rlvx4\" (UID: \"348a2956-fe61-43b9-858f-ab9c97a2985b\") " pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.866349 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2c6313f-dd44-4db9-a53a-63c8a42efe6c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b82658e4e6ddbf4e3a16f9d569c5cef6a683d401d79b7fa7f55aad8c70e8254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66ffcb99cad7e5aff2bbe4bdf3c0a2365dd91fb40b2382e05ab1f422c6ea2b26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f49e7dd36dc88342bc2e3bc43fc5770fb43572151945cba3867f9ffd7fa451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b51b34a870f5c4045729aa5ee5d5fb85ebd58fd55a25c5b6effe0ea75ea8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9118772906fa09f9868a34942401b399a842e6718caa3d82b78b2974e1011cca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa0e17b53055bd229c52c53b69e491e0789569e73004101378a34609888bbbc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa0e17b53055bd229c52c53b69e491e0789569e73004101378a34609888bbbc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a99b81e3524dfff792c3e9a274f2460edfe7c29fb27c556fa1c7e427178ccd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a
5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a99b81e3524dfff792c3e9a274f2460edfe7c29fb27c556fa1c7e427178ccd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1cd347949870be709220badbf116b4ed8ff9db3962a03361e42f5ff1c61eb5ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cd347949870be709220badbf116b4ed8ff9db3962a03361e42f5ff1c61eb5ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:08Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.878756 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd60ce73390532e974f85d610708534f2ab6bcc38e93c54516cb783aca1bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:08Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.888987 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-94bpf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14600b66-6352-4f5e-9c09-eb2548503555\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-456bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-94bpf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:08Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.934828 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.934864 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.934873 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.934887 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:08 crc kubenswrapper[4844]: I0126 12:44:08.934897 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:08Z","lastTransitionTime":"2026-01-26T12:44:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:09 crc kubenswrapper[4844]: I0126 12:44:09.002270 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" Jan 26 12:44:09 crc kubenswrapper[4844]: W0126 12:44:09.014555 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod348a2956_fe61_43b9_858f_ab9c97a2985b.slice/crio-7674f5ff5bb5075f6bac48046c452ef62f888002046f90d70e9c4ac945d744a2 WatchSource:0}: Error finding container 7674f5ff5bb5075f6bac48046c452ef62f888002046f90d70e9c4ac945d744a2: Status 404 returned error can't find the container with id 7674f5ff5bb5075f6bac48046c452ef62f888002046f90d70e9c4ac945d744a2 Jan 26 12:44:09 crc kubenswrapper[4844]: I0126 12:44:09.037725 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:44:09 crc kubenswrapper[4844]: I0126 12:44:09.037837 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:44:09 crc kubenswrapper[4844]: E0126 12:44:09.037897 4844 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 12:44:09 crc kubenswrapper[4844]: E0126 12:44:09.037903 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:44:17.037874997 +0000 UTC m=+33.971242609 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:44:09 crc kubenswrapper[4844]: E0126 12:44:09.037977 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 12:44:17.037953359 +0000 UTC m=+33.971320971 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 12:44:09 crc kubenswrapper[4844]: I0126 12:44:09.038044 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:44:09 crc kubenswrapper[4844]: E0126 12:44:09.038210 4844 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 12:44:09 crc kubenswrapper[4844]: E0126 12:44:09.038256 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 12:44:17.038242736 +0000 UTC m=+33.971610338 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 12:44:09 crc kubenswrapper[4844]: I0126 12:44:09.039161 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:09 crc kubenswrapper[4844]: I0126 12:44:09.039191 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:09 crc kubenswrapper[4844]: I0126 12:44:09.039285 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:09 crc kubenswrapper[4844]: I0126 12:44:09.039302 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:09 crc kubenswrapper[4844]: I0126 12:44:09.039311 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:09Z","lastTransitionTime":"2026-01-26T12:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:09 crc kubenswrapper[4844]: I0126 12:44:09.132366 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 26 12:44:09 crc kubenswrapper[4844]: I0126 12:44:09.139331 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:44:09 crc kubenswrapper[4844]: I0126 12:44:09.139375 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:44:09 crc kubenswrapper[4844]: E0126 12:44:09.139506 4844 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 12:44:09 crc kubenswrapper[4844]: E0126 12:44:09.139535 4844 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 12:44:09 crc kubenswrapper[4844]: E0126 12:44:09.139547 4844 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 12:44:09 crc kubenswrapper[4844]: E0126 12:44:09.139540 4844 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 12:44:09 crc kubenswrapper[4844]: E0126 12:44:09.139570 4844 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 12:44:09 crc kubenswrapper[4844]: E0126 12:44:09.139581 4844 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 12:44:09 crc kubenswrapper[4844]: E0126 12:44:09.139624 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 12:44:17.139605139 +0000 UTC m=+34.072972751 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 12:44:09 crc kubenswrapper[4844]: E0126 12:44:09.139666 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 12:44:17.13963336 +0000 UTC m=+34.073000972 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 12:44:09 crc kubenswrapper[4844]: I0126 12:44:09.142624 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:09 crc kubenswrapper[4844]: I0126 12:44:09.142664 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:09 crc kubenswrapper[4844]: I0126 12:44:09.142676 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:09 crc kubenswrapper[4844]: I0126 12:44:09.142693 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:09 crc kubenswrapper[4844]: I0126 12:44:09.142705 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:09Z","lastTransitionTime":"2026-01-26T12:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:09 crc kubenswrapper[4844]: I0126 12:44:09.185503 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 26 12:44:09 crc kubenswrapper[4844]: I0126 12:44:09.245859 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:09 crc kubenswrapper[4844]: I0126 12:44:09.245937 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:09 crc kubenswrapper[4844]: I0126 12:44:09.245963 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:09 crc kubenswrapper[4844]: I0126 12:44:09.245993 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:09 crc kubenswrapper[4844]: I0126 12:44:09.246013 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:09Z","lastTransitionTime":"2026-01-26T12:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:09 crc kubenswrapper[4844]: I0126 12:44:09.265735 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 26 12:44:09 crc kubenswrapper[4844]: I0126 12:44:09.282980 4844 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 13:50:16.558320339 +0000 UTC Jan 26 12:44:09 crc kubenswrapper[4844]: I0126 12:44:09.312924 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:44:09 crc kubenswrapper[4844]: I0126 12:44:09.312942 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:44:09 crc kubenswrapper[4844]: I0126 12:44:09.313025 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:44:09 crc kubenswrapper[4844]: E0126 12:44:09.313181 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 12:44:09 crc kubenswrapper[4844]: E0126 12:44:09.313278 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 12:44:09 crc kubenswrapper[4844]: E0126 12:44:09.313358 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 12:44:09 crc kubenswrapper[4844]: I0126 12:44:09.353911 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:09 crc kubenswrapper[4844]: I0126 12:44:09.353990 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:09 crc kubenswrapper[4844]: I0126 12:44:09.354193 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:09 crc kubenswrapper[4844]: I0126 12:44:09.354212 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:09 crc kubenswrapper[4844]: I0126 12:44:09.354225 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:09Z","lastTransitionTime":"2026-01-26T12:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:09 crc kubenswrapper[4844]: I0126 12:44:09.418117 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 26 12:44:09 crc kubenswrapper[4844]: I0126 12:44:09.439558 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" event={"ID":"348a2956-fe61-43b9-858f-ab9c97a2985b","Type":"ContainerStarted","Data":"7674f5ff5bb5075f6bac48046c452ef62f888002046f90d70e9c4ac945d744a2"} Jan 26 12:44:09 crc kubenswrapper[4844]: I0126 12:44:09.485896 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 26 12:44:09 crc kubenswrapper[4844]: I0126 12:44:09.486425 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:09 crc kubenswrapper[4844]: I0126 12:44:09.486482 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:09 crc kubenswrapper[4844]: I0126 12:44:09.486494 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:09 crc kubenswrapper[4844]: I0126 12:44:09.486514 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:09 crc kubenswrapper[4844]: I0126 12:44:09.486526 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:09Z","lastTransitionTime":"2026-01-26T12:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:09 crc kubenswrapper[4844]: I0126 12:44:09.493896 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qn7g\" (UniqueName: \"kubernetes.io/projected/e0ad2def-b040-48db-be8a-19f66df2c0f2-kube-api-access-7qn7g\") pod \"multus-additional-cni-plugins-f6ttt\" (UID: \"e0ad2def-b040-48db-be8a-19f66df2c0f2\") " pod="openshift-multus/multus-additional-cni-plugins-f6ttt" Jan 26 12:44:09 crc kubenswrapper[4844]: I0126 12:44:09.496892 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v76sw\" (UniqueName: \"kubernetes.io/projected/467433a4-64be-4a14-beb2-657370e9865f-kube-api-access-v76sw\") pod \"multus-zb9kx\" (UID: \"467433a4-64be-4a14-beb2-657370e9865f\") " pod="openshift-multus/multus-zb9kx" Jan 26 12:44:09 crc kubenswrapper[4844]: I0126 12:44:09.515075 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 26 12:44:09 crc kubenswrapper[4844]: E0126 12:44:09.532660 4844 configmap.go:193] Couldn't get configMap openshift-multus/cni-copy-resources: failed to sync configmap cache: timed out waiting for the condition Jan 26 12:44:09 crc kubenswrapper[4844]: E0126 12:44:09.532697 4844 configmap.go:193] Couldn't get configMap openshift-multus/cni-copy-resources: failed to sync configmap cache: timed out waiting for the condition Jan 26 12:44:09 crc kubenswrapper[4844]: E0126 12:44:09.532760 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/467433a4-64be-4a14-beb2-657370e9865f-cni-binary-copy podName:467433a4-64be-4a14-beb2-657370e9865f nodeName:}" failed. No retries permitted until 2026-01-26 12:44:10.032738699 +0000 UTC m=+26.966106311 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cni-binary-copy" (UniqueName: "kubernetes.io/configmap/467433a4-64be-4a14-beb2-657370e9865f-cni-binary-copy") pod "multus-zb9kx" (UID: "467433a4-64be-4a14-beb2-657370e9865f") : failed to sync configmap cache: timed out waiting for the condition Jan 26 12:44:09 crc kubenswrapper[4844]: E0126 12:44:09.532811 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e0ad2def-b040-48db-be8a-19f66df2c0f2-cni-binary-copy podName:e0ad2def-b040-48db-be8a-19f66df2c0f2 nodeName:}" failed. No retries permitted until 2026-01-26 12:44:10.03277811 +0000 UTC m=+26.966145922 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cni-binary-copy" (UniqueName: "kubernetes.io/configmap/e0ad2def-b040-48db-be8a-19f66df2c0f2-cni-binary-copy") pod "multus-additional-cni-plugins-f6ttt" (UID: "e0ad2def-b040-48db-be8a-19f66df2c0f2") : failed to sync configmap cache: timed out waiting for the condition Jan 26 12:44:09 crc kubenswrapper[4844]: E0126 12:44:09.533076 4844 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Jan 26 12:44:09 crc kubenswrapper[4844]: E0126 12:44:09.533127 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e3602fc7-397b-4d73-ab0c-45acc047397b-mcd-auth-proxy-config podName:e3602fc7-397b-4d73-ab0c-45acc047397b nodeName:}" failed. No retries permitted until 2026-01-26 12:44:10.033114507 +0000 UTC m=+26.966482339 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "mcd-auth-proxy-config" (UniqueName: "kubernetes.io/configmap/e3602fc7-397b-4d73-ab0c-45acc047397b-mcd-auth-proxy-config") pod "machine-config-daemon-j7r9j" (UID: "e3602fc7-397b-4d73-ab0c-45acc047397b") : failed to sync configmap cache: timed out waiting for the condition Jan 26 12:44:09 crc kubenswrapper[4844]: I0126 12:44:09.588288 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:09 crc kubenswrapper[4844]: I0126 12:44:09.588326 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:09 crc kubenswrapper[4844]: I0126 12:44:09.588336 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:09 crc kubenswrapper[4844]: I0126 12:44:09.588350 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:09 crc kubenswrapper[4844]: I0126 12:44:09.588362 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:09Z","lastTransitionTime":"2026-01-26T12:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:09 crc kubenswrapper[4844]: I0126 12:44:09.667816 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 26 12:44:09 crc kubenswrapper[4844]: I0126 12:44:09.690124 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:09 crc kubenswrapper[4844]: I0126 12:44:09.690171 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:09 crc kubenswrapper[4844]: I0126 12:44:09.690182 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:09 crc kubenswrapper[4844]: I0126 12:44:09.690201 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:09 crc kubenswrapper[4844]: I0126 12:44:09.690216 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:09Z","lastTransitionTime":"2026-01-26T12:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:09 crc kubenswrapper[4844]: I0126 12:44:09.777263 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 26 12:44:09 crc kubenswrapper[4844]: I0126 12:44:09.788321 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 26 12:44:09 crc kubenswrapper[4844]: I0126 12:44:09.791277 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-456bf\" (UniqueName: \"kubernetes.io/projected/14600b66-6352-4f5e-9c09-eb2548503555-kube-api-access-456bf\") pod \"node-resolver-94bpf\" (UID: \"14600b66-6352-4f5e-9c09-eb2548503555\") " pod="openshift-dns/node-resolver-94bpf" Jan 26 12:44:09 crc kubenswrapper[4844]: I0126 12:44:09.792576 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:09 crc kubenswrapper[4844]: I0126 12:44:09.792626 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:09 crc kubenswrapper[4844]: I0126 12:44:09.792642 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:09 crc kubenswrapper[4844]: I0126 12:44:09.792661 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:09 crc kubenswrapper[4844]: I0126 12:44:09.792672 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:09Z","lastTransitionTime":"2026-01-26T12:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:09 crc kubenswrapper[4844]: I0126 12:44:09.837729 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-94bpf" Jan 26 12:44:09 crc kubenswrapper[4844]: I0126 12:44:09.859147 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 26 12:44:09 crc kubenswrapper[4844]: W0126 12:44:09.879553 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod14600b66_6352_4f5e_9c09_eb2548503555.slice/crio-0210a7b835daf4540473d5472fca5378491866adaaa787dd69da1ebe5016efce WatchSource:0}: Error finding container 0210a7b835daf4540473d5472fca5378491866adaaa787dd69da1ebe5016efce: Status 404 returned error can't find the container with id 0210a7b835daf4540473d5472fca5378491866adaaa787dd69da1ebe5016efce Jan 26 12:44:09 crc kubenswrapper[4844]: I0126 12:44:09.895450 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:09 crc kubenswrapper[4844]: I0126 12:44:09.895488 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:09 crc kubenswrapper[4844]: I0126 12:44:09.895497 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:09 crc kubenswrapper[4844]: I0126 12:44:09.895514 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:09 crc kubenswrapper[4844]: I0126 12:44:09.895526 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:09Z","lastTransitionTime":"2026-01-26T12:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:10 crc kubenswrapper[4844]: I0126 12:44:10.006849 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:10 crc kubenswrapper[4844]: I0126 12:44:10.006916 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:10 crc kubenswrapper[4844]: I0126 12:44:10.006929 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:10 crc kubenswrapper[4844]: I0126 12:44:10.006947 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:10 crc kubenswrapper[4844]: I0126 12:44:10.006958 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:10Z","lastTransitionTime":"2026-01-26T12:44:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:10 crc kubenswrapper[4844]: I0126 12:44:10.050576 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e0ad2def-b040-48db-be8a-19f66df2c0f2-cni-binary-copy\") pod \"multus-additional-cni-plugins-f6ttt\" (UID: \"e0ad2def-b040-48db-be8a-19f66df2c0f2\") " pod="openshift-multus/multus-additional-cni-plugins-f6ttt" Jan 26 12:44:10 crc kubenswrapper[4844]: I0126 12:44:10.050681 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e3602fc7-397b-4d73-ab0c-45acc047397b-mcd-auth-proxy-config\") pod \"machine-config-daemon-j7r9j\" (UID: \"e3602fc7-397b-4d73-ab0c-45acc047397b\") " pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" Jan 26 12:44:10 crc kubenswrapper[4844]: I0126 12:44:10.050721 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/467433a4-64be-4a14-beb2-657370e9865f-cni-binary-copy\") pod \"multus-zb9kx\" (UID: \"467433a4-64be-4a14-beb2-657370e9865f\") " pod="openshift-multus/multus-zb9kx" Jan 26 12:44:10 crc kubenswrapper[4844]: I0126 12:44:10.051631 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/467433a4-64be-4a14-beb2-657370e9865f-cni-binary-copy\") pod \"multus-zb9kx\" (UID: \"467433a4-64be-4a14-beb2-657370e9865f\") " pod="openshift-multus/multus-zb9kx" Jan 26 12:44:10 crc kubenswrapper[4844]: I0126 12:44:10.051727 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e3602fc7-397b-4d73-ab0c-45acc047397b-mcd-auth-proxy-config\") pod \"machine-config-daemon-j7r9j\" (UID: \"e3602fc7-397b-4d73-ab0c-45acc047397b\") " pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" Jan 26 12:44:10 crc kubenswrapper[4844]: I0126 12:44:10.051861 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e0ad2def-b040-48db-be8a-19f66df2c0f2-cni-binary-copy\") pod \"multus-additional-cni-plugins-f6ttt\" (UID: \"e0ad2def-b040-48db-be8a-19f66df2c0f2\") " pod="openshift-multus/multus-additional-cni-plugins-f6ttt" Jan 26 12:44:10 crc kubenswrapper[4844]: I0126 12:44:10.110296 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:10 crc kubenswrapper[4844]: I0126 12:44:10.110339 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:10 crc kubenswrapper[4844]: I0126 12:44:10.110348 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:10 crc kubenswrapper[4844]: I0126 12:44:10.110368 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:10 crc kubenswrapper[4844]: I0126 12:44:10.110380 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:10Z","lastTransitionTime":"2026-01-26T12:44:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:10 crc kubenswrapper[4844]: I0126 12:44:10.129433 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-zb9kx" Jan 26 12:44:10 crc kubenswrapper[4844]: W0126 12:44:10.139798 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod467433a4_64be_4a14_beb2_657370e9865f.slice/crio-204dbb8076e5dcbd5dbfe8c33c5a0daea7c6f2c9563dee0c00515aef9efb3b9b WatchSource:0}: Error finding container 204dbb8076e5dcbd5dbfe8c33c5a0daea7c6f2c9563dee0c00515aef9efb3b9b: Status 404 returned error can't find the container with id 204dbb8076e5dcbd5dbfe8c33c5a0daea7c6f2c9563dee0c00515aef9efb3b9b Jan 26 12:44:10 crc kubenswrapper[4844]: I0126 12:44:10.144857 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" Jan 26 12:44:10 crc kubenswrapper[4844]: I0126 12:44:10.151877 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-f6ttt" Jan 26 12:44:10 crc kubenswrapper[4844]: W0126 12:44:10.163385 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode3602fc7_397b_4d73_ab0c_45acc047397b.slice/crio-3a86937ce13537c2eb7696c5495d57206a16e5c183134e56ffd7099045df24e2 WatchSource:0}: Error finding container 3a86937ce13537c2eb7696c5495d57206a16e5c183134e56ffd7099045df24e2: Status 404 returned error can't find the container with id 3a86937ce13537c2eb7696c5495d57206a16e5c183134e56ffd7099045df24e2 Jan 26 12:44:10 crc kubenswrapper[4844]: W0126 12:44:10.181658 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode0ad2def_b040_48db_be8a_19f66df2c0f2.slice/crio-091917ac49cd9a08f568929065bf99341377a3cf8d19438b18bb584eff499369 WatchSource:0}: Error finding container 091917ac49cd9a08f568929065bf99341377a3cf8d19438b18bb584eff499369: Status 404 returned error can't find the container with id 091917ac49cd9a08f568929065bf99341377a3cf8d19438b18bb584eff499369 Jan 26 12:44:10 crc kubenswrapper[4844]: I0126 12:44:10.213664 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:10 crc kubenswrapper[4844]: I0126 12:44:10.213691 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:10 crc kubenswrapper[4844]: I0126 12:44:10.213701 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:10 crc kubenswrapper[4844]: I0126 12:44:10.213715 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:10 crc kubenswrapper[4844]: I0126 12:44:10.213724 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:10Z","lastTransitionTime":"2026-01-26T12:44:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:10 crc kubenswrapper[4844]: I0126 12:44:10.283717 4844 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 19:29:09.984903663 +0000 UTC Jan 26 12:44:10 crc kubenswrapper[4844]: I0126 12:44:10.316285 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:10 crc kubenswrapper[4844]: I0126 12:44:10.316329 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:10 crc kubenswrapper[4844]: I0126 12:44:10.316342 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:10 crc kubenswrapper[4844]: I0126 12:44:10.316357 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:10 crc kubenswrapper[4844]: I0126 12:44:10.316368 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:10Z","lastTransitionTime":"2026-01-26T12:44:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:10 crc kubenswrapper[4844]: I0126 12:44:10.420733 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:10 crc kubenswrapper[4844]: I0126 12:44:10.420782 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:10 crc kubenswrapper[4844]: I0126 12:44:10.420794 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:10 crc kubenswrapper[4844]: I0126 12:44:10.420814 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:10 crc kubenswrapper[4844]: I0126 12:44:10.420824 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:10Z","lastTransitionTime":"2026-01-26T12:44:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:10 crc kubenswrapper[4844]: I0126 12:44:10.442913 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-f6ttt" event={"ID":"e0ad2def-b040-48db-be8a-19f66df2c0f2","Type":"ContainerStarted","Data":"091917ac49cd9a08f568929065bf99341377a3cf8d19438b18bb584eff499369"} Jan 26 12:44:10 crc kubenswrapper[4844]: I0126 12:44:10.443724 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-zb9kx" event={"ID":"467433a4-64be-4a14-beb2-657370e9865f","Type":"ContainerStarted","Data":"204dbb8076e5dcbd5dbfe8c33c5a0daea7c6f2c9563dee0c00515aef9efb3b9b"} Jan 26 12:44:10 crc kubenswrapper[4844]: I0126 12:44:10.445290 4844 generic.go:334] "Generic (PLEG): container finished" podID="348a2956-fe61-43b9-858f-ab9c97a2985b" containerID="370a02984d12e09b879a5f3af8d9fe26b40d438d558e51c2d352676f29198a02" exitCode=0 Jan 26 12:44:10 crc kubenswrapper[4844]: I0126 12:44:10.445363 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" event={"ID":"348a2956-fe61-43b9-858f-ab9c97a2985b","Type":"ContainerDied","Data":"370a02984d12e09b879a5f3af8d9fe26b40d438d558e51c2d352676f29198a02"} Jan 26 12:44:10 crc kubenswrapper[4844]: I0126 12:44:10.446515 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-94bpf" event={"ID":"14600b66-6352-4f5e-9c09-eb2548503555","Type":"ContainerStarted","Data":"0210a7b835daf4540473d5472fca5378491866adaaa787dd69da1ebe5016efce"} Jan 26 12:44:10 crc kubenswrapper[4844]: I0126 12:44:10.448205 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" event={"ID":"e3602fc7-397b-4d73-ab0c-45acc047397b","Type":"ContainerStarted","Data":"3a86937ce13537c2eb7696c5495d57206a16e5c183134e56ffd7099045df24e2"} Jan 26 12:44:10 crc kubenswrapper[4844]: I0126 12:44:10.458932 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aecfc1fc-7b8c-42f4-9a7b-058a0acc9534\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64c5d1e4b1fda825b451c8acde0411694fbf7345723d6c3927905a882a79d62e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e33100e90b0d5a23baf7c27076a19dbca18d3e067be6b5a867e122fac37c1136\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64ae899d3a38090964f3d951fd185537ce4ef75f7512342da01f67767d00417b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9207454bb697abf7b64a37e402b50734ae419605cd0938b514ac4f2a74561ce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b40515c0e8056b01e4d48e259ece9389fbb5db6c0d403f2846f28948ce78b0b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64a2d4054650382dca2f56688356aec5fa475b49958e86dd4597a64f77a82f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64a2d4054650382dca2f56688356aec5fa475b49958e86dd4597a64f77a82f81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:10Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:10 crc kubenswrapper[4844]: I0126 12:44:10.473887 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ea4f5b3-1307-4259-8ee3-1de62370d8ef\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://943771df5941e9c5a48c2d8ee946cc9ac1ee7b00855dfe531fbce9f76450dd36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c79791a1a5ceea7564b6466722f4fb48a6729184724efc0ca2498896def357a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e62359ee71a87d75c73a16de09de5ea48d09e4cf2041f434f38f97002a5f8eee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b1fb5af67b83d9ecc627cbfcc14d011b4b11a5c4652a35c1bd1df6f03ab71f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:10Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:10 crc kubenswrapper[4844]: I0126 12:44:10.486638 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:10Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:10 crc kubenswrapper[4844]: I0126 12:44:10.501101 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://708e5cab2c0d22cab1f1e5c02d0f8afbdc0fcdccc9a56b663f365bfa4c7f8709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f61077567553b104f052a133a6227c3a887430e971721e7998259e7e7ea7edf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:10Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:10 crc kubenswrapper[4844]: I0126 12:44:10.519855 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:10Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:10 crc kubenswrapper[4844]: I0126 12:44:10.523823 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:10 crc kubenswrapper[4844]: I0126 12:44:10.523937 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:10 crc kubenswrapper[4844]: I0126 12:44:10.523959 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:10 crc kubenswrapper[4844]: I0126 12:44:10.523973 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:10 crc kubenswrapper[4844]: I0126 12:44:10.523983 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:10Z","lastTransitionTime":"2026-01-26T12:44:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:10 crc kubenswrapper[4844]: I0126 12:44:10.531768 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:10Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:10 crc kubenswrapper[4844]: I0126 12:44:10.543082 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c6cdfcbb27c9a8c515f38e43d4813f25d2d3781a322e424e09ab047259bd0e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:10Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:10 crc kubenswrapper[4844]: I0126 12:44:10.557249 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zb9kx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"467433a4-64be-4a14-beb2-657370e9865f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v76sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zb9kx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:10Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:10 crc kubenswrapper[4844]: I0126 12:44:10.585168 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3602fc7-397b-4d73-ab0c-45acc047397b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29xcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29xcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j7r9j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:10Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:10 crc kubenswrapper[4844]: I0126 12:44:10.598382 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f6ttt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0ad2def-b040-48db-be8a-19f66df2c0f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\
\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f6ttt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:10Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:10 crc kubenswrapper[4844]: I0126 12:44:10.617469 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"348a2956-fe61-43b9-858f-ab9c97a2985b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://370a02984d12e09b879a5f3af8d9fe26b40d438d558e51c2d352676f29198a02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://370a02984d12e09b879a5f3af8d9fe26b40d438d558e51c2d352676f29198a02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rlvx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:10Z 
is after 2025-08-24T17:21:41Z" Jan 26 12:44:10 crc kubenswrapper[4844]: I0126 12:44:10.625755 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:10 crc kubenswrapper[4844]: I0126 12:44:10.625904 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:10 crc kubenswrapper[4844]: I0126 12:44:10.625995 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:10 crc kubenswrapper[4844]: I0126 12:44:10.626074 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:10 crc kubenswrapper[4844]: I0126 12:44:10.626162 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:10Z","lastTransitionTime":"2026-01-26T12:44:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:10 crc kubenswrapper[4844]: I0126 12:44:10.637573 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2c6313f-dd44-4db9-a53a-63c8a42efe6c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b82658e4e6ddbf4e3a16f9d569c5cef6a683d401d79b7fa7f55aad8c70e8254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66ffcb99cad7e5aff2bbe4bdf3c0a2365dd91fb40b2382e05ab1f422c6ea2b26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f49e7dd36dc88342bc2e3bc43fc5770fb43572151945cba3867f9ffd7fa451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b51b34a870f5c4045729aa5ee5d5fb85ebd58fd55a25c5b6effe0ea75ea8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9118772906fa09f9868a34942401b399a842e6718caa3d82b78b2974e1011cca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa0e17b53055bd229c52c53b69e491e0789569e73004101378a34609888bbbc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731c
a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa0e17b53055bd229c52c53b69e491e0789569e73004101378a34609888bbbc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a99b81e3524dfff792c3e9a274f2460edfe7c29fb27c556fa1c7e427178ccd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a99b81e3524dfff792c3e9a274f2460edfe7c29fb27c556fa1c7e427178ccd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1cd347949870be709220badbf116b4ed8ff9db3962a03361e42f5ff1c61eb5ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cd347949870be709220badbf116b4ed8ff9db3962a03361e42f5ff1c61eb5ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:10Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:10 crc kubenswrapper[4844]: I0126 12:44:10.649259 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd60ce73390532e974f85d610708534f2ab6bcc38e93c54516cb783aca1bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:10Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:10 crc kubenswrapper[4844]: I0126 12:44:10.658705 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-94bpf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14600b66-6352-4f5e-9c09-eb2548503555\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-456bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-94bpf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:10Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:10 crc kubenswrapper[4844]: I0126 12:44:10.728195 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:10 crc kubenswrapper[4844]: I0126 12:44:10.728235 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:10 crc kubenswrapper[4844]: I0126 12:44:10.728245 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:10 crc kubenswrapper[4844]: I0126 12:44:10.728260 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:10 crc kubenswrapper[4844]: I0126 12:44:10.728274 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:10Z","lastTransitionTime":"2026-01-26T12:44:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:10 crc kubenswrapper[4844]: I0126 12:44:10.830178 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:10 crc kubenswrapper[4844]: I0126 12:44:10.830248 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:10 crc kubenswrapper[4844]: I0126 12:44:10.830260 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:10 crc kubenswrapper[4844]: I0126 12:44:10.830281 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:10 crc kubenswrapper[4844]: I0126 12:44:10.830302 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:10Z","lastTransitionTime":"2026-01-26T12:44:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:10 crc kubenswrapper[4844]: I0126 12:44:10.933444 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:10 crc kubenswrapper[4844]: I0126 12:44:10.933492 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:10 crc kubenswrapper[4844]: I0126 12:44:10.933506 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:10 crc kubenswrapper[4844]: I0126 12:44:10.933527 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:10 crc kubenswrapper[4844]: I0126 12:44:10.933543 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:10Z","lastTransitionTime":"2026-01-26T12:44:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:11 crc kubenswrapper[4844]: I0126 12:44:11.035915 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:11 crc kubenswrapper[4844]: I0126 12:44:11.036495 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:11 crc kubenswrapper[4844]: I0126 12:44:11.036510 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:11 crc kubenswrapper[4844]: I0126 12:44:11.036544 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:11 crc kubenswrapper[4844]: I0126 12:44:11.036728 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:11Z","lastTransitionTime":"2026-01-26T12:44:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:11 crc kubenswrapper[4844]: I0126 12:44:11.139407 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:11 crc kubenswrapper[4844]: I0126 12:44:11.139470 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:11 crc kubenswrapper[4844]: I0126 12:44:11.139480 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:11 crc kubenswrapper[4844]: I0126 12:44:11.139493 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:11 crc kubenswrapper[4844]: I0126 12:44:11.139502 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:11Z","lastTransitionTime":"2026-01-26T12:44:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:11 crc kubenswrapper[4844]: I0126 12:44:11.242224 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:11 crc kubenswrapper[4844]: I0126 12:44:11.242276 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:11 crc kubenswrapper[4844]: I0126 12:44:11.242288 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:11 crc kubenswrapper[4844]: I0126 12:44:11.242305 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:11 crc kubenswrapper[4844]: I0126 12:44:11.242316 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:11Z","lastTransitionTime":"2026-01-26T12:44:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:11 crc kubenswrapper[4844]: I0126 12:44:11.283960 4844 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 19:52:39.076601558 +0000 UTC Jan 26 12:44:11 crc kubenswrapper[4844]: I0126 12:44:11.312976 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:44:11 crc kubenswrapper[4844]: I0126 12:44:11.313017 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:44:11 crc kubenswrapper[4844]: I0126 12:44:11.313027 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:44:11 crc kubenswrapper[4844]: E0126 12:44:11.313141 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 12:44:11 crc kubenswrapper[4844]: E0126 12:44:11.313193 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 12:44:11 crc kubenswrapper[4844]: E0126 12:44:11.313261 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 12:44:11 crc kubenswrapper[4844]: I0126 12:44:11.346048 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:11 crc kubenswrapper[4844]: I0126 12:44:11.346088 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:11 crc kubenswrapper[4844]: I0126 12:44:11.346097 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:11 crc kubenswrapper[4844]: I0126 12:44:11.346112 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:11 crc kubenswrapper[4844]: I0126 12:44:11.346125 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:11Z","lastTransitionTime":"2026-01-26T12:44:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:11 crc kubenswrapper[4844]: I0126 12:44:11.449397 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:11 crc kubenswrapper[4844]: I0126 12:44:11.449493 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:11 crc kubenswrapper[4844]: I0126 12:44:11.449511 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:11 crc kubenswrapper[4844]: I0126 12:44:11.449534 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:11 crc kubenswrapper[4844]: I0126 12:44:11.449551 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:11Z","lastTransitionTime":"2026-01-26T12:44:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:11 crc kubenswrapper[4844]: I0126 12:44:11.452779 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" event={"ID":"e3602fc7-397b-4d73-ab0c-45acc047397b","Type":"ContainerStarted","Data":"25282e97c34daf17d2fdac6ba7c074c611ec8df85288f5753bf98a3c1add5afb"} Jan 26 12:44:11 crc kubenswrapper[4844]: I0126 12:44:11.452826 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" event={"ID":"e3602fc7-397b-4d73-ab0c-45acc047397b","Type":"ContainerStarted","Data":"8c964e0f9d13a855d738b028f8a2bed32fb23f4a05f0f0222a7d24e3222f44b2"} Jan 26 12:44:11 crc kubenswrapper[4844]: I0126 12:44:11.455026 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-94bpf" event={"ID":"14600b66-6352-4f5e-9c09-eb2548503555","Type":"ContainerStarted","Data":"1eb448e6788cdb87cd78364d442623294abe1263dbedbf9d15e8ca77c06ced2d"} Jan 26 12:44:11 crc kubenswrapper[4844]: I0126 12:44:11.457727 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" event={"ID":"348a2956-fe61-43b9-858f-ab9c97a2985b","Type":"ContainerStarted","Data":"9a16b2c8da5ad6d42bd5ce77ca31794858e04a05060c5bb0538dd099b1d5dd6a"} Jan 26 12:44:11 crc kubenswrapper[4844]: I0126 12:44:11.457775 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" event={"ID":"348a2956-fe61-43b9-858f-ab9c97a2985b","Type":"ContainerStarted","Data":"03f4ffe2ba632bdfb8c45133ef267a29fa660e3aa85dfd99934ded60d32772f7"} Jan 26 12:44:11 crc kubenswrapper[4844]: I0126 12:44:11.459363 4844 generic.go:334] "Generic (PLEG): container finished" podID="e0ad2def-b040-48db-be8a-19f66df2c0f2" containerID="489205a3983f27152122db352c7a35ccdb12f14bb7b4c2bf3642f5c99699d745" exitCode=0 Jan 26 12:44:11 crc kubenswrapper[4844]: I0126 12:44:11.459977 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-f6ttt" event={"ID":"e0ad2def-b040-48db-be8a-19f66df2c0f2","Type":"ContainerDied","Data":"489205a3983f27152122db352c7a35ccdb12f14bb7b4c2bf3642f5c99699d745"} Jan 26 12:44:11 crc kubenswrapper[4844]: I0126 12:44:11.462295 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/multus-zb9kx" event={"ID":"467433a4-64be-4a14-beb2-657370e9865f","Type":"ContainerStarted","Data":"9a01598dd996bd69b470e2e4833ea4231bc77f0305a3a7fec3f70b8e2b8f01cb"} Jan 26 12:44:11 crc kubenswrapper[4844]: I0126 12:44:11.471824 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zb9kx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"467433a4-64be-4a14-beb2-657370e9865f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v76sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}
]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zb9kx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:11Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:11 crc kubenswrapper[4844]: I0126 12:44:11.487375 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3602fc7-397b-4d73-ab0c-45acc047397b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25282e97c34daf17d2fdac6ba7c074c611ec8df85288f5753bf98a3c1add5afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29xcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c964e0f9d13a855d738b028f8a2bed32fb23f4a05f0f0222a7d24e3222f44b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29xcb\\\",\\\"readOnl
y\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j7r9j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:11Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:11 crc kubenswrapper[4844]: I0126 12:44:11.512877 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f6ttt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0ad2def-b040-48db-be8a-19f66df2c0f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f6ttt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:11Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:11 crc kubenswrapper[4844]: I0126 12:44:11.543299 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"348a2956-fe61-43b9-858f-ab9c97a2985b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://370a02984d12e09b879a5f3af8d9fe26b40d438d558e51c2d352676f29198a02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://370a02984d12e09b879a5f3af8d9fe26b40d438d558e51c2d352676f29198a02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rlvx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:11Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:11 crc kubenswrapper[4844]: I0126 12:44:11.562828 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:11 crc kubenswrapper[4844]: I0126 12:44:11.563060 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:11 crc kubenswrapper[4844]: I0126 12:44:11.563077 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:11 crc kubenswrapper[4844]: I0126 12:44:11.563100 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:11 crc kubenswrapper[4844]: I0126 12:44:11.563118 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:11Z","lastTransitionTime":"2026-01-26T12:44:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:11 crc kubenswrapper[4844]: I0126 12:44:11.567960 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2c6313f-dd44-4db9-a53a-63c8a42efe6c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b82658e4e6ddbf4e3a16f9d569c5cef6a683d401d79b7fa7f55aad8c70e8254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66ffcb99cad7e5aff2bbe4bdf3c0a2365dd91fb40b2382e05ab1f422c6ea2b26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f49e7dd36dc88342bc2e3bc43fc5770fb43572151945cba3867f9ffd7fa451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b51b34a870f5c4045729aa5ee5d5fb85ebd58fd55a25c5b6effe0ea75ea8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9118772906fa09f9868a34942401b399a842e6718caa3d82b78b2974e1011cca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa0e17b53055bd229c52c53b69e491e0789569e73004101378a34609888bbbc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa0e17b53055bd229c52c53b69e491e0789569e73004101378a34609888bbbc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a99b81e3524dfff792c3e9a274f2460edfe7c29fb27c556fa1c7e427178ccd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a99b81e3524dfff792c3e9a274f2460edfe7c29fb27c556fa1c7e427178ccd\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1cd347949870be709220badbf116b4ed8ff9db3962a03361e42f5ff1c61eb5ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cd347949870be709220badbf116b4ed8ff9db3962a03361e42f5ff1c61eb5ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:11Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:11 crc kubenswrapper[4844]: I0126 12:44:11.582402 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd60ce73390532e974f85d610708534f2ab6bcc38e93c54516cb783aca1bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:11Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:11 crc kubenswrapper[4844]: I0126 12:44:11.601995 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-94bpf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14600b66-6352-4f5e-9c09-eb2548503555\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-456bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-94bpf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:11Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:11 crc kubenswrapper[4844]: I0126 12:44:11.618401 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:11Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:11 crc kubenswrapper[4844]: I0126 12:44:11.635673 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aecfc1fc-7b8c-42f4-9a7b-058a0acc9534\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64c5d1e4b1fda825b451c8acde0411694fbf7345723d6c3927905a882a79d62e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e33100e90b0d5a23baf7c27076a19dbca18d3e067be6b5a867e122fac37c1136\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64ae899d3a38090964f3d951fd185537ce4ef75f7512342da01f67767d00417b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9207454bb697abf7b64a37e402b50734ae419605cd0938b514ac4f2a74561ce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b40515c0e8056b01e4d48e259ece9389fbb5db6c0d403f2846f28948ce78b0b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64a2d4054650382dca2f56688356aec5fa475b49958e86dd4597a64f77a82f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64a2d4054650382dca2f56688356aec5fa475b49958e86dd4597a64f77a82f81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\
\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:11Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:11 crc kubenswrapper[4844]: I0126 12:44:11.653754 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ea4f5b3-1307-4259-8ee3-1de62370d8ef\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://943771df5941e9c5a48c2d8ee946cc9ac1ee7b00855dfe531fbce9f76450dd36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c79791a1a5ceea7564b6466722f4fb48a6729184724efc0ca2498896def357a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e62359ee71a87d75c73a16de09de5ea48d09e4cf2041f434f38f97002a5f8eee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578b
c18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b1fb5af67b83d9ecc627cbfcc14d011b4b11a5c4652a35c1bd1df6f03ab71f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:11Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:11 crc kubenswrapper[4844]: I0126 12:44:11.665225 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:11 crc kubenswrapper[4844]: I0126 12:44:11.665464 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:11 crc kubenswrapper[4844]: I0126 12:44:11.665528 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:11 crc kubenswrapper[4844]: I0126 12:44:11.665659 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:11 crc kubenswrapper[4844]: I0126 12:44:11.665742 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:11Z","lastTransitionTime":"2026-01-26T12:44:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:11 crc kubenswrapper[4844]: I0126 12:44:11.669455 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:11Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:11 crc kubenswrapper[4844]: I0126 12:44:11.685082 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://708e5cab2c0d22cab1f1e5c02d0f8afbdc0fcdccc9a56b663f365bfa4c7f8709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f61077567553b104f052a133a6227c3a887430e971721e7998259e7e7ea7edf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:11Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:11 crc kubenswrapper[4844]: I0126 12:44:11.697718 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:11Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:11 crc kubenswrapper[4844]: I0126 12:44:11.710114 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c6cdfcbb27c9a8c515f38e43d4813f25d2d3781a322e424e09ab047259bd0e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:11Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:11 crc kubenswrapper[4844]: I0126 12:44:11.721941 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:11Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:11 crc kubenswrapper[4844]: I0126 12:44:11.733584 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c6cdfcbb27c9a8c515f38e43d4813f25d2d3781a322e424e09ab047259bd0e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:11Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:11 crc kubenswrapper[4844]: I0126 12:44:11.747869 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zb9kx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"467433a4-64be-4a14-beb2-657370e9865f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a01598dd996bd69b470e2e4833ea4231bc77f0305a3a7fec3f70b8e2b8f01cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v76sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zb9kx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:11Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:11 crc kubenswrapper[4844]: I0126 12:44:11.761251 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3602fc7-397b-4d73-ab0c-45acc047397b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25282e97c34daf17d2fdac6ba7c074c611ec8df85288f5753bf98a3c1add5afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29xcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c964e0f9d13a855d738b028f8a2bed32fb23f4a05f0f0222a7d24e3222f44b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29xcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-j7r9j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:11Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:11 crc kubenswrapper[4844]: I0126 12:44:11.770559 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:11 crc kubenswrapper[4844]: I0126 12:44:11.770641 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:11 crc kubenswrapper[4844]: I0126 12:44:11.770656 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:11 crc kubenswrapper[4844]: I0126 12:44:11.770672 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:11 crc kubenswrapper[4844]: I0126 12:44:11.770685 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:11Z","lastTransitionTime":"2026-01-26T12:44:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:11 crc kubenswrapper[4844]: I0126 12:44:11.778989 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f6ttt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0ad2def-b040-48db-be8a-19f66df2c0f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://489205a3983f27152122db352c7a35ccdb12f14bb7b4c2bf3642f5c99699d745\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://489205a3983f27152122db352c7a35ccdb12f14bb7b4c2bf3642f5c99699d745\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f6ttt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:11Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:11 crc 
kubenswrapper[4844]: I0126 12:44:11.796726 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"348a2956-fe61-43b9-858f-ab9c97a2985b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"Po
dInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\
\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://370a02984d12e09b879a5f3af8d9fe26b40d438d558e51c2d352676f29198a02\\\",\\\"image\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://370a02984d12e09b879a5f3af8d9fe26b40d438d558e51c2d352676f29198a02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rlvx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:11Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:11 crc kubenswrapper[4844]: I0126 12:44:11.815746 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2c6313f-dd44-4db9-a53a-63c8a42efe6c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b82658e4e6ddbf4e3a16f9d569c5cef6a683d401d79b7fa7f55aad8c70e8254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66ffcb99cad7e5aff2bbe4bdf3c0a2365dd91fb40b2382e0
5ab1f422c6ea2b26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f49e7dd36dc88342bc2e3bc43fc5770fb43572151945cba3867f9ffd7fa451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b51b34a870f5c4045729aa5ee5d5fb85ebd58fd55a25c5b6effe0ea75ea8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9118772906fa09f9868a34942401b399a842e6718caa3d82b78b2974e1011cca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa0e17b53055bd229c52c53b69e491e0789569e73004101378a34609888bbbc4\\\",\\\"image\\\":\\\"quay.io/openshift-releas
e-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa0e17b53055bd229c52c53b69e491e0789569e73004101378a34609888bbbc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a99b81e3524dfff792c3e9a274f2460edfe7c29fb27c556fa1c7e427178ccd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a99b81e3524dfff792c3e9a274f2460edfe7c29fb27c556fa1c7e427178ccd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1cd347949870be709220badbf116b4ed8ff9db3962a03361e42f5ff1c61eb5ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cd347949870be709220badbf116b4ed8ff9db3962a03361e42f5ff1c61eb5ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:11Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:11 crc kubenswrapper[4844]: I0126 12:44:11.830573 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd60ce73390532e974f85d610708534f2ab6bcc38e93c54516cb783aca1bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:11Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:11 crc kubenswrapper[4844]: I0126 12:44:11.843584 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-94bpf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"14600b66-6352-4f5e-9c09-eb2548503555\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1eb448e6788cdb87cd78364d442623294abe1263dbedbf9d15e8ca77c06ced2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-456bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-94bpf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:11Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:11 crc kubenswrapper[4844]: I0126 12:44:11.862528 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aecfc1fc-7b8c-42f4-9a7b-058a0acc9534\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64c5d1e4b1fda825b451c8acde0411694fbf7345723d6c3927905a882a79d62e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e33100e90b0d5a23baf7c27076a19dbca18d3e067be6b5a867e122fac37c1136\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64ae899d3a38090964f3d951fd185537ce4ef75f7512342da01f67767d00417b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9207454bb697abf7b64a37e402b50734ae419605cd0938b514ac4f2a74561ce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b40515c0e8056b01e4d48e259ece9389fbb5db6c0d403f2846f28948ce78b0b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64a2d4054650382dca2f56688356aec5fa475b49958e86dd4597a64f77a82f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64a2d4054650382dca2f56688356aec5fa475b49958e86dd4597a64f77a82f81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:11Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:11 crc kubenswrapper[4844]: I0126 12:44:11.873402 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:11 crc kubenswrapper[4844]: I0126 12:44:11.873436 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:11 crc kubenswrapper[4844]: I0126 12:44:11.873447 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:11 crc kubenswrapper[4844]: I0126 12:44:11.873467 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 
12:44:11 crc kubenswrapper[4844]: I0126 12:44:11.873479 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:11Z","lastTransitionTime":"2026-01-26T12:44:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:11 crc kubenswrapper[4844]: I0126 12:44:11.878611 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ea4f5b3-1307-4259-8ee3-1de62370d8ef\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://943771df5941e9c5a48c2d8ee946cc9ac1ee7b00855dfe531fbce9f76450dd36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c79791a1a5ceea7564b6466722f4fb48a6729184724efc0ca2498896def357a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e62359ee71a87d75c73a16de09de5ea48d09e4cf2041f434f38f97002a5f8eee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-c
ontroller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b1fb5af67b83d9ecc627cbfcc14d011b4b11a5c4652a35c1bd1df6f03ab71f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:11Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:11 crc kubenswrapper[4844]: I0126 12:44:11.892282 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:11Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:11 crc kubenswrapper[4844]: I0126 12:44:11.916808 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://708e5cab2c0d22cab1f1e5c02d0f8afbdc0fcdccc9a56b663f365bfa4c7f8709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f61077567553b104f052a133a6227c3a887430e971721e7998259e7e7ea7edf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:11Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:11 crc kubenswrapper[4844]: I0126 12:44:11.938754 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:11Z is after 2025-08-24T17:21:41Z"
Jan 26 12:44:11 crc kubenswrapper[4844]: I0126 12:44:11.975924 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 12:44:11 crc kubenswrapper[4844]: I0126 12:44:11.975972 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 12:44:11 crc kubenswrapper[4844]: I0126 12:44:11.975984 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 12:44:11 crc kubenswrapper[4844]: I0126 12:44:11.976000 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 12:44:11 crc kubenswrapper[4844]: I0126 12:44:11.976019 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:11Z","lastTransitionTime":"2026-01-26T12:44:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 12:44:12 crc kubenswrapper[4844]: I0126 12:44:12.078012 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 12:44:12 crc kubenswrapper[4844]: I0126 12:44:12.078069 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 12:44:12 crc kubenswrapper[4844]: I0126 12:44:12.078080 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 12:44:12 crc kubenswrapper[4844]: I0126 12:44:12.078098 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 12:44:12 crc kubenswrapper[4844]: I0126 12:44:12.078110 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:12Z","lastTransitionTime":"2026-01-26T12:44:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 12:44:12 crc kubenswrapper[4844]: I0126 12:44:12.180748 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 12:44:12 crc kubenswrapper[4844]: I0126 12:44:12.180810 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 12:44:12 crc kubenswrapper[4844]: I0126 12:44:12.180827 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 12:44:12 crc kubenswrapper[4844]: I0126 12:44:12.180849 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 12:44:12 crc kubenswrapper[4844]: I0126 12:44:12.180867 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:12Z","lastTransitionTime":"2026-01-26T12:44:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 12:44:12 crc kubenswrapper[4844]: I0126 12:44:12.284083 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 12:44:12 crc kubenswrapper[4844]: I0126 12:44:12.284076 4844 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 09:51:23.644649544 +0000 UTC
Jan 26 12:44:12 crc kubenswrapper[4844]: I0126 12:44:12.284131 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 12:44:12 crc kubenswrapper[4844]: I0126 12:44:12.284239 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 12:44:12 crc kubenswrapper[4844]: I0126 12:44:12.284309 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 12:44:12 crc kubenswrapper[4844]: I0126 12:44:12.284779 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:12Z","lastTransitionTime":"2026-01-26T12:44:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 12:44:12 crc kubenswrapper[4844]: I0126 12:44:12.387204 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 12:44:12 crc kubenswrapper[4844]: I0126 12:44:12.387256 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 12:44:12 crc kubenswrapper[4844]: I0126 12:44:12.387268 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 12:44:12 crc kubenswrapper[4844]: I0126 12:44:12.387290 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 12:44:12 crc kubenswrapper[4844]: I0126 12:44:12.387309 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:12Z","lastTransitionTime":"2026-01-26T12:44:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 12:44:12 crc kubenswrapper[4844]: I0126 12:44:12.469924 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" event={"ID":"348a2956-fe61-43b9-858f-ab9c97a2985b","Type":"ContainerStarted","Data":"de371086349f06d5762291b0be4ba366d69376f96d55c2c781c414d9acaff745"}
Jan 26 12:44:12 crc kubenswrapper[4844]: I0126 12:44:12.470036 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" event={"ID":"348a2956-fe61-43b9-858f-ab9c97a2985b","Type":"ContainerStarted","Data":"d2cfd5586a3073d55abd3f41fb92679c4f86ea8caeea2d9a8ca42515277750a9"}
Jan 26 12:44:12 crc kubenswrapper[4844]: I0126 12:44:12.470067 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" event={"ID":"348a2956-fe61-43b9-858f-ab9c97a2985b","Type":"ContainerStarted","Data":"64a97ef8aa10641683ab96943f4c1a54452ee34b28c854dd7902fc075a70e7f2"}
Jan 26 12:44:12 crc kubenswrapper[4844]: I0126 12:44:12.470092 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" event={"ID":"348a2956-fe61-43b9-858f-ab9c97a2985b","Type":"ContainerStarted","Data":"7d7af767e8f476993661bb886e421e7dd2fbd11c07865d6e56a8595425fe4265"}
Jan 26 12:44:12 crc kubenswrapper[4844]: I0126 12:44:12.472437 4844 generic.go:334] "Generic (PLEG): container finished" podID="e0ad2def-b040-48db-be8a-19f66df2c0f2" containerID="cc3c4c5df7dc10f646a7fdcf0d70daccac7a5bfc3ec9772a4b6d9aa4305dcc5c" exitCode=0
Jan 26 12:44:12 crc kubenswrapper[4844]: I0126 12:44:12.472657 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-f6ttt" event={"ID":"e0ad2def-b040-48db-be8a-19f66df2c0f2","Type":"ContainerDied","Data":"cc3c4c5df7dc10f646a7fdcf0d70daccac7a5bfc3ec9772a4b6d9aa4305dcc5c"}
Jan 26 12:44:12 crc kubenswrapper[4844]: I0126 12:44:12.490185 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 12:44:12 crc kubenswrapper[4844]: I0126 12:44:12.490249 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 12:44:12 crc kubenswrapper[4844]: I0126 12:44:12.490267 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 12:44:12 crc kubenswrapper[4844]: I0126 12:44:12.490288 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 12:44:12 crc kubenswrapper[4844]: I0126 12:44:12.490304 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:12Z","lastTransitionTime":"2026-01-26T12:44:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 12:44:12 crc kubenswrapper[4844]: I0126 12:44:12.494848 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:12Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:12 crc kubenswrapper[4844]: I0126 12:44:12.508852 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c6cdfcbb27c9a8c515f38e43d4813f25d2d3781a322e424e09ab047259bd0e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:12Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:12 crc kubenswrapper[4844]: I0126 12:44:12.522625 4844 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3602fc7-397b-4d73-ab0c-45acc047397b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25282e97c34daf17d2fdac6ba7c074c611ec8df85288f5753bf98a3c1add5afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29xcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c964e0f9d13a855d738b028f8a2bed32fb23f4a05f0f0222a7d24e3222f44b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29xcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j7r9j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:12Z is after 2025-08-24T17:21:41Z" Jan 26 
12:44:12 crc kubenswrapper[4844]: I0126 12:44:12.539377 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f6ttt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0ad2def-b040-48db-be8a-19f66df2c0f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://489205a3983f27152122db352c7a35ccdb12f14bb7b4c2bf3642f5c99699d745\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://489205a3983f27152122db352c7a35ccdb12f14bb7b4c2bf3642f5c99699d745\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-
api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc3c4c5df7dc10f646a7fdcf0d70daccac7a5bfc3ec9772a4b6d9aa4305dcc5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc3c4c5df7dc10f646a7fdcf0d70daccac7a5bfc3ec9772a4b6d9aa4305dcc5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":
0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f6ttt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:12Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:12 crc kubenswrapper[4844]: I0126 12:44:12.558979 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"348a2956-fe61-43b9-858f-ab9c97a2985b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://370a02984d12e09b879a5f3af8d9fe26b40d438d558e51c2d352676f29198a02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://370a02984d12e09b879a5f3af8d9fe26b40d438d558e51c2d352676f29198a02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rlvx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:12Z 
is after 2025-08-24T17:21:41Z" Jan 26 12:44:12 crc kubenswrapper[4844]: I0126 12:44:12.575227 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zb9kx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"467433a4-64be-4a14-beb2-657370e9865f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a01598dd996bd69b470e2e4833ea4231bc77f0305a3a7fec3f70b8e2b8f01cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v76sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\
",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zb9kx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:12Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:12 crc kubenswrapper[4844]: I0126 12:44:12.585222 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-94bpf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14600b66-6352-4f5e-9c09-eb2548503555\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1eb448e6788cdb87cd78364d442623294abe1263dbedbf9d15e8ca77c06ced2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-456bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-94bpf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:12Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:12 crc kubenswrapper[4844]: I0126 12:44:12.595099 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:12 crc kubenswrapper[4844]: I0126 12:44:12.595146 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:12 crc kubenswrapper[4844]: I0126 12:44:12.595157 4844 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:12 crc kubenswrapper[4844]: I0126 12:44:12.595183 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:12 crc kubenswrapper[4844]: I0126 12:44:12.595196 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:12Z","lastTransitionTime":"2026-01-26T12:44:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:12 crc kubenswrapper[4844]: I0126 12:44:12.602560 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2c6313f-dd44-4db9-a53a-63c8a42efe6c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b82658e4e6ddbf4e3a16f9d569c5cef6a683d401d79b7fa7f55aad8c70e8254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66ffcb99cad7e5aff2bbe4bdf3c0a2365dd91fb40b2382e05ab1f422c6ea2b26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\
\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f49e7dd36dc88342bc2e3bc43fc5770fb43572151945cba3867f9ffd7fa451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b51b34a870f5c4045729aa5ee5d5fb85ebd58fd55a25c5b6effe0ea75ea8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9118772906fa09f9868a34942401b399a842e6718caa3d82b78b2974e1011cca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa0e17b53055bd229c52c53b69e491e0789569e73004101378a34609888bbbc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa0e17b53055bd229c52c53b69e491e0789569e73004101378a34609888bbbc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a99b81e3524dfff792c3e9a274f2460edfe7c29fb27c556fa1c7e427178ccd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a99b81e3524dfff792c3e9a274f2460edfe7c29fb27c556fa1c7e427178ccd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1cd347949870be709220badbf116b4ed8ff9db3962a03361e42f5ff1c61eb5ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cd347949870be709220badbf116b4ed8ff9db3962a03361e42f5ff1c61eb5ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:12Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:12 crc kubenswrapper[4844]: I0126 12:44:12.614861 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd60ce73390532e974f85d610708534f2ab6bcc38e93c54516cb783aca1bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:12Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:12 crc kubenswrapper[4844]: I0126 12:44:12.627118 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:12Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:12 crc kubenswrapper[4844]: I0126 12:44:12.639031 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://708e5cab2c0d22cab1f1e5c02d0f8afbdc0fcdccc9a56b663f365bfa4c7f8709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f61077567553b104f052a133a6227c3a887430e971721e7998259e7e7ea7edf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{
\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:12Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:12 crc kubenswrapper[4844]: I0126 12:44:12.650173 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:12Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:12 crc kubenswrapper[4844]: I0126 12:44:12.663578 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aecfc1fc-7b8c-42f4-9a7b-058a0acc9534\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64c5d1e4b1fda825b451c8acde0411694fbf7345723d6c3927905a882a79d62e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e33100e90b0d5a23baf7c27076a19dbca18d3e067be6b5a867e122fac37c1136\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64ae899d3a38090964f3d951fd185537ce4ef75f7512342da01f67767d00417b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9207454bb697abf7b64a37e402b50734ae419605cd0938b514ac4f2a74561ce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b40515c0e8056b01e4d48e259ece9389fbb5db6c0d403f2846f28948ce78b0b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64a2d4054650382dca2f56688356aec5fa475b49958e86dd4597a64f77a82f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64a2d4054650382dca2f56688356aec5fa475b49958e86dd4597a64f77a82f81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\
\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:12Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:12 crc kubenswrapper[4844]: I0126 12:44:12.676203 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ea4f5b3-1307-4259-8ee3-1de62370d8ef\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://943771df5941e9c5a48c2d8ee946cc9ac1ee7b00855dfe531fbce9f76450dd36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c79791a1a5ceea7564b6466722f4fb48a6729184724efc0ca2498896def357a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e62359ee71a87d75c73a16de09de5ea48d09e4cf2041f434f38f97002a5f8eee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578b
c18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b1fb5af67b83d9ecc627cbfcc14d011b4b11a5c4652a35c1bd1df6f03ab71f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:12Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:12 crc kubenswrapper[4844]: I0126 12:44:12.698187 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:12 crc kubenswrapper[4844]: I0126 12:44:12.698228 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:12 crc kubenswrapper[4844]: I0126 12:44:12.698277 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:12 crc kubenswrapper[4844]: I0126 12:44:12.698290 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:12 crc kubenswrapper[4844]: I0126 12:44:12.698299 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:12Z","lastTransitionTime":"2026-01-26T12:44:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:12 crc kubenswrapper[4844]: I0126 12:44:12.800115 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:12 crc kubenswrapper[4844]: I0126 12:44:12.800155 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:12 crc kubenswrapper[4844]: I0126 12:44:12.800191 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:12 crc kubenswrapper[4844]: I0126 12:44:12.800206 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:12 crc kubenswrapper[4844]: I0126 12:44:12.800219 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:12Z","lastTransitionTime":"2026-01-26T12:44:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:12 crc kubenswrapper[4844]: I0126 12:44:12.902784 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:12 crc kubenswrapper[4844]: I0126 12:44:12.902824 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:12 crc kubenswrapper[4844]: I0126 12:44:12.902832 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:12 crc kubenswrapper[4844]: I0126 12:44:12.902845 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:12 crc kubenswrapper[4844]: I0126 12:44:12.902855 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:12Z","lastTransitionTime":"2026-01-26T12:44:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:13 crc kubenswrapper[4844]: I0126 12:44:13.015138 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:13 crc kubenswrapper[4844]: I0126 12:44:13.015180 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:13 crc kubenswrapper[4844]: I0126 12:44:13.015192 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:13 crc kubenswrapper[4844]: I0126 12:44:13.015209 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:13 crc kubenswrapper[4844]: I0126 12:44:13.015223 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:13Z","lastTransitionTime":"2026-01-26T12:44:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:13 crc kubenswrapper[4844]: I0126 12:44:13.106474 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-7wd9k"] Jan 26 12:44:13 crc kubenswrapper[4844]: I0126 12:44:13.107365 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-7wd9k" Jan 26 12:44:13 crc kubenswrapper[4844]: I0126 12:44:13.110290 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 26 12:44:13 crc kubenswrapper[4844]: I0126 12:44:13.110419 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 26 12:44:13 crc kubenswrapper[4844]: I0126 12:44:13.110657 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 26 12:44:13 crc kubenswrapper[4844]: I0126 12:44:13.111395 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 26 12:44:13 crc kubenswrapper[4844]: I0126 12:44:13.117445 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:13 crc kubenswrapper[4844]: I0126 12:44:13.117481 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:13 crc kubenswrapper[4844]: I0126 12:44:13.117493 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:13 crc kubenswrapper[4844]: I0126 12:44:13.117509 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:13 crc kubenswrapper[4844]: I0126 12:44:13.117521 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:13Z","lastTransitionTime":"2026-01-26T12:44:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:13 crc kubenswrapper[4844]: I0126 12:44:13.128858 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ea4f5b3-1307-4259-8ee3-1de62370d8ef\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://943771df5941e9c5a48c2d8ee946cc9ac1ee7b00855dfe531fbce9f76450dd36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c79791a1a5ceea7564b6466722f4fb48a6729184724efc0ca2498896def357a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e62359ee71a87d75c73a16de09de5ea48d09e4cf2041f434f38f97002a5f8eee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b1fb5af67b83d9ecc627cbfcc14d011b4b11a5c4652a35c1bd1df6f03ab71f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:13Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:13 crc kubenswrapper[4844]: I0126 12:44:13.143249 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:13Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:13 crc kubenswrapper[4844]: I0126 12:44:13.162967 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://708e5cab2c0d22cab1f1e5c02d0f8afbdc0fcdccc9a56b663f365bfa4c7f8709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f61077567553b104f052a133a6227c3a887430e971721e7998259e7e7ea7edf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:13Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:13 crc kubenswrapper[4844]: I0126 12:44:13.183422 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:13Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:13 crc kubenswrapper[4844]: I0126 12:44:13.187295 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/046bb01b-89ef-40e9-bbbd-83b5f2d2cf96-serviceca\") pod \"node-ca-7wd9k\" (UID: \"046bb01b-89ef-40e9-bbbd-83b5f2d2cf96\") " pod="openshift-image-registry/node-ca-7wd9k" Jan 26 12:44:13 crc kubenswrapper[4844]: I0126 12:44:13.187330 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/046bb01b-89ef-40e9-bbbd-83b5f2d2cf96-host\") pod \"node-ca-7wd9k\" (UID: \"046bb01b-89ef-40e9-bbbd-83b5f2d2cf96\") " pod="openshift-image-registry/node-ca-7wd9k" Jan 26 12:44:13 crc kubenswrapper[4844]: I0126 12:44:13.187349 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8h4zk\" (UniqueName: \"kubernetes.io/projected/046bb01b-89ef-40e9-bbbd-83b5f2d2cf96-kube-api-access-8h4zk\") pod \"node-ca-7wd9k\" (UID: \"046bb01b-89ef-40e9-bbbd-83b5f2d2cf96\") " pod="openshift-image-registry/node-ca-7wd9k" Jan 26 12:44:13 crc kubenswrapper[4844]: I0126 12:44:13.195225 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aecfc1fc-7b8c-42f4-9a7b-058a0acc9534\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64c5d1e4b1fda825b451c8acde0411694fbf7345723d6c3927905a882a79d62e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e33100e90b0d5a23baf7c27076a19dbca18d3e067be6b5a867e122fac37c1136\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64ae899d3a38090964f3d951fd185537ce4ef75f7512342da01f67767d00417b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9207454bb697abf7b64a37e402b50734ae419605cd0938b514ac4f2a74561ce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b40515c0e8056b01e4d48e259ece9389fbb5db6c0d403f2846f28948ce78b0b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64a2d4054650382dca2f56688356aec5fa475b49958e86dd4597a64f77a82f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64a2d4054650382dca2f56688356aec5fa475b49958e86dd4597a64f77a82f81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:13Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:13 crc kubenswrapper[4844]: I0126 12:44:13.200634 4844 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 26 12:44:13 crc kubenswrapper[4844]: W0126 12:44:13.202500 4844 reflector.go:484] object-"openshift-image-registry"/"image-registry-certificates": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-image-registry"/"image-registry-certificates": Unexpected watch close - watch lasted less than a second and no items received Jan 26 12:44:13 crc kubenswrapper[4844]: W0126 12:44:13.202549 4844 reflector.go:484] object-"openshift-image-registry"/"openshift-service-ca.crt": watch of 
*v1.ConfigMap ended with: very short watch: object-"openshift-image-registry"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 26 12:44:13 crc kubenswrapper[4844]: W0126 12:44:13.202576 4844 reflector.go:484] object-"openshift-image-registry"/"node-ca-dockercfg-4777p": watch of *v1.Secret ended with: very short watch: object-"openshift-image-registry"/"node-ca-dockercfg-4777p": Unexpected watch close - watch lasted less than a second and no items received Jan 26 12:44:13 crc kubenswrapper[4844]: W0126 12:44:13.202806 4844 reflector.go:484] object-"openshift-image-registry"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-image-registry"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 26 12:44:13 crc kubenswrapper[4844]: I0126 12:44:13.203555 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c6cdfcbb27c9a8c515f38e43d4813f25d2d3781a322e424e09ab047259bd0e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/iptables-alerter-4ln5h/status\": read tcp 38.102.83.142:45722->38.102.83.142:6443: use of closed network connection" Jan 26 12:44:13 crc kubenswrapper[4844]: I0126 12:44:13.219288 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:13 crc kubenswrapper[4844]: I0126 12:44:13.219345 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:13 crc kubenswrapper[4844]: I0126 12:44:13.219357 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" 
Jan 26 12:44:13 crc kubenswrapper[4844]: I0126 12:44:13.219374 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:13 crc kubenswrapper[4844]: I0126 12:44:13.219384 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:13Z","lastTransitionTime":"2026-01-26T12:44:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:13 crc kubenswrapper[4844]: I0126 12:44:13.233456 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:13Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:13 crc kubenswrapper[4844]: I0126 12:44:13.248391 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zb9kx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"467433a4-64be-4a14-beb2-657370e9865f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a01598dd996bd69b470e2e4833ea4231bc77f0305a3a7fec3f70b8e2b8f01cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v76sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zb9kx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:13Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:13 crc kubenswrapper[4844]: I0126 12:44:13.261805 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3602fc7-397b-4d73-ab0c-45acc047397b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25282e97c34daf17d2fdac6ba7c074c611ec8df85288f5753bf98a3c1add5afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":
\\\"kube-api-access-29xcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c964e0f9d13a855d738b028f8a2bed32fb23f4a05f0f0222a7d24e3222f44b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29xcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j7r9j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:13Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:13 crc kubenswrapper[4844]: I0126 12:44:13.282651 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f6ttt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0ad2def-b040-48db-be8a-19f66df2c0f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://489205a3983f27152122db352c7a35ccdb12f14bb7b4c2bf3642f5c99699d745\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://489205a3983f27152122db352c7a35ccdb12f14bb7b4c2bf3642f5c99699d745\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc3c4c5df7dc10f646a7fdcf0d70daccac7a5bfc3ec9772a4b6d9aa4305dcc5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc3c4c5df7dc10f646a7fdcf0d70daccac7a5bfc3ec9772a4b6d9aa4305dcc5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-
26T12:44:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f6ttt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:13Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:13 crc kubenswrapper[4844]: I0126 12:44:13.284517 4844 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 09:15:15.232876084 +0000 UTC Jan 26 12:44:13 crc kubenswrapper[4844]: I0126 12:44:13.287786 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/046bb01b-89ef-40e9-bbbd-83b5f2d2cf96-serviceca\") pod \"node-ca-7wd9k\" (UID: \"046bb01b-89ef-40e9-bbbd-83b5f2d2cf96\") " pod="openshift-image-registry/node-ca-7wd9k" Jan 26 12:44:13 crc kubenswrapper[4844]: I0126 12:44:13.287821 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/046bb01b-89ef-40e9-bbbd-83b5f2d2cf96-host\") pod \"node-ca-7wd9k\" (UID: \"046bb01b-89ef-40e9-bbbd-83b5f2d2cf96\") " pod="openshift-image-registry/node-ca-7wd9k" Jan 26 12:44:13 crc kubenswrapper[4844]: I0126 12:44:13.287844 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8h4zk\" (UniqueName: \"kubernetes.io/projected/046bb01b-89ef-40e9-bbbd-83b5f2d2cf96-kube-api-access-8h4zk\") pod \"node-ca-7wd9k\" (UID: \"046bb01b-89ef-40e9-bbbd-83b5f2d2cf96\") " pod="openshift-image-registry/node-ca-7wd9k" Jan 26 12:44:13 crc kubenswrapper[4844]: I0126 12:44:13.288108 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/046bb01b-89ef-40e9-bbbd-83b5f2d2cf96-host\") pod \"node-ca-7wd9k\" (UID: \"046bb01b-89ef-40e9-bbbd-83b5f2d2cf96\") " pod="openshift-image-registry/node-ca-7wd9k" Jan 26 12:44:13 crc kubenswrapper[4844]: I0126 12:44:13.288905 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/046bb01b-89ef-40e9-bbbd-83b5f2d2cf96-serviceca\") pod \"node-ca-7wd9k\" (UID: \"046bb01b-89ef-40e9-bbbd-83b5f2d2cf96\") " pod="openshift-image-registry/node-ca-7wd9k" Jan 26 12:44:13 crc kubenswrapper[4844]: I0126 12:44:13.304069 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"348a2956-fe61-43b9-858f-ab9c97a2985b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://370a02984d12e09b879a5f3af8d9fe26b40d438d558e51c2d352676f29198a02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://370a02984d12e09b879a5f3af8d9fe26b40d438d558e51c2d352676f29198a02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rlvx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:13Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:13 crc kubenswrapper[4844]: I0126 12:44:13.307704 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8h4zk\" (UniqueName: \"kubernetes.io/projected/046bb01b-89ef-40e9-bbbd-83b5f2d2cf96-kube-api-access-8h4zk\") pod \"node-ca-7wd9k\" (UID: \"046bb01b-89ef-40e9-bbbd-83b5f2d2cf96\") " pod="openshift-image-registry/node-ca-7wd9k" Jan 26 12:44:13 crc kubenswrapper[4844]: I0126 12:44:13.312796 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:44:13 crc kubenswrapper[4844]: E0126 12:44:13.312898 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 12:44:13 crc kubenswrapper[4844]: I0126 12:44:13.313153 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:44:13 crc kubenswrapper[4844]: E0126 12:44:13.313196 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 12:44:13 crc kubenswrapper[4844]: I0126 12:44:13.313233 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:44:13 crc kubenswrapper[4844]: E0126 12:44:13.313282 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 12:44:13 crc kubenswrapper[4844]: I0126 12:44:13.315983 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7wd9k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"046bb01b-89ef-40e9-bbbd-83b5f2d2cf96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:13Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:13Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h4zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7wd9k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:13Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:13 crc kubenswrapper[4844]: I0126 12:44:13.321586 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:13 crc kubenswrapper[4844]: I0126 12:44:13.321797 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:13 crc kubenswrapper[4844]: I0126 12:44:13.321923 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:13 crc kubenswrapper[4844]: I0126 12:44:13.322010 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:13 crc kubenswrapper[4844]: I0126 
12:44:13.322084 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:13Z","lastTransitionTime":"2026-01-26T12:44:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:13 crc kubenswrapper[4844]: I0126 12:44:13.329355 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd60ce73390532e974f85d610708534f2ab6bcc38e93c54516cb783aca1bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:13Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:13 crc kubenswrapper[4844]: I0126 12:44:13.345662 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-94bpf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"14600b66-6352-4f5e-9c09-eb2548503555\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1eb448e6788cdb87cd78364d442623294abe1263dbedbf9d15e8ca77c06ced2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-456bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-94bpf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:13Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:13 crc kubenswrapper[4844]: I0126 12:44:13.369422 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2c6313f-dd44-4db9-a53a-63c8a42efe6c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b82658e4e6ddbf4e3a16f9d569c5cef6a683d401d79b7fa7f55aad8c70e8254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66ffcb99cad7e5aff2bbe4bdf3c0a2365dd91fb40b2382e05ab1f422c6ea2b26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f49e7dd36dc88342bc2e3bc43fc5770fb43572151945cba3867f9ffd7fa451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b51b34a870f5c4045729aa5ee5d5fb85ebd58
fd55a25c5b6effe0ea75ea8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9118772906fa09f9868a34942401b399a842e6718caa3d82b78b2974e1011cca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa0e17b53055bd229c52c53b69e491e0789569e73004101378a34609888bbbc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa0e17b53055bd229c52c53b69e491e0789569e73004101378a34609888bbbc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a99b81e3524dfff792c3e9a274f2460edfe7c29fb27c556fa1c7e427178ccd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a99b81e3524dfff792c3e9a274f2460edfe7c29fb27c556fa1c7e427178ccd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1cd347949870be709220badbf116b4ed8ff9db3962a03361e42f5ff1c61eb5ea\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cd347949870be709220badbf116b4ed8ff9db3962a03361e42f5ff1c61eb5ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:13Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:13 crc kubenswrapper[4844]: I0126 12:44:13.383353 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:13Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:13 crc kubenswrapper[4844]: I0126 12:44:13.396073 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c6cdfcbb27c9a8c515f38e43d4813f25d2d3781a322e424e09ab047259bd0e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:13Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:13 crc kubenswrapper[4844]: I0126 12:44:13.407353 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zb9kx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"467433a4-64be-4a14-beb2-657370e9865f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a01598dd996bd69b470e2e4833ea4231bc77f0305a3a7fec3f70b8e2b8f01cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v76sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zb9kx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:13Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:13 crc kubenswrapper[4844]: I0126 12:44:13.419072 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3602fc7-397b-4d73-ab0c-45acc047397b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25282e97c34daf17d2fdac6ba7c074c611ec8df85288f5753bf98a3c1add5afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29xcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c964e0f9d13a855d738b028f8a2bed32fb23f4a05f0f0222a7d24e3222f44b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29xcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-j7r9j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:13Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:13 crc kubenswrapper[4844]: I0126 12:44:13.423788 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-7wd9k" Jan 26 12:44:13 crc kubenswrapper[4844]: I0126 12:44:13.424113 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:13 crc kubenswrapper[4844]: I0126 12:44:13.424352 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:13 crc kubenswrapper[4844]: I0126 12:44:13.424365 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:13 crc kubenswrapper[4844]: I0126 12:44:13.424381 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:13 crc kubenswrapper[4844]: I0126 12:44:13.424394 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:13Z","lastTransitionTime":"2026-01-26T12:44:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:13 crc kubenswrapper[4844]: I0126 12:44:13.436562 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f6ttt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0ad2def-b040-48db-be8a-19f66df2c0f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://489205a3983f27152122db352c7a35ccdb12f14bb7b4c2bf3642f5c99699d745\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://489205a3983f27152122db352c7a35ccdb12f14bb7b4c2bf3642f5c99699d745\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc3c4c5df7dc10f646a7fdcf0d70daccac7a5bfc3ec9772a4b6d9aa4305dcc5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc3c4c5df7dc10f646a7fdcf0d70daccac7a5bfc3ec9772a4b6d9aa4305dcc5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-
26T12:44:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f6ttt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:13Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:13 crc kubenswrapper[4844]: I0126 12:44:13.456518 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"348a2956-fe61-43b9-858f-ab9c97a2985b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://370a02984d12e09b879a5f3af8d9fe26b40d438d558e51c2d352676f29198a02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://370a02984d12e09b879a5f3af8d9fe26b40d438d558e51c2d352676f29198a02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rlvx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:13Z 
is after 2025-08-24T17:21:41Z" Jan 26 12:44:13 crc kubenswrapper[4844]: I0126 12:44:13.467533 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7wd9k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"046bb01b-89ef-40e9-bbbd-83b5f2d2cf96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:13Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:13Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h4zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7wd9k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:13Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:13 crc kubenswrapper[4844]: I0126 12:44:13.479413 4844 generic.go:334] "Generic (PLEG): container finished" podID="e0ad2def-b040-48db-be8a-19f66df2c0f2" containerID="74838e6328e419da6cc94d4f2781197e94cbf74aaf06d88c6d0e39eda967fede" exitCode=0 Jan 26 12:44:13 crc kubenswrapper[4844]: I0126 12:44:13.479491 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-f6ttt" event={"ID":"e0ad2def-b040-48db-be8a-19f66df2c0f2","Type":"ContainerDied","Data":"74838e6328e419da6cc94d4f2781197e94cbf74aaf06d88c6d0e39eda967fede"} Jan 26 12:44:13 crc kubenswrapper[4844]: I0126 12:44:13.480302 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-7wd9k" 
event={"ID":"046bb01b-89ef-40e9-bbbd-83b5f2d2cf96","Type":"ContainerStarted","Data":"4322952eca8d24bf48519a5368972793380ab8e45068f814cb85564a0c017515"} Jan 26 12:44:13 crc kubenswrapper[4844]: I0126 12:44:13.489547 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2c6313f-dd44-4db9-a53a-63c8a42efe6c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b82658e4e6ddbf4e3a16f9d569c5cef6a683d401d79b7fa7f55aad8c70e8254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66ffcb99cad7e5aff2bbe4bdf3c0a2365dd91fb40b2382e05ab1f422c6ea2b26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f49e7dd36dc88342bc2e3bc43fc5770fb43572151945cba3867f9ffd7fa451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"s
tate\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b51b34a870f5c4045729aa5ee5d5fb85ebd58fd55a25c5b6effe0ea75ea8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9118772906fa09f9868a34942401b399a842e6718caa3d82b78b2974e1011cca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa0e17b53055bd229c52c53b69e491e0789569e73004101378a34609888bbbc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa0e17b53055bd229c52c53b69e491e0789569e73004101378a34609888bbbc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a99b81e3524dfff792c3e9a274f2460edfe7c29fb27c556fa1c7e427178ccd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://96a99b81e3524dfff792c3e9a274f2460edfe7c29fb27c556fa1c7e427178ccd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1cd347949870be709220badbf116b4ed8ff9db3962a03361e42f5ff1c61eb5ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cd347949870be709220badbf116b4ed8ff9db3962a03361e42f5ff1c61eb5ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:13Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:13 crc kubenswrapper[4844]: I0126 12:44:13.503318 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd60ce73390532e974f85d610708534f2ab6bcc38e93c54516cb783aca1bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:13Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:13 crc kubenswrapper[4844]: I0126 12:44:13.515086 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-94bpf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"14600b66-6352-4f5e-9c09-eb2548503555\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1eb448e6788cdb87cd78364d442623294abe1263dbedbf9d15e8ca77c06ced2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-456bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-94bpf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:13Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:13 crc kubenswrapper[4844]: I0126 12:44:13.529348 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aecfc1fc-7b8c-42f4-9a7b-058a0acc9534\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64c5d1e4b1fda825b451c8acde0411694fbf7345723d6c3927905a882a79d62e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e33100e90b0d5a23baf7c27076a19dbca18d3e067be6b5a867e122fac37c1136\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64ae899d3a38090964f3d951fd185537ce4ef75f7512342da01f67767d00417b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9207454bb697abf7b64a37e402b50734ae419605cd0938b514ac4f2a74561ce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b40515c0e8056b01e4d48e259ece9389fbb5db6c0d403f2846f28948ce78b0b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64a2d4054650382dca2f56688356aec5fa475b49958e86dd4597a64f77a82f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64a2d4054650382dca2f56688356aec5fa475b49958e86dd4597a64f77a82f81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:13Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:13 crc kubenswrapper[4844]: I0126 12:44:13.529586 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:13 crc kubenswrapper[4844]: I0126 12:44:13.529640 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:13 crc kubenswrapper[4844]: I0126 12:44:13.529649 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:13 crc kubenswrapper[4844]: I0126 12:44:13.529665 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 
12:44:13 crc kubenswrapper[4844]: I0126 12:44:13.529677 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:13Z","lastTransitionTime":"2026-01-26T12:44:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:13 crc kubenswrapper[4844]: I0126 12:44:13.541785 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ea4f5b3-1307-4259-8ee3-1de62370d8ef\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://943771df5941e9c5a48c2d8ee946cc9ac1ee7b00855dfe531fbce9f76450dd36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c79791a1a5ceea7564b6466722f4fb48a6729184724efc0ca2498896def357a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e62359ee71a87d75c73a16de09de5ea48d09e4cf2041f434f38f97002a5f8eee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-c
ontroller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b1fb5af67b83d9ecc627cbfcc14d011b4b11a5c4652a35c1bd1df6f03ab71f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:13Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:13 crc kubenswrapper[4844]: I0126 12:44:13.554836 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:13Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:13 crc kubenswrapper[4844]: I0126 12:44:13.572182 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://708e5cab2c0d22cab1f1e5c02d0f8afbdc0fcdccc9a56b663f365bfa4c7f8709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f61077567553b104f052a133a6227c3a887430e971721e7998259e7e7ea7edf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:13Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:13 crc kubenswrapper[4844]: I0126 12:44:13.586288 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:13Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:13 crc kubenswrapper[4844]: I0126 12:44:13.599525 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zb9kx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"467433a4-64be-4a14-beb2-657370e9865f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a01598dd996bd69b470e2e4833ea4231bc77f0305a3a7fec3f70b8e2b8f01cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v76sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zb9kx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:13Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:13 crc kubenswrapper[4844]: I0126 12:44:13.611202 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3602fc7-397b-4d73-ab0c-45acc047397b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25282e97c34daf17d2fdac6ba7c074c611ec8df85288f5753bf98a3c1add5afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29xcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-
o://8c964e0f9d13a855d738b028f8a2bed32fb23f4a05f0f0222a7d24e3222f44b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29xcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j7r9j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:13Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:13 crc kubenswrapper[4844]: I0126 12:44:13.648508 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f6ttt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0ad2def-b040-48db-be8a-19f66df2c0f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://489205a3983f27152122db352c7a35ccdb12f14bb7b4c2bf3642f5c99699d745\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://489205a3983f27152122db352c7a35ccdb12f14bb7b4c2bf3642f5c99699d745\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc3c4c5df7dc10f646a7fdcf0d70daccac7a5bfc3ec9772a4b6d9aa4305dcc5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc3c4c5df7dc10f646a7fdcf0d70daccac7a5bfc3ec9772a4b6d9aa4305dcc5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74838e6328e419da6cc94d4f2781197e94cbf74aaf06d88c6d0e39eda967fede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74838e6328e419da6cc94d4f2781197e94cbf74aaf06d88c6d0e39eda967fede\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f6ttt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:13Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:13 crc kubenswrapper[4844]: I0126 12:44:13.657162 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:13 crc kubenswrapper[4844]: I0126 12:44:13.657205 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:13 crc kubenswrapper[4844]: I0126 12:44:13.657219 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:13 crc kubenswrapper[4844]: I0126 12:44:13.657236 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:13 crc kubenswrapper[4844]: I0126 12:44:13.657247 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:13Z","lastTransitionTime":"2026-01-26T12:44:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:13 crc kubenswrapper[4844]: I0126 12:44:13.719130 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"348a2956-fe61-43b9-858f-ab9c97a2985b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://370a02984d12e09b879a5f3af8d9fe26b40d438d558e51c2d3
52676f29198a02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://370a02984d12e09b879a5f3af8d9fe26b40d438d558e51c2d352676f29198a02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rlvx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:13Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:13 crc kubenswrapper[4844]: I0126 12:44:13.731740 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7wd9k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"046bb01b-89ef-40e9-bbbd-83b5f2d2cf96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:13Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h4zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7wd9k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:13Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:13 crc kubenswrapper[4844]: I0126 12:44:13.755878 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2c6313f-dd44-4db9-a53a-63c8a42efe6c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b82658e4e6ddbf4e3a16f9d569c5cef6a683d401d79b7fa7f55aad8c70e8254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66ffcb99cad7e5aff2bbe4bdf3c0a23
65dd91fb40b2382e05ab1f422c6ea2b26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f49e7dd36dc88342bc2e3bc43fc5770fb43572151945cba3867f9ffd7fa451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b51b34a870f5c4045729aa5ee5d5fb85ebd58fd55a25c5b6effe0ea75ea8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9118772906fa09f9868a34942401b399a842e6718caa3d82b78b2974e1011cca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa0e17b53055bd229c52c53b69e491e0789569e73004101378a34609888bbbc4\\\",\\\"image\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa0e17b53055bd229c52c53b69e491e0789569e73004101378a34609888bbbc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a99b81e3524dfff792c3e9a274f2460edfe7c29fb27c556fa1c7e427178ccd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a99b81e3524dfff792c3e9a274f2460edfe7c29fb27c556fa1c7e427178ccd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1cd347949870be709220badbf116b4ed8ff9db3962a03361e42f5ff1c61eb5ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cd347949870be709220badbf116b4ed8ff9db3962a03361e42f5ff1c61eb5ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:13Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:13 crc kubenswrapper[4844]: I0126 12:44:13.760248 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:13 crc kubenswrapper[4844]: I0126 12:44:13.760311 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:13 crc kubenswrapper[4844]: 
I0126 12:44:13.760323 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:13 crc kubenswrapper[4844]: I0126 12:44:13.760343 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:13 crc kubenswrapper[4844]: I0126 12:44:13.760356 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:13Z","lastTransitionTime":"2026-01-26T12:44:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:13 crc kubenswrapper[4844]: I0126 12:44:13.771371 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd60ce73390532e974f85d610708534f2ab6bcc38e93c54516cb783aca1bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:13Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:13 crc kubenswrapper[4844]: I0126 12:44:13.783119 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-94bpf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"14600b66-6352-4f5e-9c09-eb2548503555\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1eb448e6788cdb87cd78364d442623294abe1263dbedbf9d15e8ca77c06ced2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-456bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-94bpf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:13Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:13 crc kubenswrapper[4844]: I0126 12:44:13.795747 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aecfc1fc-7b8c-42f4-9a7b-058a0acc9534\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64c5d1e4b1fda825b451c8acde0411694fbf7345723d6c3927905a882a79d62e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e33100e90b0d5a23baf7c27076a19dbca18d3e067be6b5a867e122fac37c1136\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64ae899d3a38090964f3d951fd185537ce4ef75f7512342da01f67767d00417b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9207454bb697abf7b64a37e402b50734ae419605cd0938b514ac4f2a74561ce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b40515c0e8056b01e4d48e259ece9389fbb5db6c0d403f2846f28948ce78b0b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64a2d4054650382dca2f56688356aec5fa475b49958e86dd4597a64f77a82f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64a2d4054650382dca2f56688356aec5fa475b49958e86dd4597a64f77a82f81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:13Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:13 crc kubenswrapper[4844]: I0126 12:44:13.808002 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ea4f5b3-1307-4259-8ee3-1de62370d8ef\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://943771df5941e9c5a48c2d8ee946cc9ac1ee7b00855dfe531fbce9f76450dd36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c79791a1a5ceea7564b6466722f4fb48a6729184724efc0ca2498896def357a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e62359ee71a87d75c73a16de09de5ea48d09e4cf2041f434f38f97002a5f8eee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b1fb5af67b83d9ecc627cbfcc14d011b4b11a5c4652a35c1bd1df6f03ab71f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:13Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:13 crc kubenswrapper[4844]: I0126 12:44:13.822081 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:13Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:13 crc kubenswrapper[4844]: I0126 12:44:13.836949 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://708e5cab2c0d22cab1f1e5c02d0f8afbdc0fcdccc9a56b663f365bfa4c7f8709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f61077567553b104f052a133a6227c3a887430e971721e7998259e7e7ea7edf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:13Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:13 crc kubenswrapper[4844]: I0126 12:44:13.849095 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:13Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:13 crc kubenswrapper[4844]: I0126 12:44:13.860800 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:13Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:13 crc kubenswrapper[4844]: I0126 12:44:13.862281 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:13 crc kubenswrapper[4844]: I0126 12:44:13.862310 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:13 crc kubenswrapper[4844]: I0126 12:44:13.862326 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:13 crc kubenswrapper[4844]: I0126 12:44:13.862343 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:13 crc kubenswrapper[4844]: I0126 12:44:13.862353 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:13Z","lastTransitionTime":"2026-01-26T12:44:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:13 crc kubenswrapper[4844]: I0126 12:44:13.873203 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c6cdfcbb27c9a8c515f38e43d4813f25d2d3781a322e424e09ab047259bd0e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:13Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:13 crc kubenswrapper[4844]: I0126 12:44:13.965302 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:13 crc kubenswrapper[4844]: I0126 12:44:13.965350 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:13 crc kubenswrapper[4844]: I0126 12:44:13.965362 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:13 crc kubenswrapper[4844]: I0126 12:44:13.965380 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:13 crc kubenswrapper[4844]: I0126 12:44:13.965392 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:13Z","lastTransitionTime":"2026-01-26T12:44:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:14 crc kubenswrapper[4844]: I0126 12:44:14.070783 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:14 crc kubenswrapper[4844]: I0126 12:44:14.071448 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:14 crc kubenswrapper[4844]: I0126 12:44:14.071461 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:14 crc kubenswrapper[4844]: I0126 12:44:14.071477 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:14 crc kubenswrapper[4844]: I0126 12:44:14.071489 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:14Z","lastTransitionTime":"2026-01-26T12:44:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:14 crc kubenswrapper[4844]: I0126 12:44:14.174841 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:14 crc kubenswrapper[4844]: I0126 12:44:14.174897 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:14 crc kubenswrapper[4844]: I0126 12:44:14.174908 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:14 crc kubenswrapper[4844]: I0126 12:44:14.174929 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:14 crc kubenswrapper[4844]: I0126 12:44:14.174939 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:14Z","lastTransitionTime":"2026-01-26T12:44:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:14 crc kubenswrapper[4844]: I0126 12:44:14.203061 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 26 12:44:14 crc kubenswrapper[4844]: I0126 12:44:14.277772 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:14 crc kubenswrapper[4844]: I0126 12:44:14.277834 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:14 crc kubenswrapper[4844]: I0126 12:44:14.277847 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:14 crc kubenswrapper[4844]: I0126 12:44:14.277867 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:14 crc kubenswrapper[4844]: I0126 12:44:14.277882 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:14Z","lastTransitionTime":"2026-01-26T12:44:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:14 crc kubenswrapper[4844]: I0126 12:44:14.285181 4844 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 01:37:54.987502587 +0000 UTC Jan 26 12:44:14 crc kubenswrapper[4844]: I0126 12:44:14.301875 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 26 12:44:14 crc kubenswrapper[4844]: I0126 12:44:14.380008 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:14 crc kubenswrapper[4844]: I0126 12:44:14.380060 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:14 crc kubenswrapper[4844]: I0126 12:44:14.380078 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:14 crc kubenswrapper[4844]: I0126 12:44:14.380104 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:14 crc kubenswrapper[4844]: I0126 12:44:14.380124 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:14Z","lastTransitionTime":"2026-01-26T12:44:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:14 crc kubenswrapper[4844]: I0126 12:44:14.482465 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:14 crc kubenswrapper[4844]: I0126 12:44:14.482504 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:14 crc kubenswrapper[4844]: I0126 12:44:14.482517 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:14 crc kubenswrapper[4844]: I0126 12:44:14.482532 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:14 crc kubenswrapper[4844]: I0126 12:44:14.482544 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:14Z","lastTransitionTime":"2026-01-26T12:44:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:14 crc kubenswrapper[4844]: I0126 12:44:14.487764 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" event={"ID":"348a2956-fe61-43b9-858f-ab9c97a2985b","Type":"ContainerStarted","Data":"dbe92fbc9ce1d23484a9dca7eddb878b59da1fbe669098a82a798d41ee52e61d"} Jan 26 12:44:14 crc kubenswrapper[4844]: I0126 12:44:14.491204 4844 generic.go:334] "Generic (PLEG): container finished" podID="e0ad2def-b040-48db-be8a-19f66df2c0f2" containerID="4821615afe8834b864a0c33ad5733eef0b5247da50e1e945cab1c75f9828aa1d" exitCode=0 Jan 26 12:44:14 crc kubenswrapper[4844]: I0126 12:44:14.491316 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-f6ttt" event={"ID":"e0ad2def-b040-48db-be8a-19f66df2c0f2","Type":"ContainerDied","Data":"4821615afe8834b864a0c33ad5733eef0b5247da50e1e945cab1c75f9828aa1d"} Jan 26 12:44:14 crc kubenswrapper[4844]: I0126 12:44:14.492731 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-7wd9k" event={"ID":"046bb01b-89ef-40e9-bbbd-83b5f2d2cf96","Type":"ContainerStarted","Data":"e97a7a3de02627680f94659705ad21fb73a76a1c11f14ebff8ec48ef204ea753"} Jan 26 12:44:14 crc kubenswrapper[4844]: I0126 12:44:14.509692 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zb9kx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"467433a4-64be-4a14-beb2-657370e9865f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a01598dd996bd69b470e2e4833ea4231bc77f0305a3a7fec3f70b8e2b8f01cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v76sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zb9kx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:14Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:14 crc kubenswrapper[4844]: I0126 12:44:14.526243 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3602fc7-397b-4d73-ab0c-45acc047397b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25282e97c34daf17d2fdac6ba7c074c611ec8df85288f5753bf98a3c1add5afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29xcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c964e0f9d13a855d738b028f8a2bed32fb23f4a05f0f0222a7d24e3222f44b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29xcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-j7r9j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:14Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:14 crc kubenswrapper[4844]: I0126 12:44:14.547144 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f6ttt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0ad2def-b040-48db-be8a-19f66df2c0f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://489205a3983f27152122db352c7a35ccdb12f14bb7b4c2bf3642f5c99699d745\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://489205a3983f27152122db352c7a35ccdb12f14bb7b4c2bf3642f5c99699d745\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc3c4c5df7dc10f646a7fdcf0d70daccac7a5bfc3ec9772a4b6d9aa4305dcc5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc3c4c5df7dc10f646a7fdcf0d70daccac7a5bfc3ec9772a4b6d9aa4305dcc5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74838e6328e419da6cc94d4f2781197e94cbf74aaf06d88c6d0e39eda967fede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74838e6328e419da6cc94d4f2781197e94cbf74aaf06d88c6d0e39eda967fede\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4821615afe8834b864a0c33ad5733eef0b5247da50e1e945cab1c75f9828aa1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4821615afe8834b864a0c33ad5733eef0b5247da50e1e945cab1c75f9828aa1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f6ttt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:14Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:14 crc kubenswrapper[4844]: I0126 12:44:14.579384 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"348a2956-fe61-43b9-858f-ab9c97a2985b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://370a02984d12e09b879a5f3af8d9fe26b40d438d558e51c2d352676f29198a02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://370a02984d12e09b879a5f3af8d9fe26b40d438d558e51c2d352676f29198a02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rlvx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:14Z 
is after 2025-08-24T17:21:41Z" Jan 26 12:44:14 crc kubenswrapper[4844]: I0126 12:44:14.585317 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:14 crc kubenswrapper[4844]: I0126 12:44:14.585372 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:14 crc kubenswrapper[4844]: I0126 12:44:14.585385 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:14 crc kubenswrapper[4844]: I0126 12:44:14.585405 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:14 crc kubenswrapper[4844]: I0126 12:44:14.585420 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:14Z","lastTransitionTime":"2026-01-26T12:44:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:14 crc kubenswrapper[4844]: I0126 12:44:14.598663 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7wd9k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"046bb01b-89ef-40e9-bbbd-83b5f2d2cf96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:13Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:13Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h4zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:13Z\\\"}}\" for pod 
\"openshift-image-registry\"/\"node-ca-7wd9k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:14Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:14 crc kubenswrapper[4844]: I0126 12:44:14.610803 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 26 12:44:14 crc kubenswrapper[4844]: I0126 12:44:14.629025 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 26 12:44:14 crc kubenswrapper[4844]: I0126 12:44:14.629268 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2c6313f-dd44-4db9-a53a-63c8a42efe6c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b82658e4e6ddbf4e3a16f9d569c5cef6a683d401d79b7fa7f55aad8c70e8254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66ffcb99cad7e5aff2bbe4bdf3c0a2365dd91fb40b2382e05ab1f422c6ea2b26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir
\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f49e7dd36dc88342bc2e3bc43fc5770fb43572151945cba3867f9ffd7fa451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b51b34a870f5c4045729aa5ee5d5fb85ebd58fd55a25c5b6effe0ea75ea8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9118772906fa09f9868a34942401b399a842e6718caa3d82b78b2974e1011cca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa0e17b53055bd229c52c53b69e491e0789569e73004101378a34609888bbbc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa0e17b53055bd229c52c53b69e491e0789569e73004101378a34609888bbbc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name
\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a99b81e3524dfff792c3e9a274f2460edfe7c29fb27c556fa1c7e427178ccd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a99b81e3524dfff792c3e9a274f2460edfe7c29fb27c556fa1c7e427178ccd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1cd347949870be709220badbf116b4ed8ff9db3962a03361e42f5ff1c61eb5ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cd347949870be709220badbf116b4ed8ff9db3962a03361e42f5ff1c61eb5ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:14Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:14 crc kubenswrapper[4844]: I0126 12:44:14.644059 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd60ce73390532e974f85d610708534f2ab6bcc38e93c54516cb783aca1bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:14Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:14 crc kubenswrapper[4844]: I0126 12:44:14.654375 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-94bpf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"14600b66-6352-4f5e-9c09-eb2548503555\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1eb448e6788cdb87cd78364d442623294abe1263dbedbf9d15e8ca77c06ced2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-456bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-94bpf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:14Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:14 crc kubenswrapper[4844]: I0126 12:44:14.671309 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aecfc1fc-7b8c-42f4-9a7b-058a0acc9534\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64c5d1e4b1fda825b451c8acde0411694fbf7345723d6c3927905a882a79d62e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e33100e90b0d5a23baf7c27076a19dbca18d3e067be6b5a867e122fac37c1136\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64ae899d3a38090964f3d951fd185537ce4ef75f7512342da01f67767d00417b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9207454bb697abf7b64a37e402b50734ae419605cd0938b514ac4f2a74561ce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b40515c0e8056b01e4d48e259ece9389fbb5db6c0d403f2846f28948ce78b0b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64a2d4054650382dca2f56688356aec5fa475b49958e86dd4597a64f77a82f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64a2d4054650382dca2f56688356aec5fa475b49958e86dd4597a64f77a82f81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:14Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:14 crc kubenswrapper[4844]: I0126 12:44:14.685900 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ea4f5b3-1307-4259-8ee3-1de62370d8ef\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://943771df5941e9c5a48c2d8ee946cc9ac1ee7b00855dfe531fbce9f76450dd36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c79791a1a5ceea7564b6466722f4fb48a6729184724efc0ca2498896def357a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e62359ee71a87d75c73a16de09de5ea48d09e4cf2041f434f38f97002a5f8eee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b1fb5af67b83d9ecc627cbfcc14d011b4b11a5c4652a35c1bd1df6f03ab71f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:14Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:14 crc kubenswrapper[4844]: I0126 12:44:14.688086 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:14 crc kubenswrapper[4844]: I0126 12:44:14.688111 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:14 crc kubenswrapper[4844]: I0126 12:44:14.688119 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:14 crc kubenswrapper[4844]: I0126 12:44:14.688131 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:14 crc kubenswrapper[4844]: I0126 12:44:14.688141 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:14Z","lastTransitionTime":"2026-01-26T12:44:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:14 crc kubenswrapper[4844]: I0126 12:44:14.708052 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:14Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:14 crc kubenswrapper[4844]: I0126 12:44:14.722774 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://708e5cab2c0d22cab1f1e5c02d0f8afbdc0fcdccc9a56b663f365bfa4c7f8709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f61077567553b104f052a133a6227c3a887430e971721e7998259e7e7ea7edf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:14Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:14 crc kubenswrapper[4844]: I0126 12:44:14.735562 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:14Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:14 crc kubenswrapper[4844]: I0126 12:44:14.748273 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:14Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:14 crc kubenswrapper[4844]: I0126 12:44:14.760938 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c6cdfcbb27c9a8c515f38e43d4813f25d2d3781a322e424e09ab047259bd0e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:14Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:14 crc kubenswrapper[4844]: I0126 12:44:14.780184 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aecfc1fc-7b8c-42f4-9a7b-058a0acc9534\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64c5d1e4b1fda825b451c8acde0411694fbf7345723d6c3927905a882a79d62e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e33100e90b0d5a23baf7c27076a19dbca18d3e067be6b5a867e122fac37c1136\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64ae899d3a38090964f3d951fd185537ce4ef75f7512342da01f67767d00417b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9207454bb697abf7b64a37e402b50734ae419605cd0938b514ac4f2a74561ce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b40515c0e8056b01e4d48e259ece9389fbb5db6c0d403f2846f28948ce78b0b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64a2d4054650382dca2f56688356aec5fa475b49958e86dd4597a64f77a82f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64a2d4054650382dca2f56688356aec5fa475b49958e86dd4597a64f77a82f81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:14Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:14 crc kubenswrapper[4844]: I0126 12:44:14.790716 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 26 12:44:14 crc kubenswrapper[4844]: I0126 12:44:14.790760 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:14 crc kubenswrapper[4844]: I0126 12:44:14.790771 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:14 crc kubenswrapper[4844]: I0126 12:44:14.790788 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:14 crc kubenswrapper[4844]: I0126 12:44:14.790800 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:14Z","lastTransitionTime":"2026-01-26T12:44:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:14 crc kubenswrapper[4844]: I0126 12:44:14.795669 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ea4f5b3-1307-4259-8ee3-1de62370d8ef\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://943771df5941e9c5a48c2d8ee946cc9ac1ee7b00855dfe531fbce9f76450dd36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c79791a1a5ceea7564b6466722f4fb48a6729184724efc0ca2498896def357a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e62359ee71a87d75c73a16de09de5ea48d09e4cf2041f434f38f97002a5f8eee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b1fb5af67b83d9ecc627cbfcc14d011b4b11a5c4652a35c1bd1df6f03ab71f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:14Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:14 crc kubenswrapper[4844]: I0126 12:44:14.809477 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers 
with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:14Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:14 crc kubenswrapper[4844]: I0126 12:44:14.830358 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://708e5cab2c0d22cab1f1e5c02d0f8afbdc0fcdccc9a56b663f365bfa4c7f8709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f61077567553b104f052a133a6227c3a887430e971721e7998259e7e7ea7edf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"im
ageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:14Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:14 crc kubenswrapper[4844]: I0126 12:44:14.844300 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:14Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:14 crc kubenswrapper[4844]: I0126 12:44:14.851613 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:14 crc kubenswrapper[4844]: I0126 12:44:14.851657 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:14 crc kubenswrapper[4844]: I0126 12:44:14.851668 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:14 crc kubenswrapper[4844]: I0126 12:44:14.851687 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:14 crc kubenswrapper[4844]: I0126 12:44:14.851700 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:14Z","lastTransitionTime":"2026-01-26T12:44:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:14 crc kubenswrapper[4844]: I0126 12:44:14.857640 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:14Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:14 crc kubenswrapper[4844]: E0126 12:44:14.865840 4844 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8ec9310b-463d-4f1d-a480-c21f33e8b459\\\",\\\"systemUUID\\\":\\\"4eb778d6-9226-440d-bd27-0b6f19659b0d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:14Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:14 crc kubenswrapper[4844]: I0126 12:44:14.871037 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:14 crc kubenswrapper[4844]: I0126 12:44:14.871084 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:14 crc kubenswrapper[4844]: I0126 12:44:14.871099 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:14 crc kubenswrapper[4844]: I0126 12:44:14.871119 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:14 crc kubenswrapper[4844]: I0126 12:44:14.871123 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c6cdfcbb27c9a8c515f38e43d4813f25d2d3781a322e424e09ab047259bd0e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired 
or is not yet valid: current time 2026-01-26T12:44:14Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:14 crc kubenswrapper[4844]: I0126 12:44:14.871135 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:14Z","lastTransitionTime":"2026-01-26T12:44:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:14 crc kubenswrapper[4844]: E0126 12:44:14.883764 4844 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8ec9310b-463d-4f1d-a480-c21f33e8b459\\\",\\\"systemUUID\\\":\\\"4eb778d6-9226-440d-bd27-0b6f19659b0d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:14Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:14 crc kubenswrapper[4844]: I0126 12:44:14.885782 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zb9kx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"467433a4-64be-4a14-beb2-657370e9865f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a01598dd996bd69b470e2e4833ea4231bc77f0305a3a7fec3f70b8e2b8f01cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v76sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zb9kx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:14Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:14 crc kubenswrapper[4844]: I0126 12:44:14.892805 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:14 crc kubenswrapper[4844]: I0126 12:44:14.892857 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:14 crc kubenswrapper[4844]: I0126 12:44:14.892867 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:14 crc kubenswrapper[4844]: I0126 12:44:14.892884 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:14 crc kubenswrapper[4844]: I0126 12:44:14.892896 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:14Z","lastTransitionTime":"2026-01-26T12:44:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:14 crc kubenswrapper[4844]: E0126 12:44:14.903263 4844 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8ec9310b-463d-4f1d-a480-c21f33e8b459\\\",\\\"systemUUID\\\":\\\"4eb778d6-9226-440d-bd27-0b6f19659b0d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:14Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:14 crc kubenswrapper[4844]: I0126 12:44:14.903797 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3602fc7-397b-4d73-ab0c-45acc047397b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25282e97c34daf17d2fdac6ba7c074c611ec8df85288f5753bf98a3c1add5afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29xcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c964e0f9d13a855d738b028f8a2bed32fb23f4a05f0f0222a7d24e3222f44b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29xcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j7r9j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:14Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:14 crc kubenswrapper[4844]: I0126 12:44:14.907988 4844 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:14 crc kubenswrapper[4844]: I0126 12:44:14.908029 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:14 crc kubenswrapper[4844]: I0126 12:44:14.908042 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:14 crc kubenswrapper[4844]: I0126 12:44:14.908064 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:14 crc kubenswrapper[4844]: I0126 12:44:14.908077 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:14Z","lastTransitionTime":"2026-01-26T12:44:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:14 crc kubenswrapper[4844]: I0126 12:44:14.919082 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f6ttt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0ad2def-b040-48db-be8a-19f66df2c0f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://489205a3983f27152122db352c7a35ccdb12f14bb7b4c2bf3642f5c99699d745\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://489205a3983f27152122db352c7a35ccdb12f14bb7b4c2bf3642f5c99699d745\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc3c4c5df7dc10f646a7fdcf0d70daccac7a5bfc3ec9772a4b6d9aa4305dcc5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc3c4c5df7dc10f646a7fdcf0d70daccac7a5bfc3ec9772a4b6d9aa4305dcc5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74838e6328e419da6cc94d4f2781197e94cbf74aaf06d88c6d0e39eda967fede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74838e6328e419da6cc94d4f2781197e94cbf74aaf06d88c6d0e39eda967fede\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4821615afe8834b864a0c33ad5733eef0b5247da50e1e945cab1c75f9828aa1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4821615afe8834b864a0c33ad5733eef0b5247da50e1e945cab1c75f9828aa1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f6ttt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:14Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:14 crc kubenswrapper[4844]: E0126 12:44:14.921144 4844 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8ec9310b-463d-4f1d-a480-c21f33e8b459\\\",\\\"systemUUID\\\":\\\"4eb778d6-9226-440d-bd27-0b6f19659b0d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:14Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:14 crc kubenswrapper[4844]: I0126 12:44:14.925058 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:14 crc kubenswrapper[4844]: I0126 12:44:14.925107 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 12:44:14 crc kubenswrapper[4844]: I0126 12:44:14.925122 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:14 crc kubenswrapper[4844]: I0126 12:44:14.925142 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:14 crc kubenswrapper[4844]: I0126 12:44:14.925158 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:14Z","lastTransitionTime":"2026-01-26T12:44:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:14 crc kubenswrapper[4844]: E0126 12:44:14.939575 4844 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8ec9310b-463d-4f1d-a480-c21f33e8b459\\\",\\\"systemUUID\\\":\\\"4eb778d6-9226-440d-bd27-0b6f19659b0d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:14Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:14 crc kubenswrapper[4844]: E0126 12:44:14.939762 4844 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 12:44:14 crc kubenswrapper[4844]: I0126 12:44:14.941489 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 26 12:44:14 crc kubenswrapper[4844]: I0126 12:44:14.941528 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:14 crc kubenswrapper[4844]: I0126 12:44:14.941537 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:14 crc kubenswrapper[4844]: I0126 12:44:14.941556 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:14 crc kubenswrapper[4844]: I0126 12:44:14.941567 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:14Z","lastTransitionTime":"2026-01-26T12:44:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:14 crc kubenswrapper[4844]: I0126 12:44:14.944882 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"348a2956-fe61-43b9-858f-ab9c97a2985b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://370a02984d12e09b879a5f3af8d9fe26b40d438d558e51c2d352676f29198a02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://370a02984d12e09b879a5f3af8d9fe26b40d438d558e51c2d352676f29198a02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rlvx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:14Z 
is after 2025-08-24T17:21:41Z" Jan 26 12:44:14 crc kubenswrapper[4844]: I0126 12:44:14.957904 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7wd9k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"046bb01b-89ef-40e9-bbbd-83b5f2d2cf96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e97a7a3de02627680f94659705ad21fb73a76a1c11f14ebff8ec48ef204ea753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h4zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7wd9k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:14Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:14 crc kubenswrapper[4844]: I0126 12:44:14.980679 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2c6313f-dd44-4db9-a53a-63c8a42efe6c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b82658e4e6ddbf4e3a16f9d569c5cef6a683d401d79b7fa7f55aad8c70e8254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66ffcb99cad7e5aff2bbe4bdf3c0a2365dd91fb40b2382e05ab1f422c6ea2b26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f49e7dd36dc88342bc2e3bc43fc5770fb43572151945cba3867f9ffd7fa451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b51b34a870f5c4045729aa5ee5d5fb85ebd58
fd55a25c5b6effe0ea75ea8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9118772906fa09f9868a34942401b399a842e6718caa3d82b78b2974e1011cca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa0e17b53055bd229c52c53b69e491e0789569e73004101378a34609888bbbc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa0e17b53055bd229c52c53b69e491e0789569e73004101378a34609888bbbc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a99b81e3524dfff792c3e9a274f2460edfe7c29fb27c556fa1c7e427178ccd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a99b81e3524dfff792c3e9a274f2460edfe7c29fb27c556fa1c7e427178ccd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1cd347949870be709220badbf116b4ed8ff9db3962a03361e42f5ff1c61eb5ea\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cd347949870be709220badbf116b4ed8ff9db3962a03361e42f5ff1c61eb5ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:14Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:15 crc kubenswrapper[4844]: I0126 12:44:15.000734 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd60ce73390532e974f85d610708534f2ab6bcc38e93c54516cb783aca1bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:14Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:15 crc kubenswrapper[4844]: I0126 12:44:15.011560 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-94bpf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14600b66-6352-4f5e-9c09-eb2548503555\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1eb448e6788cdb87cd78364d442623294abe1263dbedbf9d15e8ca77c06ced2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-456bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-94bpf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:15Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:15 crc kubenswrapper[4844]: I0126 12:44:15.044691 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:15 crc kubenswrapper[4844]: I0126 12:44:15.044754 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:15 crc kubenswrapper[4844]: I0126 12:44:15.044764 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:15 crc kubenswrapper[4844]: I0126 12:44:15.044786 4844 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:15 crc kubenswrapper[4844]: I0126 12:44:15.044803 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:15Z","lastTransitionTime":"2026-01-26T12:44:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:15 crc kubenswrapper[4844]: I0126 12:44:15.147880 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:15 crc kubenswrapper[4844]: I0126 12:44:15.147920 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:15 crc kubenswrapper[4844]: I0126 12:44:15.147934 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:15 crc kubenswrapper[4844]: I0126 12:44:15.147949 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:15 crc kubenswrapper[4844]: I0126 12:44:15.147959 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:15Z","lastTransitionTime":"2026-01-26T12:44:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:15 crc kubenswrapper[4844]: I0126 12:44:15.250689 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:15 crc kubenswrapper[4844]: I0126 12:44:15.250744 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:15 crc kubenswrapper[4844]: I0126 12:44:15.250759 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:15 crc kubenswrapper[4844]: I0126 12:44:15.250778 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:15 crc kubenswrapper[4844]: I0126 12:44:15.250790 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:15Z","lastTransitionTime":"2026-01-26T12:44:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:15 crc kubenswrapper[4844]: I0126 12:44:15.286050 4844 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 16:55:41.722958991 +0000 UTC Jan 26 12:44:15 crc kubenswrapper[4844]: I0126 12:44:15.312794 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:44:15 crc kubenswrapper[4844]: I0126 12:44:15.312876 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:44:15 crc kubenswrapper[4844]: I0126 12:44:15.312946 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:44:15 crc kubenswrapper[4844]: E0126 12:44:15.313086 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 12:44:15 crc kubenswrapper[4844]: E0126 12:44:15.313174 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 12:44:15 crc kubenswrapper[4844]: E0126 12:44:15.313361 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 12:44:15 crc kubenswrapper[4844]: I0126 12:44:15.353147 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:15 crc kubenswrapper[4844]: I0126 12:44:15.353192 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:15 crc kubenswrapper[4844]: I0126 12:44:15.353200 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:15 crc kubenswrapper[4844]: I0126 12:44:15.353214 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:15 crc kubenswrapper[4844]: I0126 12:44:15.353224 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:15Z","lastTransitionTime":"2026-01-26T12:44:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:15 crc kubenswrapper[4844]: I0126 12:44:15.456706 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:15 crc kubenswrapper[4844]: I0126 12:44:15.456747 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:15 crc kubenswrapper[4844]: I0126 12:44:15.456757 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:15 crc kubenswrapper[4844]: I0126 12:44:15.456773 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:15 crc kubenswrapper[4844]: I0126 12:44:15.456784 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:15Z","lastTransitionTime":"2026-01-26T12:44:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:15 crc kubenswrapper[4844]: I0126 12:44:15.501485 4844 generic.go:334] "Generic (PLEG): container finished" podID="e0ad2def-b040-48db-be8a-19f66df2c0f2" containerID="c800e6a3e1513d6bc5ab5a54c8d918f2c81b5a4d2631fc36f5f7a29fcda3d220" exitCode=0 Jan 26 12:44:15 crc kubenswrapper[4844]: I0126 12:44:15.501586 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-f6ttt" event={"ID":"e0ad2def-b040-48db-be8a-19f66df2c0f2","Type":"ContainerDied","Data":"c800e6a3e1513d6bc5ab5a54c8d918f2c81b5a4d2631fc36f5f7a29fcda3d220"} Jan 26 12:44:15 crc kubenswrapper[4844]: I0126 12:44:15.519075 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aecfc1fc-7b8c-42f4-9a7b-058a0acc9534\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64c5d1e4b1fda825b451c8acde0411694fbf7345723d6c3927905a882a79d62e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e33100e90b0d5a23baf7c27076a19dbca18d3e067be6b5a867e122fac37c1136\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64ae899d3a38090964f3d951fd185537ce4ef75f7512342da01f67767d00417b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9207454bb697abf7b64a37e402b50734ae419605cd0938b514ac4f2a74561ce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b40515c0e8056b01e4d48e259ece9389fbb5db6c0d403f2846f28948ce78b0b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64a2d4054650382dca2f56688356aec5fa475b49958e86dd4597a64f77a82f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64a2d4054650382dca2f56688356aec5fa475b49958e86dd4597a64f77a82f81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:15Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:15 crc kubenswrapper[4844]: I0126 12:44:15.538586 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ea4f5b3-1307-4259-8ee3-1de62370d8ef\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://943771df5941e9c5a48c2d8ee946cc9ac1ee7b00855dfe531fbce9f76450dd36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c79791a1a5ceea7564b6466722f4fb48a6729184724efc0ca2498896def357a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e62359ee71a87d75c73a16de09de5ea48d09e4cf2041f434f38f97002a5f8eee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b1fb5af67b83d9ecc627cbfcc14d011b4b11a5c4652a35c1bd1df6f03ab71f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:15Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:15 crc kubenswrapper[4844]: I0126 12:44:15.556364 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:15Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:15 crc kubenswrapper[4844]: I0126 12:44:15.560118 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:15 crc kubenswrapper[4844]: I0126 12:44:15.560158 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:15 crc kubenswrapper[4844]: I0126 12:44:15.560167 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:15 crc kubenswrapper[4844]: I0126 12:44:15.560184 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:15 crc kubenswrapper[4844]: I0126 12:44:15.560196 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:15Z","lastTransitionTime":"2026-01-26T12:44:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:15 crc kubenswrapper[4844]: I0126 12:44:15.572957 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://708e5cab2c0d22cab1f1e5c02d0f8afbdc0fcdccc9a56b663f365bfa4c7f8709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f61077567553b104f052a133a6227c3a887430e971721e7998259e7e7ea7edf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:15Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:15 crc kubenswrapper[4844]: I0126 12:44:15.585221 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:15Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:15 crc kubenswrapper[4844]: I0126 12:44:15.599454 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:15Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:15 crc kubenswrapper[4844]: I0126 12:44:15.613778 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c6cdfcbb27c9a8c515f38e43d4813f25d2d3781a322e424e09ab047259bd0e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:15Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:15 crc kubenswrapper[4844]: I0126 12:44:15.629817 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zb9kx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"467433a4-64be-4a14-beb2-657370e9865f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a01598dd996bd69b470e2e4833ea4231bc77f0305a3a7fec3f70b8e2b8f01cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/ser
viceaccount\\\",\\\"name\\\":\\\"kube-api-access-v76sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zb9kx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:15Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:15 crc kubenswrapper[4844]: I0126 12:44:15.644163 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3602fc7-397b-4d73-ab0c-45acc047397b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25282e97c34daf17d2fdac6ba7c074c611ec8df85288f5753bf98a3c1add5afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29xcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c964e0f9d13a855d738b028f8a2bed32fb23f4a05f0f0222a7d24e3222f44b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\
\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29xcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j7r9j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:15Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:15 crc kubenswrapper[4844]: I0126 12:44:15.659831 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f6ttt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0ad2def-b040-48db-be8a-19f66df2c0f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://489205a3983f27152122db352c7a35ccdb12f14bb7b4c2bf3642f5c99699d745\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://489205a3983f27152122db352c7a35ccdb12f14bb7b4c2bf3642f5c99699d745\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc3c4c5df7dc10f646a7fdcf0d70daccac7a5bfc3ec9772a4b6d9aa4305dcc5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc3c4c5df7dc10f646a7fdcf0d70daccac7a5bfc3ec9772a4b6d9aa4305dcc5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74838e6328e419da6cc94d4f2781197e94cbf74aaf06d88c6d0e39eda967fede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74838e6328e419da6cc94d4f2781197e94cbf74aaf06d88c6d0e39eda967fede\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4821615afe8834b864a0c33ad5733eef0b5247da50e1e945cab1c75f9828aa1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4821615afe8834b864a0c33ad5733eef0b5247da50e1e945cab1c75f9828aa1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c800e6a3e1513d6bc5ab5a54c8d918f2c81b5a4d2631fc36f5f7a29fcda3d220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c800e6a3e1513d6bc5ab5a54c8d918f2c81b5a4d2631fc36f5f7a29fcda3d220\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f6ttt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:15Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:15 crc kubenswrapper[4844]: I0126 12:44:15.663636 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:15 crc kubenswrapper[4844]: I0126 12:44:15.663674 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:15 crc kubenswrapper[4844]: I0126 12:44:15.663685 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:15 crc kubenswrapper[4844]: I0126 12:44:15.663699 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:15 crc kubenswrapper[4844]: I0126 12:44:15.663709 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:15Z","lastTransitionTime":"2026-01-26T12:44:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:15 crc kubenswrapper[4844]: I0126 12:44:15.679827 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"348a2956-fe61-43b9-858f-ab9c97a2985b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://370a02984d12e09b879a5f3af8d9fe26b40d438d558e51c2d3
52676f29198a02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://370a02984d12e09b879a5f3af8d9fe26b40d438d558e51c2d352676f29198a02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rlvx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:15Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:15 crc kubenswrapper[4844]: I0126 12:44:15.689565 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7wd9k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"046bb01b-89ef-40e9-bbbd-83b5f2d2cf96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e97a7a3de02627680f94659705ad21fb73a76a1c11f14ebff8ec48ef204ea753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h4zk\\\",\\\"readOnly\\\":t
rue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7wd9k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:15Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:15 crc kubenswrapper[4844]: I0126 12:44:15.708973 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2c6313f-dd44-4db9-a53a-63c8a42efe6c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b82658e4e6ddbf4e3a16f9d569c5cef6a683d401d79b7fa7f55aad8c70e8254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66ffcb99cad7e5aff2bbe4bdf3c0a2365dd91fb40b2382e05ab1f422c6ea2b26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"nam
e\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f49e7dd36dc88342bc2e3bc43fc5770fb43572151945cba3867f9ffd7fa451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b51b34a870f5c4045729aa5ee5d5fb85ebd58fd55a25c5b6effe0ea75ea8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9118772906fa09f9868a34942401b399a842e6718caa3d82b78b2974e1011cca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa0e17b53055bd229c52c53b69e491e0789569e73004101378a34609888bbbc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa0e17b53055bd229c52c53b69e491e0789569e73004101378a34609888bbbc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o:
//96a99b81e3524dfff792c3e9a274f2460edfe7c29fb27c556fa1c7e427178ccd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a99b81e3524dfff792c3e9a274f2460edfe7c29fb27c556fa1c7e427178ccd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1cd347949870be709220badbf116b4ed8ff9db3962a03361e42f5ff1c61eb5ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cd347949870be709220badbf116b4ed8ff9db3962a03361e42f5ff1c61eb5ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:15Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:15 crc kubenswrapper[4844]: I0126 12:44:15.724298 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd60ce73390532e974f85d610708534f2ab6bcc38e93c54516cb783aca1bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:15Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:15 crc kubenswrapper[4844]: I0126 12:44:15.733678 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-94bpf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"14600b66-6352-4f5e-9c09-eb2548503555\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1eb448e6788cdb87cd78364d442623294abe1263dbedbf9d15e8ca77c06ced2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-456bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-94bpf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:15Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:15 crc kubenswrapper[4844]: I0126 12:44:15.766653 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:15 crc kubenswrapper[4844]: I0126 12:44:15.767147 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:15 crc kubenswrapper[4844]: I0126 12:44:15.767162 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:15 crc kubenswrapper[4844]: I0126 12:44:15.767182 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:15 crc kubenswrapper[4844]: I0126 12:44:15.767199 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:15Z","lastTransitionTime":"2026-01-26T12:44:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:15 crc kubenswrapper[4844]: I0126 12:44:15.869685 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:15 crc kubenswrapper[4844]: I0126 12:44:15.869739 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:15 crc kubenswrapper[4844]: I0126 12:44:15.869752 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:15 crc kubenswrapper[4844]: I0126 12:44:15.869769 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:15 crc kubenswrapper[4844]: I0126 12:44:15.869780 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:15Z","lastTransitionTime":"2026-01-26T12:44:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:15 crc kubenswrapper[4844]: I0126 12:44:15.972386 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:15 crc kubenswrapper[4844]: I0126 12:44:15.972434 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:15 crc kubenswrapper[4844]: I0126 12:44:15.972445 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:15 crc kubenswrapper[4844]: I0126 12:44:15.972462 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:15 crc kubenswrapper[4844]: I0126 12:44:15.972474 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:15Z","lastTransitionTime":"2026-01-26T12:44:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:16 crc kubenswrapper[4844]: I0126 12:44:16.075584 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:16 crc kubenswrapper[4844]: I0126 12:44:16.075697 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:16 crc kubenswrapper[4844]: I0126 12:44:16.075712 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:16 crc kubenswrapper[4844]: I0126 12:44:16.075740 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:16 crc kubenswrapper[4844]: I0126 12:44:16.075755 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:16Z","lastTransitionTime":"2026-01-26T12:44:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:16 crc kubenswrapper[4844]: I0126 12:44:16.179081 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:16 crc kubenswrapper[4844]: I0126 12:44:16.179143 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:16 crc kubenswrapper[4844]: I0126 12:44:16.179159 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:16 crc kubenswrapper[4844]: I0126 12:44:16.179181 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:16 crc kubenswrapper[4844]: I0126 12:44:16.179196 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:16Z","lastTransitionTime":"2026-01-26T12:44:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:16 crc kubenswrapper[4844]: I0126 12:44:16.282525 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:16 crc kubenswrapper[4844]: I0126 12:44:16.282652 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:16 crc kubenswrapper[4844]: I0126 12:44:16.282692 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:16 crc kubenswrapper[4844]: I0126 12:44:16.282730 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:16 crc kubenswrapper[4844]: I0126 12:44:16.282755 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:16Z","lastTransitionTime":"2026-01-26T12:44:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:16 crc kubenswrapper[4844]: I0126 12:44:16.286785 4844 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 15:18:24.98823036 +0000 UTC Jan 26 12:44:16 crc kubenswrapper[4844]: I0126 12:44:16.385587 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:16 crc kubenswrapper[4844]: I0126 12:44:16.385662 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:16 crc kubenswrapper[4844]: I0126 12:44:16.385675 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:16 crc kubenswrapper[4844]: I0126 12:44:16.385693 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:16 crc kubenswrapper[4844]: I0126 12:44:16.385713 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:16Z","lastTransitionTime":"2026-01-26T12:44:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:16 crc kubenswrapper[4844]: I0126 12:44:16.488724 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:16 crc kubenswrapper[4844]: I0126 12:44:16.488776 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:16 crc kubenswrapper[4844]: I0126 12:44:16.488788 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:16 crc kubenswrapper[4844]: I0126 12:44:16.488806 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:16 crc kubenswrapper[4844]: I0126 12:44:16.488818 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:16Z","lastTransitionTime":"2026-01-26T12:44:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:16 crc kubenswrapper[4844]: I0126 12:44:16.510202 4844 generic.go:334] "Generic (PLEG): container finished" podID="e0ad2def-b040-48db-be8a-19f66df2c0f2" containerID="810435fd1610f5f81dc70f7c0d455c11b414c37e9e392dec8eb3b4668c7820d0" exitCode=0 Jan 26 12:44:16 crc kubenswrapper[4844]: I0126 12:44:16.510256 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-f6ttt" event={"ID":"e0ad2def-b040-48db-be8a-19f66df2c0f2","Type":"ContainerDied","Data":"810435fd1610f5f81dc70f7c0d455c11b414c37e9e392dec8eb3b4668c7820d0"} Jan 26 12:44:16 crc kubenswrapper[4844]: I0126 12:44:16.533880 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:16Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:16 crc kubenswrapper[4844]: I0126 12:44:16.551853 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c6cdfcbb27c9a8c515f38e43d4813f25d2d3781a322e424e09ab047259bd0e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:16Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:16 crc kubenswrapper[4844]: I0126 12:44:16.570033 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"348a2956-fe61-43b9-858f-ab9c97a2985b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\
\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://370a02984d12e09b879a5f3af8d9fe26b40d438d558e51c2d352676f29198a02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://370a02984d12e09b879a5f3af8d9fe26b40d438d558e51c2d352676f29198a02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rlvx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:16Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:16 crc kubenswrapper[4844]: I0126 12:44:16.582551 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7wd9k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"046bb01b-89ef-40e9-bbbd-83b5f2d2cf96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e97a7a3de02627680f94659705ad21fb73a76a1c11f14ebff8ec48ef204ea753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h4zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{
\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7wd9k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:16Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:16 crc kubenswrapper[4844]: I0126 12:44:16.591873 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:16 crc kubenswrapper[4844]: I0126 12:44:16.591917 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:16 crc kubenswrapper[4844]: I0126 12:44:16.591930 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:16 crc kubenswrapper[4844]: I0126 12:44:16.591948 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:16 crc kubenswrapper[4844]: I0126 12:44:16.591963 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:16Z","lastTransitionTime":"2026-01-26T12:44:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:16 crc kubenswrapper[4844]: I0126 12:44:16.607239 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zb9kx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"467433a4-64be-4a14-beb2-657370e9865f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a01598dd996bd69b470e2e4833ea4231bc77f0305a3a7fec3f70b8e2b8f01cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v76sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zb9kx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:16Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:16 crc kubenswrapper[4844]: I0126 12:44:16.624648 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3602fc7-397b-4d73-ab0c-45acc047397b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25282e97c34daf17d2fdac6ba7c074c611ec8df85288f5753bf98a3c1add5afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29xcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c964e0f9d13a855d738b028f8a2bed32fb23f4a05f0f0222a7d24e3222f44b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29xcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-j7r9j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:16Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:16 crc kubenswrapper[4844]: I0126 12:44:16.638302 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f6ttt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0ad2def-b040-48db-be8a-19f66df2c0f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://489205a3983f27152122db352c7a35ccdb12f14bb7b4c2bf3642f5c99699d745\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://489205a3983f27152122db352c7a35ccdb12f14bb7b4c2bf3642f5c99699d745\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/et
c/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc3c4c5df7dc10f646a7fdcf0d70daccac7a5bfc3ec9772a4b6d9aa4305dcc5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc3c4c5df7dc10f646a7fdcf0d70daccac7a5bfc3ec9772a4b6d9aa4305dcc5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74838e6328e419da6cc94d4f2781197e94cbf74aaf06d88c6d0e39eda967fede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74838e6328e419da6cc94d4f2781197e94cbf74aaf06d88c6d0e39eda967fede\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4821615afe8834b864a0c33ad5733eef0b5247da50e1e945cab1c75f9828aa1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\
":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4821615afe8834b864a0c33ad5733eef0b5247da50e1e945cab1c75f9828aa1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c800e6a3e1513d6bc5ab5a54c8d918f2c81b5a4d2631fc36f5f7a29fcda3d220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c800e6a3e1513d6bc5ab5a54c8d918f2c81b5a4d2631fc36f5f7a29fcda3d220\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://810435fd1610f5f81dc70f7c0d455c11b414c37e9e392dec8eb3b4668c7820d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://810435fd1610f5f81dc70f7c0d455c11b414c37e9e392dec8eb3b4668c7820d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f6ttt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:16Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:16 crc kubenswrapper[4844]: I0126 12:44:16.666819 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2c6313f-dd44-4db9-a53a-63c8a42efe6c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b82658e4e6ddbf4e3a16f9d569c5cef6a683d401d79b7fa7f55aad8c70e8254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66ffcb99cad7e5aff2bbe4bdf3c0a2365dd91fb40b2382e05ab1f422c6ea2b26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f49e7dd36dc88342bc2e3bc43fc5770fb43572151945cba3867f9ffd7fa451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b51b34a870f5c4045729aa5ee5d5fb85ebd58fd55a25c5b6effe0ea75ea8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9118772906fa09f9868a34942401b399a842e6718caa3d82b78b2974e1011cca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa0e17b53055bd229c52c53b69e491e0789569e73004101378a34609888bbbc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa0e17b53055bd229c52c53b69e491e0789569e73004101378a34609888bbbc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a99b81e3524dfff792c3e9a274f2460edfe7c29fb27c556fa1c7e427178ccd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a99b81e3524dfff792c3e9a274f2460edfe7c29fb27c556fa1c7e427178ccd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1cd347949870be709220badbf116b4ed8ff9db3962a03361e42f5ff1c61eb5ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cd347949870be709220badbf116b4ed8ff9db3962a03361e42f5ff1c61eb5ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:16Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:16 crc kubenswrapper[4844]: I0126 12:44:16.679762 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd60ce73390532e974f85d610708534f2ab6bcc38e93c54516cb783aca1bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:16Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:16 crc kubenswrapper[4844]: I0126 12:44:16.689565 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-94bpf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"14600b66-6352-4f5e-9c09-eb2548503555\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1eb448e6788cdb87cd78364d442623294abe1263dbedbf9d15e8ca77c06ced2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-456bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-94bpf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:16Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:16 crc kubenswrapper[4844]: I0126 12:44:16.694789 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:16 crc kubenswrapper[4844]: I0126 12:44:16.694833 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:16 crc kubenswrapper[4844]: I0126 12:44:16.694847 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:16 crc kubenswrapper[4844]: I0126 12:44:16.694865 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:16 crc kubenswrapper[4844]: I0126 12:44:16.694879 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:16Z","lastTransitionTime":"2026-01-26T12:44:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:16 crc kubenswrapper[4844]: I0126 12:44:16.702319 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://708e5cab2c0d22cab1f1e5c02d0f8afbdc0fcdccc9a56b663f365bfa4c7f8709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f61077567553b104f052a133a6227c3a887430e971721e7998259e7e7ea7edf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:16Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:16 crc kubenswrapper[4844]: I0126 12:44:16.714364 4844 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:16Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:16 crc kubenswrapper[4844]: I0126 12:44:16.724894 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aecfc1fc-7b8c-42f4-9a7b-058a0acc9534\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64c5d1e4b1fda825b451c8acde0411694fbf7345723d6c3927905a882a79d62e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e33100e90b0d5a23baf7c27076a19dbca18d3e067be6b5a867e122fac37c1136\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64ae899d3a38090964f3d951fd185537ce4ef75f7512342da01f67767d00417b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9207454bb697abf7b64a37e402b50734ae419605cd0938b514ac4f2a74561ce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b40515c0e8056b01e4d48e259ece9389fbb5db6c0d403f2846f28948ce78b0b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64a2d4054650382dca2f56688356aec5fa475b49958e86dd4597a64f77a82f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64a2d4054650382dca2f56688356aec5fa475b49958e86dd4597a64f77a82f81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:16Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:16 crc kubenswrapper[4844]: I0126 12:44:16.740074 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ea4f5b3-1307-4259-8ee3-1de62370d8ef\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://943771df5941e9c5a48c2d8ee946cc9ac1ee7b00855dfe531fbce9f76450dd36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c79791a1a5ceea7564b6466722f4fb48a6729184724efc0ca2498896def357a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e62359ee71a87d75c73a16de09de5ea48d09e4cf2041f434f38f97002a5f8eee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b1fb5af67b83d9ecc627cbfcc14d011b4b11a5c4652a35c1bd1df6f03ab71f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:16Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:16 crc kubenswrapper[4844]: I0126 12:44:16.751034 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:16Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:16 crc kubenswrapper[4844]: I0126 12:44:16.797387 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:16 crc kubenswrapper[4844]: I0126 12:44:16.797453 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:16 crc kubenswrapper[4844]: I0126 12:44:16.797462 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:16 crc kubenswrapper[4844]: I0126 12:44:16.797477 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:16 crc kubenswrapper[4844]: I0126 12:44:16.797488 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:16Z","lastTransitionTime":"2026-01-26T12:44:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:16 crc kubenswrapper[4844]: I0126 12:44:16.899917 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:16 crc kubenswrapper[4844]: I0126 12:44:16.899965 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:16 crc kubenswrapper[4844]: I0126 12:44:16.899974 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:16 crc kubenswrapper[4844]: I0126 12:44:16.899989 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:16 crc kubenswrapper[4844]: I0126 12:44:16.900001 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:16Z","lastTransitionTime":"2026-01-26T12:44:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:17 crc kubenswrapper[4844]: I0126 12:44:17.002361 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:17 crc kubenswrapper[4844]: I0126 12:44:17.002415 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:17 crc kubenswrapper[4844]: I0126 12:44:17.002426 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:17 crc kubenswrapper[4844]: I0126 12:44:17.002442 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:17 crc kubenswrapper[4844]: I0126 12:44:17.002453 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:17Z","lastTransitionTime":"2026-01-26T12:44:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:17 crc kubenswrapper[4844]: I0126 12:44:17.105834 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:17 crc kubenswrapper[4844]: I0126 12:44:17.105873 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:17 crc kubenswrapper[4844]: I0126 12:44:17.105883 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:17 crc kubenswrapper[4844]: I0126 12:44:17.105897 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:17 crc kubenswrapper[4844]: I0126 12:44:17.105910 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:17Z","lastTransitionTime":"2026-01-26T12:44:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:17 crc kubenswrapper[4844]: I0126 12:44:17.128743 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:44:17 crc kubenswrapper[4844]: I0126 12:44:17.128987 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:44:17 crc kubenswrapper[4844]: E0126 12:44:17.129048 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-26 12:44:33.129023787 +0000 UTC m=+50.062391419 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:44:17 crc kubenswrapper[4844]: I0126 12:44:17.129112 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:44:17 crc kubenswrapper[4844]: E0126 12:44:17.129138 4844 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 12:44:17 crc kubenswrapper[4844]: E0126 12:44:17.129224 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 12:44:33.129200612 +0000 UTC m=+50.062568274 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 12:44:17 crc kubenswrapper[4844]: E0126 12:44:17.129407 4844 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 12:44:17 crc kubenswrapper[4844]: E0126 12:44:17.129501 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 12:44:33.129482829 +0000 UTC m=+50.062850441 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 12:44:17 crc kubenswrapper[4844]: I0126 12:44:17.209841 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:17 crc kubenswrapper[4844]: I0126 12:44:17.209886 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:17 crc kubenswrapper[4844]: I0126 12:44:17.209897 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:17 crc kubenswrapper[4844]: I0126 12:44:17.209932 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:17 crc kubenswrapper[4844]: I0126 12:44:17.209946 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:17Z","lastTransitionTime":"2026-01-26T12:44:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:17 crc kubenswrapper[4844]: I0126 12:44:17.230306 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:44:17 crc kubenswrapper[4844]: I0126 12:44:17.230347 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:44:17 crc kubenswrapper[4844]: E0126 12:44:17.230456 4844 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 12:44:17 crc kubenswrapper[4844]: E0126 12:44:17.230470 4844 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 12:44:17 crc kubenswrapper[4844]: E0126 12:44:17.230480 4844 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 12:44:17 crc kubenswrapper[4844]: E0126 12:44:17.230517 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2026-01-26 12:44:33.230505025 +0000 UTC m=+50.163872637 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 12:44:17 crc kubenswrapper[4844]: E0126 12:44:17.230819 4844 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 12:44:17 crc kubenswrapper[4844]: E0126 12:44:17.230832 4844 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 12:44:17 crc kubenswrapper[4844]: E0126 12:44:17.230841 4844 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 12:44:17 crc kubenswrapper[4844]: E0126 12:44:17.230863 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 12:44:33.230856853 +0000 UTC m=+50.164224465 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 12:44:17 crc kubenswrapper[4844]: I0126 12:44:17.287659 4844 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 09:45:54.561123246 +0000 UTC Jan 26 12:44:17 crc kubenswrapper[4844]: I0126 12:44:17.311948 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:17 crc kubenswrapper[4844]: I0126 12:44:17.311989 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:17 crc kubenswrapper[4844]: I0126 12:44:17.311999 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:17 crc kubenswrapper[4844]: I0126 12:44:17.312014 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:17 crc kubenswrapper[4844]: I0126 12:44:17.312023 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:17Z","lastTransitionTime":"2026-01-26T12:44:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:17 crc kubenswrapper[4844]: I0126 12:44:17.312541 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:44:17 crc kubenswrapper[4844]: E0126 12:44:17.312649 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 12:44:17 crc kubenswrapper[4844]: I0126 12:44:17.312743 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:44:17 crc kubenswrapper[4844]: E0126 12:44:17.312805 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 12:44:17 crc kubenswrapper[4844]: I0126 12:44:17.312867 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:44:17 crc kubenswrapper[4844]: E0126 12:44:17.312915 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 12:44:17 crc kubenswrapper[4844]: I0126 12:44:17.414360 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:17 crc kubenswrapper[4844]: I0126 12:44:17.414408 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:17 crc kubenswrapper[4844]: I0126 12:44:17.414424 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:17 crc kubenswrapper[4844]: I0126 12:44:17.414442 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:17 crc kubenswrapper[4844]: I0126 12:44:17.414453 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:17Z","lastTransitionTime":"2026-01-26T12:44:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:17 crc kubenswrapper[4844]: I0126 12:44:17.517180 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:17 crc kubenswrapper[4844]: I0126 12:44:17.517215 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:17 crc kubenswrapper[4844]: I0126 12:44:17.517226 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:17 crc kubenswrapper[4844]: I0126 12:44:17.517241 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:17 crc kubenswrapper[4844]: I0126 12:44:17.517252 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:17Z","lastTransitionTime":"2026-01-26T12:44:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:17 crc kubenswrapper[4844]: I0126 12:44:17.526150 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" event={"ID":"348a2956-fe61-43b9-858f-ab9c97a2985b","Type":"ContainerStarted","Data":"fd671b046221d3e0fd341253eb58c3ed579fdd50efa41f43f93ced7b9212e792"} Jan 26 12:44:17 crc kubenswrapper[4844]: I0126 12:44:17.526522 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" Jan 26 12:44:17 crc kubenswrapper[4844]: I0126 12:44:17.530690 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-f6ttt" event={"ID":"e0ad2def-b040-48db-be8a-19f66df2c0f2","Type":"ContainerStarted","Data":"1a065fe1dc7d374bbe86c5012d0f224285e08e6b38a8eeb9fcdc76d684162934"} Jan 26 12:44:17 crc kubenswrapper[4844]: I0126 12:44:17.547739 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2c6313f-dd44-4db9-a53a-63c8a42efe6c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b82658e4e6ddbf4e3a16f9d569c5cef6a683d401d79b7fa7f55aad8c70e8254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66ffcb99cad7e5aff2bbe4bdf3c0a2365dd91fb40b2382e05ab1f422c6ea2b26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f49e7dd36dc88342bc2e3bc43fc5770fb43572151945cba3867f9ffd7fa451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b51b34a870f5c4045729aa5ee5d5fb85ebd58
fd55a25c5b6effe0ea75ea8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9118772906fa09f9868a34942401b399a842e6718caa3d82b78b2974e1011cca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa0e17b53055bd229c52c53b69e491e0789569e73004101378a34609888bbbc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa0e17b53055bd229c52c53b69e491e0789569e73004101378a34609888bbbc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a99b81e3524dfff792c3e9a274f2460edfe7c29fb27c556fa1c7e427178ccd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a99b81e3524dfff792c3e9a274f2460edfe7c29fb27c556fa1c7e427178ccd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1cd347949870be709220badbf116b4ed8ff9db3962a03361e42f5ff1c61eb5ea\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cd347949870be709220badbf116b4ed8ff9db3962a03361e42f5ff1c61eb5ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:17Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:17 crc kubenswrapper[4844]: I0126 12:44:17.553255 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" Jan 26 12:44:17 crc kubenswrapper[4844]: I0126 12:44:17.560480 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd60ce73390532e974f85d610708534f2ab6bcc38e93c54516cb783aca1bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:17Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:17 crc kubenswrapper[4844]: I0126 12:44:17.570640 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-94bpf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"14600b66-6352-4f5e-9c09-eb2548503555\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1eb448e6788cdb87cd78364d442623294abe1263dbedbf9d15e8ca77c06ced2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-456bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-94bpf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:17Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:17 crc kubenswrapper[4844]: I0126 12:44:17.583804 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:17Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:17 crc kubenswrapper[4844]: I0126 12:44:17.597070 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://708e5cab2c0d22cab1f1e5c02d0f8afbdc0fcdccc9a56b663f365bfa4c7f8709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f61077567553b104f052a133a6227c3a887430e971721e7998259e7e7ea7edf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:17Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:17 crc kubenswrapper[4844]: I0126 12:44:17.610825 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:17Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:17 crc kubenswrapper[4844]: I0126 12:44:17.619796 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:17 crc kubenswrapper[4844]: I0126 12:44:17.619826 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:17 crc kubenswrapper[4844]: I0126 12:44:17.619836 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:17 crc kubenswrapper[4844]: I0126 12:44:17.619849 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:17 crc kubenswrapper[4844]: I0126 12:44:17.619859 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:17Z","lastTransitionTime":"2026-01-26T12:44:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:17 crc kubenswrapper[4844]: I0126 12:44:17.625520 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aecfc1fc-7b8c-42f4-9a7b-058a0acc9534\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64c5d1e4b1fda825b451c8acde0411694fbf7345723d6c3927905a882a79d62e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e33100e90b0d5a23baf7c27076a19dbca18d3e067be6b5a867e122fac37c1136\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64ae899d3a38090964f3d951fd185537ce4ef75f7512342da01f67767d00417b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9207454bb697abf7b64a37e402b50734ae419605cd0938b514ac4f2a74561ce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b40515c0e8056b01e4d48e259ece9389fbb5db6c0d403f2846f28948ce78b0b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64a2d4054650382dca2f56688356aec5fa475b49958e86dd4597a64f77a82f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64a2d4054650382dca2f56688356aec5fa475b49958e86dd4597a64f77a82f81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:17Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:17 crc kubenswrapper[4844]: I0126 12:44:17.642240 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ea4f5b3-1307-4259-8ee3-1de62370d8ef\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://943771df5941e9c5a48c2d8ee946cc9ac1ee7b00855dfe531fbce9f76450dd36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c79791a1a5ceea7564b6466722f4fb48a6729184724efc0ca2498896def357a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e62359ee71a87d75c73a16de09de5ea48d09e4cf2041f434f38f97002a5f8eee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b1fb5af67b83d9ecc627cbfcc14d011b4b11a5c4652a35c1bd1df6f03ab71f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:17Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:17 crc kubenswrapper[4844]: I0126 12:44:17.655202 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:17Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:17 crc kubenswrapper[4844]: I0126 12:44:17.667417 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c6cdfcbb27c9a8c515f38e43d4813f25d2d3781a322e424e09ab047259bd0e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:17Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:17 crc kubenswrapper[4844]: I0126 12:44:17.681837 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f6ttt" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0ad2def-b040-48db-be8a-19f66df2c0f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://489205a3983f27152122db352c7a35ccdb12f14bb7b4c2bf3642f5c99699d745\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://489205a3983f27152122db352c7a35ccdb12f14bb7b4c2bf3642f5c99699d745\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc3c4c5df7dc10f646a7fdcf0d70daccac7a5bfc3ec9772a4b6d9aa4305dcc5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/o
penshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc3c4c5df7dc10f646a7fdcf0d70daccac7a5bfc3ec9772a4b6d9aa4305dcc5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74838e6328e419da6cc94d4f2781197e94cbf74aaf06d88c6d0e39eda967fede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74838e6328e419da6cc94d4f2781197e94cbf74aaf06d88c6d0e39eda967fede\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4821615afe8834b864a0c33ad5733eef0b5247da50e1e945cab1c75f9828aa1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4821615afe8834b864a0c33ad5733eef0b5247da50e1e945cab1c75f9828aa1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c800e6a3e1513d6bc5ab5a54c8d918f2c81b5a4d2631fc36f5f7a29fcda3d220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c800e6a3e1513d6bc5ab5a54c8d918f2c81b5a4d2631fc36f5f7a29fcda3d220\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://810435fd1610f5f81dc70f7c0d455c11b414c37e9e392dec8eb3b4668c7820d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://810435fd1610f5f81dc70f7c0d455c11b414c37e9e392dec8eb3b4668c7820d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f6ttt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:17Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:17 crc kubenswrapper[4844]: I0126 12:44:17.702969 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"348a2956-fe61-43b9-858f-ab9c97a2985b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7af767e8f476993661bb886e421e7dd2fbd11c07865d6e56a8595425fe4265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a97ef8aa10641683ab96943f4c1a54452ee34b28c854dd7902fc075a70e7f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de371086349f06d5762291b0be4ba366d69376f96d55c2c781c414d9acaff745\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\
\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2cfd5586a3073d55abd3f41fb92679c4f86ea8caeea2d9a8ca42515277750a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a16b2c8da5ad6d42bd5ce77ca31794858e04a05060c5bb0538dd099b1d5dd6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03f4ffe2ba632bdfb8c45133ef267a29fa660e3aa85dfd99934ded60d32772f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\
"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd671b046221d3e0fd341253eb58c3ed579fdd50efa41f43f93ced7b9212e792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recurs
iveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbe92fbc9ce1d23484a9dca7eddb878b59da1fbe669098a82a798d41ee52e61d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://370a02984d12e09b879a5f3af8d9fe26b40d438d558e51c2d352676f29198a02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://370a02984d12e09b879a5f3af8d9fe26b40d438d558e51c2d352676f29198a02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rlvx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:17Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:17 crc kubenswrapper[4844]: I0126 12:44:17.714497 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7wd9k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"046bb01b-89ef-40e9-bbbd-83b5f2d2cf96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e97a7a3de02627680f94659705ad21fb73a76a1c11f14ebff8ec48ef204ea753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h4zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7wd9k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:17Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:17 crc kubenswrapper[4844]: I0126 12:44:17.722559 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:17 crc kubenswrapper[4844]: I0126 12:44:17.722620 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:17 crc kubenswrapper[4844]: I0126 12:44:17.722633 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:17 crc kubenswrapper[4844]: I0126 12:44:17.722650 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:17 crc kubenswrapper[4844]: I0126 12:44:17.722664 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:17Z","lastTransitionTime":"2026-01-26T12:44:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:17 crc kubenswrapper[4844]: I0126 12:44:17.727407 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zb9kx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"467433a4-64be-4a14-beb2-657370e9865f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a01598dd996bd69b470e2e4833ea4231bc77f0305a3a7fec3f70b8e2b8f01cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v76sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"host
IP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zb9kx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:17Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:17 crc kubenswrapper[4844]: I0126 12:44:17.741701 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3602fc7-397b-4d73-ab0c-45acc047397b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25282e97c34daf17d2fdac6ba7c074c611ec8df85288f5753bf98a3c1add5afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29xcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c964e0f9d13a855d738b028f8a2bed32fb23f4a05f0f0222a7d24e3222f44b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29x
cb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j7r9j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:17Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:17 crc kubenswrapper[4844]: I0126 12:44:17.754921 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aecfc1fc-7b8c-42f4-9a7b-058a0acc9534\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64c5d1e4b1fda825b451c8acde0411694fbf7345723d6c3927905a882a79d62e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e33100e90b0d5a23baf7c27076a19dbca18d3e067be6b5a867e122fac37c1136\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64ae899d3a38090964f3d951fd185537ce4ef75f7512342d
a01f67767d00417b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9207454bb697abf7b64a37e402b50734ae419605cd0938b514ac4f2a74561ce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b40515c0e8056b01e4d48e259ece9389fbb5db6c0d403f2846f28948ce78b0b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64a2d4054650382dca2f56688356aec5fa475b49958e86dd4597a64f77a82f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64a2d4054650382dca2f56688356aec5fa475b49958e86dd4597a64f77a82f81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:17Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:17 crc kubenswrapper[4844]: I0126 12:44:17.768704 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ea4f5b3-1307-4259-8ee3-1de62370d8ef\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://943771df5941e9c5a48c2d8ee946cc9ac1ee7b00855dfe531fbce9f76450dd36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c79791a1a5ceea7564b6466722f4fb48a6729184724efc0ca2498896def357a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e62359ee71a87d75c73a16de09de5ea48d09e4cf2041f434f38f97002a5f8eee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b1fb5af67b83d9ecc627cbfcc14d011b4b11a5c4652a35c1bd1df6f03ab71f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:17Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:17 crc kubenswrapper[4844]: I0126 12:44:17.784511 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:17Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:17 crc kubenswrapper[4844]: I0126 12:44:17.799006 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://708e5cab2c0d22cab1f1e5c02d0f8afbdc0fcdccc9a56b663f365bfa4c7f8709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f61077567553b104f052a133a6227c3a887430e971721e7998259e7e7ea7edf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:17Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:17 crc kubenswrapper[4844]: I0126 12:44:17.812850 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:17Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:17 crc kubenswrapper[4844]: I0126 12:44:17.825105 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:17 crc kubenswrapper[4844]: I0126 12:44:17.825145 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:17 crc kubenswrapper[4844]: I0126 12:44:17.825158 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:17 crc kubenswrapper[4844]: I0126 12:44:17.825174 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:17 crc kubenswrapper[4844]: I0126 12:44:17.825186 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:17Z","lastTransitionTime":"2026-01-26T12:44:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:17 crc kubenswrapper[4844]: I0126 12:44:17.827771 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:17Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:17 crc kubenswrapper[4844]: I0126 12:44:17.842711 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c6cdfcbb27c9a8c515f38e43d4813f25d2d3781a322e424e09ab047259bd0e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:17Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:17 crc kubenswrapper[4844]: I0126 12:44:17.856246 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zb9kx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"467433a4-64be-4a14-beb2-657370e9865f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a01598dd996bd69b470e2e4833ea4231bc77f0305a3a7fec3f70b8e2b8f01cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v76sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zb9kx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:17Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:17 crc kubenswrapper[4844]: I0126 12:44:17.868073 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3602fc7-397b-4d73-ab0c-45acc047397b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25282e97c34daf17d2fdac6ba7c074c611ec8df85288f5753bf98a3c1add5afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29xcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c964e0f9d13a855d738b028f8a2bed32fb23f4a05f0f0222a7d24e3222f44b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29xcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-j7r9j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:17Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:17 crc kubenswrapper[4844]: I0126 12:44:17.883829 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f6ttt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0ad2def-b040-48db-be8a-19f66df2c0f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a065fe1dc7d374bbe86c5012d0f224285e08e6b38a8eeb9fcdc76d684162934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://489205a3983f27152122db352c7a35ccdb12f14bb7b4c2bf3642f5c99699d745\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://489205a3983f27152122db352c7a35ccdb12f14bb7b4c2bf3642f5c99699d745\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc3c4c5df7dc10f646a7fdcf0d70daccac7a5bfc3ec9772a4b6d9aa4305dcc5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc3c4c5df7dc10f646a7fdcf0d70daccac7a5bfc3ec9772a4b6d9aa4305dcc5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74838e6328e419da6cc94d4f2781197e94cbf74aaf06d88c6d0e39eda967fede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74838e6328e419da6cc94d4f2781197e94cbf74aaf06d88c6d0e39eda967fede\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4821615afe8834b864a0c33ad5733eef0b5247da50e1e945cab1c75f9828aa1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://4821615afe8834b864a0c33ad5733eef0b5247da50e1e945cab1c75f9828aa1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c800e6a3e1513d6bc5ab5a54c8d918f2c81b5a4d2631fc36f5f7a29fcda3d220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c800e6a3e1513d6bc5ab5a54c8d918f2c81b5a4d2631fc36f5f7a29fcda3d220\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://810435fd1610f5f81dc70f7c0d455c11b414c37e9e392dec8eb3b4668c7820d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://810435fd1610f5f81dc70f7c0d455c11b414c37e9e392dec8eb3b4668c7820d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f6ttt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:17Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:17 crc kubenswrapper[4844]: I0126 12:44:17.904775 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"348a2956-fe61-43b9-858f-ab9c97a2985b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7af767e8f476993661bb886e421e7dd2fbd11c07865d6e56a8595425fe4265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a97ef8aa10641683ab96943f4c1a54452ee34b28c854dd7902fc075a70e7f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\
\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de371086349f06d5762291b0be4ba366d69376f96d55c2c781c414d9acaff745\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2cfd5586a3073d55abd3f41fb92679c4f86ea8caeea2d9a8ca42515277750a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a16b2c8da5ad6d42bd5ce77ca31794858e04a05060c5bb0538dd099b1d5dd6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03f4ffe2ba632bdfb8c45133ef267a29fa660e3aa85dfd99
934ded60d32772f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd671b046221d3e0fd341253eb58c3ed579fdd50efa41f43f93ced7b9212e792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-o
penvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbe92fbc9ce1d23484a9dca7eddb878b59da1fbe669098a82a798d41ee52e61d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://370a02984d12e09b879a5f3af8d9fe26b40d438d558e51c2d352676f29198a02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://370a02984d12e09b879a5f3af8d9fe26b40d438d558e51c2d352676f29198a02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rlvx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:17Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:17 crc kubenswrapper[4844]: I0126 12:44:17.915535 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7wd9k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"046bb01b-89ef-40e9-bbbd-83b5f2d2cf96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e97a7a3de02627680f94659705ad21fb73a76a1c11f14ebff8ec48ef204ea753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h4zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7wd9k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:17Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:17 crc kubenswrapper[4844]: I0126 12:44:17.927002 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:17 crc kubenswrapper[4844]: I0126 12:44:17.927054 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:17 crc kubenswrapper[4844]: I0126 12:44:17.927069 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:17 crc kubenswrapper[4844]: I0126 12:44:17.927087 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:17 crc kubenswrapper[4844]: I0126 12:44:17.927099 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:17Z","lastTransitionTime":"2026-01-26T12:44:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:17 crc kubenswrapper[4844]: I0126 12:44:17.936249 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2c6313f-dd44-4db9-a53a-63c8a42efe6c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b82658e4e6ddbf4e3a16f9d569c5cef6a683d401d79b7fa7f55aad8c70e8254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66ffcb99cad7e5aff2bbe4bdf3c0a2365dd91fb40b2382e05ab1f422c6ea2b26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f49e7dd36dc88342bc2e3bc43fc5770fb43572151945cba3867f9ffd7fa451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b51b34a870f5c4045729aa5ee5d5fb85ebd58fd55a25c5b6effe0ea75ea8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9118772906fa09f9868a34942401b399a842e6718caa3d82b78b2974e1011cca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa0e17b53055bd229c52c53b69e491e0789569e73004101378a34609888bbbc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa0e17b53055bd229c52c53b69e491e0789569e73004101378a34609888bbbc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a99b81e3524dfff792c3e9a274f2460edfe7c29fb27c556fa1c7e427178ccd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termi
nated\\\":{\\\"containerID\\\":\\\"cri-o://96a99b81e3524dfff792c3e9a274f2460edfe7c29fb27c556fa1c7e427178ccd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1cd347949870be709220badbf116b4ed8ff9db3962a03361e42f5ff1c61eb5ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cd347949870be709220badbf116b4ed8ff9db3962a03361e42f5ff1c61eb5ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:17Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:17 crc kubenswrapper[4844]: I0126 12:44:17.950864 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd60ce73390532e974f85d610708534f2ab6bcc38e93c54516cb783aca1bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:17Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:17 crc kubenswrapper[4844]: I0126 12:44:17.960567 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-94bpf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"14600b66-6352-4f5e-9c09-eb2548503555\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1eb448e6788cdb87cd78364d442623294abe1263dbedbf9d15e8ca77c06ced2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-456bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-94bpf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:17Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:18 crc kubenswrapper[4844]: I0126 12:44:18.029583 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:18 crc kubenswrapper[4844]: I0126 12:44:18.029627 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:18 crc kubenswrapper[4844]: I0126 12:44:18.029636 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:18 crc kubenswrapper[4844]: I0126 12:44:18.029652 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:18 crc kubenswrapper[4844]: I0126 12:44:18.029661 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:18Z","lastTransitionTime":"2026-01-26T12:44:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:18 crc kubenswrapper[4844]: I0126 12:44:18.132088 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:18 crc kubenswrapper[4844]: I0126 12:44:18.132118 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:18 crc kubenswrapper[4844]: I0126 12:44:18.132129 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:18 crc kubenswrapper[4844]: I0126 12:44:18.132143 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:18 crc kubenswrapper[4844]: I0126 12:44:18.132152 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:18Z","lastTransitionTime":"2026-01-26T12:44:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:18 crc kubenswrapper[4844]: I0126 12:44:18.234377 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:18 crc kubenswrapper[4844]: I0126 12:44:18.234427 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:18 crc kubenswrapper[4844]: I0126 12:44:18.234438 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:18 crc kubenswrapper[4844]: I0126 12:44:18.234460 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:18 crc kubenswrapper[4844]: I0126 12:44:18.234474 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:18Z","lastTransitionTime":"2026-01-26T12:44:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:18 crc kubenswrapper[4844]: I0126 12:44:18.288116 4844 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 01:17:25.841123556 +0000 UTC Jan 26 12:44:18 crc kubenswrapper[4844]: I0126 12:44:18.336801 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:18 crc kubenswrapper[4844]: I0126 12:44:18.336857 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:18 crc kubenswrapper[4844]: I0126 12:44:18.336867 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:18 crc kubenswrapper[4844]: I0126 12:44:18.336887 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:18 crc kubenswrapper[4844]: I0126 12:44:18.336900 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:18Z","lastTransitionTime":"2026-01-26T12:44:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:18 crc kubenswrapper[4844]: I0126 12:44:18.439678 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:18 crc kubenswrapper[4844]: I0126 12:44:18.439715 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:18 crc kubenswrapper[4844]: I0126 12:44:18.439723 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:18 crc kubenswrapper[4844]: I0126 12:44:18.439736 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:18 crc kubenswrapper[4844]: I0126 12:44:18.439744 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:18Z","lastTransitionTime":"2026-01-26T12:44:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:18 crc kubenswrapper[4844]: I0126 12:44:18.533919 4844 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 12:44:18 crc kubenswrapper[4844]: I0126 12:44:18.534677 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" Jan 26 12:44:18 crc kubenswrapper[4844]: I0126 12:44:18.541819 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:18 crc kubenswrapper[4844]: I0126 12:44:18.541847 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:18 crc kubenswrapper[4844]: I0126 12:44:18.541857 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:18 crc kubenswrapper[4844]: I0126 12:44:18.541870 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:18 crc kubenswrapper[4844]: I0126 12:44:18.541884 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:18Z","lastTransitionTime":"2026-01-26T12:44:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:18 crc kubenswrapper[4844]: I0126 12:44:18.563839 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" Jan 26 12:44:18 crc kubenswrapper[4844]: I0126 12:44:18.590001 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2c6313f-dd44-4db9-a53a-63c8a42efe6c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b82658e4e6ddbf4e3a16f9d569c5cef6a683d401d79b7fa7f55aad8c70e8254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66ffcb99cad7e5aff2bbe4bdf3c0a2365dd91fb40b2382e05ab1f422c6ea2b26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f49e7dd36dc88342bc2e3bc43fc5770fb43572151945cba3867f9ffd7fa451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b51b34a870f5c4045729aa5ee5d5fb85ebd58
fd55a25c5b6effe0ea75ea8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9118772906fa09f9868a34942401b399a842e6718caa3d82b78b2974e1011cca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa0e17b53055bd229c52c53b69e491e0789569e73004101378a34609888bbbc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa0e17b53055bd229c52c53b69e491e0789569e73004101378a34609888bbbc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a99b81e3524dfff792c3e9a274f2460edfe7c29fb27c556fa1c7e427178ccd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a99b81e3524dfff792c3e9a274f2460edfe7c29fb27c556fa1c7e427178ccd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1cd347949870be709220badbf116b4ed8ff9db3962a03361e42f5ff1c61eb5ea\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cd347949870be709220badbf116b4ed8ff9db3962a03361e42f5ff1c61eb5ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:18Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:18 crc kubenswrapper[4844]: I0126 12:44:18.604221 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd60ce73390532e974f85d610708534f2ab6bcc38e93c54516cb783aca1bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:18Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:18 crc kubenswrapper[4844]: I0126 12:44:18.617534 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-94bpf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14600b66-6352-4f5e-9c09-eb2548503555\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1eb448e6788cdb87cd78364d442623294abe1263dbedbf9d15e8ca77c06ced2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-456bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-94bpf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:18Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:18 crc kubenswrapper[4844]: I0126 12:44:18.631106 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:18Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:18 crc kubenswrapper[4844]: I0126 12:44:18.644123 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:18 crc kubenswrapper[4844]: I0126 12:44:18.644155 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:18 crc kubenswrapper[4844]: I0126 12:44:18.644164 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:18 crc kubenswrapper[4844]: I0126 12:44:18.644179 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:18 crc kubenswrapper[4844]: I0126 12:44:18.644189 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:18Z","lastTransitionTime":"2026-01-26T12:44:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:18 crc kubenswrapper[4844]: I0126 12:44:18.653017 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aecfc1fc-7b8c-42f4-9a7b-058a0acc9534\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64c5d1e4b1fda825b451c8acde0411694fbf7345723d6c3927905a882a79d62e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e33100e90b0d5a23baf7c27076a19dbca18d3e067be6b5a867e122fac37c1136\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64ae899d3a38090964f3d951fd185537ce4ef75f7512342da01f67767d00417b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9207454bb697abf7b64a37e402b50734ae419605cd0938b514ac4f2a74561ce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b40515c0e8056b01e4d48e259ece9389fbb5db6c0d403f2846f28948ce78b0b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64a2d4054650382dca2f56688356aec5fa475b49958e86dd4597a64f77a82f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64a2d4054650382dca2f56688356aec5fa475b49958e86dd4597a64f77a82f81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:18Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:18 crc kubenswrapper[4844]: I0126 12:44:18.669507 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ea4f5b3-1307-4259-8ee3-1de62370d8ef\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://943771df5941e9c5a48c2d8ee946cc9ac1ee7b00855dfe531fbce9f76450dd36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c79791a1a5ceea7564b6466722f4fb48a6729184724efc0ca2498896def357a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e62359ee71a87d75c73a16de09de5ea48d09e4cf2041f434f38f97002a5f8eee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b1fb5af67b83d9ecc627cbfcc14d011b4b11a5c4652a35c1bd1df6f03ab71f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:18Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:18 crc kubenswrapper[4844]: I0126 12:44:18.681635 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:18Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:18 crc kubenswrapper[4844]: I0126 12:44:18.696153 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://708e5cab2c0d22cab1f1e5c02d0f8afbdc0fcdccc9a56b663f365bfa4c7f8709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f61077567553b104f052a133a6227c3a887430e971721e7998259e7e7ea7edf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:18Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:18 crc kubenswrapper[4844]: I0126 12:44:18.707395 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:18Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:18 crc kubenswrapper[4844]: I0126 12:44:18.716502 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c6cdfcbb27c9a8c515f38e43d4813f25d2d3781a322e424e09ab047259bd0e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:18Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:18 crc kubenswrapper[4844]: I0126 12:44:18.726214 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7wd9k" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"046bb01b-89ef-40e9-bbbd-83b5f2d2cf96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e97a7a3de02627680f94659705ad21fb73a76a1c11f14ebff8ec48ef204ea753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h4zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7wd9k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:18Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:18 crc kubenswrapper[4844]: I0126 12:44:18.738847 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zb9kx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"467433a4-64be-4a14-beb2-657370e9865f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a01598dd996bd69b470e2e4833ea4231bc77f0305a3a7fec3f70b8e2b8f01cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v76sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zb9kx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:18Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:18 crc kubenswrapper[4844]: I0126 12:44:18.747440 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:18 crc kubenswrapper[4844]: I0126 12:44:18.747512 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:18 crc kubenswrapper[4844]: I0126 12:44:18.747528 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:18 crc kubenswrapper[4844]: I0126 12:44:18.747544 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:18 crc kubenswrapper[4844]: I0126 12:44:18.747555 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:18Z","lastTransitionTime":"2026-01-26T12:44:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:18 crc kubenswrapper[4844]: I0126 12:44:18.750614 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3602fc7-397b-4d73-ab0c-45acc047397b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25282e97c34daf17d2fdac6ba7c074c611ec8df85288f5753bf98a3c1add5afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29xcb\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c964e0f9d13a855d738b028f8a2bed32fb23f4a05f0f0222a7d24e3222f44b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29xcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j7r9j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:18Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:18 crc kubenswrapper[4844]: I0126 12:44:18.770749 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f6ttt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0ad2def-b040-48db-be8a-19f66df2c0f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a065fe1dc7d374bbe86c5012d0f224285e08e6b38a8eeb9fcdc76d684162934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://489205a3983f27152122db352c7a35ccdb12f14bb7b4c2bf3642f5c99699d745\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://489205a3983f27152122db352c7a35ccdb12f14bb7b4c2bf3642f5c99699d745\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc3c4c5df7dc10f646a7fdcf0d70daccac7a5bfc3ec9772a4b6d9aa4305dcc5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc3c4c5df7dc10f646a7fdcf0d70daccac7a5bfc3ec9772a4b6d9aa4305dcc5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74838e6328e419da6cc94d4f2781197e94cbf74aaf06d88c6d0e39eda967fede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74838e6328e419da6cc94d4f2781197e94cbf74aaf06d88c6d0e39eda967fede\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4821615afe8834b864a0c33ad5733eef0b5247da50e1e945cab1c75f9828aa1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4821615afe8834b864a0c33ad5733eef0b5247da50e1e945cab1c75f9828aa1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c800e6a3e1513d6bc5ab5a54c8d918f2c81b5a4d2631fc36f5f7a29fcda3d220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c800e6a3e1513d6bc5ab5a54c8d918f2c81b5a4d2631fc36f5f7a29fcda3d220\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://810435fd1610f5f81dc70f7c0d455c11b414c37e9e392dec8eb3b4668c7820d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://810435fd1610f5f81dc70f7c0d455c11b414c37e9e392dec8eb3b4668c7820d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f6ttt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:18Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:18 crc kubenswrapper[4844]: I0126 12:44:18.790629 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"348a2956-fe61-43b9-858f-ab9c97a2985b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7af767e8f476993661bb886e421e7dd2fbd11c07865d6e56a8595425fe4265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a97ef8aa10641683ab96943f4c1a54452ee34b28c854dd7902fc075a70e7f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de371086349f06d5762291b0be4ba366d69376f96d55c2c781c414d9acaff745\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2cfd5586a3073d55abd3f41fb92679c4f86ea8caeea2d9a8ca42515277750a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a16b2c8da5ad6d42bd5ce77ca31794858e04a05060c5bb0538dd099b1d5dd6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03f4ffe2ba632bdfb8c45133ef267a29fa660e3aa85dfd99934ded60d32772f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd671b046221d3e0fd341253eb58c3ed579fdd50efa41f43f93ced7b9212e792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbe92fbc9ce1d23484a9dca7eddb878b59da1fbe669098a82a798d41ee52e61d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://370a02984d12e09b879a5f3af8d9fe26b40d438d558e51c2d352676f29198a02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://370a02984d12e09b879a5f3af8d9fe26b40d438d558e51c2d352676f29198a02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rlvx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:18Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:18 crc kubenswrapper[4844]: I0126 12:44:18.849985 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:18 crc kubenswrapper[4844]: I0126 12:44:18.850042 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:18 crc kubenswrapper[4844]: I0126 12:44:18.850053 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:18 crc kubenswrapper[4844]: I0126 12:44:18.850070 4844 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Jan 26 12:44:18 crc kubenswrapper[4844]: I0126 12:44:18.850082 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:18Z","lastTransitionTime":"2026-01-26T12:44:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:18 crc kubenswrapper[4844]: I0126 12:44:18.952628 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:18 crc kubenswrapper[4844]: I0126 12:44:18.952677 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:18 crc kubenswrapper[4844]: I0126 12:44:18.952687 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:18 crc kubenswrapper[4844]: I0126 12:44:18.952699 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:18 crc kubenswrapper[4844]: I0126 12:44:18.952707 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:18Z","lastTransitionTime":"2026-01-26T12:44:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:19 crc kubenswrapper[4844]: I0126 12:44:19.055411 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:19 crc kubenswrapper[4844]: I0126 12:44:19.055448 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:19 crc kubenswrapper[4844]: I0126 12:44:19.055472 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:19 crc kubenswrapper[4844]: I0126 12:44:19.055488 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:19 crc kubenswrapper[4844]: I0126 12:44:19.055498 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:19Z","lastTransitionTime":"2026-01-26T12:44:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:19 crc kubenswrapper[4844]: I0126 12:44:19.158061 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:19 crc kubenswrapper[4844]: I0126 12:44:19.158110 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:19 crc kubenswrapper[4844]: I0126 12:44:19.158127 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:19 crc kubenswrapper[4844]: I0126 12:44:19.158150 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:19 crc kubenswrapper[4844]: I0126 12:44:19.158162 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:19Z","lastTransitionTime":"2026-01-26T12:44:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:19 crc kubenswrapper[4844]: I0126 12:44:19.260806 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:19 crc kubenswrapper[4844]: I0126 12:44:19.260850 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:19 crc kubenswrapper[4844]: I0126 12:44:19.260864 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:19 crc kubenswrapper[4844]: I0126 12:44:19.260878 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:19 crc kubenswrapper[4844]: I0126 12:44:19.260892 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:19Z","lastTransitionTime":"2026-01-26T12:44:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:19 crc kubenswrapper[4844]: I0126 12:44:19.289283 4844 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 08:13:29.266451551 +0000 UTC Jan 26 12:44:19 crc kubenswrapper[4844]: I0126 12:44:19.312665 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:44:19 crc kubenswrapper[4844]: I0126 12:44:19.312739 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:44:19 crc kubenswrapper[4844]: E0126 12:44:19.312773 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 12:44:19 crc kubenswrapper[4844]: E0126 12:44:19.312813 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 12:44:19 crc kubenswrapper[4844]: I0126 12:44:19.312732 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:44:19 crc kubenswrapper[4844]: E0126 12:44:19.312931 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 12:44:19 crc kubenswrapper[4844]: I0126 12:44:19.363097 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:19 crc kubenswrapper[4844]: I0126 12:44:19.363150 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:19 crc kubenswrapper[4844]: I0126 12:44:19.363162 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:19 crc kubenswrapper[4844]: I0126 12:44:19.363180 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:19 crc kubenswrapper[4844]: I0126 12:44:19.363193 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:19Z","lastTransitionTime":"2026-01-26T12:44:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:19 crc kubenswrapper[4844]: I0126 12:44:19.464994 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:19 crc kubenswrapper[4844]: I0126 12:44:19.465038 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:19 crc kubenswrapper[4844]: I0126 12:44:19.465046 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:19 crc kubenswrapper[4844]: I0126 12:44:19.465060 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:19 crc kubenswrapper[4844]: I0126 12:44:19.465069 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:19Z","lastTransitionTime":"2026-01-26T12:44:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:19 crc kubenswrapper[4844]: I0126 12:44:19.535919 4844 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 12:44:19 crc kubenswrapper[4844]: I0126 12:44:19.567075 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:19 crc kubenswrapper[4844]: I0126 12:44:19.567114 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:19 crc kubenswrapper[4844]: I0126 12:44:19.567127 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:19 crc kubenswrapper[4844]: I0126 12:44:19.567142 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:19 crc kubenswrapper[4844]: I0126 12:44:19.567152 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:19Z","lastTransitionTime":"2026-01-26T12:44:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:19 crc kubenswrapper[4844]: I0126 12:44:19.669585 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:19 crc kubenswrapper[4844]: I0126 12:44:19.669630 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:19 crc kubenswrapper[4844]: I0126 12:44:19.669638 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:19 crc kubenswrapper[4844]: I0126 12:44:19.669650 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:19 crc kubenswrapper[4844]: I0126 12:44:19.669658 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:19Z","lastTransitionTime":"2026-01-26T12:44:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:19 crc kubenswrapper[4844]: I0126 12:44:19.724299 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5qpr8"] Jan 26 12:44:19 crc kubenswrapper[4844]: I0126 12:44:19.724788 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5qpr8" Jan 26 12:44:19 crc kubenswrapper[4844]: I0126 12:44:19.727484 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 26 12:44:19 crc kubenswrapper[4844]: I0126 12:44:19.728034 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 26 12:44:19 crc kubenswrapper[4844]: I0126 12:44:19.744589 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:19Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:19 crc kubenswrapper[4844]: I0126 12:44:19.756888 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/04a4b371-44a9-4805-b60f-6f7ba0fac40b-env-overrides\") pod \"ovnkube-control-plane-749d76644c-5qpr8\" (UID: \"04a4b371-44a9-4805-b60f-6f7ba0fac40b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5qpr8" Jan 26 12:44:19 crc kubenswrapper[4844]: I0126 12:44:19.756935 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/04a4b371-44a9-4805-b60f-6f7ba0fac40b-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-5qpr8\" (UID: \"04a4b371-44a9-4805-b60f-6f7ba0fac40b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5qpr8" Jan 26 12:44:19 crc kubenswrapper[4844]: I0126 12:44:19.757000 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/04a4b371-44a9-4805-b60f-6f7ba0fac40b-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-5qpr8\" (UID: \"04a4b371-44a9-4805-b60f-6f7ba0fac40b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5qpr8" Jan 26 12:44:19 crc kubenswrapper[4844]: I0126 12:44:19.757024 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7ckr\" (UniqueName: \"kubernetes.io/projected/04a4b371-44a9-4805-b60f-6f7ba0fac40b-kube-api-access-b7ckr\") pod \"ovnkube-control-plane-749d76644c-5qpr8\" (UID: \"04a4b371-44a9-4805-b60f-6f7ba0fac40b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5qpr8" Jan 26 12:44:19 crc kubenswrapper[4844]: I0126 12:44:19.758260 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c6cdfcbb27c9a8c515f38e43d4813f25d2d3781a322e424e09ab047259bd0e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:19Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:19 crc kubenswrapper[4844]: I0126 12:44:19.771839 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:19 crc kubenswrapper[4844]: I0126 12:44:19.771901 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:19 crc kubenswrapper[4844]: I0126 12:44:19.771919 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:19 crc kubenswrapper[4844]: I0126 12:44:19.771943 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:19 crc kubenswrapper[4844]: I0126 12:44:19.771961 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:19Z","lastTransitionTime":"2026-01-26T12:44:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:19 crc kubenswrapper[4844]: I0126 12:44:19.790511 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"348a2956-fe61-43b9-858f-ab9c97a2985b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7af767e8f476993661bb886e421e7dd2fbd11c07865d6e56a8595425fe4265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a97ef8aa10641683ab96943f4c1a54452ee34b28c854dd7902fc075a70e7f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://de371086349f06d5762291b0be4ba366d69376f96d55c2c781c414d9acaff745\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2cfd5586a3073d55abd3f41fb92679c4f86ea8caeea2d9a8ca42515277750a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a16b2c8da5ad6d42bd5ce77ca31794858e04a05060c5bb0538dd099b1d5dd6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03f4ffe2ba632bdfb8c45133ef267a29fa660e3aa85dfd99934ded60d32772f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd671b046221d3e0fd341253eb58c3ed579fdd50efa41f43f93ced7b9212e792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\
"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbe92fbc9ce1d23484a9dca7eddb878b59da1fbe669098a82a798d41ee52e61d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://370a02984d12e09b879a5f3af8d9fe26b40d438d558e51c2d352676f29198a02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://370a02984d12e09b879a5f3af8d9fe26b40d438d558e51c2d352676f29198a02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rlvx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:19Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:19 crc kubenswrapper[4844]: I0126 12:44:19.804911 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7wd9k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"046bb01b-89ef-40e9-bbbd-83b5f2d2cf96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e97a7a3de02627680f94659705ad21fb73a76a1c11f14ebff8ec48ef204ea753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h4zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7wd9k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:19Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:19 crc kubenswrapper[4844]: I0126 12:44:19.826940 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zb9kx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"467433a4-64be-4a14-beb2-657370e9865f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a01598dd996bd69b470e2e4833ea4231bc77f0305a3a7fec3f70b8e2b8f01cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v76sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zb9kx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:19Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:19 crc kubenswrapper[4844]: I0126 12:44:19.843320 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3602fc7-397b-4d73-ab0c-45acc047397b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25282e97c34daf17d2fdac6ba7c074c611ec8df85288f5753bf98a3c1add5afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29xcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c964e0f9d13a855d738b028f8a2bed32fb23f4a05f0f0222a7d24e3222f44b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29xcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-j7r9j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:19Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:19 crc kubenswrapper[4844]: I0126 12:44:19.858487 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/04a4b371-44a9-4805-b60f-6f7ba0fac40b-env-overrides\") pod \"ovnkube-control-plane-749d76644c-5qpr8\" (UID: \"04a4b371-44a9-4805-b60f-6f7ba0fac40b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5qpr8" Jan 26 12:44:19 crc kubenswrapper[4844]: I0126 12:44:19.858551 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/04a4b371-44a9-4805-b60f-6f7ba0fac40b-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-5qpr8\" (UID: \"04a4b371-44a9-4805-b60f-6f7ba0fac40b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5qpr8" Jan 26 12:44:19 crc kubenswrapper[4844]: I0126 12:44:19.858686 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7ckr\" (UniqueName: \"kubernetes.io/projected/04a4b371-44a9-4805-b60f-6f7ba0fac40b-kube-api-access-b7ckr\") pod \"ovnkube-control-plane-749d76644c-5qpr8\" (UID: \"04a4b371-44a9-4805-b60f-6f7ba0fac40b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5qpr8" Jan 26 12:44:19 crc kubenswrapper[4844]: I0126 12:44:19.858733 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/04a4b371-44a9-4805-b60f-6f7ba0fac40b-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-5qpr8\" (UID: \"04a4b371-44a9-4805-b60f-6f7ba0fac40b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5qpr8" Jan 26 12:44:19 crc kubenswrapper[4844]: I0126 12:44:19.859668 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/04a4b371-44a9-4805-b60f-6f7ba0fac40b-env-overrides\") pod \"ovnkube-control-plane-749d76644c-5qpr8\" (UID: \"04a4b371-44a9-4805-b60f-6f7ba0fac40b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5qpr8" Jan 26 12:44:19 crc kubenswrapper[4844]: I0126 12:44:19.859841 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/04a4b371-44a9-4805-b60f-6f7ba0fac40b-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-5qpr8\" (UID: \"04a4b371-44a9-4805-b60f-6f7ba0fac40b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5qpr8" Jan 26 12:44:19 crc kubenswrapper[4844]: I0126 12:44:19.861908 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f6ttt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0ad2def-b040-48db-be8a-19f66df2c0f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a065fe1dc7d374bbe86c5012d0f224285e08e6b38a8eeb9fcdc76d684162934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://489205a3983f27152122db352c7a35ccdb12f14bb7b4c2bf3642f5c99699d745\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://489205a3983f27152122db352c7a35ccdb12f14bb7b4c2bf3642f5c99699d745\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc3c4c5df7dc10f646a7fdcf0d70daccac7a5bfc3ec9772a4b6d9aa4305dcc5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc3c4c5df7dc10f646a7fdcf0d70daccac7a5bfc3ec9772a4b6d9aa4305dcc5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74838e6328e419da6cc94d4f2781197e94cbf74aaf06d88c6d0e39eda967fede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74838e6328e419da6cc94d4f2781197e94cbf74aaf06d88c6d0e39eda967fede\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4821615afe8834b864a0c33ad5733eef0b5247da50e1e945cab1c75f9828aa1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4821615afe8834b864a0c33ad5733eef0b5247da50e1e945cab1c75f9828aa1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c800e6a3e1513d6bc5ab5a54c8d918f2c81b5a4d2631fc36f5f7a29fcda3d220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c800e6a3e1513d6bc5ab5a54c8d918f2c81b5a4d2631fc36f5f7a29fcda3d220\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://810435fd1610f5f81dc70f7c0d455c11b414c37e9e392dec8eb3b4668c7820d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://810435fd1610f5f81dc70f7c0d455c11b414c37e9e392dec8eb3b4668c7820d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f6ttt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:19Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:19 crc kubenswrapper[4844]: I0126 12:44:19.863923 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/04a4b371-44a9-4805-b60f-6f7ba0fac40b-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-5qpr8\" (UID: \"04a4b371-44a9-4805-b60f-6f7ba0fac40b\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5qpr8" Jan 26 12:44:19 crc kubenswrapper[4844]: I0126 12:44:19.876639 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:19 crc kubenswrapper[4844]: I0126 12:44:19.876707 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:19 crc kubenswrapper[4844]: I0126 12:44:19.876720 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:19 crc kubenswrapper[4844]: I0126 12:44:19.876741 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:19 crc kubenswrapper[4844]: I0126 12:44:19.876753 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:19Z","lastTransitionTime":"2026-01-26T12:44:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:19 crc kubenswrapper[4844]: I0126 12:44:19.884796 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7ckr\" (UniqueName: \"kubernetes.io/projected/04a4b371-44a9-4805-b60f-6f7ba0fac40b-kube-api-access-b7ckr\") pod \"ovnkube-control-plane-749d76644c-5qpr8\" (UID: \"04a4b371-44a9-4805-b60f-6f7ba0fac40b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5qpr8" Jan 26 12:44:19 crc kubenswrapper[4844]: I0126 12:44:19.895086 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2c6313f-dd44-4db9-a53a-63c8a42efe6c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b82658e4e6ddbf4e3a16f9d569c5cef6a683d401d79b7fa7f55aad8c70e8254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66ffcb99cad7e5aff2bbe4bdf3c0a2365dd91fb40b2382e05ab1f422c6ea2b26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f49e7dd36dc88342bc2e3bc43fc5770fb43572151945cba3867f9ffd7fa451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b51b34a870f5c4045729aa5ee5d5fb85ebd58
fd55a25c5b6effe0ea75ea8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9118772906fa09f9868a34942401b399a842e6718caa3d82b78b2974e1011cca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa0e17b53055bd229c52c53b69e491e0789569e73004101378a34609888bbbc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa0e17b53055bd229c52c53b69e491e0789569e73004101378a34609888bbbc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a99b81e3524dfff792c3e9a274f2460edfe7c29fb27c556fa1c7e427178ccd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a99b81e3524dfff792c3e9a274f2460edfe7c29fb27c556fa1c7e427178ccd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1cd347949870be709220badbf116b4ed8ff9db3962a03361e42f5ff1c61eb5ea\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cd347949870be709220badbf116b4ed8ff9db3962a03361e42f5ff1c61eb5ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:19Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:19 crc kubenswrapper[4844]: I0126 12:44:19.909968 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd60ce73390532e974f85d610708534f2ab6bcc38e93c54516cb783aca1bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:19Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:19 crc kubenswrapper[4844]: I0126 12:44:19.920001 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-94bpf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14600b66-6352-4f5e-9c09-eb2548503555\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1eb448e6788cdb87cd78364d442623294abe1263dbedbf9d15e8ca77c06ced2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-456bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-94bpf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:19Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:19 crc kubenswrapper[4844]: I0126 12:44:19.930016 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5qpr8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"04a4b371-44a9-4805-b60f-6f7ba0fac40b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7ckr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7ckr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5qpr8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:19Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:19 crc kubenswrapper[4844]: I0126 12:44:19.941387 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://708e5cab2c0d22cab1f1e5c02d0f8afbdc0fcdccc9a56b663f365bfa4c7f8709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f61077567553b104f052a133a6227c3a887430e971721e7998259e7e7ea7edf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:19Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:19 crc kubenswrapper[4844]: I0126 12:44:19.953457 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:19Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:19 crc kubenswrapper[4844]: I0126 12:44:19.965836 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aecfc1fc-7b8c-42f4-9a7b-058a0acc9534\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64c5d1e4b1fda825b451c8acde0411694fbf7345723d6c3927905a882a79d62e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e33100e90b0d5a23baf7c27076a19dbca18d3e067be6b5a867e122fac37c1136\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64ae899d3a38090964f3d951fd185537ce4ef75f7512342da01f67767d00417b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9207454bb697abf7b64a37e402b50734ae419605cd0938b514ac4f2a74561ce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b40515c0e8056b01e4d48e259ece9389fbb5db6c0d403f2846f28948ce78b0b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64a2d4054650382dca2f56688356aec5fa475b49958e86dd4597a64f77a82f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64a2d4054650382dca2f56688356aec5fa475b49958e86dd4597a64f77a82f81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:19Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:19 crc kubenswrapper[4844]: I0126 12:44:19.979209 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ea4f5b3-1307-4259-8ee3-1de62370d8ef\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://943771df5941e9c5a48c2d8ee946cc9ac1ee7b00855dfe531fbce9f76450dd36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c79791a1a5ceea7564b6466722f4fb48a6729184724efc0ca2498896def357a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e62359ee71a87d75c73a16de09de5ea48d09e4cf2041f434f38f97002a5f8eee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b1fb5af67b83d9ecc627cbfcc14d011b4b11a5c4652a35c1bd1df6f03ab71f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:19Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:19 crc kubenswrapper[4844]: I0126 12:44:19.980432 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:19 crc kubenswrapper[4844]: I0126 12:44:19.980482 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:19 crc kubenswrapper[4844]: I0126 12:44:19.980496 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:19 crc kubenswrapper[4844]: I0126 12:44:19.980513 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:19 crc kubenswrapper[4844]: I0126 12:44:19.980523 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:19Z","lastTransitionTime":"2026-01-26T12:44:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:20 crc kubenswrapper[4844]: I0126 12:44:20.017886 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:20Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:20 crc kubenswrapper[4844]: I0126 12:44:20.045439 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5qpr8" Jan 26 12:44:20 crc kubenswrapper[4844]: I0126 12:44:20.083416 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:20 crc kubenswrapper[4844]: I0126 12:44:20.083470 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:20 crc kubenswrapper[4844]: I0126 12:44:20.083483 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:20 crc kubenswrapper[4844]: I0126 12:44:20.083500 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:20 crc kubenswrapper[4844]: I0126 12:44:20.083514 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:20Z","lastTransitionTime":"2026-01-26T12:44:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:20 crc kubenswrapper[4844]: I0126 12:44:20.187092 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:20 crc kubenswrapper[4844]: I0126 12:44:20.187150 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:20 crc kubenswrapper[4844]: I0126 12:44:20.187169 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:20 crc kubenswrapper[4844]: I0126 12:44:20.187192 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:20 crc kubenswrapper[4844]: I0126 12:44:20.187211 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:20Z","lastTransitionTime":"2026-01-26T12:44:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:20 crc kubenswrapper[4844]: I0126 12:44:20.289375 4844 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 08:57:32.182855991 +0000 UTC Jan 26 12:44:20 crc kubenswrapper[4844]: I0126 12:44:20.290626 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:20 crc kubenswrapper[4844]: I0126 12:44:20.290662 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:20 crc kubenswrapper[4844]: I0126 12:44:20.290671 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:20 crc kubenswrapper[4844]: I0126 12:44:20.290684 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:20 crc kubenswrapper[4844]: I0126 12:44:20.290696 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:20Z","lastTransitionTime":"2026-01-26T12:44:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:20 crc kubenswrapper[4844]: I0126 12:44:20.394244 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:20 crc kubenswrapper[4844]: I0126 12:44:20.394329 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:20 crc kubenswrapper[4844]: I0126 12:44:20.394366 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:20 crc kubenswrapper[4844]: I0126 12:44:20.394446 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:20 crc kubenswrapper[4844]: I0126 12:44:20.394469 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:20Z","lastTransitionTime":"2026-01-26T12:44:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:20 crc kubenswrapper[4844]: I0126 12:44:20.497147 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:20 crc kubenswrapper[4844]: I0126 12:44:20.497191 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:20 crc kubenswrapper[4844]: I0126 12:44:20.497211 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:20 crc kubenswrapper[4844]: I0126 12:44:20.497232 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:20 crc kubenswrapper[4844]: I0126 12:44:20.497246 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:20Z","lastTransitionTime":"2026-01-26T12:44:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:20 crc kubenswrapper[4844]: I0126 12:44:20.538653 4844 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 12:44:20 crc kubenswrapper[4844]: I0126 12:44:20.600378 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:20 crc kubenswrapper[4844]: I0126 12:44:20.600454 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:20 crc kubenswrapper[4844]: I0126 12:44:20.600491 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:20 crc kubenswrapper[4844]: I0126 12:44:20.600528 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:20 crc kubenswrapper[4844]: I0126 12:44:20.600550 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:20Z","lastTransitionTime":"2026-01-26T12:44:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:20 crc kubenswrapper[4844]: I0126 12:44:20.703826 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:20 crc kubenswrapper[4844]: I0126 12:44:20.703893 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:20 crc kubenswrapper[4844]: I0126 12:44:20.703909 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:20 crc kubenswrapper[4844]: I0126 12:44:20.703934 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:20 crc kubenswrapper[4844]: I0126 12:44:20.703951 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:20Z","lastTransitionTime":"2026-01-26T12:44:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:20 crc kubenswrapper[4844]: I0126 12:44:20.807073 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:20 crc kubenswrapper[4844]: I0126 12:44:20.807140 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:20 crc kubenswrapper[4844]: I0126 12:44:20.807158 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:20 crc kubenswrapper[4844]: I0126 12:44:20.807181 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:20 crc kubenswrapper[4844]: I0126 12:44:20.807199 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:20Z","lastTransitionTime":"2026-01-26T12:44:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:20 crc kubenswrapper[4844]: I0126 12:44:20.909586 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:20 crc kubenswrapper[4844]: I0126 12:44:20.909638 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:20 crc kubenswrapper[4844]: I0126 12:44:20.909646 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:20 crc kubenswrapper[4844]: I0126 12:44:20.909658 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:20 crc kubenswrapper[4844]: I0126 12:44:20.909667 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:20Z","lastTransitionTime":"2026-01-26T12:44:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:21 crc kubenswrapper[4844]: I0126 12:44:21.012882 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:21 crc kubenswrapper[4844]: I0126 12:44:21.012928 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:21 crc kubenswrapper[4844]: I0126 12:44:21.012946 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:21 crc kubenswrapper[4844]: I0126 12:44:21.012972 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:21 crc kubenswrapper[4844]: I0126 12:44:21.012988 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:21Z","lastTransitionTime":"2026-01-26T12:44:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:21 crc kubenswrapper[4844]: I0126 12:44:21.116499 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:21 crc kubenswrapper[4844]: I0126 12:44:21.116555 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:21 crc kubenswrapper[4844]: I0126 12:44:21.116568 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:21 crc kubenswrapper[4844]: I0126 12:44:21.116591 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:21 crc kubenswrapper[4844]: I0126 12:44:21.116628 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:21Z","lastTransitionTime":"2026-01-26T12:44:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:21 crc kubenswrapper[4844]: I0126 12:44:21.219359 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:21 crc kubenswrapper[4844]: I0126 12:44:21.219399 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:21 crc kubenswrapper[4844]: I0126 12:44:21.219408 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:21 crc kubenswrapper[4844]: I0126 12:44:21.219423 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:21 crc kubenswrapper[4844]: I0126 12:44:21.219433 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:21Z","lastTransitionTime":"2026-01-26T12:44:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:21 crc kubenswrapper[4844]: I0126 12:44:21.289584 4844 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 10:08:47.774384395 +0000 UTC Jan 26 12:44:21 crc kubenswrapper[4844]: I0126 12:44:21.312488 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:44:21 crc kubenswrapper[4844]: I0126 12:44:21.312558 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:44:21 crc kubenswrapper[4844]: E0126 12:44:21.312699 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 12:44:21 crc kubenswrapper[4844]: E0126 12:44:21.312814 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 12:44:21 crc kubenswrapper[4844]: I0126 12:44:21.312911 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:44:21 crc kubenswrapper[4844]: E0126 12:44:21.313106 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 12:44:21 crc kubenswrapper[4844]: I0126 12:44:21.325161 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:21 crc kubenswrapper[4844]: I0126 12:44:21.325213 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:21 crc kubenswrapper[4844]: I0126 12:44:21.325231 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:21 crc kubenswrapper[4844]: I0126 12:44:21.325253 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:21 crc kubenswrapper[4844]: I0126 12:44:21.325270 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:21Z","lastTransitionTime":"2026-01-26T12:44:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:21 crc kubenswrapper[4844]: I0126 12:44:21.429127 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:21 crc kubenswrapper[4844]: I0126 12:44:21.429214 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:21 crc kubenswrapper[4844]: I0126 12:44:21.429233 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:21 crc kubenswrapper[4844]: I0126 12:44:21.429297 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:21 crc kubenswrapper[4844]: I0126 12:44:21.429316 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:21Z","lastTransitionTime":"2026-01-26T12:44:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:21 crc kubenswrapper[4844]: I0126 12:44:21.532424 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:21 crc kubenswrapper[4844]: I0126 12:44:21.532463 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:21 crc kubenswrapper[4844]: I0126 12:44:21.532472 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:21 crc kubenswrapper[4844]: I0126 12:44:21.532485 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:21 crc kubenswrapper[4844]: I0126 12:44:21.532493 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:21Z","lastTransitionTime":"2026-01-26T12:44:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:21 crc kubenswrapper[4844]: I0126 12:44:21.635646 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:21 crc kubenswrapper[4844]: I0126 12:44:21.635707 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:21 crc kubenswrapper[4844]: I0126 12:44:21.635724 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:21 crc kubenswrapper[4844]: I0126 12:44:21.635750 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:21 crc kubenswrapper[4844]: I0126 12:44:21.635767 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:21Z","lastTransitionTime":"2026-01-26T12:44:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:21 crc kubenswrapper[4844]: I0126 12:44:21.739363 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:21 crc kubenswrapper[4844]: I0126 12:44:21.739416 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:21 crc kubenswrapper[4844]: I0126 12:44:21.739430 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:21 crc kubenswrapper[4844]: I0126 12:44:21.739450 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:21 crc kubenswrapper[4844]: I0126 12:44:21.739464 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:21Z","lastTransitionTime":"2026-01-26T12:44:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:21 crc kubenswrapper[4844]: I0126 12:44:21.842774 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:21 crc kubenswrapper[4844]: I0126 12:44:21.842822 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:21 crc kubenswrapper[4844]: I0126 12:44:21.842834 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:21 crc kubenswrapper[4844]: I0126 12:44:21.842891 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:21 crc kubenswrapper[4844]: I0126 12:44:21.842921 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:21Z","lastTransitionTime":"2026-01-26T12:44:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:21 crc kubenswrapper[4844]: I0126 12:44:21.945623 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:21 crc kubenswrapper[4844]: I0126 12:44:21.945660 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:21 crc kubenswrapper[4844]: I0126 12:44:21.945671 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:21 crc kubenswrapper[4844]: I0126 12:44:21.945688 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:21 crc kubenswrapper[4844]: I0126 12:44:21.945700 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:21Z","lastTransitionTime":"2026-01-26T12:44:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:22 crc kubenswrapper[4844]: I0126 12:44:22.049223 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:22 crc kubenswrapper[4844]: I0126 12:44:22.049297 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:22 crc kubenswrapper[4844]: I0126 12:44:22.049325 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:22 crc kubenswrapper[4844]: I0126 12:44:22.049355 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:22 crc kubenswrapper[4844]: I0126 12:44:22.049378 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:22Z","lastTransitionTime":"2026-01-26T12:44:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:22 crc kubenswrapper[4844]: I0126 12:44:22.152432 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:22 crc kubenswrapper[4844]: I0126 12:44:22.152488 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:22 crc kubenswrapper[4844]: I0126 12:44:22.152506 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:22 crc kubenswrapper[4844]: I0126 12:44:22.152529 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:22 crc kubenswrapper[4844]: I0126 12:44:22.152546 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:22Z","lastTransitionTime":"2026-01-26T12:44:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:22 crc kubenswrapper[4844]: I0126 12:44:22.256053 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:22 crc kubenswrapper[4844]: I0126 12:44:22.256110 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:22 crc kubenswrapper[4844]: I0126 12:44:22.256124 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:22 crc kubenswrapper[4844]: I0126 12:44:22.256146 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:22 crc kubenswrapper[4844]: I0126 12:44:22.256164 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:22Z","lastTransitionTime":"2026-01-26T12:44:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:22 crc kubenswrapper[4844]: I0126 12:44:22.289924 4844 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 05:27:47.772383811 +0000 UTC Jan 26 12:44:22 crc kubenswrapper[4844]: I0126 12:44:22.339952 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-gxnj7"] Jan 26 12:44:22 crc kubenswrapper[4844]: I0126 12:44:22.341340 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gxnj7" Jan 26 12:44:22 crc kubenswrapper[4844]: E0126 12:44:22.341464 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gxnj7" podUID="c69496f6-7f67-4cca-9c9f-420e5567b165" Jan 26 12:44:22 crc kubenswrapper[4844]: I0126 12:44:22.361117 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:22 crc kubenswrapper[4844]: I0126 12:44:22.361168 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:22 crc kubenswrapper[4844]: I0126 12:44:22.361186 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:22 crc kubenswrapper[4844]: I0126 12:44:22.361211 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:22 crc kubenswrapper[4844]: I0126 12:44:22.361228 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:22Z","lastTransitionTime":"2026-01-26T12:44:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:22 crc kubenswrapper[4844]: I0126 12:44:22.366997 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f6ttt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0ad2def-b040-48db-be8a-19f66df2c0f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a065fe1dc7d374bbe86c5012d0f224285e08e6b38a8eeb9fcdc76d684162934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://489205a3983f27152122db352c7a35ccdb12f14bb7b4c2bf3642f5c99699d745\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://489205a3983f27152122db352c7a35ccdb12f14bb7b4c2bf3642f5c99699d745\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc3c4c5df7dc10f646a7fdcf0d70daccac7a5bfc3ec9772a4b6d9aa4305dcc5c\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc3c4c5df7dc10f646a7fdcf0d70daccac7a5bfc3ec9772a4b6d9aa4305dcc5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74838e6328e419da6cc94d4f2781197e94cbf74aaf06d88c6d0e39eda967fede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74838e6328e419da6cc94d4f2781197e94cbf74aaf06d88c6d0e39eda967fede\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4821615afe8834b864a0c33ad5733eef0b5247da50e1e945cab1c75f9828aa1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4821615afe8834b864a0c33ad5733eef0b5247da50e1e945cab1c75f9828aa1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c800e6a3e1513d6bc5ab5a54c8d918f2c81b5a4d2631fc36f5f7a29fcda3d220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c800e6a3e1513d6bc5ab5a54c8d918f2c81b5a4d2631fc36f5f7a29fcda3d220\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://810435fd1610f5f81dc70f7c0d455c11b414c37e9e392dec8eb3b4668c7820d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://810435fd1610f5f81dc70f7c0d455c11b414c37e9e392dec8eb3b4668c7820d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f6ttt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:22Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:22 crc kubenswrapper[4844]: I0126 12:44:22.390798 4844 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c69496f6-7f67-4cca-9c9f-420e5567b165-metrics-certs\") pod \"network-metrics-daemon-gxnj7\" (UID: \"c69496f6-7f67-4cca-9c9f-420e5567b165\") " pod="openshift-multus/network-metrics-daemon-gxnj7" Jan 26 12:44:22 crc kubenswrapper[4844]: I0126 12:44:22.390846 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxt6m\" (UniqueName: \"kubernetes.io/projected/c69496f6-7f67-4cca-9c9f-420e5567b165-kube-api-access-jxt6m\") pod \"network-metrics-daemon-gxnj7\" (UID: \"c69496f6-7f67-4cca-9c9f-420e5567b165\") " pod="openshift-multus/network-metrics-daemon-gxnj7" Jan 26 12:44:22 crc kubenswrapper[4844]: I0126 12:44:22.401274 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"348a2956-fe61-43b9-858f-ab9c97a2985b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7af767e8f476993661bb886e421e7dd2fbd11c07865d6e56a8595425fe4265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a97ef8aa10641683ab96943f4c1a54452ee34b28c854dd7902fc075a70e7f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de371086349f06d5762291b0be4ba366d69376f96d55c2c781c414d9acaff745\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2cfd5586a3073d55abd3f41fb92679c4f86ea8caeea2d9a8ca42515277750a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a16b2c8da5ad6d42bd5ce77ca31794858e04a05060c5bb0538dd099b1d5dd6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03f4ffe2ba632bdfb8c45133ef267a29fa660e3aa85dfd99934ded60d32772f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd671b046221d3e0fd341253eb58c3ed579fdd50
efa41f43f93ced7b9212e792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbe92fbc9ce1d23484a9dca7eddb878b59da1fbe669098a82a798d41ee52e61d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://370a02984d12e09b879a5f3af8d9fe26b40d438d558e51c2d352676f29198a02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://370a02984d12e09b879a5f3af8d9fe26b40d438d558e51c2d352676f29198a02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rlvx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:22Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:22 crc kubenswrapper[4844]: I0126 12:44:22.416875 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7wd9k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"046bb01b-89ef-40e9-bbbd-83b5f2d2cf96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e97a7a3de02627680f94659705ad21fb73a76a1c11f14ebff8ec48ef204ea753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h4zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7wd9k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:22Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:22 crc kubenswrapper[4844]: I0126 12:44:22.437314 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zb9kx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"467433a4-64be-4a14-beb2-657370e9865f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a01598dd996bd69b470e2e4833ea4231bc77f0305a3a7fec3f70b8e2b8f01cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v76sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zb9kx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:22Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:22 crc kubenswrapper[4844]: I0126 12:44:22.452112 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3602fc7-397b-4d73-ab0c-45acc047397b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25282e97c34daf17d2fdac6ba7c074c611ec8df85288f5753bf98a3c1add5afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29xcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c964e0f9d13a855d738b028f8a2bed32fb23f4a05f0f0222a7d24e3222f44b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29xcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-j7r9j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:22Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:22 crc kubenswrapper[4844]: I0126 12:44:22.464402 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:22 crc kubenswrapper[4844]: I0126 12:44:22.464423 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:22 crc kubenswrapper[4844]: I0126 12:44:22.464434 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:22 crc kubenswrapper[4844]: I0126 12:44:22.464447 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:22 crc kubenswrapper[4844]: I0126 12:44:22.464456 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:22Z","lastTransitionTime":"2026-01-26T12:44:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:22 crc kubenswrapper[4844]: I0126 12:44:22.465278 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5qpr8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04a4b371-44a9-4805-b60f-6f7ba0fac40b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7ckr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7ckr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5qpr8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:22Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:22 crc kubenswrapper[4844]: I0126 12:44:22.479466 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gxnj7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c69496f6-7f67-4cca-9c9f-420e5567b165\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxt6m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxt6m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gxnj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:22Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:22 crc kubenswrapper[4844]: I0126 12:44:22.492805 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c69496f6-7f67-4cca-9c9f-420e5567b165-metrics-certs\") pod \"network-metrics-daemon-gxnj7\" (UID: \"c69496f6-7f67-4cca-9c9f-420e5567b165\") " pod="openshift-multus/network-metrics-daemon-gxnj7" Jan 26 12:44:22 crc kubenswrapper[4844]: I0126 12:44:22.492875 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxt6m\" (UniqueName: \"kubernetes.io/projected/c69496f6-7f67-4cca-9c9f-420e5567b165-kube-api-access-jxt6m\") pod \"network-metrics-daemon-gxnj7\" (UID: \"c69496f6-7f67-4cca-9c9f-420e5567b165\") " pod="openshift-multus/network-metrics-daemon-gxnj7" Jan 26 12:44:22 crc kubenswrapper[4844]: E0126 12:44:22.493796 4844 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 12:44:22 crc kubenswrapper[4844]: E0126 12:44:22.494083 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c69496f6-7f67-4cca-9c9f-420e5567b165-metrics-certs podName:c69496f6-7f67-4cca-9c9f-420e5567b165 nodeName:}" failed. No retries permitted until 2026-01-26 12:44:22.994048819 +0000 UTC m=+39.927416471 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c69496f6-7f67-4cca-9c9f-420e5567b165-metrics-certs") pod "network-metrics-daemon-gxnj7" (UID: "c69496f6-7f67-4cca-9c9f-420e5567b165") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 12:44:22 crc kubenswrapper[4844]: I0126 12:44:22.513712 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2c6313f-dd44-4db9-a53a-63c8a42efe6c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b82658e4e6ddbf4e3a16f9d569c5cef6a683d401d79b7fa7f55aad8c70e8254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66ffcb99cad7e5aff2bbe4bdf3c0a2365dd91fb40b2382e05ab1f422c6ea2b26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f49e7dd36dc88342bc2e3bc43fc5770fb43572151945cba3867f9ffd7fa451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f5840
8f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b51b34a870f5c4045729aa5ee5d5fb85ebd58fd55a25c5b6effe0ea75ea8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9118772906fa09f9868a34942401b399a842e6718caa3d82b78b2974e1011cca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa0e17b53055bd229c52c53b69e491e0789569e73004101378a34609888bbbc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa0e17b53055bd229c52c53b69e491e0789569e73004101378a34609888bbbc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a99b81e3524dfff792c3e9a274f2460edfe7c29fb27c556fa1c7e427178ccd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\
\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a99b81e3524dfff792c3e9a274f2460edfe7c29fb27c556fa1c7e427178ccd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1cd347949870be709220badbf116b4ed8ff9db3962a03361e42f5ff1c61eb5ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cd347949870be709220badbf116b4ed8ff9db3962a03361e42f5ff1c61eb5ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:22Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:22 crc kubenswrapper[4844]: I0126 12:44:22.522483 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxt6m\" (UniqueName: \"kubernetes.io/projected/c69496f6-7f67-4cca-9c9f-420e5567b165-kube-api-access-jxt6m\") pod \"network-metrics-daemon-gxnj7\" (UID: \"c69496f6-7f67-4cca-9c9f-420e5567b165\") " pod="openshift-multus/network-metrics-daemon-gxnj7" Jan 26 12:44:22 crc kubenswrapper[4844]: I0126 12:44:22.532878 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd60ce73390532e974f85d610708534f2ab6bcc38e93c54516cb783aca1bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:22Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:22 crc kubenswrapper[4844]: I0126 12:44:22.546652 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-94bpf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"14600b66-6352-4f5e-9c09-eb2548503555\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1eb448e6788cdb87cd78364d442623294abe1263dbedbf9d15e8ca77c06ced2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-456bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-94bpf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:22Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:22 crc kubenswrapper[4844]: I0126 12:44:22.548585 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5qpr8" event={"ID":"04a4b371-44a9-4805-b60f-6f7ba0fac40b","Type":"ContainerStarted","Data":"ca2c650bffe6a20f18ff5fdbe04573f26b8ec0b62a9f9cd5948d0683a49bcf37"} Jan 26 12:44:22 crc kubenswrapper[4844]: I0126 12:44:22.551340 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rlvx4_348a2956-fe61-43b9-858f-ab9c97a2985b/ovnkube-controller/0.log" Jan 26 12:44:22 crc kubenswrapper[4844]: I0126 12:44:22.555234 4844 generic.go:334] "Generic (PLEG): container finished" podID="348a2956-fe61-43b9-858f-ab9c97a2985b" containerID="fd671b046221d3e0fd341253eb58c3ed579fdd50efa41f43f93ced7b9212e792" exitCode=1 Jan 26 12:44:22 crc kubenswrapper[4844]: I0126 12:44:22.555273 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" 
event={"ID":"348a2956-fe61-43b9-858f-ab9c97a2985b","Type":"ContainerDied","Data":"fd671b046221d3e0fd341253eb58c3ed579fdd50efa41f43f93ced7b9212e792"} Jan 26 12:44:22 crc kubenswrapper[4844]: I0126 12:44:22.556055 4844 scope.go:117] "RemoveContainer" containerID="fd671b046221d3e0fd341253eb58c3ed579fdd50efa41f43f93ced7b9212e792" Jan 26 12:44:22 crc kubenswrapper[4844]: I0126 12:44:22.563197 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:22Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:22 crc kubenswrapper[4844]: I0126 12:44:22.566934 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:22 crc kubenswrapper[4844]: I0126 12:44:22.566970 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:22 crc kubenswrapper[4844]: I0126 12:44:22.566983 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:22 crc kubenswrapper[4844]: I0126 12:44:22.567000 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:22 crc kubenswrapper[4844]: I0126 12:44:22.567013 4844 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:22Z","lastTransitionTime":"2026-01-26T12:44:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:22 crc kubenswrapper[4844]: I0126 12:44:22.578027 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://708e5cab2c0d22cab1f1e5c02d0f8afbdc0fcdccc9a56b663f365bfa4c7f8709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f61077567553b104f052a133a6227c3a887430e971721e7998259e7e7ea7edf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:22Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:22 crc kubenswrapper[4844]: I0126 12:44:22.592176 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:22Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:22 crc kubenswrapper[4844]: I0126 12:44:22.607878 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aecfc1fc-7b8c-42f4-9a7b-058a0acc9534\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64c5d1e4b1fda825b451c8acde0411694fbf7345723d6c3927905a882a79d62e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e33100e90b0d5a23baf7c27076a19dbca18d3e067be6b5a867e122fac37c1136\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64ae899d3a38090964f3d951fd185537ce4ef75f7512342da01f67767d00417b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9207454bb697abf7b64a37e402b50734ae419605cd0938b514ac4f2a74561ce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b40515c0e8056b01e4d48e259ece9389fbb5db6c0d403f2846f28948ce78b0b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64a2d4054650382dca2f56688356aec5fa475b49958e86dd4597a64f77a82f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64a2d4054650382dca2f56688356aec5fa475b49958e86dd4597a64f77a82f81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:22Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:22 crc kubenswrapper[4844]: I0126 12:44:22.623310 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ea4f5b3-1307-4259-8ee3-1de62370d8ef\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://943771df5941e9c5a48c2d8ee946cc9ac1ee7b00855dfe531fbce9f76450dd36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c79791a1a5ceea7564b6466722f4fb48a6729184724efc0ca2498896def357a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e62359ee71a87d75c73a16de09de5ea48d09e4cf2041f434f38f97002a5f8eee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b1fb5af67b83d9ecc627cbfcc14d011b4b11a5c4652a35c1bd1df6f03ab71f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:22Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:22 crc kubenswrapper[4844]: I0126 12:44:22.640076 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:22Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:22 crc kubenswrapper[4844]: I0126 12:44:22.658728 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c6cdfcbb27c9a8c515f38e43d4813f25d2d3781a322e424e09ab047259bd0e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:22Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:22 crc kubenswrapper[4844]: I0126 12:44:22.669448 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:22 
crc kubenswrapper[4844]: I0126 12:44:22.669503 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:22 crc kubenswrapper[4844]: I0126 12:44:22.669512 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:22 crc kubenswrapper[4844]: I0126 12:44:22.669531 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:22 crc kubenswrapper[4844]: I0126 12:44:22.669542 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:22Z","lastTransitionTime":"2026-01-26T12:44:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:22 crc kubenswrapper[4844]: I0126 12:44:22.671974 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:22Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:22 crc kubenswrapper[4844]: I0126 12:44:22.685858 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c6cdfcbb27c9a8c515f38e43d4813f25d2d3781a322e424e09ab047259bd0e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:22Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:22 crc kubenswrapper[4844]: I0126 12:44:22.705018 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"348a2956-fe61-43b9-858f-ab9c97a2985b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7af767e8f476993661bb886e421e7dd2fbd11c07865d6e56a8595425fe4265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a97ef8aa10641683ab96943f4c1a54452ee34b28c854dd7902fc075a70e7f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de371086349f06d5762291b0be4ba366d69376f96d55c2c781c414d9acaff745\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\
"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2cfd5586a3073d55abd3f41fb92679c4f86ea8caeea2d9a8ca42515277750a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a16b2c8da5ad6d42bd5ce77ca31794858e04a05060c5bb0538dd099b1d5dd6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03f4ffe2ba632bdfb8c45133ef267a29fa660e3aa85dfd99934ded60d32772f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started
\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd671b046221d3e0fd341253eb58c3ed579fdd50efa41f43f93ced7b9212e792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd671b046221d3e0fd341253eb58c3ed579fdd50efa41f43f93ced7b9212e792\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T12:44:22Z\\\",\\\"message\\\":\\\"0] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0126 12:44:19.429090 6109 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0126 12:44:19.429111 6109 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0126 12:44:19.429152 6109 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 12:44:19.429180 6109 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0126 12:44:19.429205 6109 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 12:44:19.429222 6109 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0126 12:44:19.429154 6109 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0126 12:44:19.429244 6109 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0126 12:44:19.429263 6109 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0126 12:44:19.429233 6109 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0126 12:44:19.429334 6109 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0126 12:44:19.429365 6109 handler.go:208] Removed *v1.Node event handler 7\\\\nI0126 12:44:19.429399 6109 factory.go:656] Stopping watch factory\\\\nI0126 12:44:19.429423 6109 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbe92fbc9ce1d23484a9dca7eddb878b59da1fbe669098a82a798d41ee52e61d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://370a02984d12e09b879a5f3af8d9fe26b40d438d558e51c2d352676f29198a02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://370a02984d12e09b879a5f3af8d9fe26b40d438d558e51c2d352676f29198a02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rlvx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:22Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:22 crc kubenswrapper[4844]: I0126 12:44:22.720523 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7wd9k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"046bb01b-89ef-40e9-bbbd-83b5f2d2cf96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e97a7a3de02627680f94659705ad21fb73a76a1c11f14ebff8ec48ef204ea753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h4zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.12
6.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7wd9k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:22Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:22 crc kubenswrapper[4844]: I0126 12:44:22.737577 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zb9kx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"467433a4-64be-4a14-beb2-657370e9865f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a01598dd996bd69b470e2e4833ea4231bc77f0305a3a7fec3f70b8e2b8f01cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v76sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zb9kx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:22Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:22 crc kubenswrapper[4844]: I0126 12:44:22.756587 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3602fc7-397b-4d73-ab0c-45acc047397b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25282e97c34daf17d2fdac6ba7c074c611ec8df85288f5753bf98a3c1add5afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29xcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c964e0f9d13a855d738b028f8a2bed32fb23f4a05f0f0222a7d24e3222f44b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fe
a177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29xcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j7r9j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:22Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:22 crc kubenswrapper[4844]: I0126 12:44:22.772500 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:22 crc kubenswrapper[4844]: I0126 12:44:22.772553 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:22 crc kubenswrapper[4844]: I0126 12:44:22.772571 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:22 crc kubenswrapper[4844]: I0126 12:44:22.772625 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:22 crc kubenswrapper[4844]: I0126 12:44:22.772643 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:22Z","lastTransitionTime":"2026-01-26T12:44:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:22 crc kubenswrapper[4844]: I0126 12:44:22.774960 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f6ttt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0ad2def-b040-48db-be8a-19f66df2c0f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a065fe1dc7d374bbe86c5012d0f224285e08e6b38a8eeb9fcdc76d684162934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://489205a3983f27152122db352c7a35ccdb12f14bb7b4c2bf3642f5c99699d745\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://489205a3983f27152122db352c7a35ccdb12f14bb7b4c2bf3642f5c99699d745\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc3c4c5df7dc10f646a7fdcf0d70daccac7a5bfc3ec9772a4b6d9aa4305dcc5c\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc3c4c5df7dc10f646a7fdcf0d70daccac7a5bfc3ec9772a4b6d9aa4305dcc5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74838e6328e419da6cc94d4f2781197e94cbf74aaf06d88c6d0e39eda967fede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74838e6328e419da6cc94d4f2781197e94cbf74aaf06d88c6d0e39eda967fede\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4821615afe8834b864a0c33ad5733eef0b5247da50e1e945cab1c75f9828aa1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4821615afe8834b864a0c33ad5733eef0b5247da50e1e945cab1c75f9828aa1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c800e6a3e1513d6bc5ab5a54c8d918f2c81b5a4d2631fc36f5f7a29fcda3d220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c800e6a3e1513d6bc5ab5a54c8d918f2c81b5a4d2631fc36f5f7a29fcda3d220\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://810435fd1610f5f81dc70f7c0d455c11b414c37e9e392dec8eb3b4668c7820d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://810435fd1610f5f81dc70f7c0d455c11b414c37e9e392dec8eb3b4668c7820d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f6ttt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:22Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:22 crc kubenswrapper[4844]: I0126 12:44:22.792783 4844 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/network-metrics-daemon-gxnj7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c69496f6-7f67-4cca-9c9f-420e5567b165\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxt6m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxt6m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gxnj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:22Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:22 crc kubenswrapper[4844]: I0126 12:44:22.830512 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2c6313f-dd44-4db9-a53a-63c8a42efe6c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b82658e4e6ddbf4e3a16f9d569c5cef6a683d401d79b7fa7f55aad8c70e8254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66ffcb99cad7e5aff2bbe4bdf3c0a2365dd91fb40b2382e05ab1f422c6ea2b26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f49e7dd36dc88342bc2e3bc43fc5770fb43572151945cba3867f9ffd7fa451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b51b34a870f5c4045729aa5ee5d5fb85ebd58
fd55a25c5b6effe0ea75ea8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9118772906fa09f9868a34942401b399a842e6718caa3d82b78b2974e1011cca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa0e17b53055bd229c52c53b69e491e0789569e73004101378a34609888bbbc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa0e17b53055bd229c52c53b69e491e0789569e73004101378a34609888bbbc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a99b81e3524dfff792c3e9a274f2460edfe7c29fb27c556fa1c7e427178ccd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a99b81e3524dfff792c3e9a274f2460edfe7c29fb27c556fa1c7e427178ccd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1cd347949870be709220badbf116b4ed8ff9db3962a03361e42f5ff1c61eb5ea\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cd347949870be709220badbf116b4ed8ff9db3962a03361e42f5ff1c61eb5ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:22Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:22 crc kubenswrapper[4844]: I0126 12:44:22.849158 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd60ce73390532e974f85d610708534f2ab6bcc38e93c54516cb783aca1bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:22Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:22 crc kubenswrapper[4844]: I0126 12:44:22.861478 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-94bpf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14600b66-6352-4f5e-9c09-eb2548503555\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1eb448e6788cdb87cd78364d442623294abe1263dbedbf9d15e8ca77c06ced2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-456bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-94bpf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:22Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:22 crc kubenswrapper[4844]: I0126 12:44:22.876124 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:22 crc kubenswrapper[4844]: I0126 12:44:22.876190 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:22 crc kubenswrapper[4844]: I0126 12:44:22.876209 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:22 crc kubenswrapper[4844]: I0126 12:44:22.876235 4844 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:22 crc kubenswrapper[4844]: I0126 12:44:22.876252 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:22Z","lastTransitionTime":"2026-01-26T12:44:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:22 crc kubenswrapper[4844]: I0126 12:44:22.877950 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5qpr8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04a4b371-44a9-4805-b60f-6f7ba0fac40b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7ckr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7ckr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5qpr8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:22Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:22 crc kubenswrapper[4844]: I0126 12:44:22.901587 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://708e5cab2c0d22cab1f1e5c02d0f8afbdc0fcdccc9a56b663f365bfa4c7f8709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f61077567553b104f052a133a6227c3a887430e971721e7998259e7e7ea7edf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:22Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:22 crc kubenswrapper[4844]: I0126 12:44:22.921213 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:22Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:22 crc kubenswrapper[4844]: I0126 12:44:22.954740 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aecfc1fc-7b8c-42f4-9a7b-058a0acc9534\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64c5d1e4b1fda825b451c8acde0411694fbf7345723d6c3927905a882a79d62e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e33100e90b0d5a23baf7c27076a19dbca18d3e067be6b5a867e122fac37c1136\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64ae899d3a38090964f3d951fd185537ce4ef75f7512342da01f67767d00417b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9207454bb697abf7b64a37e402b50734ae419605cd0938b514ac4f2a74561ce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b40515c0e8056b01e4d48e259ece9389fbb5db6c0d403f2846f28948ce78b0b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64a2d4054650382dca2f56688356aec5fa475b49958e86dd4597a64f77a82f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64a2d4054650382dca2f56688356aec5fa475b49958e86dd4597a64f77a82f81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:22Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:22 crc kubenswrapper[4844]: I0126 12:44:22.978912 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ea4f5b3-1307-4259-8ee3-1de62370d8ef\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://943771df5941e9c5a48c2d8ee946cc9ac1ee7b00855dfe531fbce9f76450dd36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c79791a1a5ceea7564b6466722f4fb48a6729184724efc0ca2498896def357a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e62359ee71a87d75c73a16de09de5ea48d09e4cf2041f434f38f97002a5f8eee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b1fb5af67b83d9ecc627cbfcc14d011b4b11a5c4652a35c1bd1df6f03ab71f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:22Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:22 crc kubenswrapper[4844]: I0126 12:44:22.979991 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:22 crc kubenswrapper[4844]: I0126 12:44:22.980042 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:22 crc kubenswrapper[4844]: I0126 12:44:22.980060 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:22 crc kubenswrapper[4844]: I0126 12:44:22.980081 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:22 crc kubenswrapper[4844]: I0126 12:44:22.980095 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:22Z","lastTransitionTime":"2026-01-26T12:44:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:23 crc kubenswrapper[4844]: I0126 12:44:23.000117 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:22Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:23 crc kubenswrapper[4844]: I0126 12:44:23.000733 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c69496f6-7f67-4cca-9c9f-420e5567b165-metrics-certs\") pod \"network-metrics-daemon-gxnj7\" (UID: \"c69496f6-7f67-4cca-9c9f-420e5567b165\") " pod="openshift-multus/network-metrics-daemon-gxnj7" Jan 26 12:44:23 crc kubenswrapper[4844]: E0126 12:44:23.000968 4844 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 12:44:23 crc kubenswrapper[4844]: E0126 12:44:23.001066 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c69496f6-7f67-4cca-9c9f-420e5567b165-metrics-certs podName:c69496f6-7f67-4cca-9c9f-420e5567b165 nodeName:}" failed. No retries permitted until 2026-01-26 12:44:24.001038333 +0000 UTC m=+40.934406005 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c69496f6-7f67-4cca-9c9f-420e5567b165-metrics-certs") pod "network-metrics-daemon-gxnj7" (UID: "c69496f6-7f67-4cca-9c9f-420e5567b165") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 12:44:23 crc kubenswrapper[4844]: I0126 12:44:23.083060 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:23 crc kubenswrapper[4844]: I0126 12:44:23.083122 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:23 crc kubenswrapper[4844]: I0126 12:44:23.083131 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:23 crc kubenswrapper[4844]: I0126 12:44:23.083152 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:23 crc kubenswrapper[4844]: I0126 12:44:23.083164 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:23Z","lastTransitionTime":"2026-01-26T12:44:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:23 crc kubenswrapper[4844]: I0126 12:44:23.186911 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:23 crc kubenswrapper[4844]: I0126 12:44:23.186989 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:23 crc kubenswrapper[4844]: I0126 12:44:23.187006 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:23 crc kubenswrapper[4844]: I0126 12:44:23.187037 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:23 crc kubenswrapper[4844]: I0126 12:44:23.187062 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:23Z","lastTransitionTime":"2026-01-26T12:44:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:23 crc kubenswrapper[4844]: I0126 12:44:23.289161 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:23 crc kubenswrapper[4844]: I0126 12:44:23.289200 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:23 crc kubenswrapper[4844]: I0126 12:44:23.289209 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:23 crc kubenswrapper[4844]: I0126 12:44:23.289222 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:23 crc kubenswrapper[4844]: I0126 12:44:23.289231 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:23Z","lastTransitionTime":"2026-01-26T12:44:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:23 crc kubenswrapper[4844]: I0126 12:44:23.290453 4844 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 23:38:43.826588038 +0000 UTC Jan 26 12:44:23 crc kubenswrapper[4844]: I0126 12:44:23.313077 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:44:23 crc kubenswrapper[4844]: I0126 12:44:23.313077 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:44:23 crc kubenswrapper[4844]: E0126 12:44:23.313252 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 12:44:23 crc kubenswrapper[4844]: E0126 12:44:23.313434 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 12:44:23 crc kubenswrapper[4844]: I0126 12:44:23.313098 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:44:23 crc kubenswrapper[4844]: E0126 12:44:23.313565 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 12:44:23 crc kubenswrapper[4844]: I0126 12:44:23.339090 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2c6313f-dd44-4db9-a53a-63c8a42efe6c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b82658e4e6ddbf4e3a16f9d569c5cef6a683d401d79b7fa7f55aad8c70e8254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66ffcb99cad7e5aff2bbe4bdf3c0a2365dd91fb40b2382e05ab1f422c6ea2b26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f49e7dd36dc88342bc2e3bc43fc5770fb43572151945cba3867f9ffd7fa451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"s
tartedAt\\\":\\\"2026-01-26T12:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b51b34a870f5c4045729aa5ee5d5fb85ebd58fd55a25c5b6effe0ea75ea8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9118772906fa09f9868a34942401b399a842e6718caa3d82b78b2974e1011cca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa0e17b53055bd229c52c53b69e491e0789569e73004101378a34609888bbbc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa0e17b53055bd229c52c53b69e491e0789569e73004101378a34609888bbbc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a99b81e3524dfff792c3e9a274f2460edfe7c29fb27c556fa1c7e427178ccd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a99b81e3524dfff
792c3e9a274f2460edfe7c29fb27c556fa1c7e427178ccd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1cd347949870be709220badbf116b4ed8ff9db3962a03361e42f5ff1c61eb5ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cd347949870be709220badbf116b4ed8ff9db3962a03361e42f5ff1c61eb5ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:23Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:23 crc kubenswrapper[4844]: I0126 12:44:23.354727 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd60ce73390532e974f85d610708534f2ab6bcc38e93c54516cb783aca1bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:23Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:23 crc kubenswrapper[4844]: I0126 12:44:23.363908 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-94bpf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"14600b66-6352-4f5e-9c09-eb2548503555\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1eb448e6788cdb87cd78364d442623294abe1263dbedbf9d15e8ca77c06ced2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-456bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-94bpf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:23Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:23 crc kubenswrapper[4844]: I0126 12:44:23.374537 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5qpr8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04a4b371-44a9-4805-b60f-6f7ba0fac40b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7ckr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7ckr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5qpr8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:23Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:23 crc kubenswrapper[4844]: I0126 12:44:23.387657 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gxnj7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c69496f6-7f67-4cca-9c9f-420e5567b165\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxt6m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxt6m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gxnj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:23Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:23 crc kubenswrapper[4844]: I0126 12:44:23.390649 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:23 crc kubenswrapper[4844]: I0126 12:44:23.390685 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:23 crc kubenswrapper[4844]: I0126 12:44:23.390699 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:23 crc kubenswrapper[4844]: I0126 12:44:23.390716 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:23 crc kubenswrapper[4844]: I0126 12:44:23.390727 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:23Z","lastTransitionTime":"2026-01-26T12:44:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:23 crc kubenswrapper[4844]: I0126 12:44:23.402480 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aecfc1fc-7b8c-42f4-9a7b-058a0acc9534\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64c5d1e4b1fda825b451c8acde0411694fbf7345723d6c3927905a882a79d62e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e33100e90b0d5a23baf7c27076a19dbca18d3e067be6b5a867e122fac37c1136\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64ae899d3a38090964f3d951fd185537ce4ef75f7512342da01f67767d00417b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9207454bb697abf7b64a37e402b50734ae419605cd0938b514ac4f2a74561ce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b40515c0e8056b01e4d48e259ece9389fbb5db6c0d403f2846f28948ce78b0b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64a2d4054650382dca2f56688356aec5fa475b49958e86dd4597a64f77a82f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64a2d4054650382dca2f56688356aec5fa475b49958e86dd4597a64f77a82f81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:23Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:23 crc kubenswrapper[4844]: I0126 12:44:23.416579 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ea4f5b3-1307-4259-8ee3-1de62370d8ef\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://943771df5941e9c5a48c2d8ee946cc9ac1ee7b00855dfe531fbce9f76450dd36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c79791a1a5ceea7564b6466722f4fb48a6729184724efc0ca2498896def357a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e62359ee71a87d75c73a16de09de5ea48d09e4cf2041f434f38f97002a5f8eee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b1fb5af67b83d9ecc627cbfcc14d011b4b11a5c4652a35c1bd1df6f03ab71f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:23Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:23 crc kubenswrapper[4844]: I0126 12:44:23.428260 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:23Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:23 crc kubenswrapper[4844]: I0126 12:44:23.444775 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://708e5cab2c0d22cab1f1e5c02d0f8afbdc0fcdccc9a56b663f365bfa4c7f8709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f61077567553b104f052a133a6227c3a887430e971721e7998259e7e7ea7edf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:23Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:23 crc kubenswrapper[4844]: I0126 12:44:23.459125 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:23Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:23 crc kubenswrapper[4844]: I0126 12:44:23.473541 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:23Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:23 crc kubenswrapper[4844]: I0126 12:44:23.500998 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:23 crc kubenswrapper[4844]: I0126 12:44:23.501029 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:23 crc kubenswrapper[4844]: I0126 12:44:23.501037 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:23 crc kubenswrapper[4844]: I0126 12:44:23.501054 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:23 crc kubenswrapper[4844]: I0126 12:44:23.501063 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:23Z","lastTransitionTime":"2026-01-26T12:44:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:23 crc kubenswrapper[4844]: I0126 12:44:23.503661 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c6cdfcbb27c9a8c515f38e43d4813f25d2d3781a322e424e09ab047259bd0e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:23Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:23 crc kubenswrapper[4844]: I0126 12:44:23.520928 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zb9kx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"467433a4-64be-4a14-beb2-657370e9865f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a01598dd996bd69b470e2e4833ea4231bc77f0305a3a7fec3f70b8e2b8f01cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v76sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zb9kx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:23Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:23 crc kubenswrapper[4844]: I0126 12:44:23.532837 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3602fc7-397b-4d73-ab0c-45acc047397b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25282e97c34daf17d2fdac6ba7c074c611ec8df85288f5753bf98a3c1add5afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29xcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c964e0f9d13a855d738b028f8a2bed32fb23f4a05f0f0222a7d24e3222f44b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29xcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-j7r9j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:23Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:23 crc kubenswrapper[4844]: I0126 12:44:23.556717 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f6ttt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0ad2def-b040-48db-be8a-19f66df2c0f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a065fe1dc7d374bbe86c5012d0f224285e08e6b38a8eeb9fcdc76d684162934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://489205a3983f27152122db352c7a35ccdb12f14bb7b4c2bf3642f5c99699d745\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://489205a3983f27152122db352c7a35ccdb12f14bb7b4c2bf3642f5c99699d745\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc3c4c5df7dc10f646a7fdcf0d70daccac7a5bfc3ec9772a4b6d9aa4305dcc5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc3c4c5df7dc10f646a7fdcf0d70daccac7a5bfc3ec9772a4b6d9aa4305dcc5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74838e6328e419da6cc94d4f2781197e94cbf74aaf06d88c6d0e39eda967fede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74838e6328e419da6cc94d4f2781197e94cbf74aaf06d88c6d0e39eda967fede\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4821615afe8834b864a0c33ad5733eef0b5247da50e1e945cab1c75f9828aa1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://4821615afe8834b864a0c33ad5733eef0b5247da50e1e945cab1c75f9828aa1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c800e6a3e1513d6bc5ab5a54c8d918f2c81b5a4d2631fc36f5f7a29fcda3d220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c800e6a3e1513d6bc5ab5a54c8d918f2c81b5a4d2631fc36f5f7a29fcda3d220\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://810435fd1610f5f81dc70f7c0d455c11b414c37e9e392dec8eb3b4668c7820d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://810435fd1610f5f81dc70f7c0d455c11b414c37e9e392dec8eb3b4668c7820d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f6ttt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:23Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:23 crc kubenswrapper[4844]: I0126 12:44:23.583646 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"348a2956-fe61-43b9-858f-ab9c97a2985b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7af767e8f476993661bb886e421e7dd2fbd11c07865d6e56a8595425fe4265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a97ef8aa10641683ab96943f4c1a54452ee34b28c854dd7902fc075a70e7f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/va
r/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de371086349f06d5762291b0be4ba366d69376f96d55c2c781c414d9acaff745\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2cfd5586a3073d55abd3f41fb92679c4f86ea8caeea2d9a8ca42515277750a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a16b2c8da5ad6d42bd5ce77ca31794858e04a05060c5bb0538dd099b1d5dd6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03f4ffe2ba632bdfb8c45133ef267a29fa660e3aa85dfd99934ded60d32
772f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd671b046221d3e0fd341253eb58c3ed579fdd50efa41f43f93ced7b9212e792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd671b046221d3e0fd341253eb58c3ed579fdd50efa41f43f93ced7b9212e792\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T12:44:22Z\\\",\\\"message\\\":\\\"0] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0126 12:44:19.429090 6109 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0126 12:44:19.429111 6109 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0126 12:44:19.429152 6109 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 12:44:19.429180 6109 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0126 12:44:19.429205 6109 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 12:44:19.429222 6109 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0126 12:44:19.429154 6109 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0126 12:44:19.429244 6109 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0126 12:44:19.429263 6109 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0126 12:44:19.429233 6109 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0126 12:44:19.429334 6109 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0126 12:44:19.429365 6109 handler.go:208] Removed *v1.Node event handler 7\\\\nI0126 12:44:19.429399 6109 factory.go:656] Stopping watch factory\\\\nI0126 12:44:19.429423 6109 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbe92fbc9ce1d23484a9dca7eddb878b59da1fbe669098a82a798d41ee52e61d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://370a02984d12e09b879a5f3af8d9fe26b40d438d558e51c2d352676f29198a02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://370a02984d12e09b879a5f3af8d9fe26b40d438d558e51c2d352676f29198a02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rlvx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:23Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:23 crc kubenswrapper[4844]: I0126 12:44:23.594433 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7wd9k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"046bb01b-89ef-40e9-bbbd-83b5f2d2cf96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e97a7a3de02627680f94659705ad21fb73a76a1c11f14ebff8ec48ef204ea753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h4zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.12
6.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7wd9k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:23Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:23 crc kubenswrapper[4844]: I0126 12:44:23.603133 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:23 crc kubenswrapper[4844]: I0126 12:44:23.603179 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:23 crc kubenswrapper[4844]: I0126 12:44:23.603195 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:23 crc kubenswrapper[4844]: I0126 12:44:23.603219 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:23 crc kubenswrapper[4844]: I0126 12:44:23.603236 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:23Z","lastTransitionTime":"2026-01-26T12:44:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:23 crc kubenswrapper[4844]: I0126 12:44:23.706099 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:23 crc kubenswrapper[4844]: I0126 12:44:23.706140 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:23 crc kubenswrapper[4844]: I0126 12:44:23.706152 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:23 crc kubenswrapper[4844]: I0126 12:44:23.706168 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:23 crc kubenswrapper[4844]: I0126 12:44:23.706177 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:23Z","lastTransitionTime":"2026-01-26T12:44:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:23 crc kubenswrapper[4844]: I0126 12:44:23.808560 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:23 crc kubenswrapper[4844]: I0126 12:44:23.808645 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:23 crc kubenswrapper[4844]: I0126 12:44:23.808667 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:23 crc kubenswrapper[4844]: I0126 12:44:23.808689 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:23 crc kubenswrapper[4844]: I0126 12:44:23.808704 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:23Z","lastTransitionTime":"2026-01-26T12:44:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:23 crc kubenswrapper[4844]: I0126 12:44:23.910851 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:23 crc kubenswrapper[4844]: I0126 12:44:23.910886 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:23 crc kubenswrapper[4844]: I0126 12:44:23.910896 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:23 crc kubenswrapper[4844]: I0126 12:44:23.910910 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:23 crc kubenswrapper[4844]: I0126 12:44:23.910924 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:23Z","lastTransitionTime":"2026-01-26T12:44:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:24 crc kubenswrapper[4844]: I0126 12:44:24.010183 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c69496f6-7f67-4cca-9c9f-420e5567b165-metrics-certs\") pod \"network-metrics-daemon-gxnj7\" (UID: \"c69496f6-7f67-4cca-9c9f-420e5567b165\") " pod="openshift-multus/network-metrics-daemon-gxnj7" Jan 26 12:44:24 crc kubenswrapper[4844]: E0126 12:44:24.010466 4844 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 12:44:24 crc kubenswrapper[4844]: E0126 12:44:24.010570 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c69496f6-7f67-4cca-9c9f-420e5567b165-metrics-certs podName:c69496f6-7f67-4cca-9c9f-420e5567b165 nodeName:}" failed. No retries permitted until 2026-01-26 12:44:26.010544803 +0000 UTC m=+42.943912455 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c69496f6-7f67-4cca-9c9f-420e5567b165-metrics-certs") pod "network-metrics-daemon-gxnj7" (UID: "c69496f6-7f67-4cca-9c9f-420e5567b165") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 12:44:24 crc kubenswrapper[4844]: I0126 12:44:24.013042 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:24 crc kubenswrapper[4844]: I0126 12:44:24.013088 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:24 crc kubenswrapper[4844]: I0126 12:44:24.013099 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:24 crc kubenswrapper[4844]: I0126 12:44:24.013117 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:24 crc kubenswrapper[4844]: I0126 12:44:24.013130 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:24Z","lastTransitionTime":"2026-01-26T12:44:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:24 crc kubenswrapper[4844]: I0126 12:44:24.115962 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:24 crc kubenswrapper[4844]: I0126 12:44:24.116095 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:24 crc kubenswrapper[4844]: I0126 12:44:24.116301 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:24 crc kubenswrapper[4844]: I0126 12:44:24.116343 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:24 crc kubenswrapper[4844]: I0126 12:44:24.116369 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:24Z","lastTransitionTime":"2026-01-26T12:44:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:24 crc kubenswrapper[4844]: I0126 12:44:24.220824 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:24 crc kubenswrapper[4844]: I0126 12:44:24.220903 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:24 crc kubenswrapper[4844]: I0126 12:44:24.220924 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:24 crc kubenswrapper[4844]: I0126 12:44:24.220960 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:24 crc kubenswrapper[4844]: I0126 12:44:24.220978 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:24Z","lastTransitionTime":"2026-01-26T12:44:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:24 crc kubenswrapper[4844]: I0126 12:44:24.291545 4844 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 19:52:12.987302043 +0000 UTC Jan 26 12:44:24 crc kubenswrapper[4844]: I0126 12:44:24.313053 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gxnj7" Jan 26 12:44:24 crc kubenswrapper[4844]: E0126 12:44:24.313373 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gxnj7" podUID="c69496f6-7f67-4cca-9c9f-420e5567b165" Jan 26 12:44:24 crc kubenswrapper[4844]: I0126 12:44:24.324773 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:24 crc kubenswrapper[4844]: I0126 12:44:24.324830 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:24 crc kubenswrapper[4844]: I0126 12:44:24.324848 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:24 crc kubenswrapper[4844]: I0126 12:44:24.324877 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:24 crc kubenswrapper[4844]: I0126 12:44:24.324896 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:24Z","lastTransitionTime":"2026-01-26T12:44:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:24 crc kubenswrapper[4844]: I0126 12:44:24.429269 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:24 crc kubenswrapper[4844]: I0126 12:44:24.429428 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:24 crc kubenswrapper[4844]: I0126 12:44:24.429449 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:24 crc kubenswrapper[4844]: I0126 12:44:24.429476 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:24 crc kubenswrapper[4844]: I0126 12:44:24.429499 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:24Z","lastTransitionTime":"2026-01-26T12:44:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:24 crc kubenswrapper[4844]: I0126 12:44:24.532160 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:24 crc kubenswrapper[4844]: I0126 12:44:24.532238 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:24 crc kubenswrapper[4844]: I0126 12:44:24.532258 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:24 crc kubenswrapper[4844]: I0126 12:44:24.532289 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:24 crc kubenswrapper[4844]: I0126 12:44:24.532309 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:24Z","lastTransitionTime":"2026-01-26T12:44:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:24 crc kubenswrapper[4844]: I0126 12:44:24.565702 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5qpr8" event={"ID":"04a4b371-44a9-4805-b60f-6f7ba0fac40b","Type":"ContainerStarted","Data":"916da455e82003f3effd3be11a50a90b25232fc7d11d06285e8902a0a3cfd10e"} Jan 26 12:44:24 crc kubenswrapper[4844]: I0126 12:44:24.636363 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:24 crc kubenswrapper[4844]: I0126 12:44:24.636429 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:24 crc kubenswrapper[4844]: I0126 12:44:24.636445 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:24 crc kubenswrapper[4844]: I0126 12:44:24.636467 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:24 crc kubenswrapper[4844]: I0126 12:44:24.636486 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:24Z","lastTransitionTime":"2026-01-26T12:44:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:24 crc kubenswrapper[4844]: I0126 12:44:24.739497 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:24 crc kubenswrapper[4844]: I0126 12:44:24.739540 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:24 crc kubenswrapper[4844]: I0126 12:44:24.739564 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:24 crc kubenswrapper[4844]: I0126 12:44:24.739585 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:24 crc kubenswrapper[4844]: I0126 12:44:24.739637 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:24Z","lastTransitionTime":"2026-01-26T12:44:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:24 crc kubenswrapper[4844]: I0126 12:44:24.842130 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:24 crc kubenswrapper[4844]: I0126 12:44:24.842177 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:24 crc kubenswrapper[4844]: I0126 12:44:24.842192 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:24 crc kubenswrapper[4844]: I0126 12:44:24.842212 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:24 crc kubenswrapper[4844]: I0126 12:44:24.842224 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:24Z","lastTransitionTime":"2026-01-26T12:44:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:24 crc kubenswrapper[4844]: I0126 12:44:24.945411 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:24 crc kubenswrapper[4844]: I0126 12:44:24.945469 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:24 crc kubenswrapper[4844]: I0126 12:44:24.945490 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:24 crc kubenswrapper[4844]: I0126 12:44:24.945512 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:24 crc kubenswrapper[4844]: I0126 12:44:24.945528 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:24Z","lastTransitionTime":"2026-01-26T12:44:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:25 crc kubenswrapper[4844]: I0126 12:44:25.048689 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:25 crc kubenswrapper[4844]: I0126 12:44:25.048755 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:25 crc kubenswrapper[4844]: I0126 12:44:25.048780 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:25 crc kubenswrapper[4844]: I0126 12:44:25.048809 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:25 crc kubenswrapper[4844]: I0126 12:44:25.048830 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:25Z","lastTransitionTime":"2026-01-26T12:44:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:25 crc kubenswrapper[4844]: I0126 12:44:25.152149 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:25 crc kubenswrapper[4844]: I0126 12:44:25.152229 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:25 crc kubenswrapper[4844]: I0126 12:44:25.152238 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:25 crc kubenswrapper[4844]: I0126 12:44:25.152260 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:25 crc kubenswrapper[4844]: I0126 12:44:25.152280 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:25Z","lastTransitionTime":"2026-01-26T12:44:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:25 crc kubenswrapper[4844]: I0126 12:44:25.222209 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:25 crc kubenswrapper[4844]: I0126 12:44:25.222268 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:25 crc kubenswrapper[4844]: I0126 12:44:25.222285 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:25 crc kubenswrapper[4844]: I0126 12:44:25.222310 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:25 crc kubenswrapper[4844]: I0126 12:44:25.222328 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:25Z","lastTransitionTime":"2026-01-26T12:44:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:25 crc kubenswrapper[4844]: E0126 12:44:25.245942 4844 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:44:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:44:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:44:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:44:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8ec9310b-463d-4f1d-a480-c21f33e8b459\\\",\\\"systemUUID\\\":\\\"4eb778d6-9226-440d-bd27-0b6f19659b0d\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:25Z is after 
2025-08-24T17:21:41Z" Jan 26 12:44:25 crc kubenswrapper[4844]: I0126 12:44:25.250471 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:25 crc kubenswrapper[4844]: I0126 12:44:25.250552 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:25 crc kubenswrapper[4844]: I0126 12:44:25.250571 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:25 crc kubenswrapper[4844]: I0126 12:44:25.250692 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:25 crc kubenswrapper[4844]: I0126 12:44:25.250739 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:25Z","lastTransitionTime":"2026-01-26T12:44:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:25 crc kubenswrapper[4844]: E0126 12:44:25.264904 4844 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:44:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:44:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:44:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:44:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8ec9310b-463d-4f1d-a480-c21f33e8b459\\\",\\\"systemUUID\\\":\\\"4eb778d6-9226-440d-bd27-0b6f19659b0d\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:25Z is after 
2025-08-24T17:21:41Z" Jan 26 12:44:25 crc kubenswrapper[4844]: I0126 12:44:25.269489 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:25 crc kubenswrapper[4844]: I0126 12:44:25.269553 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:25 crc kubenswrapper[4844]: I0126 12:44:25.269571 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:25 crc kubenswrapper[4844]: I0126 12:44:25.269627 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:25 crc kubenswrapper[4844]: I0126 12:44:25.269646 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:25Z","lastTransitionTime":"2026-01-26T12:44:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:25 crc kubenswrapper[4844]: I0126 12:44:25.292779 4844 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 09:59:05.006532465 +0000 UTC Jan 26 12:44:25 crc kubenswrapper[4844]: E0126 12:44:25.293744 4844 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:44:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:44:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:44:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:44:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8ec9310b-463d-4f1d-a480-c21f33e8b459\\\",\\\"systemUUID\\\":\\\"4eb778d6-9226-440d-bd27-0b6f19659b0d\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:25Z is after 
2025-08-24T17:21:41Z" Jan 26 12:44:25 crc kubenswrapper[4844]: I0126 12:44:25.305299 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:25 crc kubenswrapper[4844]: I0126 12:44:25.305365 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:25 crc kubenswrapper[4844]: I0126 12:44:25.305385 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:25 crc kubenswrapper[4844]: I0126 12:44:25.305411 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:25 crc kubenswrapper[4844]: I0126 12:44:25.305432 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:25Z","lastTransitionTime":"2026-01-26T12:44:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:25 crc kubenswrapper[4844]: I0126 12:44:25.312118 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:44:25 crc kubenswrapper[4844]: E0126 12:44:25.312233 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 12:44:25 crc kubenswrapper[4844]: I0126 12:44:25.312306 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:44:25 crc kubenswrapper[4844]: I0126 12:44:25.312342 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:44:25 crc kubenswrapper[4844]: E0126 12:44:25.312425 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 12:44:25 crc kubenswrapper[4844]: E0126 12:44:25.312805 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 12:44:25 crc kubenswrapper[4844]: E0126 12:44:25.330878 4844 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:44:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:44:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:44:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:44:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8ec9310b-463d-4f1d-a480-c21f33e8b459\\\",\\\"systemUUID\\\":\\\"4eb778d6-9226-440d-bd27-0b6f19659b0d\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:25Z is after 
2025-08-24T17:21:41Z" Jan 26 12:44:25 crc kubenswrapper[4844]: I0126 12:44:25.336119 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:25 crc kubenswrapper[4844]: I0126 12:44:25.336195 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:25 crc kubenswrapper[4844]: I0126 12:44:25.336214 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:25 crc kubenswrapper[4844]: I0126 12:44:25.336240 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:25 crc kubenswrapper[4844]: I0126 12:44:25.336258 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:25Z","lastTransitionTime":"2026-01-26T12:44:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:25 crc kubenswrapper[4844]: E0126 12:44:25.353543 4844 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:44:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:44:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:44:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:44:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8ec9310b-463d-4f1d-a480-c21f33e8b459\\\",\\\"systemUUID\\\":\\\"4eb778d6-9226-440d-bd27-0b6f19659b0d\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:25Z is after 
2025-08-24T17:21:41Z" Jan 26 12:44:25 crc kubenswrapper[4844]: E0126 12:44:25.353816 4844 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 12:44:25 crc kubenswrapper[4844]: I0126 12:44:25.356062 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:25 crc kubenswrapper[4844]: I0126 12:44:25.356116 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:25 crc kubenswrapper[4844]: I0126 12:44:25.356130 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:25 crc kubenswrapper[4844]: I0126 12:44:25.356150 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:25 crc kubenswrapper[4844]: I0126 12:44:25.356164 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:25Z","lastTransitionTime":"2026-01-26T12:44:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:25 crc kubenswrapper[4844]: I0126 12:44:25.459362 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:25 crc kubenswrapper[4844]: I0126 12:44:25.459405 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:25 crc kubenswrapper[4844]: I0126 12:44:25.459417 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:25 crc kubenswrapper[4844]: I0126 12:44:25.459433 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:25 crc kubenswrapper[4844]: I0126 12:44:25.459443 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:25Z","lastTransitionTime":"2026-01-26T12:44:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:25 crc kubenswrapper[4844]: I0126 12:44:25.562007 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:25 crc kubenswrapper[4844]: I0126 12:44:25.562071 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:25 crc kubenswrapper[4844]: I0126 12:44:25.562089 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:25 crc kubenswrapper[4844]: I0126 12:44:25.562112 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:25 crc kubenswrapper[4844]: I0126 12:44:25.562130 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:25Z","lastTransitionTime":"2026-01-26T12:44:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:25 crc kubenswrapper[4844]: I0126 12:44:25.578479 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rlvx4_348a2956-fe61-43b9-858f-ab9c97a2985b/ovnkube-controller/0.log" Jan 26 12:44:25 crc kubenswrapper[4844]: I0126 12:44:25.583004 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" event={"ID":"348a2956-fe61-43b9-858f-ab9c97a2985b","Type":"ContainerStarted","Data":"d5c096d3b202896c2e8ae2acf2cbaf2131e2eba775a4bd481112ebd76d974d84"} Jan 26 12:44:25 crc kubenswrapper[4844]: I0126 12:44:25.583138 4844 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 12:44:25 crc kubenswrapper[4844]: I0126 12:44:25.600322 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:25Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:25 crc kubenswrapper[4844]: I0126 12:44:25.618438 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://708e5cab2c0d22cab1f1e5c02d0f8afbdc0fcdccc9a56b663f365bfa4c7f8709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f61077567553b104f052a133a6227c3a887430e971721e7998259e7e7ea7edf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:25Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:25 crc kubenswrapper[4844]: I0126 12:44:25.631143 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:25Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:25 crc kubenswrapper[4844]: I0126 12:44:25.648125 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aecfc1fc-7b8c-42f4-9a7b-058a0acc9534\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64c5d1e4b1fda825b451c8acde0411694fbf7345723d6c3927905a882a79d62e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e33100e90b0d5a23baf7c27076a19dbca18d3e067be6b5a867e122fac37c1136\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64ae899d3a38090964f3d951fd185537ce4ef75f7512342da01f67767d00417b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9207454bb697abf7b64a37e402b50734ae419605cd0938b514ac4f2a74561ce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b40515c0e8056b01e4d48e259ece9389fbb5db6c0d403f2846f28948ce78b0b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64a2d4054650382dca2f56688356aec5fa475b49958e86dd4597a64f77a82f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64a2d4054650382dca2f56688356aec5fa475b49958e86dd4597a64f77a82f81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\
\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:25Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:25 crc kubenswrapper[4844]: I0126 12:44:25.659374 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ea4f5b3-1307-4259-8ee3-1de62370d8ef\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://943771df5941e9c5a48c2d8ee946cc9ac1ee7b00855dfe531fbce9f76450dd36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c79791a1a5ceea7564b6466722f4fb48a6729184724efc0ca2498896def357a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e62359ee71a87d75c73a16de09de5ea48d09e4cf2041f434f38f97002a5f8eee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578b
c18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b1fb5af67b83d9ecc627cbfcc14d011b4b11a5c4652a35c1bd1df6f03ab71f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:25Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:25 crc kubenswrapper[4844]: I0126 12:44:25.664461 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:25 crc kubenswrapper[4844]: I0126 12:44:25.664486 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:25 crc kubenswrapper[4844]: I0126 12:44:25.664497 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:25 crc kubenswrapper[4844]: I0126 12:44:25.664512 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:25 crc kubenswrapper[4844]: I0126 12:44:25.664524 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:25Z","lastTransitionTime":"2026-01-26T12:44:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:25 crc kubenswrapper[4844]: I0126 12:44:25.669569 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:25Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:25 crc kubenswrapper[4844]: I0126 12:44:25.678690 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c6cdfcbb27c9a8c515f38e43d4813f25d2d3781a322e424e09ab047259bd0e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:25Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:25 crc kubenswrapper[4844]: I0126 12:44:25.692463 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f6ttt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0ad2def-b040-48db-be8a-19f66df2c0f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a065fe1dc7d374bbe86c5012d0f224285e08e6b38a8eeb9fcdc76d684162934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://489205a3983f27152122db352c7a35ccdb12f14bb7b4c2bf3642f5c99699d745\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://489205a3983f27152122db352c7a35ccdb12f14bb7b4c2bf3642f5c99699d745\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc3c4c5df7dc10f646a7fdcf0d70daccac7a5bfc3ec9772a4b6d9aa4305dcc5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc3c4c5df7dc10f646a7fdcf0d70daccac7a5bfc3ec9772a4b6d9aa4305dcc5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74838e6328e419da6cc94d4f2781197e94cbf74aaf06d88c6d0e39eda967fede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74838e6328e419da6cc94d4f2781197e94cbf74aaf06d88c6d0e39eda967fede\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4821615afe8834b864a0c33ad5733eef0b5247da50e1e945cab1c75f9828aa1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4821615afe8834b864a0c33ad5733eef0b5247da50e1e945cab1c75f9828aa1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c800e6a3e1513d6bc5ab5a54c8d918f2c81b5a4d2631fc36f5f7a29fcda3d220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c800e6a3e1513d6bc5ab5a54c8d918f2c81b5a4d2631fc36f5f7a29fcda3d220\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://810435fd1610f5f81dc70f7c0d455c11b414c37e9e392dec8eb3b4668c7820d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://810435fd1610f5f81dc70f7c0d455c11b414c37e9e392dec8eb3b4668c7820d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f6ttt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:25Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:25 crc kubenswrapper[4844]: I0126 12:44:25.713380 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"348a2956-fe61-43b9-858f-ab9c97a2985b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7af767e8f476993661bb886e421e7dd2fbd11c07865d6e56a8595425fe4265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a97ef8aa10641683ab96943f4c1a54452ee34b28c854dd7902fc075a70e7f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de371086349f06d5762291b0be4ba366d69376f96d55c2c781c414d9acaff745\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2cfd5586a3073d55abd3f41fb92679c4f86ea8caeea2d9a8ca42515277750a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a16b2c8da5ad6d42bd5ce77ca31794858e04a05060c5bb0538dd099b1d5dd6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03f4ffe2ba632bdfb8c45133ef267a29fa660e3aa85dfd99934ded60d32772f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5c096d3b202896c2e8ae2acf2cbaf2131e2eba775a4bd481112ebd76d974d84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd671b046221d3e0fd341253eb58c3ed579fdd50efa41f43f93ced7b9212e792\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T12:44:22Z\\\",\\\"message\\\":\\\"0] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0126 12:44:19.429090 6109 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0126 12:44:19.429111 6109 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0126 12:44:19.429152 6109 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 12:44:19.429180 6109 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0126 12:44:19.429205 6109 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 12:44:19.429222 6109 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0126 12:44:19.429154 6109 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0126 12:44:19.429244 6109 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0126 12:44:19.429263 6109 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0126 12:44:19.429233 6109 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0126 12:44:19.429334 6109 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0126 12:44:19.429365 6109 handler.go:208] Removed *v1.Node event handler 7\\\\nI0126 12:44:19.429399 6109 factory.go:656] Stopping watch factory\\\\nI0126 12:44:19.429423 6109 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbe92fbc9ce1d23484a9dca7eddb878b59da1fbe669098a82a798d41ee52e61d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://370a02984d12e09b879a5f3af8d9fe26b40d438d558e51c2d352676f29198a02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://370a02984d12e09b879a5f3af8d9fe26b40d438d558e51c2d352676f29198a02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rlvx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:25Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:25 crc kubenswrapper[4844]: I0126 12:44:25.730775 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7wd9k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"046bb01b-89ef-40e9-bbbd-83b5f2d2cf96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e97a7a3de02627680f94659705ad21fb73a76a1c11f14ebff8ec48ef204ea753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h4zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7wd9k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:25Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:25 crc kubenswrapper[4844]: I0126 12:44:25.744204 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zb9kx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"467433a4-64be-4a14-beb2-657370e9865f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a01598dd996bd69b470e2e4833ea4231bc77f0305a3a7fec3f70b8e2b8f01cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v76sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zb9kx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:25Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:25 crc kubenswrapper[4844]: I0126 12:44:25.755938 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3602fc7-397b-4d73-ab0c-45acc047397b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25282e97c34daf17d2fdac6ba7c074c611ec8df85288f5753bf98a3c1add5afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29xcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c964e0f9d13a855d738b028f8a2bed32fb23f4a05f0f0222a7d24e3222f44b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29xcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-j7r9j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:25Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:25 crc kubenswrapper[4844]: I0126 12:44:25.767116 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:25 crc kubenswrapper[4844]: I0126 12:44:25.767165 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:25 crc kubenswrapper[4844]: I0126 12:44:25.767176 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:25 crc kubenswrapper[4844]: I0126 12:44:25.767198 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:25 crc kubenswrapper[4844]: I0126 12:44:25.767213 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:25Z","lastTransitionTime":"2026-01-26T12:44:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:25 crc kubenswrapper[4844]: I0126 12:44:25.769579 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5qpr8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04a4b371-44a9-4805-b60f-6f7ba0fac40b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7ckr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7ckr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5qpr8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:25Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:25 crc kubenswrapper[4844]: I0126 12:44:25.788714 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gxnj7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c69496f6-7f67-4cca-9c9f-420e5567b165\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxt6m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxt6m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gxnj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:25Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:25 crc kubenswrapper[4844]: I0126 12:44:25.811773 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2c6313f-dd44-4db9-a53a-63c8a42efe6c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b82658e4e6ddbf4e3a16f9d569c5cef6a683d401d79b7fa7f55aad8c70e8254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66ffcb99cad7e5aff2bbe4bdf3c0a2365dd91fb40b2382e05ab1f422c6ea2b26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f49e7dd36dc88342bc2e3bc43fc5770fb43572151945cba3867f9ffd7fa451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b51b34a870f5c4045729aa5ee5d5fb85ebd58
fd55a25c5b6effe0ea75ea8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9118772906fa09f9868a34942401b399a842e6718caa3d82b78b2974e1011cca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa0e17b53055bd229c52c53b69e491e0789569e73004101378a34609888bbbc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa0e17b53055bd229c52c53b69e491e0789569e73004101378a34609888bbbc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a99b81e3524dfff792c3e9a274f2460edfe7c29fb27c556fa1c7e427178ccd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a99b81e3524dfff792c3e9a274f2460edfe7c29fb27c556fa1c7e427178ccd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1cd347949870be709220badbf116b4ed8ff9db3962a03361e42f5ff1c61eb5ea\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cd347949870be709220badbf116b4ed8ff9db3962a03361e42f5ff1c61eb5ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:25Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:25 crc kubenswrapper[4844]: I0126 12:44:25.823500 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd60ce73390532e974f85d610708534f2ab6bcc38e93c54516cb783aca1bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:25Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:25 crc kubenswrapper[4844]: I0126 12:44:25.833464 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-94bpf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14600b66-6352-4f5e-9c09-eb2548503555\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1eb448e6788cdb87cd78364d442623294abe1263dbedbf9d15e8ca77c06ced2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-456bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-94bpf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:25Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:25 crc kubenswrapper[4844]: I0126 12:44:25.870228 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:25 crc kubenswrapper[4844]: I0126 12:44:25.870278 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:25 crc kubenswrapper[4844]: I0126 12:44:25.870289 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:25 crc kubenswrapper[4844]: I0126 12:44:25.870310 4844 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:25 crc kubenswrapper[4844]: I0126 12:44:25.870322 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:25Z","lastTransitionTime":"2026-01-26T12:44:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:25 crc kubenswrapper[4844]: I0126 12:44:25.972520 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:25 crc kubenswrapper[4844]: I0126 12:44:25.972821 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:25 crc kubenswrapper[4844]: I0126 12:44:25.972894 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:25 crc kubenswrapper[4844]: I0126 12:44:25.972971 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:25 crc kubenswrapper[4844]: I0126 12:44:25.973030 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:25Z","lastTransitionTime":"2026-01-26T12:44:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:26 crc kubenswrapper[4844]: I0126 12:44:26.036344 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c69496f6-7f67-4cca-9c9f-420e5567b165-metrics-certs\") pod \"network-metrics-daemon-gxnj7\" (UID: \"c69496f6-7f67-4cca-9c9f-420e5567b165\") " pod="openshift-multus/network-metrics-daemon-gxnj7" Jan 26 12:44:26 crc kubenswrapper[4844]: E0126 12:44:26.036467 4844 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 12:44:26 crc kubenswrapper[4844]: E0126 12:44:26.036519 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c69496f6-7f67-4cca-9c9f-420e5567b165-metrics-certs podName:c69496f6-7f67-4cca-9c9f-420e5567b165 nodeName:}" failed. No retries permitted until 2026-01-26 12:44:30.03650542 +0000 UTC m=+46.969873022 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c69496f6-7f67-4cca-9c9f-420e5567b165-metrics-certs") pod "network-metrics-daemon-gxnj7" (UID: "c69496f6-7f67-4cca-9c9f-420e5567b165") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 12:44:26 crc kubenswrapper[4844]: I0126 12:44:26.075174 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:26 crc kubenswrapper[4844]: I0126 12:44:26.075227 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:26 crc kubenswrapper[4844]: I0126 12:44:26.075237 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:26 crc kubenswrapper[4844]: I0126 12:44:26.075251 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:26 crc kubenswrapper[4844]: I0126 12:44:26.075262 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:26Z","lastTransitionTime":"2026-01-26T12:44:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:26 crc kubenswrapper[4844]: I0126 12:44:26.177441 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:26 crc kubenswrapper[4844]: I0126 12:44:26.177544 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:26 crc kubenswrapper[4844]: I0126 12:44:26.177567 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:26 crc kubenswrapper[4844]: I0126 12:44:26.177591 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:26 crc kubenswrapper[4844]: I0126 12:44:26.177637 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:26Z","lastTransitionTime":"2026-01-26T12:44:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:26 crc kubenswrapper[4844]: I0126 12:44:26.280530 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:26 crc kubenswrapper[4844]: I0126 12:44:26.280567 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:26 crc kubenswrapper[4844]: I0126 12:44:26.280576 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:26 crc kubenswrapper[4844]: I0126 12:44:26.280591 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:26 crc kubenswrapper[4844]: I0126 12:44:26.280621 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:26Z","lastTransitionTime":"2026-01-26T12:44:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:26 crc kubenswrapper[4844]: I0126 12:44:26.293104 4844 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 22:50:55.879475492 +0000 UTC Jan 26 12:44:26 crc kubenswrapper[4844]: I0126 12:44:26.312689 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gxnj7" Jan 26 12:44:26 crc kubenswrapper[4844]: E0126 12:44:26.312900 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gxnj7" podUID="c69496f6-7f67-4cca-9c9f-420e5567b165" Jan 26 12:44:26 crc kubenswrapper[4844]: I0126 12:44:26.382487 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:26 crc kubenswrapper[4844]: I0126 12:44:26.382531 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:26 crc kubenswrapper[4844]: I0126 12:44:26.382543 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:26 crc kubenswrapper[4844]: I0126 12:44:26.382558 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:26 crc kubenswrapper[4844]: I0126 12:44:26.382568 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:26Z","lastTransitionTime":"2026-01-26T12:44:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:26 crc kubenswrapper[4844]: I0126 12:44:26.485734 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:26 crc kubenswrapper[4844]: I0126 12:44:26.485813 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:26 crc kubenswrapper[4844]: I0126 12:44:26.485833 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:26 crc kubenswrapper[4844]: I0126 12:44:26.485857 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:26 crc kubenswrapper[4844]: I0126 12:44:26.485878 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:26Z","lastTransitionTime":"2026-01-26T12:44:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:26 crc kubenswrapper[4844]: I0126 12:44:26.588442 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:26 crc kubenswrapper[4844]: I0126 12:44:26.588528 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:26 crc kubenswrapper[4844]: I0126 12:44:26.588583 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:26 crc kubenswrapper[4844]: I0126 12:44:26.588658 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:26 crc kubenswrapper[4844]: I0126 12:44:26.588681 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:26Z","lastTransitionTime":"2026-01-26T12:44:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:26 crc kubenswrapper[4844]: I0126 12:44:26.589977 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5qpr8" event={"ID":"04a4b371-44a9-4805-b60f-6f7ba0fac40b","Type":"ContainerStarted","Data":"1b598bd3381ec5062c126c04857c188ab29afc34c39ec94a2cd95b306cdfd00d"} Jan 26 12:44:26 crc kubenswrapper[4844]: I0126 12:44:26.609277 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:26Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:26 crc kubenswrapper[4844]: I0126 12:44:26.626362 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c6cdfcbb27c9a8c515f38e43d4813f25d2d3781a322e424e09ab047259bd0e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:26Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:26 crc kubenswrapper[4844]: I0126 12:44:26.642732 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zb9kx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"467433a4-64be-4a14-beb2-657370e9865f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a01598dd996bd69b470e2e4833ea4231bc77f0305a3a7fec3f70b8e2b8f01cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v76sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zb9kx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:26Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:26 crc kubenswrapper[4844]: I0126 12:44:26.656458 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3602fc7-397b-4d73-ab0c-45acc047397b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25282e97c34daf17d2fdac6ba7c074c611ec8df85288f5753bf98a3c1add5afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29xcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c964e0f9d13a855d738b028f8a2bed32fb23f4a05f0f0222a7d24e3222f44b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29xcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-j7r9j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:26Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:26 crc kubenswrapper[4844]: I0126 12:44:26.673132 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f6ttt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0ad2def-b040-48db-be8a-19f66df2c0f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a065fe1dc7d374bbe86c5012d0f224285e08e6b38a8eeb9fcdc76d684162934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://489205a3983f27152122db352c7a35ccdb12f14bb7b4c2bf3642f5c99699d745\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://489205a3983f27152122db352c7a35ccdb12f14bb7b4c2bf3642f5c99699d745\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc3c4c5df7dc10f646a7fdcf0d70daccac7a5bfc3ec9772a4b6d9aa4305dcc5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc3c4c5df7dc10f646a7fdcf0d70daccac7a5bfc3ec9772a4b6d9aa4305dcc5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74838e6328e419da6cc94d4f2781197e94cbf74aaf06d88c6d0e39eda967fede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74838e6328e419da6cc94d4f2781197e94cbf74aaf06d88c6d0e39eda967fede\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4821615afe8834b864a0c33ad5733eef0b5247da50e1e945cab1c75f9828aa1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://4821615afe8834b864a0c33ad5733eef0b5247da50e1e945cab1c75f9828aa1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c800e6a3e1513d6bc5ab5a54c8d918f2c81b5a4d2631fc36f5f7a29fcda3d220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c800e6a3e1513d6bc5ab5a54c8d918f2c81b5a4d2631fc36f5f7a29fcda3d220\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://810435fd1610f5f81dc70f7c0d455c11b414c37e9e392dec8eb3b4668c7820d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://810435fd1610f5f81dc70f7c0d455c11b414c37e9e392dec8eb3b4668c7820d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f6ttt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:26Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:26 crc kubenswrapper[4844]: I0126 12:44:26.691097 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:26 crc kubenswrapper[4844]: I0126 12:44:26.691137 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:26 crc kubenswrapper[4844]: I0126 12:44:26.691149 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:26 crc kubenswrapper[4844]: I0126 12:44:26.691168 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:26 crc kubenswrapper[4844]: I0126 12:44:26.691180 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:26Z","lastTransitionTime":"2026-01-26T12:44:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:26 crc kubenswrapper[4844]: I0126 12:44:26.695611 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"348a2956-fe61-43b9-858f-ab9c97a2985b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7af767e8f476993661bb886e421e7dd2fbd11c07865d6e56a8595425fe4265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a97ef8aa10641683ab96943f4c1a54452ee34b28c854dd7902fc075a70e7f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de371086349f06d5762291b0be4ba366d69376f96d55c2c781c414d9acaff745\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2cfd5586a3073d55abd3f41fb92679c4f86ea8caeea2d9a8ca42515277750a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a16b2c8da5ad6d42bd5ce77ca31794858e04a05060c5bb0538dd099b1d5dd6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03f4ffe2ba632bdfb8c45133ef267a29fa660e3aa85dfd99934ded60d32772f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5c096d3b202896c2e8ae2acf2cbaf2131e2eba7
75a4bd481112ebd76d974d84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd671b046221d3e0fd341253eb58c3ed579fdd50efa41f43f93ced7b9212e792\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T12:44:22Z\\\",\\\"message\\\":\\\"0] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0126 12:44:19.429090 6109 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0126 12:44:19.429111 6109 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0126 12:44:19.429152 6109 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 12:44:19.429180 6109 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0126 12:44:19.429205 6109 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 12:44:19.429222 6109 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0126 12:44:19.429154 6109 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0126 12:44:19.429244 6109 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0126 12:44:19.429263 6109 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0126 12:44:19.429233 6109 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0126 12:44:19.429334 6109 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0126 12:44:19.429365 6109 handler.go:208] Removed *v1.Node event handler 7\\\\nI0126 12:44:19.429399 6109 factory.go:656] Stopping watch factory\\\\nI0126 12:44:19.429423 6109 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbe92fbc9ce1d23484a9dca7eddb878b59da1fbe669098a82a798d41ee52e61d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://370a02984d12e09b879a5f3af8d9fe26b40d438d558e51c2d352676f29198a02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://370a02984d12e09b879a5f3af8d9fe26b40d438d558e51c2d352676f29198a02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rlvx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:26Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:26 crc kubenswrapper[4844]: I0126 12:44:26.711051 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7wd9k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"046bb01b-89ef-40e9-bbbd-83b5f2d2cf96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e97a7a3de02627680f94659705ad21fb73a76a1c11f14ebff8ec48ef204ea753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h4zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7wd9k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:26Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:26 crc kubenswrapper[4844]: I0126 12:44:26.744684 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2c6313f-dd44-4db9-a53a-63c8a42efe6c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b82658e4e6ddbf4e3a16f9d569c5cef6a683d401d79b7fa7f55aad8c70e8254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66ffcb99cad7e5aff2bbe4bdf3c0a2365dd91fb40b2382e05ab1f422c6ea2b26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f49e7dd36dc88342bc2e3bc43fc5770fb43572151945cba3867f9ffd7fa451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b51b34a870f5c4045729aa5ee5d5fb85ebd58
fd55a25c5b6effe0ea75ea8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9118772906fa09f9868a34942401b399a842e6718caa3d82b78b2974e1011cca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa0e17b53055bd229c52c53b69e491e0789569e73004101378a34609888bbbc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa0e17b53055bd229c52c53b69e491e0789569e73004101378a34609888bbbc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a99b81e3524dfff792c3e9a274f2460edfe7c29fb27c556fa1c7e427178ccd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a99b81e3524dfff792c3e9a274f2460edfe7c29fb27c556fa1c7e427178ccd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1cd347949870be709220badbf116b4ed8ff9db3962a03361e42f5ff1c61eb5ea\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cd347949870be709220badbf116b4ed8ff9db3962a03361e42f5ff1c61eb5ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:26Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:26 crc kubenswrapper[4844]: I0126 12:44:26.762534 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd60ce73390532e974f85d610708534f2ab6bcc38e93c54516cb783aca1bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:26Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:26 crc kubenswrapper[4844]: I0126 12:44:26.778234 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-94bpf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14600b66-6352-4f5e-9c09-eb2548503555\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1eb448e6788cdb87cd78364d442623294abe1263dbedbf9d15e8ca77c06ced2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-456bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-94bpf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:26Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:26 crc kubenswrapper[4844]: I0126 12:44:26.793580 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:26 crc kubenswrapper[4844]: I0126 12:44:26.793763 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:26 crc kubenswrapper[4844]: I0126 12:44:26.793793 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:26 crc kubenswrapper[4844]: I0126 12:44:26.793868 4844 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:26 crc kubenswrapper[4844]: I0126 12:44:26.793903 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:26Z","lastTransitionTime":"2026-01-26T12:44:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:26 crc kubenswrapper[4844]: I0126 12:44:26.796689 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5qpr8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04a4b371-44a9-4805-b60f-6f7ba0fac40b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://916da455e82003f3effd3be11a50a90b25232fc7d11d06285e8902a0a3cfd10e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7ckr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b598bd3381ec5062c126c04857c188ab29afc34c39ec94a2cd95b306cdfd00d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/
secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7ckr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5qpr8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:26Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:26 crc kubenswrapper[4844]: I0126 12:44:26.814892 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gxnj7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c69496f6-7f67-4cca-9c9f-420e5567b165\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxt6m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxt6m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gxnj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:26Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:26 crc kubenswrapper[4844]: I0126 12:44:26.832175 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aecfc1fc-7b8c-42f4-9a7b-058a0acc9534\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64c5d1e4b1fda825b451c8acde0411694fbf7345723d6c3927905a882a79d62e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e33100e90b0d5a23baf7c27076a19dbca18d3e067be6b5a867e122fac37c1136\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64ae899d3a38090964f3d951fd185537ce4ef75f7512342da01f67767d00417b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9207454bb697abf7b64a37e402b50734ae419605cd0938b514ac4f2a74561ce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b40515c0e8056b01e4d48e259ece9389fbb5db6c0d403f2846f28948ce78b0b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64a2d4054650382dca2f56688356aec5fa475b49958e86dd4597a64f77a82f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64a2d4054650382dca2f56688356aec5fa475b49958e86dd4597a64f77a82f81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:26Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:26 crc kubenswrapper[4844]: I0126 12:44:26.852007 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ea4f5b3-1307-4259-8ee3-1de62370d8ef\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://943771df5941e9c5a48c2d8ee946cc9ac1ee7b00855dfe531fbce9f76450dd36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c79791a1a5ceea7564b6466722f4fb48a6729184724efc0ca2498896def357a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e62359ee71a87d75c73a16de09de5ea48d09e4cf2041f434f38f97002a5f8eee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b1fb5af67b83d9ecc627cbfcc14d011b4b11a5c4652a35c1bd1df6f03ab71f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:26Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:26 crc kubenswrapper[4844]: I0126 12:44:26.868522 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:26Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:26 crc kubenswrapper[4844]: I0126 12:44:26.888187 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://708e5cab2c0d22cab1f1e5c02d0f8afbdc0fcdccc9a56b663f365bfa4c7f8709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f61077567553b104f052a133a6227c3a887430e971721e7998259e7e7ea7edf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:26Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:26 crc kubenswrapper[4844]: I0126 12:44:26.896110 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:26 crc kubenswrapper[4844]: I0126 12:44:26.896150 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:26 crc kubenswrapper[4844]: I0126 12:44:26.896162 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:26 crc kubenswrapper[4844]: I0126 12:44:26.896180 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:26 crc kubenswrapper[4844]: I0126 12:44:26.896194 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:26Z","lastTransitionTime":"2026-01-26T12:44:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:26 crc kubenswrapper[4844]: I0126 12:44:26.908897 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:26Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:26 crc kubenswrapper[4844]: I0126 12:44:26.998998 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:26 crc kubenswrapper[4844]: I0126 12:44:26.999049 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:26 crc kubenswrapper[4844]: I0126 12:44:26.999064 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:26 crc kubenswrapper[4844]: I0126 12:44:26.999083 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:26 crc kubenswrapper[4844]: I0126 12:44:26.999096 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:26Z","lastTransitionTime":"2026-01-26T12:44:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:27 crc kubenswrapper[4844]: I0126 12:44:27.102273 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:27 crc kubenswrapper[4844]: I0126 12:44:27.102344 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:27 crc kubenswrapper[4844]: I0126 12:44:27.102367 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:27 crc kubenswrapper[4844]: I0126 12:44:27.102400 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:27 crc kubenswrapper[4844]: I0126 12:44:27.102425 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:27Z","lastTransitionTime":"2026-01-26T12:44:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:27 crc kubenswrapper[4844]: I0126 12:44:27.205790 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:27 crc kubenswrapper[4844]: I0126 12:44:27.205891 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:27 crc kubenswrapper[4844]: I0126 12:44:27.205909 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:27 crc kubenswrapper[4844]: I0126 12:44:27.205934 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:27 crc kubenswrapper[4844]: I0126 12:44:27.205950 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:27Z","lastTransitionTime":"2026-01-26T12:44:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:27 crc kubenswrapper[4844]: I0126 12:44:27.293699 4844 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 21:02:49.939352703 +0000 UTC Jan 26 12:44:27 crc kubenswrapper[4844]: I0126 12:44:27.308649 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:27 crc kubenswrapper[4844]: I0126 12:44:27.308696 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:27 crc kubenswrapper[4844]: I0126 12:44:27.308712 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:27 crc kubenswrapper[4844]: I0126 12:44:27.308735 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:27 crc kubenswrapper[4844]: I0126 12:44:27.308754 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:27Z","lastTransitionTime":"2026-01-26T12:44:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:27 crc kubenswrapper[4844]: I0126 12:44:27.312161 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:44:27 crc kubenswrapper[4844]: E0126 12:44:27.312333 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 12:44:27 crc kubenswrapper[4844]: I0126 12:44:27.312582 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:44:27 crc kubenswrapper[4844]: E0126 12:44:27.312722 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 12:44:27 crc kubenswrapper[4844]: I0126 12:44:27.313298 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:44:27 crc kubenswrapper[4844]: E0126 12:44:27.313419 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 12:44:27 crc kubenswrapper[4844]: I0126 12:44:27.412650 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:27 crc kubenswrapper[4844]: I0126 12:44:27.412713 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:27 crc kubenswrapper[4844]: I0126 12:44:27.412738 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:27 crc kubenswrapper[4844]: I0126 12:44:27.412768 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:27 crc kubenswrapper[4844]: I0126 12:44:27.412792 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:27Z","lastTransitionTime":"2026-01-26T12:44:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:27 crc kubenswrapper[4844]: I0126 12:44:27.516423 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:27 crc kubenswrapper[4844]: I0126 12:44:27.516488 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:27 crc kubenswrapper[4844]: I0126 12:44:27.516508 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:27 crc kubenswrapper[4844]: I0126 12:44:27.516526 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:27 crc kubenswrapper[4844]: I0126 12:44:27.516538 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:27Z","lastTransitionTime":"2026-01-26T12:44:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:27 crc kubenswrapper[4844]: I0126 12:44:27.595018 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rlvx4_348a2956-fe61-43b9-858f-ab9c97a2985b/ovnkube-controller/1.log" Jan 26 12:44:27 crc kubenswrapper[4844]: I0126 12:44:27.595841 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rlvx4_348a2956-fe61-43b9-858f-ab9c97a2985b/ovnkube-controller/0.log" Jan 26 12:44:27 crc kubenswrapper[4844]: I0126 12:44:27.598688 4844 generic.go:334] "Generic (PLEG): container finished" podID="348a2956-fe61-43b9-858f-ab9c97a2985b" containerID="d5c096d3b202896c2e8ae2acf2cbaf2131e2eba775a4bd481112ebd76d974d84" exitCode=1 Jan 26 12:44:27 crc kubenswrapper[4844]: I0126 12:44:27.598731 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" event={"ID":"348a2956-fe61-43b9-858f-ab9c97a2985b","Type":"ContainerDied","Data":"d5c096d3b202896c2e8ae2acf2cbaf2131e2eba775a4bd481112ebd76d974d84"} Jan 26 12:44:27 crc kubenswrapper[4844]: I0126 12:44:27.598840 4844 scope.go:117] "RemoveContainer" containerID="fd671b046221d3e0fd341253eb58c3ed579fdd50efa41f43f93ced7b9212e792" Jan 26 12:44:27 crc kubenswrapper[4844]: I0126 12:44:27.604808 4844 scope.go:117] "RemoveContainer" containerID="d5c096d3b202896c2e8ae2acf2cbaf2131e2eba775a4bd481112ebd76d974d84" Jan 26 12:44:27 crc kubenswrapper[4844]: E0126 12:44:27.605220 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-rlvx4_openshift-ovn-kubernetes(348a2956-fe61-43b9-858f-ab9c97a2985b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" podUID="348a2956-fe61-43b9-858f-ab9c97a2985b" Jan 26 12:44:27 crc kubenswrapper[4844]: I0126 12:44:27.614753 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zb9kx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"467433a4-64be-4a14-beb2-657370e9865f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a01598dd996bd69b470e2e4833ea4231bc77f0305a3a7fec3f70b8e2b8f01cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v76sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zb9kx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:27Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:27 crc kubenswrapper[4844]: I0126 12:44:27.618832 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:27 crc kubenswrapper[4844]: I0126 12:44:27.618863 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:27 crc kubenswrapper[4844]: I0126 12:44:27.618873 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:27 crc kubenswrapper[4844]: I0126 12:44:27.618889 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:27 crc kubenswrapper[4844]: I0126 12:44:27.618900 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:27Z","lastTransitionTime":"2026-01-26T12:44:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:27 crc kubenswrapper[4844]: I0126 12:44:27.629648 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3602fc7-397b-4d73-ab0c-45acc047397b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25282e97c34daf17d2fdac6ba7c074c611ec8df85288f5753bf98a3c1add5afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29xcb\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c964e0f9d13a855d738b028f8a2bed32fb23f4a05f0f0222a7d24e3222f44b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29xcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j7r9j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:27Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:27 crc kubenswrapper[4844]: I0126 12:44:27.643895 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f6ttt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0ad2def-b040-48db-be8a-19f66df2c0f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a065fe1dc7d374bbe86c5012d0f224285e08e6b38a8eeb9fcdc76d684162934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://489205a3983f27152122db352c7a35ccdb12f14bb7b4c2bf3642f5c99699d745\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://489205a3983f27152122db352c7a35ccdb12f14bb7b4c2bf3642f5c99699d745\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc3c4c5df7dc10f646a7fdcf0d70daccac7a5bfc3ec9772a4b6d9aa4305dcc5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc3c4c5df7dc10f646a7fdcf0d70daccac7a5bfc3ec9772a4b6d9aa4305dcc5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74838e6328e419da6cc94d4f2781197e94cbf74aaf06d88c6d0e39eda967fede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74838e6328e419da6cc94d4f2781197e94cbf74aaf06d88c6d0e39eda967fede\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4821615afe8834b864a0c33ad5733eef0b5247da50e1e945cab1c75f9828aa1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4821615afe8834b864a0c33ad5733eef0b5247da50e1e945cab1c75f9828aa1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c800e6a3e1513d6bc5ab5a54c8d918f2c81b5a4d2631fc36f5f7a29fcda3d220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c800e6a3e1513d6bc5ab5a54c8d918f2c81b5a4d2631fc36f5f7a29fcda3d220\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://810435fd1610f5f81dc70f7c0d455c11b414c37e9e392dec8eb3b4668c7820d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://810435fd1610f5f81dc70f7c0d455c11b414c37e9e392dec8eb3b4668c7820d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f6ttt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:27Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:27 crc kubenswrapper[4844]: I0126 12:44:27.667395 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"348a2956-fe61-43b9-858f-ab9c97a2985b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7af767e8f476993661bb886e421e7dd2fbd11c07865d6e56a8595425fe4265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a97ef8aa10641683ab96943f4c1a54452ee34b28c854dd7902fc075a70e7f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de371086349f06d5762291b0be4ba366d69376f96d55c2c781c414d9acaff745\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2cfd5586a3073d55abd3f41fb92679c4f86ea8caeea2d9a8ca42515277750a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a16b2c8da5ad6d42bd5ce77ca31794858e04a05060c5bb0538dd099b1d5dd6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03f4ffe2ba632bdfb8c45133ef267a29fa660e3aa85dfd99934ded60d32772f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5c096d3b202896c2e8ae2acf2cbaf2131e2eba775a4bd481112ebd76d974d84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd671b046221d3e0fd341253eb58c3ed579fdd50efa41f43f93ced7b9212e792\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T12:44:22Z\\\",\\\"message\\\":\\\"0] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0126 12:44:19.429090 6109 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0126 12:44:19.429111 6109 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0126 12:44:19.429152 6109 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 12:44:19.429180 6109 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0126 12:44:19.429205 6109 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 12:44:19.429222 6109 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0126 12:44:19.429154 6109 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0126 12:44:19.429244 6109 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0126 12:44:19.429263 6109 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0126 12:44:19.429233 6109 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0126 12:44:19.429334 6109 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0126 12:44:19.429365 6109 handler.go:208] Removed *v1.Node event handler 7\\\\nI0126 12:44:19.429399 6109 factory.go:656] Stopping watch factory\\\\nI0126 12:44:19.429423 6109 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5c096d3b202896c2e8ae2acf2cbaf2131e2eba775a4bd481112ebd76d974d84\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T12:44:26Z\\\",\\\"message\\\":\\\"ft-network-diagnostics/network-check-target-xd92c in node crc\\\\nI0126 12:44:26.471001 6272 model_client.go:382] Update operations generated as: [{Op:update 
Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]} options:{GoMap:map[iface-id-ver:9d751cbb-f2e2-430d-9754-c882a5e924a5 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {960d98b2-dc64-4e93-a4b6-9b19847af71e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 12:44:26.471052 6272 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nI0126 12:44:26.471060 6272 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0126 12:44:26.471100 6272 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbe92fbc9ce1d23484a9dca7eddb878b59da1fbe669098a82a798d41ee52e61d\\\",\\\"image\\\":\\\"quay.
io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://370a02984d12e09b879a5f3af8d9fe26b40d438d558e51c2d352676f29198a02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://370a02984d12e09b879a5f3af8d9fe26b40d438d558e51c2d352676f29198a02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rlvx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:27Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:27 crc kubenswrapper[4844]: I0126 12:44:27.678301 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7wd9k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"046bb01b-89ef-40e9-bbbd-83b5f2d2cf96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e97a7a3de02627680f94659705ad21fb73a76a1c11f14ebff8ec48ef204ea753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h4zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7wd9k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:27Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:27 crc kubenswrapper[4844]: I0126 12:44:27.699890 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2c6313f-dd44-4db9-a53a-63c8a42efe6c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b82658e4e6ddbf4e3a16f9d569c5cef6a683d401d79b7fa7f55aad8c70e8254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66ffcb99cad7e5aff2bbe4bdf3c0a2365dd91fb40b2382e05ab1f422c6ea2b26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f49e7dd36dc88342bc2e3bc43fc5770fb43572151945cba3867f9ffd7fa451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b51b34a870f5c4045729aa5ee5d5fb85ebd58
fd55a25c5b6effe0ea75ea8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9118772906fa09f9868a34942401b399a842e6718caa3d82b78b2974e1011cca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa0e17b53055bd229c52c53b69e491e0789569e73004101378a34609888bbbc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa0e17b53055bd229c52c53b69e491e0789569e73004101378a34609888bbbc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a99b81e3524dfff792c3e9a274f2460edfe7c29fb27c556fa1c7e427178ccd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a99b81e3524dfff792c3e9a274f2460edfe7c29fb27c556fa1c7e427178ccd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1cd347949870be709220badbf116b4ed8ff9db3962a03361e42f5ff1c61eb5ea\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cd347949870be709220badbf116b4ed8ff9db3962a03361e42f5ff1c61eb5ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:27Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:27 crc kubenswrapper[4844]: I0126 12:44:27.713989 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd60ce73390532e974f85d610708534f2ab6bcc38e93c54516cb783aca1bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:27Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:27 crc kubenswrapper[4844]: I0126 12:44:27.723520 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:27 crc kubenswrapper[4844]: I0126 12:44:27.723570 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:27 crc kubenswrapper[4844]: I0126 12:44:27.723585 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:27 crc kubenswrapper[4844]: I0126 12:44:27.723619 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:27 crc kubenswrapper[4844]: I0126 12:44:27.723632 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:27Z","lastTransitionTime":"2026-01-26T12:44:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:27 crc kubenswrapper[4844]: I0126 12:44:27.728738 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-94bpf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14600b66-6352-4f5e-9c09-eb2548503555\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1eb448e6788cdb87cd78364d442623294abe1263dbedbf9d15e8ca77c06ced2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-456bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"ho
stIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-94bpf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:27Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:27 crc kubenswrapper[4844]: I0126 12:44:27.748254 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5qpr8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04a4b371-44a9-4805-b60f-6f7ba0fac40b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://916da455e82003f3effd3be11a50a90b25232fc7d11d06285e8902a0a3cfd10e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7ckr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b598bd3381ec5062c126c04857c188ab29afc34c39ec94a2cd95b306cdfd00d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7ckr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5qpr8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:27Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:27 crc kubenswrapper[4844]: I0126 12:44:27.765872 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gxnj7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c69496f6-7f67-4cca-9c9f-420e5567b165\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxt6m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxt6m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gxnj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:27Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:27 crc kubenswrapper[4844]: I0126 12:44:27.781247 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aecfc1fc-7b8c-42f4-9a7b-058a0acc9534\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64c5d1e4b1fda825b451c8acde0411694fbf7345723d6c3927905a882a79d62e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e33100e90b0d5a23baf7c27076a19dbca18d3e067be6b5a867e122fac37c1136\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64ae899d3a38090964f3d951fd185537ce4ef75f7512342da01f67767d00417b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9207454bb697abf7b64a37e402b50734ae419605cd0938b514ac4f2a74561ce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b40515c0e8056b01e4d48e259ece9389fbb5db6c0d403f2846f28948ce78b0b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64a2d4054650382dca2f56688356aec5fa475b49958e86dd4597a64f77a82f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64a2d4054650382dca2f56688356aec5fa475b49958e86dd4597a64f77a82f81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:27Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:27 crc kubenswrapper[4844]: I0126 12:44:27.795515 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ea4f5b3-1307-4259-8ee3-1de62370d8ef\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://943771df5941e9c5a48c2d8ee946cc9ac1ee7b00855dfe531fbce9f76450dd36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c79791a1a5ceea7564b6466722f4fb48a6729184724efc0ca2498896def357a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e62359ee71a87d75c73a16de09de5ea48d09e4cf2041f434f38f97002a5f8eee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b1fb5af67b83d9ecc627cbfcc14d011b4b11a5c4652a35c1bd1df6f03ab71f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:27Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:27 crc kubenswrapper[4844]: I0126 12:44:27.809938 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:27Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:27 crc kubenswrapper[4844]: I0126 12:44:27.824731 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://708e5cab2c0d22cab1f1e5c02d0f8afbdc0fcdccc9a56b663f365bfa4c7f8709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f61077567553b104f052a133a6227c3a887430e971721e7998259e7e7ea7edf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:27Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:27 crc kubenswrapper[4844]: I0126 12:44:27.826465 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:27 crc kubenswrapper[4844]: I0126 12:44:27.826577 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:27 crc kubenswrapper[4844]: I0126 12:44:27.826666 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:27 crc kubenswrapper[4844]: I0126 12:44:27.826767 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:27 crc kubenswrapper[4844]: I0126 12:44:27.826838 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:27Z","lastTransitionTime":"2026-01-26T12:44:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:27 crc kubenswrapper[4844]: I0126 12:44:27.838186 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:27Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:27 crc kubenswrapper[4844]: I0126 12:44:27.850927 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:27Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:27 crc kubenswrapper[4844]: I0126 12:44:27.863260 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c6cdfcbb27c9a8c515f38e43d4813f25d2d3781a322e424e09ab047259bd0e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:27Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:27 crc kubenswrapper[4844]: I0126 12:44:27.929474 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:27 
crc kubenswrapper[4844]: I0126 12:44:27.929524 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:27 crc kubenswrapper[4844]: I0126 12:44:27.929534 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:27 crc kubenswrapper[4844]: I0126 12:44:27.929549 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:27 crc kubenswrapper[4844]: I0126 12:44:27.929562 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:27Z","lastTransitionTime":"2026-01-26T12:44:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:27 crc kubenswrapper[4844]: I0126 12:44:27.964124 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" Jan 26 12:44:28 crc kubenswrapper[4844]: I0126 12:44:28.032168 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:28 crc kubenswrapper[4844]: I0126 12:44:28.032250 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:28 crc kubenswrapper[4844]: I0126 12:44:28.032274 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:28 crc kubenswrapper[4844]: I0126 12:44:28.032308 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:28 crc kubenswrapper[4844]: I0126 12:44:28.032331 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:28Z","lastTransitionTime":"2026-01-26T12:44:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:28 crc kubenswrapper[4844]: I0126 12:44:28.134407 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:28 crc kubenswrapper[4844]: I0126 12:44:28.134460 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:28 crc kubenswrapper[4844]: I0126 12:44:28.134471 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:28 crc kubenswrapper[4844]: I0126 12:44:28.134488 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:28 crc kubenswrapper[4844]: I0126 12:44:28.134498 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:28Z","lastTransitionTime":"2026-01-26T12:44:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:28 crc kubenswrapper[4844]: I0126 12:44:28.237320 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:28 crc kubenswrapper[4844]: I0126 12:44:28.237391 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:28 crc kubenswrapper[4844]: I0126 12:44:28.237406 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:28 crc kubenswrapper[4844]: I0126 12:44:28.237428 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:28 crc kubenswrapper[4844]: I0126 12:44:28.237443 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:28Z","lastTransitionTime":"2026-01-26T12:44:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:28 crc kubenswrapper[4844]: I0126 12:44:28.294173 4844 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 20:47:09.082903957 +0000 UTC Jan 26 12:44:28 crc kubenswrapper[4844]: I0126 12:44:28.312811 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gxnj7" Jan 26 12:44:28 crc kubenswrapper[4844]: E0126 12:44:28.313013 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gxnj7" podUID="c69496f6-7f67-4cca-9c9f-420e5567b165" Jan 26 12:44:28 crc kubenswrapper[4844]: I0126 12:44:28.341063 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:28 crc kubenswrapper[4844]: I0126 12:44:28.341106 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:28 crc kubenswrapper[4844]: I0126 12:44:28.341114 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:28 crc kubenswrapper[4844]: I0126 12:44:28.341131 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:28 crc kubenswrapper[4844]: I0126 12:44:28.341141 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:28Z","lastTransitionTime":"2026-01-26T12:44:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:28 crc kubenswrapper[4844]: I0126 12:44:28.444405 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:28 crc kubenswrapper[4844]: I0126 12:44:28.444441 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:28 crc kubenswrapper[4844]: I0126 12:44:28.444453 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:28 crc kubenswrapper[4844]: I0126 12:44:28.444469 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:28 crc kubenswrapper[4844]: I0126 12:44:28.444481 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:28Z","lastTransitionTime":"2026-01-26T12:44:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:28 crc kubenswrapper[4844]: I0126 12:44:28.548449 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:28 crc kubenswrapper[4844]: I0126 12:44:28.548498 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:28 crc kubenswrapper[4844]: I0126 12:44:28.548511 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:28 crc kubenswrapper[4844]: I0126 12:44:28.548527 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:28 crc kubenswrapper[4844]: I0126 12:44:28.548542 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:28Z","lastTransitionTime":"2026-01-26T12:44:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:28 crc kubenswrapper[4844]: I0126 12:44:28.605270 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rlvx4_348a2956-fe61-43b9-858f-ab9c97a2985b/ovnkube-controller/1.log" Jan 26 12:44:28 crc kubenswrapper[4844]: I0126 12:44:28.609790 4844 scope.go:117] "RemoveContainer" containerID="d5c096d3b202896c2e8ae2acf2cbaf2131e2eba775a4bd481112ebd76d974d84" Jan 26 12:44:28 crc kubenswrapper[4844]: E0126 12:44:28.609943 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-rlvx4_openshift-ovn-kubernetes(348a2956-fe61-43b9-858f-ab9c97a2985b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" podUID="348a2956-fe61-43b9-858f-ab9c97a2985b" Jan 26 12:44:28 crc kubenswrapper[4844]: I0126 12:44:28.625169 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd60ce73390532e974f85d610708534f2ab6bcc38e93c54516cb783aca1bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:28Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:28 crc kubenswrapper[4844]: I0126 12:44:28.639201 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-94bpf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"14600b66-6352-4f5e-9c09-eb2548503555\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1eb448e6788cdb87cd78364d442623294abe1263dbedbf9d15e8ca77c06ced2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-456bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-94bpf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:28Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:28 crc kubenswrapper[4844]: I0126 12:44:28.653628 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:28 crc kubenswrapper[4844]: I0126 12:44:28.653679 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:28 crc kubenswrapper[4844]: I0126 12:44:28.653696 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:28 crc kubenswrapper[4844]: I0126 12:44:28.653718 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:28 crc kubenswrapper[4844]: I0126 12:44:28.653732 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:28Z","lastTransitionTime":"2026-01-26T12:44:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:28 crc kubenswrapper[4844]: I0126 12:44:28.653946 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5qpr8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04a4b371-44a9-4805-b60f-6f7ba0fac40b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://916da455e82003f3effd3be11a50a90b25232fc7d11d06285e8902a0a3cfd10e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7ckr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b598bd3381ec5062c126c04857c188ab29afc34c39ec94a2cd95b306cdfd00d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7ckr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5qpr8\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:28Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:28 crc kubenswrapper[4844]: I0126 12:44:28.667066 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gxnj7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c69496f6-7f67-4cca-9c9f-420e5567b165\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxt6m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxt6m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gxnj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:28Z is 
after 2025-08-24T17:21:41Z" Jan 26 12:44:28 crc kubenswrapper[4844]: I0126 12:44:28.692077 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2c6313f-dd44-4db9-a53a-63c8a42efe6c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b82658e4e6ddbf4e3a16f9d569c5cef6a683d401d79b7fa7f55aad8c70e8254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66ffcb99cad7e5aff2bbe4bdf3c0a2365dd91fb40b2382e05ab1f422c6ea2b26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f49e7dd36dc88342bc2e3bc43fc5770fb43572151945cba3867f9ffd7fa451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/va
r/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b51b34a870f5c4045729aa5ee5d5fb85ebd58fd55a25c5b6effe0ea75ea8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9118772906fa09f9868a34942401b399a842e6718caa3d82b78b2974e1011cca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa0e17b53055bd229c52c53b69e491e0789569e73004101378a34609888bbbc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa0e17b53055bd229c52c53b69e491e0789569e73004101378a34609888bbbc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a99b81e3524dfff792c3e9a274f2460edfe7c29fb27c556fa1c7e427178ccd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a99b81e3524dfff792c3e9a274f2460edfe7c29fb27c556fa1c7e427178ccd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\
\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1cd347949870be709220badbf116b4ed8ff9db3962a03361e42f5ff1c61eb5ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cd347949870be709220badbf116b4ed8ff9db3962a03361e42f5ff1c61eb5ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:28Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:28 crc kubenswrapper[4844]: I0126 12:44:28.708932 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ea4f5b3-1307-4259-8ee3-1de62370d8ef\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://943771df5941e9c5a48c2d8ee946cc9ac1ee7b00855dfe531fbce9f76450dd36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c79791a1a5ceea7564b6466722f4fb48a6729184724efc0ca2498896def357a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e62359ee71a87d75c73a16de09de5ea48d09e4cf2041f434f38f97002a5f8eee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b1fb5af67b83d9ecc627cbfcc14d011b4b11a5c4652a35c1bd1df6f03ab71f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:28Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:28 crc kubenswrapper[4844]: I0126 12:44:28.729855 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:28Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:28 crc kubenswrapper[4844]: I0126 12:44:28.743793 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://708e5cab2c0d22cab1f1e5c02d0f8afbdc0fcdccc9a56b663f365bfa4c7f8709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f61077567553b104f052a133a6227c3a887430e971721e7998259e7e7ea7edf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:28Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:28 crc kubenswrapper[4844]: I0126 12:44:28.756040 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:28Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:28 crc kubenswrapper[4844]: I0126 12:44:28.756937 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:28 crc kubenswrapper[4844]: I0126 12:44:28.757002 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:28 crc kubenswrapper[4844]: I0126 12:44:28.757013 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:28 crc kubenswrapper[4844]: I0126 12:44:28.757035 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:28 crc kubenswrapper[4844]: I0126 12:44:28.757051 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:28Z","lastTransitionTime":"2026-01-26T12:44:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:28 crc kubenswrapper[4844]: I0126 12:44:28.770129 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aecfc1fc-7b8c-42f4-9a7b-058a0acc9534\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64c5d1e4b1fda825b451c8acde0411694fbf7345723d6c3927905a882a79d62e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e33100e90b0d5a23baf7c27076a19dbca18d3e067be6b5a867e122fac37c1136\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64ae899d3a38090964f3d951fd185537ce4ef75f7512342da01f67767d00417b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9207454bb697abf7b64a37e402b50734ae419605cd0938b514ac4f2a74561ce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b40515c0e8056b01e4d48e259ece9389fbb5db6c0d403f2846f28948ce78b0b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64a2d4054650382dca2f56688356aec5fa475b49958e86dd4597a64f77a82f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64a2d4054650382dca2f56688356aec5fa475b49958e86dd4597a64f77a82f81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:28Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:28 crc kubenswrapper[4844]: I0126 12:44:28.782494 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c6cdfcbb27c9a8c515f38e43d4813f25d2d3781a322e424e09ab047259bd0e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:28Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:28 crc kubenswrapper[4844]: I0126 12:44:28.794394 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:28Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:28 crc kubenswrapper[4844]: I0126 12:44:28.808489 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zb9kx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"467433a4-64be-4a14-beb2-657370e9865f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a01598dd996bd69b470e2e4833ea4231bc77f0305a3a7fec3f70b8e2b8f01cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v76sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zb9kx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:28Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:28 crc kubenswrapper[4844]: I0126 12:44:28.821257 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3602fc7-397b-4d73-ab0c-45acc047397b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25282e97c34daf17d2fdac6ba7c074c611ec8df85288f5753bf98a3c1add5afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":
\\\"kube-api-access-29xcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c964e0f9d13a855d738b028f8a2bed32fb23f4a05f0f0222a7d24e3222f44b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29xcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j7r9j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:28Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:28 crc kubenswrapper[4844]: I0126 12:44:28.844682 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f6ttt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0ad2def-b040-48db-be8a-19f66df2c0f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a065fe1dc7d374bbe86c5012d0f224285e08e6b38a8eeb9fcdc76d684162934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://489205a3983f27152122db352c7a35ccdb12f14bb7b4c2bf3642f5c99699d745\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://489205a3983f27152122db352c7a35ccdb12f14bb7b4c2bf3642f5c99699d745\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc3c4c5df7dc10f646a7fdcf0d70daccac7a5bfc3ec9772a4b6d9aa4305dcc5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc3c4c5df7dc10f646a7fdcf0d70daccac7a5bfc3ec9772a4b6d9aa4305dcc5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74838e6328e419da6cc94d4f2781197e94cbf74aaf06d88c6d0e39eda967fede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74838e6328e419da6cc94d4f2781197e94cbf74aaf06d88c6d0e39eda967fede\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4821615afe8834b864a0c33ad5733eef0b5247da50e1e945cab1c75f9828aa1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4821615afe8834b864a0c33ad5733eef0b5247da50e1e945cab1c75f9828aa1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c800e6a3e1513d6bc5ab5a54c8d918f2c81b5a4d2631fc36f5f7a29fcda3d220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c800e6a3e1513d6bc5ab5a54c8d918f2c81b5a4d2631fc36f5f7a29fcda3d220\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://810435fd1610f5f81dc70f7c0d455c11b414c37e9e392dec8eb3b4668c7820d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://810435fd1610f5f81dc70f7c0d455c11b414c37e9e392dec8eb3b4668c7820d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f6ttt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:28Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:28 crc kubenswrapper[4844]: I0126 12:44:28.859983 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:28 crc kubenswrapper[4844]: I0126 12:44:28.860029 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:28 crc 
kubenswrapper[4844]: I0126 12:44:28.860042 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:28 crc kubenswrapper[4844]: I0126 12:44:28.860063 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:28 crc kubenswrapper[4844]: I0126 12:44:28.860078 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:28Z","lastTransitionTime":"2026-01-26T12:44:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:28 crc kubenswrapper[4844]: I0126 12:44:28.864653 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"348a2956-fe61-43b9-858f-ab9c97a2985b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7af767e8f476993661bb886e421e7dd2fbd11c07865d6e56a8595425fe4265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a97ef8aa10641683ab96943f4c1a54452ee34b28c854dd7902fc075a70e7f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de371086349f06d5762291b0be4ba366d69376f96d55c2c781c414d9acaff745\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2cfd5586a3073d55abd3f41fb92679c4f86ea8caeea2d9a8ca42515277750a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a16b2c8da5ad6d42bd5ce77ca31794858e04a05060c5bb0538dd099b1d5dd6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03f4ffe2ba632bdfb8c45133ef267a29fa660e3aa85dfd99934ded60d32772f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5c096d3b202896c2e8ae2acf2cbaf2131e2eba7
75a4bd481112ebd76d974d84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5c096d3b202896c2e8ae2acf2cbaf2131e2eba775a4bd481112ebd76d974d84\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T12:44:26Z\\\",\\\"message\\\":\\\"ft-network-diagnostics/network-check-target-xd92c in node crc\\\\nI0126 12:44:26.471001 6272 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]} options:{GoMap:map[iface-id-ver:9d751cbb-f2e2-430d-9754-c882a5e924a5 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {960d98b2-dc64-4e93-a4b6-9b19847af71e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 12:44:26.471052 6272 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nI0126 12:44:26.471060 6272 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0126 12:44:26.471100 6272 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-rlvx4_openshift-ovn-kubernetes(348a2956-fe61-43b9-858f-ab9c97a2985b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbe92fbc9ce1d23484a9dca7eddb878b59da1fbe669098a82a798d41ee52e61d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://370a02984d12e09b879a5f3af8d9fe26b40d438d558e51c2d352676f29198a02\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://370a02984d12e09b879a5f3af8d9fe26b40d438d558e51c2d352676f29198a02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rlvx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:28Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:28 crc kubenswrapper[4844]: I0126 12:44:28.877770 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7wd9k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"046bb01b-89ef-40e9-bbbd-83b5f2d2cf96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e97a7a3de02627680f94659705ad21fb73a76a1c11f14ebff8ec48ef204ea753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h4zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7wd9k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:28Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:28 crc kubenswrapper[4844]: I0126 12:44:28.962607 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:28 crc kubenswrapper[4844]: I0126 12:44:28.962649 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:28 crc kubenswrapper[4844]: I0126 12:44:28.962660 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:28 crc kubenswrapper[4844]: I0126 12:44:28.962673 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:28 crc kubenswrapper[4844]: I0126 12:44:28.962684 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:28Z","lastTransitionTime":"2026-01-26T12:44:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:29 crc kubenswrapper[4844]: I0126 12:44:29.065572 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:29 crc kubenswrapper[4844]: I0126 12:44:29.065656 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:29 crc kubenswrapper[4844]: I0126 12:44:29.065668 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:29 crc kubenswrapper[4844]: I0126 12:44:29.065688 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:29 crc kubenswrapper[4844]: I0126 12:44:29.065702 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:29Z","lastTransitionTime":"2026-01-26T12:44:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:29 crc kubenswrapper[4844]: I0126 12:44:29.168967 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:29 crc kubenswrapper[4844]: I0126 12:44:29.169005 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:29 crc kubenswrapper[4844]: I0126 12:44:29.169014 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:29 crc kubenswrapper[4844]: I0126 12:44:29.169026 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:29 crc kubenswrapper[4844]: I0126 12:44:29.169035 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:29Z","lastTransitionTime":"2026-01-26T12:44:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:29 crc kubenswrapper[4844]: I0126 12:44:29.272169 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:29 crc kubenswrapper[4844]: I0126 12:44:29.272257 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:29 crc kubenswrapper[4844]: I0126 12:44:29.272283 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:29 crc kubenswrapper[4844]: I0126 12:44:29.272314 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:29 crc kubenswrapper[4844]: I0126 12:44:29.272338 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:29Z","lastTransitionTime":"2026-01-26T12:44:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:29 crc kubenswrapper[4844]: I0126 12:44:29.294792 4844 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 02:01:39.722687332 +0000 UTC Jan 26 12:44:29 crc kubenswrapper[4844]: I0126 12:44:29.312143 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:44:29 crc kubenswrapper[4844]: E0126 12:44:29.312255 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 12:44:29 crc kubenswrapper[4844]: I0126 12:44:29.312377 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:44:29 crc kubenswrapper[4844]: I0126 12:44:29.312424 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:44:29 crc kubenswrapper[4844]: E0126 12:44:29.312575 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 12:44:29 crc kubenswrapper[4844]: E0126 12:44:29.312678 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 12:44:29 crc kubenswrapper[4844]: I0126 12:44:29.376032 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:29 crc kubenswrapper[4844]: I0126 12:44:29.376070 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:29 crc kubenswrapper[4844]: I0126 12:44:29.376078 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:29 crc kubenswrapper[4844]: I0126 12:44:29.376110 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:29 crc kubenswrapper[4844]: I0126 12:44:29.376122 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:29Z","lastTransitionTime":"2026-01-26T12:44:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:29 crc kubenswrapper[4844]: I0126 12:44:29.479639 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:29 crc kubenswrapper[4844]: I0126 12:44:29.479674 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:29 crc kubenswrapper[4844]: I0126 12:44:29.479686 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:29 crc kubenswrapper[4844]: I0126 12:44:29.479704 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:29 crc kubenswrapper[4844]: I0126 12:44:29.479716 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:29Z","lastTransitionTime":"2026-01-26T12:44:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:29 crc kubenswrapper[4844]: I0126 12:44:29.582563 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:29 crc kubenswrapper[4844]: I0126 12:44:29.582681 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:29 crc kubenswrapper[4844]: I0126 12:44:29.582700 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:29 crc kubenswrapper[4844]: I0126 12:44:29.582720 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:29 crc kubenswrapper[4844]: I0126 12:44:29.582733 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:29Z","lastTransitionTime":"2026-01-26T12:44:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:29 crc kubenswrapper[4844]: I0126 12:44:29.685984 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:29 crc kubenswrapper[4844]: I0126 12:44:29.686025 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:29 crc kubenswrapper[4844]: I0126 12:44:29.686035 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:29 crc kubenswrapper[4844]: I0126 12:44:29.686050 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:29 crc kubenswrapper[4844]: I0126 12:44:29.686060 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:29Z","lastTransitionTime":"2026-01-26T12:44:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:29 crc kubenswrapper[4844]: I0126 12:44:29.788920 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:29 crc kubenswrapper[4844]: I0126 12:44:29.788951 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:29 crc kubenswrapper[4844]: I0126 12:44:29.788959 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:29 crc kubenswrapper[4844]: I0126 12:44:29.788971 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:29 crc kubenswrapper[4844]: I0126 12:44:29.788981 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:29Z","lastTransitionTime":"2026-01-26T12:44:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:29 crc kubenswrapper[4844]: I0126 12:44:29.892120 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:29 crc kubenswrapper[4844]: I0126 12:44:29.892158 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:29 crc kubenswrapper[4844]: I0126 12:44:29.892167 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:29 crc kubenswrapper[4844]: I0126 12:44:29.892180 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:29 crc kubenswrapper[4844]: I0126 12:44:29.892189 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:29Z","lastTransitionTime":"2026-01-26T12:44:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:29 crc kubenswrapper[4844]: I0126 12:44:29.994426 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:29 crc kubenswrapper[4844]: I0126 12:44:29.994462 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:29 crc kubenswrapper[4844]: I0126 12:44:29.994473 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:29 crc kubenswrapper[4844]: I0126 12:44:29.994488 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:29 crc kubenswrapper[4844]: I0126 12:44:29.994499 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:29Z","lastTransitionTime":"2026-01-26T12:44:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:30 crc kubenswrapper[4844]: I0126 12:44:30.076674 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c69496f6-7f67-4cca-9c9f-420e5567b165-metrics-certs\") pod \"network-metrics-daemon-gxnj7\" (UID: \"c69496f6-7f67-4cca-9c9f-420e5567b165\") " pod="openshift-multus/network-metrics-daemon-gxnj7" Jan 26 12:44:30 crc kubenswrapper[4844]: E0126 12:44:30.076842 4844 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 12:44:30 crc kubenswrapper[4844]: E0126 12:44:30.076914 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c69496f6-7f67-4cca-9c9f-420e5567b165-metrics-certs podName:c69496f6-7f67-4cca-9c9f-420e5567b165 nodeName:}" failed. No retries permitted until 2026-01-26 12:44:38.076894166 +0000 UTC m=+55.010261838 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c69496f6-7f67-4cca-9c9f-420e5567b165-metrics-certs") pod "network-metrics-daemon-gxnj7" (UID: "c69496f6-7f67-4cca-9c9f-420e5567b165") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 12:44:30 crc kubenswrapper[4844]: I0126 12:44:30.097391 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:30 crc kubenswrapper[4844]: I0126 12:44:30.097453 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:30 crc kubenswrapper[4844]: I0126 12:44:30.097473 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:30 crc kubenswrapper[4844]: I0126 12:44:30.097498 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:30 crc kubenswrapper[4844]: I0126 12:44:30.097517 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:30Z","lastTransitionTime":"2026-01-26T12:44:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:30 crc kubenswrapper[4844]: I0126 12:44:30.200825 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:30 crc kubenswrapper[4844]: I0126 12:44:30.200961 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:30 crc kubenswrapper[4844]: I0126 12:44:30.200980 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:30 crc kubenswrapper[4844]: I0126 12:44:30.201002 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:30 crc kubenswrapper[4844]: I0126 12:44:30.201021 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:30Z","lastTransitionTime":"2026-01-26T12:44:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:30 crc kubenswrapper[4844]: I0126 12:44:30.295330 4844 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 10:00:16.575711181 +0000 UTC Jan 26 12:44:30 crc kubenswrapper[4844]: I0126 12:44:30.303920 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:30 crc kubenswrapper[4844]: I0126 12:44:30.304004 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:30 crc kubenswrapper[4844]: I0126 12:44:30.304054 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:30 crc kubenswrapper[4844]: I0126 12:44:30.304079 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:30 crc kubenswrapper[4844]: I0126 12:44:30.304097 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:30Z","lastTransitionTime":"2026-01-26T12:44:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:30 crc kubenswrapper[4844]: I0126 12:44:30.312216 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gxnj7" Jan 26 12:44:30 crc kubenswrapper[4844]: E0126 12:44:30.312328 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gxnj7" podUID="c69496f6-7f67-4cca-9c9f-420e5567b165" Jan 26 12:44:30 crc kubenswrapper[4844]: I0126 12:44:30.407279 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:30 crc kubenswrapper[4844]: I0126 12:44:30.407347 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:30 crc kubenswrapper[4844]: I0126 12:44:30.407364 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:30 crc kubenswrapper[4844]: I0126 12:44:30.407382 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:30 crc kubenswrapper[4844]: I0126 12:44:30.407396 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:30Z","lastTransitionTime":"2026-01-26T12:44:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:30 crc kubenswrapper[4844]: I0126 12:44:30.510021 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:30 crc kubenswrapper[4844]: I0126 12:44:30.510069 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:30 crc kubenswrapper[4844]: I0126 12:44:30.510081 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:30 crc kubenswrapper[4844]: I0126 12:44:30.510096 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:30 crc kubenswrapper[4844]: I0126 12:44:30.510109 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:30Z","lastTransitionTime":"2026-01-26T12:44:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:30 crc kubenswrapper[4844]: I0126 12:44:30.614015 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:30 crc kubenswrapper[4844]: I0126 12:44:30.614052 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:30 crc kubenswrapper[4844]: I0126 12:44:30.614062 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:30 crc kubenswrapper[4844]: I0126 12:44:30.614078 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:30 crc kubenswrapper[4844]: I0126 12:44:30.614088 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:30Z","lastTransitionTime":"2026-01-26T12:44:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:30 crc kubenswrapper[4844]: I0126 12:44:30.717499 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:30 crc kubenswrapper[4844]: I0126 12:44:30.717585 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:30 crc kubenswrapper[4844]: I0126 12:44:30.717635 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:30 crc kubenswrapper[4844]: I0126 12:44:30.717662 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:30 crc kubenswrapper[4844]: I0126 12:44:30.717680 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:30Z","lastTransitionTime":"2026-01-26T12:44:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:30 crc kubenswrapper[4844]: I0126 12:44:30.820420 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:30 crc kubenswrapper[4844]: I0126 12:44:30.820493 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:30 crc kubenswrapper[4844]: I0126 12:44:30.820517 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:30 crc kubenswrapper[4844]: I0126 12:44:30.820548 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:30 crc kubenswrapper[4844]: I0126 12:44:30.820572 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:30Z","lastTransitionTime":"2026-01-26T12:44:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:30 crc kubenswrapper[4844]: I0126 12:44:30.924488 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:30 crc kubenswrapper[4844]: I0126 12:44:30.924564 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:30 crc kubenswrapper[4844]: I0126 12:44:30.924579 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:30 crc kubenswrapper[4844]: I0126 12:44:30.924629 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:30 crc kubenswrapper[4844]: I0126 12:44:30.924648 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:30Z","lastTransitionTime":"2026-01-26T12:44:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:31 crc kubenswrapper[4844]: I0126 12:44:31.012971 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 12:44:31 crc kubenswrapper[4844]: I0126 12:44:31.022238 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 26 12:44:31 crc kubenswrapper[4844]: I0126 12:44:31.027434 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:31 crc kubenswrapper[4844]: I0126 12:44:31.027482 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:31 crc kubenswrapper[4844]: I0126 12:44:31.027493 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:31 crc kubenswrapper[4844]: I0126 12:44:31.027510 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:31 crc kubenswrapper[4844]: I0126 12:44:31.027522 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:31Z","lastTransitionTime":"2026-01-26T12:44:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:31 crc kubenswrapper[4844]: I0126 12:44:31.029484 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ea4f5b3-1307-4259-8ee3-1de62370d8ef\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://943771df5941e9c5a48c2d8ee946cc9ac1ee7b00855dfe531fbce9f76450dd36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c79791a1a5ceea7564b6466722f4fb48a6729184724efc0ca2498896def357a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e62359ee71a87d75c73a16de09de5ea48d09e4cf2041f434f38f97002a5f8eee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b1fb5af67b83d9ecc627cbfcc14d011b4b11a5c4652a35c1bd1df6f03ab71f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:31Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:31 crc kubenswrapper[4844]: I0126 12:44:31.041505 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:31Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:31 crc kubenswrapper[4844]: I0126 12:44:31.052294 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://708e5cab2c0d22cab1f1e5c02d0f8afbdc0fcdccc9a56b663f365bfa4c7f8709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f61077567553b104f052a133a6227c3a887430e971721e7998259e7e7ea7edf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:31Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:31 crc kubenswrapper[4844]: I0126 12:44:31.064845 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:31Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:31 crc kubenswrapper[4844]: I0126 12:44:31.076856 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aecfc1fc-7b8c-42f4-9a7b-058a0acc9534\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64c5d1e4b1fda825b451c8acde0411694fbf7345723d6c3927905a882a79d62e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e33100e90b0d5a23baf7c27076a19dbca18d3e067be6b5a867e122fac37c1136\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64ae899d3a38090964f3d951fd185537ce4ef75f7512342da01f67767d00417b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9207454bb697abf7b64a37e402b50734ae419605cd0938b514ac4f2a74561ce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b40515c0e8056b01e4d48e259ece9389fbb5db6c0d403f2846f28948ce78b0b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64a2d4054650382dca2f56688356aec5fa475b49958e86dd4597a64f77a82f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64a2d4054650382dca2f56688356aec5fa475b49958e86dd4597a64f77a82f81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\
\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:31Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:31 crc kubenswrapper[4844]: I0126 12:44:31.091973 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c6cdfcbb27c9a8c515f38e43d4813f25d2d3781a322e424e09ab047259bd0e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:31Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:31 crc kubenswrapper[4844]: I0126 12:44:31.104240 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:31Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:31 crc kubenswrapper[4844]: I0126 12:44:31.114671 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zb9kx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"467433a4-64be-4a14-beb2-657370e9865f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a01598dd996bd69b470e2e4833ea4231bc77f0305a3a7fec3f70b8e2b8f01cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v76sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zb9kx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:31Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:31 crc kubenswrapper[4844]: I0126 12:44:31.123819 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3602fc7-397b-4d73-ab0c-45acc047397b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25282e97c34daf17d2fdac6ba7c074c611ec8df85288f5753bf98a3c1add5afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29xcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c964e0f9d13a855d738b028f8a2bed32fb23f4a05f0f0222a7d24e3222f44b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29xcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-j7r9j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:31Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:31 crc kubenswrapper[4844]: I0126 12:44:31.129328 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:31 crc kubenswrapper[4844]: I0126 12:44:31.129366 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:31 crc kubenswrapper[4844]: I0126 12:44:31.129378 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:31 crc kubenswrapper[4844]: I0126 12:44:31.129392 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:31 crc kubenswrapper[4844]: I0126 12:44:31.129404 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:31Z","lastTransitionTime":"2026-01-26T12:44:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:31 crc kubenswrapper[4844]: I0126 12:44:31.135618 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f6ttt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0ad2def-b040-48db-be8a-19f66df2c0f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a065fe1dc7d374bbe86c5012d0f224285e08e6b38a8eeb9fcdc76d684162934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://489205a3983f27152122db352c7a35ccdb12f14bb7b4c2bf3642f5c99699d745\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://489205a3983f27152122db352c7a35ccdb12f14bb7b4c2bf3642f5c99699d745\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc3c4c5df7dc10f646a7fdcf0d70daccac7a5bfc3ec9772a4b6d9aa4305dcc5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc3c4c5df7dc10f646a7fdcf0d70daccac7a5bfc3ec9772a4b6d9aa4305dcc5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74838e6328e419da6cc94d4f2781197e94cbf74aaf06d88c6d0e39eda967fede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74838e6328e419da6cc94d4f2781197e94cbf74aaf06d88c6d0e39eda967fede\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4821615afe8834b864a0c33ad5733eef0b5247da50e1e945cab1c75f9828aa1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4821615afe8834b864a0c33ad5733eef0b5247da50e1e945cab1c75f9828aa1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c800e6a3e1513d6bc5ab5a54c8d918f2c81b5a4d2631fc36f5f7a29fcda3d220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c800e6a3e1513d6bc5ab5a54c8d918f2c81b5a4d2631fc36f5f7a29fcda3d220\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://810435fd1610f5f81dc70f7c0d455c11b414c37e9e392dec8eb3b4668c7820d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://810435fd1610f5f81dc70f7c0d455c11b414c37e9e392dec8eb3b4668c7820d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f6ttt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:31Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:31 crc kubenswrapper[4844]: I0126 12:44:31.150741 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"348a2956-fe61-43b9-858f-ab9c97a2985b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7af767e8f476993661bb886e421e7dd2fbd11c07865d6e56a8595425fe4265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a97ef8aa10641683ab96943f4c1a54452ee34b28c854dd7902fc075a70e7f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de371086349f06d5762291b0be4ba366d69376f96d55c2c781c414d9acaff745\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2cfd5586a3073d55abd3f41fb92679c4f86ea8caeea2d9a8ca42515277750a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a16b2c8da5ad6d42bd5ce77ca31794858e04a05060c5bb0538dd099b1d5dd6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03f4ffe2ba632bdfb8c45133ef267a29fa660e3aa85dfd99934ded60d32772f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5c096d3b202896c2e8ae2acf2cbaf2131e2eba775a4bd481112ebd76d974d84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5c096d3b202896c2e8ae2acf2cbaf2131e2eba775a4bd481112ebd76d974d84\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T12:44:26Z\\\",\\\"message\\\":\\\"ft-network-diagnostics/network-check-target-xd92c in node crc\\\\nI0126 12:44:26.471001 6272 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]} options:{GoMap:map[iface-id-ver:9d751cbb-f2e2-430d-9754-c882a5e924a5 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {960d98b2-dc64-4e93-a4b6-9b19847af71e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 12:44:26.471052 6272 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nI0126 12:44:26.471060 6272 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0126 12:44:26.471100 6272 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-rlvx4_openshift-ovn-kubernetes(348a2956-fe61-43b9-858f-ab9c97a2985b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbe92fbc9ce1d23484a9dca7eddb878b59da1fbe669098a82a798d41ee52e61d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://370a02984d12e09b879a5f3af8d9fe26b40d438d558e51c2d352676f29198a02\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://370a02984d12e09b879a5f3af8d9fe26b40d438d558e51c2d352676f29198a02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rlvx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:31Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:31 crc kubenswrapper[4844]: I0126 12:44:31.160684 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7wd9k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"046bb01b-89ef-40e9-bbbd-83b5f2d2cf96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e97a7a3de02627680f94659705ad21fb73a76a1c11f14ebff8ec48ef204ea753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h4zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7wd9k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:31Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:31 crc kubenswrapper[4844]: I0126 12:44:31.170573 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd60ce73390532e974f85d610708534f2ab6bcc38e93c54516cb783aca1bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:31Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:31 crc kubenswrapper[4844]: I0126 12:44:31.179710 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-94bpf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"14600b66-6352-4f5e-9c09-eb2548503555\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1eb448e6788cdb87cd78364d442623294abe1263dbedbf9d15e8ca77c06ced2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-456bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-94bpf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:31Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:31 crc kubenswrapper[4844]: I0126 12:44:31.189911 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5qpr8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"04a4b371-44a9-4805-b60f-6f7ba0fac40b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://916da455e82003f3effd3be11a50a90b25232fc7d11d06285e8902a0a3cfd10e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7ckr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b598bd3381ec5062c126c04857c188ab29afc34c39ec94a2cd95b306cdfd00d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7ckr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5qpr8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:31Z is after 2025-08-24T17:21:41Z" Jan 26 
12:44:31 crc kubenswrapper[4844]: I0126 12:44:31.198435 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gxnj7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c69496f6-7f67-4cca-9c9f-420e5567b165\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxt6m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxt6m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gxnj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:31Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:31 crc kubenswrapper[4844]: I0126 12:44:31.214814 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2c6313f-dd44-4db9-a53a-63c8a42efe6c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b82658e4e6ddbf4e3a16f9d569c5cef6a683d401d79b7fa7f55aad8c70e8254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66ffcb99cad7e5aff2bbe4bdf3c0a2365dd91fb40b2382e05ab1f422c6ea2b26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f49e7dd36dc88342bc2e3bc43fc5770fb43572151945cba3867f9ffd7fa451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b51b34a870f5c4045729aa5ee5d5fb85ebd58
fd55a25c5b6effe0ea75ea8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9118772906fa09f9868a34942401b399a842e6718caa3d82b78b2974e1011cca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa0e17b53055bd229c52c53b69e491e0789569e73004101378a34609888bbbc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa0e17b53055bd229c52c53b69e491e0789569e73004101378a34609888bbbc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a99b81e3524dfff792c3e9a274f2460edfe7c29fb27c556fa1c7e427178ccd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a99b81e3524dfff792c3e9a274f2460edfe7c29fb27c556fa1c7e427178ccd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1cd347949870be709220badbf116b4ed8ff9db3962a03361e42f5ff1c61eb5ea\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cd347949870be709220badbf116b4ed8ff9db3962a03361e42f5ff1c61eb5ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:31Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:31 crc kubenswrapper[4844]: I0126 12:44:31.231389 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:31 crc kubenswrapper[4844]: I0126 12:44:31.231426 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:31 crc kubenswrapper[4844]: I0126 12:44:31.231436 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:31 crc kubenswrapper[4844]: I0126 12:44:31.231451 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:31 crc kubenswrapper[4844]: I0126 12:44:31.231459 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:31Z","lastTransitionTime":"2026-01-26T12:44:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:31 crc kubenswrapper[4844]: I0126 12:44:31.295581 4844 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 05:33:59.407763025 +0000 UTC Jan 26 12:44:31 crc kubenswrapper[4844]: I0126 12:44:31.312130 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:44:31 crc kubenswrapper[4844]: I0126 12:44:31.312170 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:44:31 crc kubenswrapper[4844]: I0126 12:44:31.312208 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:44:31 crc kubenswrapper[4844]: E0126 12:44:31.312348 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 12:44:31 crc kubenswrapper[4844]: E0126 12:44:31.312435 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 12:44:31 crc kubenswrapper[4844]: E0126 12:44:31.312633 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 12:44:31 crc kubenswrapper[4844]: I0126 12:44:31.334519 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:31 crc kubenswrapper[4844]: I0126 12:44:31.334565 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:31 crc kubenswrapper[4844]: I0126 12:44:31.334579 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:31 crc kubenswrapper[4844]: I0126 12:44:31.334609 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:31 crc kubenswrapper[4844]: I0126 12:44:31.334620 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:31Z","lastTransitionTime":"2026-01-26T12:44:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:31 crc kubenswrapper[4844]: I0126 12:44:31.437245 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:31 crc kubenswrapper[4844]: I0126 12:44:31.437296 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:31 crc kubenswrapper[4844]: I0126 12:44:31.437309 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:31 crc kubenswrapper[4844]: I0126 12:44:31.437329 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:31 crc kubenswrapper[4844]: I0126 12:44:31.437342 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:31Z","lastTransitionTime":"2026-01-26T12:44:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:31 crc kubenswrapper[4844]: I0126 12:44:31.540999 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:31 crc kubenswrapper[4844]: I0126 12:44:31.541182 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:31 crc kubenswrapper[4844]: I0126 12:44:31.541193 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:31 crc kubenswrapper[4844]: I0126 12:44:31.541225 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:31 crc kubenswrapper[4844]: I0126 12:44:31.541237 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:31Z","lastTransitionTime":"2026-01-26T12:44:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:31 crc kubenswrapper[4844]: I0126 12:44:31.643910 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:31 crc kubenswrapper[4844]: I0126 12:44:31.644209 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:31 crc kubenswrapper[4844]: I0126 12:44:31.644218 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:31 crc kubenswrapper[4844]: I0126 12:44:31.644233 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:31 crc kubenswrapper[4844]: I0126 12:44:31.644242 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:31Z","lastTransitionTime":"2026-01-26T12:44:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:31 crc kubenswrapper[4844]: I0126 12:44:31.747655 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:31 crc kubenswrapper[4844]: I0126 12:44:31.747710 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:31 crc kubenswrapper[4844]: I0126 12:44:31.747719 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:31 crc kubenswrapper[4844]: I0126 12:44:31.747736 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:31 crc kubenswrapper[4844]: I0126 12:44:31.747780 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:31Z","lastTransitionTime":"2026-01-26T12:44:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:31 crc kubenswrapper[4844]: I0126 12:44:31.850854 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:31 crc kubenswrapper[4844]: I0126 12:44:31.850914 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:31 crc kubenswrapper[4844]: I0126 12:44:31.850933 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:31 crc kubenswrapper[4844]: I0126 12:44:31.850955 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:31 crc kubenswrapper[4844]: I0126 12:44:31.850971 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:31Z","lastTransitionTime":"2026-01-26T12:44:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:31 crc kubenswrapper[4844]: I0126 12:44:31.953942 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:31 crc kubenswrapper[4844]: I0126 12:44:31.954004 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:31 crc kubenswrapper[4844]: I0126 12:44:31.954021 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:31 crc kubenswrapper[4844]: I0126 12:44:31.954046 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:31 crc kubenswrapper[4844]: I0126 12:44:31.954064 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:31Z","lastTransitionTime":"2026-01-26T12:44:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:32 crc kubenswrapper[4844]: I0126 12:44:32.056436 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:32 crc kubenswrapper[4844]: I0126 12:44:32.056473 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:32 crc kubenswrapper[4844]: I0126 12:44:32.056483 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:32 crc kubenswrapper[4844]: I0126 12:44:32.056499 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:32 crc kubenswrapper[4844]: I0126 12:44:32.056510 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:32Z","lastTransitionTime":"2026-01-26T12:44:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:32 crc kubenswrapper[4844]: I0126 12:44:32.158700 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:32 crc kubenswrapper[4844]: I0126 12:44:32.158735 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:32 crc kubenswrapper[4844]: I0126 12:44:32.158745 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:32 crc kubenswrapper[4844]: I0126 12:44:32.158773 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:32 crc kubenswrapper[4844]: I0126 12:44:32.158783 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:32Z","lastTransitionTime":"2026-01-26T12:44:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:32 crc kubenswrapper[4844]: I0126 12:44:32.261516 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:32 crc kubenswrapper[4844]: I0126 12:44:32.261564 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:32 crc kubenswrapper[4844]: I0126 12:44:32.261579 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:32 crc kubenswrapper[4844]: I0126 12:44:32.261617 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:32 crc kubenswrapper[4844]: I0126 12:44:32.261628 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:32Z","lastTransitionTime":"2026-01-26T12:44:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:32 crc kubenswrapper[4844]: I0126 12:44:32.295856 4844 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 11:47:03.169682621 +0000 UTC Jan 26 12:44:32 crc kubenswrapper[4844]: I0126 12:44:32.312544 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gxnj7" Jan 26 12:44:32 crc kubenswrapper[4844]: E0126 12:44:32.312727 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gxnj7" podUID="c69496f6-7f67-4cca-9c9f-420e5567b165" Jan 26 12:44:32 crc kubenswrapper[4844]: I0126 12:44:32.364683 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:32 crc kubenswrapper[4844]: I0126 12:44:32.364727 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:32 crc kubenswrapper[4844]: I0126 12:44:32.364738 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:32 crc kubenswrapper[4844]: I0126 12:44:32.364755 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:32 crc kubenswrapper[4844]: I0126 12:44:32.364768 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:32Z","lastTransitionTime":"2026-01-26T12:44:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:32 crc kubenswrapper[4844]: I0126 12:44:32.467348 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:32 crc kubenswrapper[4844]: I0126 12:44:32.467398 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:32 crc kubenswrapper[4844]: I0126 12:44:32.467414 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:32 crc kubenswrapper[4844]: I0126 12:44:32.467430 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:32 crc kubenswrapper[4844]: I0126 12:44:32.467442 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:32Z","lastTransitionTime":"2026-01-26T12:44:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:32 crc kubenswrapper[4844]: I0126 12:44:32.571674 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:32 crc kubenswrapper[4844]: I0126 12:44:32.571738 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:32 crc kubenswrapper[4844]: I0126 12:44:32.571751 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:32 crc kubenswrapper[4844]: I0126 12:44:32.571777 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:32 crc kubenswrapper[4844]: I0126 12:44:32.571793 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:32Z","lastTransitionTime":"2026-01-26T12:44:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:32 crc kubenswrapper[4844]: I0126 12:44:32.673970 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:32 crc kubenswrapper[4844]: I0126 12:44:32.674019 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:32 crc kubenswrapper[4844]: I0126 12:44:32.674038 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:32 crc kubenswrapper[4844]: I0126 12:44:32.674056 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:32 crc kubenswrapper[4844]: I0126 12:44:32.674067 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:32Z","lastTransitionTime":"2026-01-26T12:44:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:32 crc kubenswrapper[4844]: I0126 12:44:32.777506 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:32 crc kubenswrapper[4844]: I0126 12:44:32.777557 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:32 crc kubenswrapper[4844]: I0126 12:44:32.777569 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:32 crc kubenswrapper[4844]: I0126 12:44:32.777586 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:32 crc kubenswrapper[4844]: I0126 12:44:32.777610 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:32Z","lastTransitionTime":"2026-01-26T12:44:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:32 crc kubenswrapper[4844]: I0126 12:44:32.881868 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:32 crc kubenswrapper[4844]: I0126 12:44:32.881939 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:32 crc kubenswrapper[4844]: I0126 12:44:32.881955 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:32 crc kubenswrapper[4844]: I0126 12:44:32.881979 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:32 crc kubenswrapper[4844]: I0126 12:44:32.881995 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:32Z","lastTransitionTime":"2026-01-26T12:44:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:32 crc kubenswrapper[4844]: I0126 12:44:32.984881 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:32 crc kubenswrapper[4844]: I0126 12:44:32.984943 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:32 crc kubenswrapper[4844]: I0126 12:44:32.984952 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:32 crc kubenswrapper[4844]: I0126 12:44:32.984974 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:32 crc kubenswrapper[4844]: I0126 12:44:32.984985 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:32Z","lastTransitionTime":"2026-01-26T12:44:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:33 crc kubenswrapper[4844]: I0126 12:44:33.087704 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:33 crc kubenswrapper[4844]: I0126 12:44:33.087761 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:33 crc kubenswrapper[4844]: I0126 12:44:33.087777 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:33 crc kubenswrapper[4844]: I0126 12:44:33.087795 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:33 crc kubenswrapper[4844]: I0126 12:44:33.087808 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:33Z","lastTransitionTime":"2026-01-26T12:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:33 crc kubenswrapper[4844]: I0126 12:44:33.191419 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:33 crc kubenswrapper[4844]: I0126 12:44:33.191487 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:33 crc kubenswrapper[4844]: I0126 12:44:33.191506 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:33 crc kubenswrapper[4844]: I0126 12:44:33.191533 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:33 crc kubenswrapper[4844]: I0126 12:44:33.191554 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:33Z","lastTransitionTime":"2026-01-26T12:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:33 crc kubenswrapper[4844]: I0126 12:44:33.214728 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:44:33 crc kubenswrapper[4844]: I0126 12:44:33.214961 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:44:33 crc kubenswrapper[4844]: E0126 12:44:33.215046 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:45:05.214956067 +0000 UTC m=+82.148323719 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:44:33 crc kubenswrapper[4844]: E0126 12:44:33.215072 4844 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 12:44:33 crc kubenswrapper[4844]: E0126 12:44:33.215152 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 12:45:05.215130861 +0000 UTC m=+82.148498473 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 12:44:33 crc kubenswrapper[4844]: I0126 12:44:33.215148 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:44:33 crc kubenswrapper[4844]: E0126 12:44:33.215354 4844 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 12:44:33 crc kubenswrapper[4844]: E0126 12:44:33.215483 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 12:45:05.215459289 +0000 UTC m=+82.148827111 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 12:44:33 crc kubenswrapper[4844]: I0126 12:44:33.294173 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:33 crc kubenswrapper[4844]: I0126 12:44:33.294256 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:33 crc kubenswrapper[4844]: I0126 12:44:33.294270 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:33 crc kubenswrapper[4844]: I0126 12:44:33.294290 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:33 crc kubenswrapper[4844]: I0126 12:44:33.294304 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:33Z","lastTransitionTime":"2026-01-26T12:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:33 crc kubenswrapper[4844]: I0126 12:44:33.296394 4844 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 14:06:26.449021997 +0000 UTC Jan 26 12:44:33 crc kubenswrapper[4844]: I0126 12:44:33.314215 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:44:33 crc kubenswrapper[4844]: E0126 12:44:33.314359 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 12:44:33 crc kubenswrapper[4844]: I0126 12:44:33.314551 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:44:33 crc kubenswrapper[4844]: I0126 12:44:33.315033 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:44:33 crc kubenswrapper[4844]: E0126 12:44:33.315173 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 12:44:33 crc kubenswrapper[4844]: E0126 12:44:33.315025 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 12:44:33 crc kubenswrapper[4844]: I0126 12:44:33.315839 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:44:33 crc kubenswrapper[4844]: I0126 12:44:33.315885 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:44:33 crc kubenswrapper[4844]: E0126 12:44:33.315990 4844 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 12:44:33 crc kubenswrapper[4844]: E0126 12:44:33.316007 4844 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 12:44:33 crc kubenswrapper[4844]: E0126 12:44:33.316016 4844 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 12:44:33 crc kubenswrapper[4844]: E0126 12:44:33.316058 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 12:45:05.316045694 +0000 UTC m=+82.249413306 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 12:44:33 crc kubenswrapper[4844]: E0126 12:44:33.316103 4844 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 12:44:33 crc kubenswrapper[4844]: E0126 12:44:33.316154 4844 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 12:44:33 crc kubenswrapper[4844]: E0126 12:44:33.316170 4844 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 12:44:33 crc kubenswrapper[4844]: E0126 12:44:33.316240 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 12:45:05.316215499 +0000 UTC m=+82.249583111 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 12:44:33 crc kubenswrapper[4844]: I0126 12:44:33.330560 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gxnj7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c69496f6-7f67-4cca-9c9f-420e5567b165\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxt6m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxt6m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gxnj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:33Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:33 crc kubenswrapper[4844]: I0126 12:44:33.357165 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2c6313f-dd44-4db9-a53a-63c8a42efe6c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b82658e4e6ddbf4e3a16f9d569c5cef6a683d401d79b7fa7f55aad8c70e8254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66ffcb99cad7e5aff2bbe4bdf3c0a2365dd91fb40b2382e05ab1f422c6ea2b26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f49e7dd36dc88342bc2e3bc43fc5770fb43572151945cba3867f9ffd7fa451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b51b34a870f5c4045729aa5ee5d5fb85ebd58
fd55a25c5b6effe0ea75ea8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9118772906fa09f9868a34942401b399a842e6718caa3d82b78b2974e1011cca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa0e17b53055bd229c52c53b69e491e0789569e73004101378a34609888bbbc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa0e17b53055bd229c52c53b69e491e0789569e73004101378a34609888bbbc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a99b81e3524dfff792c3e9a274f2460edfe7c29fb27c556fa1c7e427178ccd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a99b81e3524dfff792c3e9a274f2460edfe7c29fb27c556fa1c7e427178ccd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1cd347949870be709220badbf116b4ed8ff9db3962a03361e42f5ff1c61eb5ea\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cd347949870be709220badbf116b4ed8ff9db3962a03361e42f5ff1c61eb5ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:33Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:33 crc kubenswrapper[4844]: I0126 12:44:33.376112 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd60ce73390532e974f85d610708534f2ab6bcc38e93c54516cb783aca1bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:33Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:33 crc kubenswrapper[4844]: I0126 12:44:33.390058 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-94bpf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14600b66-6352-4f5e-9c09-eb2548503555\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1eb448e6788cdb87cd78364d442623294abe1263dbedbf9d15e8ca77c06ced2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-456bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-94bpf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:33Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:33 crc kubenswrapper[4844]: I0126 12:44:33.395922 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:33 crc kubenswrapper[4844]: I0126 12:44:33.395998 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:33 crc kubenswrapper[4844]: I0126 12:44:33.396013 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:33 crc kubenswrapper[4844]: I0126 12:44:33.396031 4844 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:33 crc kubenswrapper[4844]: I0126 12:44:33.396044 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:33Z","lastTransitionTime":"2026-01-26T12:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:33 crc kubenswrapper[4844]: I0126 12:44:33.403789 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5qpr8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04a4b371-44a9-4805-b60f-6f7ba0fac40b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://916da455e82003f3effd3be11a50a90b25232fc7d11d06285e8902a0a3cfd10e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7ckr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b598bd3381ec5062c126c04857c188ab29afc34c39ec94a2cd95b306cdfd00d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/
secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7ckr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5qpr8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:33Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:33 crc kubenswrapper[4844]: I0126 12:44:33.418933 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://708e5cab2c0d22cab1f1e5c02d0f8afbdc0fcdccc9a56b663f365bfa4c7f8709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f61077567553b104f052a133a6227c3a887430e971721e7998259e7e7ea7edf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run
/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:33Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:33 crc kubenswrapper[4844]: I0126 12:44:33.431927 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:33Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:33 crc kubenswrapper[4844]: I0126 12:44:33.448671 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aecfc1fc-7b8c-42f4-9a7b-058a0acc9534\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64c5d1e4b1fda825b451c8acde0411694fbf7345723d6c3927905a882a79d62e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e33100e90b0d5a23baf7c27076a19dbca18d3e067be6b5a867e122fac37c1136\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64ae899d3a38090964f3d951fd185537ce4ef75f7512342da01f67767d00417b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9207454bb697abf7b64a37e402b50734ae419605cd0938b514ac4f2a74561ce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b40515c0e8056b01e4d48e259ece9389fbb5db6c0d403f2846f28948ce78b0b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64a2d4054650382dca2f56688356aec5fa475b49958e86dd4597a64f77a82f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64a2d4054650382dca2f56688356aec5fa475b49958e86dd4597a64f77a82f81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\
\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:33Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:33 crc kubenswrapper[4844]: I0126 12:44:33.465246 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ea4f5b3-1307-4259-8ee3-1de62370d8ef\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://943771df5941e9c5a48c2d8ee946cc9ac1ee7b00855dfe531fbce9f76450dd36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c79791a1a5ceea7564b6466722f4fb48a6729184724efc0ca2498896def357a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e62359ee71a87d75c73a16de09de5ea48d09e4cf2041f434f38f97002a5f8eee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578b
c18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b1fb5af67b83d9ecc627cbfcc14d011b4b11a5c4652a35c1bd1df6f03ab71f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:33Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:33 crc kubenswrapper[4844]: I0126 12:44:33.477684 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"012dd78d-465b-41aa-b845-5cd178650e56\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0a80313396de8bb91760bdf2477da9d233e2387d1ac6addcce62acc4578772c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11638189cf49baf0798a3c7a229b67e05eedf2292d79f884a990a091f21a61c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb28b05d43134d8c4f89d83cd620973c937fd16347910ebf056026f0a3708a92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a49b5118144d900070aa5f3fccea16c73f8e2451f3636fdab388c957d4822e5\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a49b5118144d900070aa5f3fccea16c73f8e2451f3636fdab388c957d4822e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:33Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:33 crc kubenswrapper[4844]: I0126 12:44:33.495685 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:33Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:33 crc kubenswrapper[4844]: I0126 12:44:33.498708 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:33 crc kubenswrapper[4844]: I0126 12:44:33.498750 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:33 crc kubenswrapper[4844]: I0126 12:44:33.498761 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:33 crc kubenswrapper[4844]: I0126 12:44:33.498780 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:33 crc kubenswrapper[4844]: I0126 12:44:33.498792 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:33Z","lastTransitionTime":"2026-01-26T12:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:33 crc kubenswrapper[4844]: I0126 12:44:33.509762 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:33Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:33 crc kubenswrapper[4844]: I0126 12:44:33.522813 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c6cdfcbb27c9a8c515f38e43d4813f25d2d3781a322e424e09ab047259bd0e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:33Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:33 crc kubenswrapper[4844]: I0126 12:44:33.552062 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"348a2956-fe61-43b9-858f-ab9c97a2985b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7af767e8f476993661bb886e421e7dd2fbd11c07865d6e56a8595425fe4265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a97ef8aa10641683ab96943f4c1a54452ee34b28c854dd7902fc075a70e7f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de371086349f06d5762291b0be4ba366d69376f96d55c2c781c414d9acaff745\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2cfd5586a3073d55abd3f41fb92679c4f86ea8caeea2d9a8ca42515277750a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a16b2c8da5ad6d42bd5ce77ca31794858e04a05060c5bb0538dd099b1d5dd6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03f4ffe2ba632bdfb8c45133ef267a29fa660e3aa85dfd99934ded60d32772f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5c096d3b202896c2e8ae2acf2cbaf2131e2eba7
75a4bd481112ebd76d974d84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5c096d3b202896c2e8ae2acf2cbaf2131e2eba775a4bd481112ebd76d974d84\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T12:44:26Z\\\",\\\"message\\\":\\\"ft-network-diagnostics/network-check-target-xd92c in node crc\\\\nI0126 12:44:26.471001 6272 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]} options:{GoMap:map[iface-id-ver:9d751cbb-f2e2-430d-9754-c882a5e924a5 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {960d98b2-dc64-4e93-a4b6-9b19847af71e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 12:44:26.471052 6272 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nI0126 12:44:26.471060 6272 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0126 12:44:26.471100 6272 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-rlvx4_openshift-ovn-kubernetes(348a2956-fe61-43b9-858f-ab9c97a2985b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbe92fbc9ce1d23484a9dca7eddb878b59da1fbe669098a82a798d41ee52e61d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://370a02984d12e09b879a5f3af8d9fe26b40d438d558e51c2d352676f29198a02\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://370a02984d12e09b879a5f3af8d9fe26b40d438d558e51c2d352676f29198a02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rlvx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:33Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:33 crc kubenswrapper[4844]: I0126 12:44:33.565642 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7wd9k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"046bb01b-89ef-40e9-bbbd-83b5f2d2cf96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e97a7a3de02627680f94659705ad21fb73a76a1c11f14ebff8ec48ef204ea753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h4zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7wd9k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:33Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:33 crc kubenswrapper[4844]: I0126 12:44:33.580352 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zb9kx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"467433a4-64be-4a14-beb2-657370e9865f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a01598dd996bd69b470e2e4833ea4231bc77f0305a3a7fec3f70b8e2b8f01cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\
\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v76sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zb9kx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:33Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:33 crc kubenswrapper[4844]: I0126 12:44:33.593938 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3602fc7-397b-4d73-ab0c-45acc047397b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25282e97c34daf17d2fdac6ba7c074c611ec8df85288f5753bf98a3c1add5afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29xcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c964e0f9d13a855d738b028f8a2bed32fb23f4a05f0f0222a7d24e3222f44b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256
:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29xcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j7r9j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:33Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:33 crc kubenswrapper[4844]: I0126 12:44:33.601848 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:33 crc kubenswrapper[4844]: I0126 12:44:33.601896 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:33 crc kubenswrapper[4844]: I0126 12:44:33.601909 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:33 crc kubenswrapper[4844]: I0126 12:44:33.601930 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:33 crc kubenswrapper[4844]: I0126 12:44:33.601942 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:33Z","lastTransitionTime":"2026-01-26T12:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:33 crc kubenswrapper[4844]: I0126 12:44:33.612710 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f6ttt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0ad2def-b040-48db-be8a-19f66df2c0f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a065fe1dc7d374bbe86c5012d0f224285e08e6b38a8eeb9fcdc76d684162934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://489205a3983f27152122db352c7a35ccdb12f14bb7b4c2bf3642f5c99699d745\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://489205a3983f27152122db352c7a35ccdb12f14bb7b4c2bf3642f5c99699d745\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc3c4c5df7dc10f646a7fdcf0d70daccac7a5bfc3ec9772a4b6d9aa4305dcc5c\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc3c4c5df7dc10f646a7fdcf0d70daccac7a5bfc3ec9772a4b6d9aa4305dcc5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74838e6328e419da6cc94d4f2781197e94cbf74aaf06d88c6d0e39eda967fede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74838e6328e419da6cc94d4f2781197e94cbf74aaf06d88c6d0e39eda967fede\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4821615afe8834b864a0c33ad5733eef0b5247da50e1e945cab1c75f9828aa1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4821615afe8834b864a0c33ad5733eef0b5247da50e1e945cab1c75f9828aa1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c800e6a3e1513d6bc5ab5a54c8d918f2c81b5a4d2631fc36f5f7a29fcda3d220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c800e6a3e1513d6bc5ab5a54c8d918f2c81b5a4d2631fc36f5f7a29fcda3d220\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://810435fd1610f5f81dc70f7c0d455c11b414c37e9e392dec8eb3b4668c7820d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://810435fd1610f5f81dc70f7c0d455c11b414c37e9e392dec8eb3b4668c7820d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f6ttt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:33Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:33 crc kubenswrapper[4844]: I0126 12:44:33.704727 4844 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:33 crc kubenswrapper[4844]: I0126 12:44:33.704780 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:33 crc kubenswrapper[4844]: I0126 12:44:33.704789 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:33 crc kubenswrapper[4844]: I0126 12:44:33.704808 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:33 crc kubenswrapper[4844]: I0126 12:44:33.704819 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:33Z","lastTransitionTime":"2026-01-26T12:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:33 crc kubenswrapper[4844]: I0126 12:44:33.807880 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:33 crc kubenswrapper[4844]: I0126 12:44:33.807932 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:33 crc kubenswrapper[4844]: I0126 12:44:33.807943 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:33 crc kubenswrapper[4844]: I0126 12:44:33.807960 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:33 crc kubenswrapper[4844]: I0126 12:44:33.807971 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:33Z","lastTransitionTime":"2026-01-26T12:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:33 crc kubenswrapper[4844]: I0126 12:44:33.910904 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:33 crc kubenswrapper[4844]: I0126 12:44:33.910978 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:33 crc kubenswrapper[4844]: I0126 12:44:33.910997 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:33 crc kubenswrapper[4844]: I0126 12:44:33.911023 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:33 crc kubenswrapper[4844]: I0126 12:44:33.911042 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:33Z","lastTransitionTime":"2026-01-26T12:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:34 crc kubenswrapper[4844]: I0126 12:44:34.014825 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:34 crc kubenswrapper[4844]: I0126 12:44:34.014888 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:34 crc kubenswrapper[4844]: I0126 12:44:34.014901 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:34 crc kubenswrapper[4844]: I0126 12:44:34.014923 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:34 crc kubenswrapper[4844]: I0126 12:44:34.014936 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:34Z","lastTransitionTime":"2026-01-26T12:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:34 crc kubenswrapper[4844]: I0126 12:44:34.118486 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:34 crc kubenswrapper[4844]: I0126 12:44:34.118550 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:34 crc kubenswrapper[4844]: I0126 12:44:34.118652 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:34 crc kubenswrapper[4844]: I0126 12:44:34.118701 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:34 crc kubenswrapper[4844]: I0126 12:44:34.118731 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:34Z","lastTransitionTime":"2026-01-26T12:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:34 crc kubenswrapper[4844]: I0126 12:44:34.221855 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:34 crc kubenswrapper[4844]: I0126 12:44:34.221939 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:34 crc kubenswrapper[4844]: I0126 12:44:34.221960 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:34 crc kubenswrapper[4844]: I0126 12:44:34.221992 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:34 crc kubenswrapper[4844]: I0126 12:44:34.222016 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:34Z","lastTransitionTime":"2026-01-26T12:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:34 crc kubenswrapper[4844]: I0126 12:44:34.296628 4844 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 20:21:33.607808365 +0000 UTC Jan 26 12:44:34 crc kubenswrapper[4844]: I0126 12:44:34.312259 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gxnj7" Jan 26 12:44:34 crc kubenswrapper[4844]: E0126 12:44:34.312437 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gxnj7" podUID="c69496f6-7f67-4cca-9c9f-420e5567b165" Jan 26 12:44:34 crc kubenswrapper[4844]: I0126 12:44:34.325856 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:34 crc kubenswrapper[4844]: I0126 12:44:34.325924 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:34 crc kubenswrapper[4844]: I0126 12:44:34.325937 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:34 crc kubenswrapper[4844]: I0126 12:44:34.325955 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:34 crc kubenswrapper[4844]: I0126 12:44:34.325969 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:34Z","lastTransitionTime":"2026-01-26T12:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:34 crc kubenswrapper[4844]: I0126 12:44:34.429101 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:34 crc kubenswrapper[4844]: I0126 12:44:34.429162 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:34 crc kubenswrapper[4844]: I0126 12:44:34.429177 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:34 crc kubenswrapper[4844]: I0126 12:44:34.429200 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:34 crc kubenswrapper[4844]: I0126 12:44:34.429218 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:34Z","lastTransitionTime":"2026-01-26T12:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:34 crc kubenswrapper[4844]: I0126 12:44:34.532765 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:34 crc kubenswrapper[4844]: I0126 12:44:34.532821 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:34 crc kubenswrapper[4844]: I0126 12:44:34.532843 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:34 crc kubenswrapper[4844]: I0126 12:44:34.532874 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:34 crc kubenswrapper[4844]: I0126 12:44:34.532897 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:34Z","lastTransitionTime":"2026-01-26T12:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:34 crc kubenswrapper[4844]: I0126 12:44:34.635920 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:34 crc kubenswrapper[4844]: I0126 12:44:34.635986 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:34 crc kubenswrapper[4844]: I0126 12:44:34.635998 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:34 crc kubenswrapper[4844]: I0126 12:44:34.636016 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:34 crc kubenswrapper[4844]: I0126 12:44:34.636026 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:34Z","lastTransitionTime":"2026-01-26T12:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:34 crc kubenswrapper[4844]: I0126 12:44:34.739799 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:34 crc kubenswrapper[4844]: I0126 12:44:34.739896 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:34 crc kubenswrapper[4844]: I0126 12:44:34.739921 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:34 crc kubenswrapper[4844]: I0126 12:44:34.739958 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:34 crc kubenswrapper[4844]: I0126 12:44:34.739990 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:34Z","lastTransitionTime":"2026-01-26T12:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:34 crc kubenswrapper[4844]: I0126 12:44:34.844250 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:34 crc kubenswrapper[4844]: I0126 12:44:34.844292 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:34 crc kubenswrapper[4844]: I0126 12:44:34.844302 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:34 crc kubenswrapper[4844]: I0126 12:44:34.844341 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:34 crc kubenswrapper[4844]: I0126 12:44:34.844351 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:34Z","lastTransitionTime":"2026-01-26T12:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:34 crc kubenswrapper[4844]: I0126 12:44:34.948635 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:34 crc kubenswrapper[4844]: I0126 12:44:34.948722 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:34 crc kubenswrapper[4844]: I0126 12:44:34.948738 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:34 crc kubenswrapper[4844]: I0126 12:44:34.948761 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:34 crc kubenswrapper[4844]: I0126 12:44:34.948776 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:34Z","lastTransitionTime":"2026-01-26T12:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:35 crc kubenswrapper[4844]: I0126 12:44:35.051928 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:35 crc kubenswrapper[4844]: I0126 12:44:35.051974 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:35 crc kubenswrapper[4844]: I0126 12:44:35.051990 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:35 crc kubenswrapper[4844]: I0126 12:44:35.052011 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:35 crc kubenswrapper[4844]: I0126 12:44:35.052026 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:35Z","lastTransitionTime":"2026-01-26T12:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:35 crc kubenswrapper[4844]: I0126 12:44:35.155358 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:35 crc kubenswrapper[4844]: I0126 12:44:35.155410 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:35 crc kubenswrapper[4844]: I0126 12:44:35.155425 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:35 crc kubenswrapper[4844]: I0126 12:44:35.155444 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:35 crc kubenswrapper[4844]: I0126 12:44:35.155459 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:35Z","lastTransitionTime":"2026-01-26T12:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:35 crc kubenswrapper[4844]: I0126 12:44:35.259218 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:35 crc kubenswrapper[4844]: I0126 12:44:35.259275 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:35 crc kubenswrapper[4844]: I0126 12:44:35.259292 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:35 crc kubenswrapper[4844]: I0126 12:44:35.259316 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:35 crc kubenswrapper[4844]: I0126 12:44:35.259336 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:35Z","lastTransitionTime":"2026-01-26T12:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:35 crc kubenswrapper[4844]: I0126 12:44:35.297779 4844 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 15:35:33.541972782 +0000 UTC Jan 26 12:44:35 crc kubenswrapper[4844]: I0126 12:44:35.312331 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:44:35 crc kubenswrapper[4844]: I0126 12:44:35.312422 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:44:35 crc kubenswrapper[4844]: E0126 12:44:35.312578 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 12:44:35 crc kubenswrapper[4844]: I0126 12:44:35.313019 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:44:35 crc kubenswrapper[4844]: E0126 12:44:35.313126 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 12:44:35 crc kubenswrapper[4844]: E0126 12:44:35.313179 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 12:44:35 crc kubenswrapper[4844]: I0126 12:44:35.361804 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:35 crc kubenswrapper[4844]: I0126 12:44:35.362128 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:35 crc kubenswrapper[4844]: I0126 12:44:35.362158 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:35 crc kubenswrapper[4844]: I0126 12:44:35.362187 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:35 crc kubenswrapper[4844]: I0126 12:44:35.362208 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:35Z","lastTransitionTime":"2026-01-26T12:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:35 crc kubenswrapper[4844]: I0126 12:44:35.465146 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:35 crc kubenswrapper[4844]: I0126 12:44:35.465213 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:35 crc kubenswrapper[4844]: I0126 12:44:35.465228 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:35 crc kubenswrapper[4844]: I0126 12:44:35.465252 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:35 crc kubenswrapper[4844]: I0126 12:44:35.465265 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:35Z","lastTransitionTime":"2026-01-26T12:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:35 crc kubenswrapper[4844]: I0126 12:44:35.568032 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:35 crc kubenswrapper[4844]: I0126 12:44:35.568122 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:35 crc kubenswrapper[4844]: I0126 12:44:35.568136 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:35 crc kubenswrapper[4844]: I0126 12:44:35.568165 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:35 crc kubenswrapper[4844]: I0126 12:44:35.568184 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:35Z","lastTransitionTime":"2026-01-26T12:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:35 crc kubenswrapper[4844]: I0126 12:44:35.671168 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:35 crc kubenswrapper[4844]: I0126 12:44:35.671230 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:35 crc kubenswrapper[4844]: I0126 12:44:35.671252 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:35 crc kubenswrapper[4844]: I0126 12:44:35.671278 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:35 crc kubenswrapper[4844]: I0126 12:44:35.671296 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:35Z","lastTransitionTime":"2026-01-26T12:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:35 crc kubenswrapper[4844]: I0126 12:44:35.699196 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:35 crc kubenswrapper[4844]: I0126 12:44:35.699259 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:35 crc kubenswrapper[4844]: I0126 12:44:35.699277 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:35 crc kubenswrapper[4844]: I0126 12:44:35.699303 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:35 crc kubenswrapper[4844]: I0126 12:44:35.699321 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:35Z","lastTransitionTime":"2026-01-26T12:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:35 crc kubenswrapper[4844]: E0126 12:44:35.717408 4844 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:44:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:44:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:44:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:44:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8ec9310b-463d-4f1d-a480-c21f33e8b459\\\",\\\"systemUUID\\\":\\\"4eb778d6-9226-440d-bd27-0b6f19659b0d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:35Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:35 crc kubenswrapper[4844]: I0126 12:44:35.722550 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:35 crc kubenswrapper[4844]: I0126 12:44:35.722670 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 12:44:35 crc kubenswrapper[4844]: I0126 12:44:35.722691 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:35 crc kubenswrapper[4844]: I0126 12:44:35.722716 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:35 crc kubenswrapper[4844]: I0126 12:44:35.722734 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:35Z","lastTransitionTime":"2026-01-26T12:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:35 crc kubenswrapper[4844]: E0126 12:44:35.737882 4844 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:44:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:44:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:44:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:44:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8ec9310b-463d-4f1d-a480-c21f33e8b459\\\",\\\"systemUUID\\\":\\\"4eb778d6-9226-440d-bd27-0b6f19659b0d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:35Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:35 crc kubenswrapper[4844]: I0126 12:44:35.743157 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:35 crc kubenswrapper[4844]: I0126 12:44:35.743216 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 12:44:35 crc kubenswrapper[4844]: I0126 12:44:35.743230 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:35 crc kubenswrapper[4844]: I0126 12:44:35.743250 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:35 crc kubenswrapper[4844]: I0126 12:44:35.743264 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:35Z","lastTransitionTime":"2026-01-26T12:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:35 crc kubenswrapper[4844]: E0126 12:44:35.762697 4844 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:44:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:44:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:44:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:44:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8ec9310b-463d-4f1d-a480-c21f33e8b459\\\",\\\"systemUUID\\\":\\\"4eb778d6-9226-440d-bd27-0b6f19659b0d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:35Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:35 crc kubenswrapper[4844]: I0126 12:44:35.767273 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:35 crc kubenswrapper[4844]: I0126 12:44:35.767331 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 12:44:35 crc kubenswrapper[4844]: I0126 12:44:35.767344 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:35 crc kubenswrapper[4844]: I0126 12:44:35.767364 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:35 crc kubenswrapper[4844]: I0126 12:44:35.767375 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:35Z","lastTransitionTime":"2026-01-26T12:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:35 crc kubenswrapper[4844]: E0126 12:44:35.784086 4844 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:44:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:44:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:44:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:44:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8ec9310b-463d-4f1d-a480-c21f33e8b459\\\",\\\"systemUUID\\\":\\\"4eb778d6-9226-440d-bd27-0b6f19659b0d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:35Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:35 crc kubenswrapper[4844]: I0126 12:44:35.787448 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:35 crc kubenswrapper[4844]: I0126 12:44:35.787494 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 12:44:35 crc kubenswrapper[4844]: I0126 12:44:35.787504 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:35 crc kubenswrapper[4844]: I0126 12:44:35.787527 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:35 crc kubenswrapper[4844]: I0126 12:44:35.787541 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:35Z","lastTransitionTime":"2026-01-26T12:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:35 crc kubenswrapper[4844]: E0126 12:44:35.799644 4844 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:44:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:44:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:44:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:44:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8ec9310b-463d-4f1d-a480-c21f33e8b459\\\",\\\"systemUUID\\\":\\\"4eb778d6-9226-440d-bd27-0b6f19659b0d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:35Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:35 crc kubenswrapper[4844]: E0126 12:44:35.799789 4844 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 12:44:35 crc kubenswrapper[4844]: I0126 12:44:35.801672 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 26 12:44:35 crc kubenswrapper[4844]: I0126 12:44:35.801711 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:35 crc kubenswrapper[4844]: I0126 12:44:35.801721 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:35 crc kubenswrapper[4844]: I0126 12:44:35.801735 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:35 crc kubenswrapper[4844]: I0126 12:44:35.801750 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:35Z","lastTransitionTime":"2026-01-26T12:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:35 crc kubenswrapper[4844]: I0126 12:44:35.905467 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:35 crc kubenswrapper[4844]: I0126 12:44:35.905559 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:35 crc kubenswrapper[4844]: I0126 12:44:35.905626 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:35 crc kubenswrapper[4844]: I0126 12:44:35.905671 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:35 crc kubenswrapper[4844]: I0126 12:44:35.905697 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:35Z","lastTransitionTime":"2026-01-26T12:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:36 crc kubenswrapper[4844]: I0126 12:44:36.009625 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:36 crc kubenswrapper[4844]: I0126 12:44:36.009942 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:36 crc kubenswrapper[4844]: I0126 12:44:36.010043 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:36 crc kubenswrapper[4844]: I0126 12:44:36.010150 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:36 crc kubenswrapper[4844]: I0126 12:44:36.010259 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:36Z","lastTransitionTime":"2026-01-26T12:44:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:36 crc kubenswrapper[4844]: I0126 12:44:36.113527 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:36 crc kubenswrapper[4844]: I0126 12:44:36.113592 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:36 crc kubenswrapper[4844]: I0126 12:44:36.113636 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:36 crc kubenswrapper[4844]: I0126 12:44:36.113656 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:36 crc kubenswrapper[4844]: I0126 12:44:36.113669 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:36Z","lastTransitionTime":"2026-01-26T12:44:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:36 crc kubenswrapper[4844]: I0126 12:44:36.216160 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:36 crc kubenswrapper[4844]: I0126 12:44:36.216208 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:36 crc kubenswrapper[4844]: I0126 12:44:36.216224 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:36 crc kubenswrapper[4844]: I0126 12:44:36.216246 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:36 crc kubenswrapper[4844]: I0126 12:44:36.216262 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:36Z","lastTransitionTime":"2026-01-26T12:44:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:36 crc kubenswrapper[4844]: I0126 12:44:36.298859 4844 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 03:56:00.817459948 +0000 UTC Jan 26 12:44:36 crc kubenswrapper[4844]: I0126 12:44:36.312149 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gxnj7" Jan 26 12:44:36 crc kubenswrapper[4844]: E0126 12:44:36.312372 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gxnj7" podUID="c69496f6-7f67-4cca-9c9f-420e5567b165" Jan 26 12:44:36 crc kubenswrapper[4844]: I0126 12:44:36.320122 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:36 crc kubenswrapper[4844]: I0126 12:44:36.320302 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:36 crc kubenswrapper[4844]: I0126 12:44:36.320335 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:36 crc kubenswrapper[4844]: I0126 12:44:36.320361 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:36 crc kubenswrapper[4844]: I0126 12:44:36.320379 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:36Z","lastTransitionTime":"2026-01-26T12:44:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:36 crc kubenswrapper[4844]: I0126 12:44:36.422840 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:36 crc kubenswrapper[4844]: I0126 12:44:36.422889 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:36 crc kubenswrapper[4844]: I0126 12:44:36.422903 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:36 crc kubenswrapper[4844]: I0126 12:44:36.422925 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:36 crc kubenswrapper[4844]: I0126 12:44:36.422939 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:36Z","lastTransitionTime":"2026-01-26T12:44:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:36 crc kubenswrapper[4844]: I0126 12:44:36.526136 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:36 crc kubenswrapper[4844]: I0126 12:44:36.526301 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:36 crc kubenswrapper[4844]: I0126 12:44:36.526331 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:36 crc kubenswrapper[4844]: I0126 12:44:36.526361 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:36 crc kubenswrapper[4844]: I0126 12:44:36.526383 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:36Z","lastTransitionTime":"2026-01-26T12:44:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:36 crc kubenswrapper[4844]: I0126 12:44:36.629837 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:36 crc kubenswrapper[4844]: I0126 12:44:36.630073 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:36 crc kubenswrapper[4844]: I0126 12:44:36.630097 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:36 crc kubenswrapper[4844]: I0126 12:44:36.630124 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:36 crc kubenswrapper[4844]: I0126 12:44:36.630145 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:36Z","lastTransitionTime":"2026-01-26T12:44:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:36 crc kubenswrapper[4844]: I0126 12:44:36.733590 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:36 crc kubenswrapper[4844]: I0126 12:44:36.733678 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:36 crc kubenswrapper[4844]: I0126 12:44:36.733694 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:36 crc kubenswrapper[4844]: I0126 12:44:36.733717 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:36 crc kubenswrapper[4844]: I0126 12:44:36.733736 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:36Z","lastTransitionTime":"2026-01-26T12:44:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:36 crc kubenswrapper[4844]: I0126 12:44:36.836919 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:36 crc kubenswrapper[4844]: I0126 12:44:36.836978 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:36 crc kubenswrapper[4844]: I0126 12:44:36.836998 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:36 crc kubenswrapper[4844]: I0126 12:44:36.837022 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:36 crc kubenswrapper[4844]: I0126 12:44:36.837039 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:36Z","lastTransitionTime":"2026-01-26T12:44:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:36 crc kubenswrapper[4844]: I0126 12:44:36.939889 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:36 crc kubenswrapper[4844]: I0126 12:44:36.939970 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:36 crc kubenswrapper[4844]: I0126 12:44:36.939995 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:36 crc kubenswrapper[4844]: I0126 12:44:36.940027 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:36 crc kubenswrapper[4844]: I0126 12:44:36.940053 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:36Z","lastTransitionTime":"2026-01-26T12:44:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:37 crc kubenswrapper[4844]: I0126 12:44:37.042747 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:37 crc kubenswrapper[4844]: I0126 12:44:37.042813 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:37 crc kubenswrapper[4844]: I0126 12:44:37.042832 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:37 crc kubenswrapper[4844]: I0126 12:44:37.042857 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:37 crc kubenswrapper[4844]: I0126 12:44:37.042874 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:37Z","lastTransitionTime":"2026-01-26T12:44:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:37 crc kubenswrapper[4844]: I0126 12:44:37.145667 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:37 crc kubenswrapper[4844]: I0126 12:44:37.145717 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:37 crc kubenswrapper[4844]: I0126 12:44:37.145729 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:37 crc kubenswrapper[4844]: I0126 12:44:37.145747 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:37 crc kubenswrapper[4844]: I0126 12:44:37.145759 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:37Z","lastTransitionTime":"2026-01-26T12:44:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:37 crc kubenswrapper[4844]: I0126 12:44:37.248426 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:37 crc kubenswrapper[4844]: I0126 12:44:37.248482 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:37 crc kubenswrapper[4844]: I0126 12:44:37.248493 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:37 crc kubenswrapper[4844]: I0126 12:44:37.248509 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:37 crc kubenswrapper[4844]: I0126 12:44:37.248521 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:37Z","lastTransitionTime":"2026-01-26T12:44:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:37 crc kubenswrapper[4844]: I0126 12:44:37.299691 4844 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 02:41:03.555018062 +0000 UTC Jan 26 12:44:37 crc kubenswrapper[4844]: I0126 12:44:37.313105 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:44:37 crc kubenswrapper[4844]: I0126 12:44:37.313171 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:44:37 crc kubenswrapper[4844]: E0126 12:44:37.313255 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 12:44:37 crc kubenswrapper[4844]: I0126 12:44:37.313314 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:44:37 crc kubenswrapper[4844]: E0126 12:44:37.313380 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 12:44:37 crc kubenswrapper[4844]: E0126 12:44:37.313460 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 12:44:37 crc kubenswrapper[4844]: I0126 12:44:37.350825 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:37 crc kubenswrapper[4844]: I0126 12:44:37.350864 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:37 crc kubenswrapper[4844]: I0126 12:44:37.350873 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:37 crc kubenswrapper[4844]: I0126 12:44:37.350886 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:37 crc kubenswrapper[4844]: I0126 12:44:37.350895 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:37Z","lastTransitionTime":"2026-01-26T12:44:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:37 crc kubenswrapper[4844]: I0126 12:44:37.453951 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:37 crc kubenswrapper[4844]: I0126 12:44:37.454003 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:37 crc kubenswrapper[4844]: I0126 12:44:37.454016 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:37 crc kubenswrapper[4844]: I0126 12:44:37.454035 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:37 crc kubenswrapper[4844]: I0126 12:44:37.454048 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:37Z","lastTransitionTime":"2026-01-26T12:44:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:37 crc kubenswrapper[4844]: I0126 12:44:37.556770 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:37 crc kubenswrapper[4844]: I0126 12:44:37.556827 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:37 crc kubenswrapper[4844]: I0126 12:44:37.556835 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:37 crc kubenswrapper[4844]: I0126 12:44:37.556847 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:37 crc kubenswrapper[4844]: I0126 12:44:37.556862 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:37Z","lastTransitionTime":"2026-01-26T12:44:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:37 crc kubenswrapper[4844]: I0126 12:44:37.659510 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:37 crc kubenswrapper[4844]: I0126 12:44:37.659679 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:37 crc kubenswrapper[4844]: I0126 12:44:37.659740 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:37 crc kubenswrapper[4844]: I0126 12:44:37.659765 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:37 crc kubenswrapper[4844]: I0126 12:44:37.659782 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:37Z","lastTransitionTime":"2026-01-26T12:44:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:37 crc kubenswrapper[4844]: I0126 12:44:37.762712 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:37 crc kubenswrapper[4844]: I0126 12:44:37.762750 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:37 crc kubenswrapper[4844]: I0126 12:44:37.762761 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:37 crc kubenswrapper[4844]: I0126 12:44:37.762799 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:37 crc kubenswrapper[4844]: I0126 12:44:37.762810 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:37Z","lastTransitionTime":"2026-01-26T12:44:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:37 crc kubenswrapper[4844]: I0126 12:44:37.866508 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:37 crc kubenswrapper[4844]: I0126 12:44:37.866656 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:37 crc kubenswrapper[4844]: I0126 12:44:37.866696 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:37 crc kubenswrapper[4844]: I0126 12:44:37.866742 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:37 crc kubenswrapper[4844]: I0126 12:44:37.866767 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:37Z","lastTransitionTime":"2026-01-26T12:44:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:37 crc kubenswrapper[4844]: I0126 12:44:37.970415 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:37 crc kubenswrapper[4844]: I0126 12:44:37.970563 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:37 crc kubenswrapper[4844]: I0126 12:44:37.970586 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:37 crc kubenswrapper[4844]: I0126 12:44:37.970641 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:37 crc kubenswrapper[4844]: I0126 12:44:37.970660 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:37Z","lastTransitionTime":"2026-01-26T12:44:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:38 crc kubenswrapper[4844]: I0126 12:44:38.073135 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:38 crc kubenswrapper[4844]: I0126 12:44:38.073170 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:38 crc kubenswrapper[4844]: I0126 12:44:38.073178 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:38 crc kubenswrapper[4844]: I0126 12:44:38.073193 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:38 crc kubenswrapper[4844]: I0126 12:44:38.073202 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:38Z","lastTransitionTime":"2026-01-26T12:44:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:38 crc kubenswrapper[4844]: I0126 12:44:38.173300 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c69496f6-7f67-4cca-9c9f-420e5567b165-metrics-certs\") pod \"network-metrics-daemon-gxnj7\" (UID: \"c69496f6-7f67-4cca-9c9f-420e5567b165\") " pod="openshift-multus/network-metrics-daemon-gxnj7" Jan 26 12:44:38 crc kubenswrapper[4844]: E0126 12:44:38.173477 4844 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 12:44:38 crc kubenswrapper[4844]: E0126 12:44:38.173637 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c69496f6-7f67-4cca-9c9f-420e5567b165-metrics-certs podName:c69496f6-7f67-4cca-9c9f-420e5567b165 nodeName:}" failed. No retries permitted until 2026-01-26 12:44:54.173577081 +0000 UTC m=+71.106944733 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c69496f6-7f67-4cca-9c9f-420e5567b165-metrics-certs") pod "network-metrics-daemon-gxnj7" (UID: "c69496f6-7f67-4cca-9c9f-420e5567b165") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 12:44:38 crc kubenswrapper[4844]: I0126 12:44:38.176051 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:38 crc kubenswrapper[4844]: I0126 12:44:38.176198 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:38 crc kubenswrapper[4844]: I0126 12:44:38.176259 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:38 crc kubenswrapper[4844]: I0126 12:44:38.176284 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:38 crc kubenswrapper[4844]: I0126 12:44:38.176302 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:38Z","lastTransitionTime":"2026-01-26T12:44:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:38 crc kubenswrapper[4844]: I0126 12:44:38.282224 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:38 crc kubenswrapper[4844]: I0126 12:44:38.282301 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:38 crc kubenswrapper[4844]: I0126 12:44:38.282327 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:38 crc kubenswrapper[4844]: I0126 12:44:38.282359 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:38 crc kubenswrapper[4844]: I0126 12:44:38.282383 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:38Z","lastTransitionTime":"2026-01-26T12:44:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:38 crc kubenswrapper[4844]: I0126 12:44:38.300876 4844 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 12:32:25.52302656 +0000 UTC Jan 26 12:44:38 crc kubenswrapper[4844]: I0126 12:44:38.312480 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gxnj7" Jan 26 12:44:38 crc kubenswrapper[4844]: E0126 12:44:38.312800 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gxnj7" podUID="c69496f6-7f67-4cca-9c9f-420e5567b165" Jan 26 12:44:38 crc kubenswrapper[4844]: I0126 12:44:38.393850 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:38 crc kubenswrapper[4844]: I0126 12:44:38.393934 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:38 crc kubenswrapper[4844]: I0126 12:44:38.393961 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:38 crc kubenswrapper[4844]: I0126 12:44:38.393996 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:38 crc kubenswrapper[4844]: I0126 12:44:38.394022 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:38Z","lastTransitionTime":"2026-01-26T12:44:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:38 crc kubenswrapper[4844]: I0126 12:44:38.497741 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:38 crc kubenswrapper[4844]: I0126 12:44:38.497821 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:38 crc kubenswrapper[4844]: I0126 12:44:38.497843 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:38 crc kubenswrapper[4844]: I0126 12:44:38.497871 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:38 crc kubenswrapper[4844]: I0126 12:44:38.497892 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:38Z","lastTransitionTime":"2026-01-26T12:44:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:38 crc kubenswrapper[4844]: I0126 12:44:38.601386 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:38 crc kubenswrapper[4844]: I0126 12:44:38.601471 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:38 crc kubenswrapper[4844]: I0126 12:44:38.601493 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:38 crc kubenswrapper[4844]: I0126 12:44:38.601524 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:38 crc kubenswrapper[4844]: I0126 12:44:38.601546 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:38Z","lastTransitionTime":"2026-01-26T12:44:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:38 crc kubenswrapper[4844]: I0126 12:44:38.704667 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:38 crc kubenswrapper[4844]: I0126 12:44:38.704748 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:38 crc kubenswrapper[4844]: I0126 12:44:38.704783 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:38 crc kubenswrapper[4844]: I0126 12:44:38.704816 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:38 crc kubenswrapper[4844]: I0126 12:44:38.704837 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:38Z","lastTransitionTime":"2026-01-26T12:44:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:38 crc kubenswrapper[4844]: I0126 12:44:38.806850 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:38 crc kubenswrapper[4844]: I0126 12:44:38.806905 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:38 crc kubenswrapper[4844]: I0126 12:44:38.806916 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:38 crc kubenswrapper[4844]: I0126 12:44:38.806934 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:38 crc kubenswrapper[4844]: I0126 12:44:38.806947 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:38Z","lastTransitionTime":"2026-01-26T12:44:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:38 crc kubenswrapper[4844]: I0126 12:44:38.909628 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:38 crc kubenswrapper[4844]: I0126 12:44:38.909675 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:38 crc kubenswrapper[4844]: I0126 12:44:38.909688 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:38 crc kubenswrapper[4844]: I0126 12:44:38.909704 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:38 crc kubenswrapper[4844]: I0126 12:44:38.909715 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:38Z","lastTransitionTime":"2026-01-26T12:44:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:39 crc kubenswrapper[4844]: I0126 12:44:39.013042 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:39 crc kubenswrapper[4844]: I0126 12:44:39.013130 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:39 crc kubenswrapper[4844]: I0126 12:44:39.013150 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:39 crc kubenswrapper[4844]: I0126 12:44:39.013175 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:39 crc kubenswrapper[4844]: I0126 12:44:39.013192 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:39Z","lastTransitionTime":"2026-01-26T12:44:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:39 crc kubenswrapper[4844]: I0126 12:44:39.115636 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:39 crc kubenswrapper[4844]: I0126 12:44:39.115704 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:39 crc kubenswrapper[4844]: I0126 12:44:39.115722 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:39 crc kubenswrapper[4844]: I0126 12:44:39.115744 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:39 crc kubenswrapper[4844]: I0126 12:44:39.115760 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:39Z","lastTransitionTime":"2026-01-26T12:44:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:39 crc kubenswrapper[4844]: I0126 12:44:39.219117 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:39 crc kubenswrapper[4844]: I0126 12:44:39.219201 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:39 crc kubenswrapper[4844]: I0126 12:44:39.219226 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:39 crc kubenswrapper[4844]: I0126 12:44:39.219262 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:39 crc kubenswrapper[4844]: I0126 12:44:39.219289 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:39Z","lastTransitionTime":"2026-01-26T12:44:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:39 crc kubenswrapper[4844]: I0126 12:44:39.301227 4844 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 10:38:46.217374027 +0000 UTC Jan 26 12:44:39 crc kubenswrapper[4844]: I0126 12:44:39.312646 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:44:39 crc kubenswrapper[4844]: I0126 12:44:39.312646 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:44:39 crc kubenswrapper[4844]: E0126 12:44:39.312899 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 12:44:39 crc kubenswrapper[4844]: E0126 12:44:39.312790 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 12:44:39 crc kubenswrapper[4844]: I0126 12:44:39.312659 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:44:39 crc kubenswrapper[4844]: E0126 12:44:39.313020 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 12:44:39 crc kubenswrapper[4844]: I0126 12:44:39.320860 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:39 crc kubenswrapper[4844]: I0126 12:44:39.320907 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:39 crc kubenswrapper[4844]: I0126 12:44:39.320924 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:39 crc kubenswrapper[4844]: I0126 12:44:39.320945 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:39 crc kubenswrapper[4844]: I0126 12:44:39.320962 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:39Z","lastTransitionTime":"2026-01-26T12:44:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:39 crc kubenswrapper[4844]: I0126 12:44:39.423387 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:39 crc kubenswrapper[4844]: I0126 12:44:39.423419 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:39 crc kubenswrapper[4844]: I0126 12:44:39.423428 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:39 crc kubenswrapper[4844]: I0126 12:44:39.423442 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:39 crc kubenswrapper[4844]: I0126 12:44:39.423452 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:39Z","lastTransitionTime":"2026-01-26T12:44:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:39 crc kubenswrapper[4844]: I0126 12:44:39.526368 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:39 crc kubenswrapper[4844]: I0126 12:44:39.526427 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:39 crc kubenswrapper[4844]: I0126 12:44:39.526444 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:39 crc kubenswrapper[4844]: I0126 12:44:39.526470 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:39 crc kubenswrapper[4844]: I0126 12:44:39.526487 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:39Z","lastTransitionTime":"2026-01-26T12:44:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:39 crc kubenswrapper[4844]: I0126 12:44:39.629450 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:39 crc kubenswrapper[4844]: I0126 12:44:39.629510 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:39 crc kubenswrapper[4844]: I0126 12:44:39.629526 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:39 crc kubenswrapper[4844]: I0126 12:44:39.629546 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:39 crc kubenswrapper[4844]: I0126 12:44:39.629562 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:39Z","lastTransitionTime":"2026-01-26T12:44:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:39 crc kubenswrapper[4844]: I0126 12:44:39.732040 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:39 crc kubenswrapper[4844]: I0126 12:44:39.732095 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:39 crc kubenswrapper[4844]: I0126 12:44:39.732114 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:39 crc kubenswrapper[4844]: I0126 12:44:39.732135 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:39 crc kubenswrapper[4844]: I0126 12:44:39.732152 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:39Z","lastTransitionTime":"2026-01-26T12:44:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:39 crc kubenswrapper[4844]: I0126 12:44:39.835233 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:39 crc kubenswrapper[4844]: I0126 12:44:39.835296 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:39 crc kubenswrapper[4844]: I0126 12:44:39.835308 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:39 crc kubenswrapper[4844]: I0126 12:44:39.835327 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:39 crc kubenswrapper[4844]: I0126 12:44:39.835340 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:39Z","lastTransitionTime":"2026-01-26T12:44:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:39 crc kubenswrapper[4844]: I0126 12:44:39.937805 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:39 crc kubenswrapper[4844]: I0126 12:44:39.937857 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:39 crc kubenswrapper[4844]: I0126 12:44:39.937871 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:39 crc kubenswrapper[4844]: I0126 12:44:39.937896 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:39 crc kubenswrapper[4844]: I0126 12:44:39.937909 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:39Z","lastTransitionTime":"2026-01-26T12:44:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:40 crc kubenswrapper[4844]: I0126 12:44:40.040817 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:40 crc kubenswrapper[4844]: I0126 12:44:40.040933 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:40 crc kubenswrapper[4844]: I0126 12:44:40.040963 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:40 crc kubenswrapper[4844]: I0126 12:44:40.040997 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:40 crc kubenswrapper[4844]: I0126 12:44:40.041022 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:40Z","lastTransitionTime":"2026-01-26T12:44:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:40 crc kubenswrapper[4844]: I0126 12:44:40.144865 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:40 crc kubenswrapper[4844]: I0126 12:44:40.144943 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:40 crc kubenswrapper[4844]: I0126 12:44:40.144998 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:40 crc kubenswrapper[4844]: I0126 12:44:40.145023 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:40 crc kubenswrapper[4844]: I0126 12:44:40.145044 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:40Z","lastTransitionTime":"2026-01-26T12:44:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:40 crc kubenswrapper[4844]: I0126 12:44:40.248104 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:40 crc kubenswrapper[4844]: I0126 12:44:40.248161 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:40 crc kubenswrapper[4844]: I0126 12:44:40.248180 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:40 crc kubenswrapper[4844]: I0126 12:44:40.248203 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:40 crc kubenswrapper[4844]: I0126 12:44:40.248220 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:40Z","lastTransitionTime":"2026-01-26T12:44:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:40 crc kubenswrapper[4844]: I0126 12:44:40.302082 4844 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 22:41:14.728353648 +0000 UTC Jan 26 12:44:40 crc kubenswrapper[4844]: I0126 12:44:40.312398 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gxnj7" Jan 26 12:44:40 crc kubenswrapper[4844]: E0126 12:44:40.312551 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gxnj7" podUID="c69496f6-7f67-4cca-9c9f-420e5567b165" Jan 26 12:44:40 crc kubenswrapper[4844]: I0126 12:44:40.350350 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:40 crc kubenswrapper[4844]: I0126 12:44:40.350424 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:40 crc kubenswrapper[4844]: I0126 12:44:40.350435 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:40 crc kubenswrapper[4844]: I0126 12:44:40.350450 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:40 crc kubenswrapper[4844]: I0126 12:44:40.350462 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:40Z","lastTransitionTime":"2026-01-26T12:44:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:40 crc kubenswrapper[4844]: I0126 12:44:40.453524 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:40 crc kubenswrapper[4844]: I0126 12:44:40.453570 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:40 crc kubenswrapper[4844]: I0126 12:44:40.453582 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:40 crc kubenswrapper[4844]: I0126 12:44:40.453614 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:40 crc kubenswrapper[4844]: I0126 12:44:40.453627 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:40Z","lastTransitionTime":"2026-01-26T12:44:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:40 crc kubenswrapper[4844]: I0126 12:44:40.556109 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:40 crc kubenswrapper[4844]: I0126 12:44:40.556175 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:40 crc kubenswrapper[4844]: I0126 12:44:40.556186 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:40 crc kubenswrapper[4844]: I0126 12:44:40.556199 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:40 crc kubenswrapper[4844]: I0126 12:44:40.556231 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:40Z","lastTransitionTime":"2026-01-26T12:44:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:40 crc kubenswrapper[4844]: I0126 12:44:40.658687 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:40 crc kubenswrapper[4844]: I0126 12:44:40.658776 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:40 crc kubenswrapper[4844]: I0126 12:44:40.658795 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:40 crc kubenswrapper[4844]: I0126 12:44:40.658813 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:40 crc kubenswrapper[4844]: I0126 12:44:40.658827 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:40Z","lastTransitionTime":"2026-01-26T12:44:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:40 crc kubenswrapper[4844]: I0126 12:44:40.760934 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:40 crc kubenswrapper[4844]: I0126 12:44:40.761023 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:40 crc kubenswrapper[4844]: I0126 12:44:40.761047 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:40 crc kubenswrapper[4844]: I0126 12:44:40.761077 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:40 crc kubenswrapper[4844]: I0126 12:44:40.761099 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:40Z","lastTransitionTime":"2026-01-26T12:44:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:40 crc kubenswrapper[4844]: I0126 12:44:40.862967 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:40 crc kubenswrapper[4844]: I0126 12:44:40.863000 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:40 crc kubenswrapper[4844]: I0126 12:44:40.863008 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:40 crc kubenswrapper[4844]: I0126 12:44:40.863020 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:40 crc kubenswrapper[4844]: I0126 12:44:40.863033 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:40Z","lastTransitionTime":"2026-01-26T12:44:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:40 crc kubenswrapper[4844]: I0126 12:44:40.965881 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:40 crc kubenswrapper[4844]: I0126 12:44:40.965923 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:40 crc kubenswrapper[4844]: I0126 12:44:40.965946 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:40 crc kubenswrapper[4844]: I0126 12:44:40.965962 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:40 crc kubenswrapper[4844]: I0126 12:44:40.965973 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:40Z","lastTransitionTime":"2026-01-26T12:44:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:41 crc kubenswrapper[4844]: I0126 12:44:41.068769 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:41 crc kubenswrapper[4844]: I0126 12:44:41.068814 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:41 crc kubenswrapper[4844]: I0126 12:44:41.068824 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:41 crc kubenswrapper[4844]: I0126 12:44:41.068841 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:41 crc kubenswrapper[4844]: I0126 12:44:41.068853 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:41Z","lastTransitionTime":"2026-01-26T12:44:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:41 crc kubenswrapper[4844]: I0126 12:44:41.172075 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:41 crc kubenswrapper[4844]: I0126 12:44:41.172145 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:41 crc kubenswrapper[4844]: I0126 12:44:41.172172 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:41 crc kubenswrapper[4844]: I0126 12:44:41.172220 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:41 crc kubenswrapper[4844]: I0126 12:44:41.172243 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:41Z","lastTransitionTime":"2026-01-26T12:44:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:41 crc kubenswrapper[4844]: I0126 12:44:41.275259 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:41 crc kubenswrapper[4844]: I0126 12:44:41.275299 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:41 crc kubenswrapper[4844]: I0126 12:44:41.275309 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:41 crc kubenswrapper[4844]: I0126 12:44:41.275324 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:41 crc kubenswrapper[4844]: I0126 12:44:41.275335 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:41Z","lastTransitionTime":"2026-01-26T12:44:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:41 crc kubenswrapper[4844]: I0126 12:44:41.302681 4844 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 09:41:02.435763105 +0000 UTC Jan 26 12:44:41 crc kubenswrapper[4844]: I0126 12:44:41.312683 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:44:41 crc kubenswrapper[4844]: I0126 12:44:41.312781 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:44:41 crc kubenswrapper[4844]: E0126 12:44:41.312843 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 12:44:41 crc kubenswrapper[4844]: I0126 12:44:41.312869 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:44:41 crc kubenswrapper[4844]: E0126 12:44:41.312985 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 12:44:41 crc kubenswrapper[4844]: E0126 12:44:41.313078 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 12:44:41 crc kubenswrapper[4844]: I0126 12:44:41.377360 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:41 crc kubenswrapper[4844]: I0126 12:44:41.377392 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:41 crc kubenswrapper[4844]: I0126 12:44:41.377402 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:41 crc kubenswrapper[4844]: I0126 12:44:41.377415 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:41 crc kubenswrapper[4844]: I0126 12:44:41.377434 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:41Z","lastTransitionTime":"2026-01-26T12:44:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:41 crc kubenswrapper[4844]: I0126 12:44:41.479173 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:41 crc kubenswrapper[4844]: I0126 12:44:41.479200 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:41 crc kubenswrapper[4844]: I0126 12:44:41.479208 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:41 crc kubenswrapper[4844]: I0126 12:44:41.479221 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:41 crc kubenswrapper[4844]: I0126 12:44:41.479229 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:41Z","lastTransitionTime":"2026-01-26T12:44:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:41 crc kubenswrapper[4844]: I0126 12:44:41.581984 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:41 crc kubenswrapper[4844]: I0126 12:44:41.582067 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:41 crc kubenswrapper[4844]: I0126 12:44:41.582090 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:41 crc kubenswrapper[4844]: I0126 12:44:41.582118 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:41 crc kubenswrapper[4844]: I0126 12:44:41.582137 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:41Z","lastTransitionTime":"2026-01-26T12:44:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:41 crc kubenswrapper[4844]: I0126 12:44:41.684533 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:41 crc kubenswrapper[4844]: I0126 12:44:41.684584 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:41 crc kubenswrapper[4844]: I0126 12:44:41.684635 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:41 crc kubenswrapper[4844]: I0126 12:44:41.684653 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:41 crc kubenswrapper[4844]: I0126 12:44:41.684666 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:41Z","lastTransitionTime":"2026-01-26T12:44:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:41 crc kubenswrapper[4844]: I0126 12:44:41.786333 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:41 crc kubenswrapper[4844]: I0126 12:44:41.786368 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:41 crc kubenswrapper[4844]: I0126 12:44:41.786379 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:41 crc kubenswrapper[4844]: I0126 12:44:41.786394 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:41 crc kubenswrapper[4844]: I0126 12:44:41.786405 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:41Z","lastTransitionTime":"2026-01-26T12:44:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:41 crc kubenswrapper[4844]: I0126 12:44:41.888841 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:41 crc kubenswrapper[4844]: I0126 12:44:41.888883 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:41 crc kubenswrapper[4844]: I0126 12:44:41.888893 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:41 crc kubenswrapper[4844]: I0126 12:44:41.888907 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:41 crc kubenswrapper[4844]: I0126 12:44:41.888917 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:41Z","lastTransitionTime":"2026-01-26T12:44:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:41 crc kubenswrapper[4844]: I0126 12:44:41.991458 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:41 crc kubenswrapper[4844]: I0126 12:44:41.991505 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:41 crc kubenswrapper[4844]: I0126 12:44:41.991518 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:41 crc kubenswrapper[4844]: I0126 12:44:41.991537 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:41 crc kubenswrapper[4844]: I0126 12:44:41.991548 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:41Z","lastTransitionTime":"2026-01-26T12:44:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:42 crc kubenswrapper[4844]: I0126 12:44:42.094925 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:42 crc kubenswrapper[4844]: I0126 12:44:42.094965 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:42 crc kubenswrapper[4844]: I0126 12:44:42.094972 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:42 crc kubenswrapper[4844]: I0126 12:44:42.094985 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:42 crc kubenswrapper[4844]: I0126 12:44:42.094994 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:42Z","lastTransitionTime":"2026-01-26T12:44:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:42 crc kubenswrapper[4844]: I0126 12:44:42.197419 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:42 crc kubenswrapper[4844]: I0126 12:44:42.197461 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:42 crc kubenswrapper[4844]: I0126 12:44:42.197470 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:42 crc kubenswrapper[4844]: I0126 12:44:42.197482 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:42 crc kubenswrapper[4844]: I0126 12:44:42.197492 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:42Z","lastTransitionTime":"2026-01-26T12:44:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:42 crc kubenswrapper[4844]: I0126 12:44:42.299399 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:42 crc kubenswrapper[4844]: I0126 12:44:42.299423 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:42 crc kubenswrapper[4844]: I0126 12:44:42.299433 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:42 crc kubenswrapper[4844]: I0126 12:44:42.299447 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:42 crc kubenswrapper[4844]: I0126 12:44:42.299457 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:42Z","lastTransitionTime":"2026-01-26T12:44:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:42 crc kubenswrapper[4844]: I0126 12:44:42.302807 4844 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 20:13:00.882595263 +0000 UTC Jan 26 12:44:42 crc kubenswrapper[4844]: I0126 12:44:42.312088 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gxnj7" Jan 26 12:44:42 crc kubenswrapper[4844]: E0126 12:44:42.312258 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gxnj7" podUID="c69496f6-7f67-4cca-9c9f-420e5567b165" Jan 26 12:44:42 crc kubenswrapper[4844]: I0126 12:44:42.313205 4844 scope.go:117] "RemoveContainer" containerID="d5c096d3b202896c2e8ae2acf2cbaf2131e2eba775a4bd481112ebd76d974d84" Jan 26 12:44:42 crc kubenswrapper[4844]: I0126 12:44:42.402573 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:42 crc kubenswrapper[4844]: I0126 12:44:42.402647 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:42 crc kubenswrapper[4844]: I0126 12:44:42.402663 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:42 crc kubenswrapper[4844]: I0126 12:44:42.402681 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:42 crc kubenswrapper[4844]: I0126 12:44:42.402692 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:42Z","lastTransitionTime":"2026-01-26T12:44:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:42 crc kubenswrapper[4844]: I0126 12:44:42.506912 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:42 crc kubenswrapper[4844]: I0126 12:44:42.507385 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:42 crc kubenswrapper[4844]: I0126 12:44:42.507402 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:42 crc kubenswrapper[4844]: I0126 12:44:42.507429 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:42 crc kubenswrapper[4844]: I0126 12:44:42.507449 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:42Z","lastTransitionTime":"2026-01-26T12:44:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:42 crc kubenswrapper[4844]: I0126 12:44:42.609933 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:42 crc kubenswrapper[4844]: I0126 12:44:42.610016 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:42 crc kubenswrapper[4844]: I0126 12:44:42.610039 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:42 crc kubenswrapper[4844]: I0126 12:44:42.610411 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:42 crc kubenswrapper[4844]: I0126 12:44:42.610431 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:42Z","lastTransitionTime":"2026-01-26T12:44:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:42 crc kubenswrapper[4844]: I0126 12:44:42.713358 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:42 crc kubenswrapper[4844]: I0126 12:44:42.713433 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:42 crc kubenswrapper[4844]: I0126 12:44:42.713457 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:42 crc kubenswrapper[4844]: I0126 12:44:42.713488 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:42 crc kubenswrapper[4844]: I0126 12:44:42.713511 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:42Z","lastTransitionTime":"2026-01-26T12:44:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:42 crc kubenswrapper[4844]: I0126 12:44:42.816424 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:42 crc kubenswrapper[4844]: I0126 12:44:42.816519 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:42 crc kubenswrapper[4844]: I0126 12:44:42.816543 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:42 crc kubenswrapper[4844]: I0126 12:44:42.816566 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:42 crc kubenswrapper[4844]: I0126 12:44:42.816583 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:42Z","lastTransitionTime":"2026-01-26T12:44:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:42 crc kubenswrapper[4844]: I0126 12:44:42.919675 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:42 crc kubenswrapper[4844]: I0126 12:44:42.919765 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:42 crc kubenswrapper[4844]: I0126 12:44:42.919780 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:42 crc kubenswrapper[4844]: I0126 12:44:42.919813 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:42 crc kubenswrapper[4844]: I0126 12:44:42.919837 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:42Z","lastTransitionTime":"2026-01-26T12:44:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:43 crc kubenswrapper[4844]: I0126 12:44:43.022841 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:43 crc kubenswrapper[4844]: I0126 12:44:43.022883 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:43 crc kubenswrapper[4844]: I0126 12:44:43.022896 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:43 crc kubenswrapper[4844]: I0126 12:44:43.022912 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:43 crc kubenswrapper[4844]: I0126 12:44:43.022923 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:43Z","lastTransitionTime":"2026-01-26T12:44:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:43 crc kubenswrapper[4844]: I0126 12:44:43.126483 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:43 crc kubenswrapper[4844]: I0126 12:44:43.126582 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:43 crc kubenswrapper[4844]: I0126 12:44:43.126612 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:43 crc kubenswrapper[4844]: I0126 12:44:43.126630 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:43 crc kubenswrapper[4844]: I0126 12:44:43.126649 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:43Z","lastTransitionTime":"2026-01-26T12:44:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:43 crc kubenswrapper[4844]: I0126 12:44:43.228652 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:43 crc kubenswrapper[4844]: I0126 12:44:43.228700 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:43 crc kubenswrapper[4844]: I0126 12:44:43.228711 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:43 crc kubenswrapper[4844]: I0126 12:44:43.228729 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:43 crc kubenswrapper[4844]: I0126 12:44:43.228740 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:43Z","lastTransitionTime":"2026-01-26T12:44:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:43 crc kubenswrapper[4844]: I0126 12:44:43.304083 4844 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 22:38:54.712657087 +0000 UTC Jan 26 12:44:43 crc kubenswrapper[4844]: I0126 12:44:43.312096 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:44:43 crc kubenswrapper[4844]: I0126 12:44:43.312096 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:44:43 crc kubenswrapper[4844]: I0126 12:44:43.312193 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:44:43 crc kubenswrapper[4844]: E0126 12:44:43.312300 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 12:44:43 crc kubenswrapper[4844]: E0126 12:44:43.312515 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 12:44:43 crc kubenswrapper[4844]: E0126 12:44:43.312651 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 12:44:43 crc kubenswrapper[4844]: I0126 12:44:43.324205 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:43Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:43 crc kubenswrapper[4844]: I0126 12:44:43.331384 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:43 crc kubenswrapper[4844]: I0126 12:44:43.331434 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:43 crc kubenswrapper[4844]: I0126 12:44:43.331446 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:43 crc kubenswrapper[4844]: I0126 12:44:43.331464 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:43 crc kubenswrapper[4844]: I0126 12:44:43.331477 4844 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:43Z","lastTransitionTime":"2026-01-26T12:44:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:43 crc kubenswrapper[4844]: I0126 12:44:43.337378 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c6cdfcbb27c9a8c515f38e43d4813f25d2d3781a322e424e09ab047259bd0e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:43Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:43 crc kubenswrapper[4844]: I0126 12:44:43.350771 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zb9kx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"467433a4-64be-4a14-beb2-657370e9865f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a01598dd996bd69b470e2e4833ea4231bc77f0305a3a7fec3f70b8e2b8f01cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v76sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zb9kx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:43Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:43 crc kubenswrapper[4844]: I0126 12:44:43.362730 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3602fc7-397b-4d73-ab0c-45acc047397b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25282e97c34daf17d2fdac6ba7c074c611ec8df85288f5753bf98a3c1add5afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29xcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c964e0f9d13a855d738b028f8a2bed32fb23f4a05f0f0222a7d24e3222f44b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29xcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-j7r9j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:43Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:43 crc kubenswrapper[4844]: I0126 12:44:43.380478 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f6ttt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0ad2def-b040-48db-be8a-19f66df2c0f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a065fe1dc7d374bbe86c5012d0f224285e08e6b38a8eeb9fcdc76d684162934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://489205a3983f27152122db352c7a35ccdb12f14bb7b4c2bf3642f5c99699d745\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://489205a3983f27152122db352c7a35ccdb12f14bb7b4c2bf3642f5c99699d745\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc3c4c5df7dc10f646a7fdcf0d70daccac7a5bfc3ec9772a4b6d9aa4305dcc5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc3c4c5df7dc10f646a7fdcf0d70daccac7a5bfc3ec9772a4b6d9aa4305dcc5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74838e6328e419da6cc94d4f2781197e94cbf74aaf06d88c6d0e39eda967fede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74838e6328e419da6cc94d4f2781197e94cbf74aaf06d88c6d0e39eda967fede\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4821615afe8834b864a0c33ad5733eef0b5247da50e1e945cab1c75f9828aa1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://4821615afe8834b864a0c33ad5733eef0b5247da50e1e945cab1c75f9828aa1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c800e6a3e1513d6bc5ab5a54c8d918f2c81b5a4d2631fc36f5f7a29fcda3d220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c800e6a3e1513d6bc5ab5a54c8d918f2c81b5a4d2631fc36f5f7a29fcda3d220\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://810435fd1610f5f81dc70f7c0d455c11b414c37e9e392dec8eb3b4668c7820d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://810435fd1610f5f81dc70f7c0d455c11b414c37e9e392dec8eb3b4668c7820d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f6ttt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:43Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:43 crc kubenswrapper[4844]: I0126 12:44:43.405747 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"348a2956-fe61-43b9-858f-ab9c97a2985b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7af767e8f476993661bb886e421e7dd2fbd11c07865d6e56a8595425fe4265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a97ef8aa10641683ab96943f4c1a54452ee34b28c854dd7902fc075a70e7f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/va
r/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de371086349f06d5762291b0be4ba366d69376f96d55c2c781c414d9acaff745\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2cfd5586a3073d55abd3f41fb92679c4f86ea8caeea2d9a8ca42515277750a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a16b2c8da5ad6d42bd5ce77ca31794858e04a05060c5bb0538dd099b1d5dd6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03f4ffe2ba632bdfb8c45133ef267a29fa660e3aa85dfd99934ded60d32
772f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5c096d3b202896c2e8ae2acf2cbaf2131e2eba775a4bd481112ebd76d974d84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5c096d3b202896c2e8ae2acf2cbaf2131e2eba775a4bd481112ebd76d974d84\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T12:44:26Z\\\",\\\"message\\\":\\\"ft-network-diagnostics/network-check-target-xd92c in node crc\\\\nI0126 12:44:26.471001 6272 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]} options:{GoMap:map[iface-id-ver:9d751cbb-f2e2-430d-9754-c882a5e924a5 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {960d98b2-dc64-4e93-a4b6-9b19847af71e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 12:44:26.471052 6272 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nI0126 12:44:26.471060 6272 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0126 12:44:26.471100 6272 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-rlvx4_openshift-ovn-kubernetes(348a2956-fe61-43b9-858f-ab9c97a2985b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbe92fbc9ce1d23484a9dca7eddb878b59da1fbe669098a82a798d41ee52e61d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recurs
iveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://370a02984d12e09b879a5f3af8d9fe26b40d438d558e51c2d352676f29198a02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://370a02984d12e09b879a5f3af8d9fe26b40d438d558e51c2d352676f29198a02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rlvx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:43Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:43 crc kubenswrapper[4844]: I0126 12:44:43.417781 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7wd9k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"046bb01b-89ef-40e9-bbbd-83b5f2d2cf96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e97a7a3de02627680f94659705ad21fb73a76a1c11f14ebff8ec48ef204ea753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h4zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7wd9k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:43Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:43 crc kubenswrapper[4844]: I0126 12:44:43.437524 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:43 crc kubenswrapper[4844]: I0126 12:44:43.437562 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:43 crc kubenswrapper[4844]: I0126 12:44:43.437572 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:43 crc kubenswrapper[4844]: I0126 12:44:43.437585 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:43 crc kubenswrapper[4844]: I0126 12:44:43.437609 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:43Z","lastTransitionTime":"2026-01-26T12:44:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:43 crc kubenswrapper[4844]: I0126 12:44:43.440452 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2c6313f-dd44-4db9-a53a-63c8a42efe6c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b82658e4e6ddbf4e3a16f9d569c5cef6a683d401d79b7fa7f55aad8c70e8254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66ffcb99cad7e5aff2bbe4bdf3c0a2365dd91fb40b2382e05ab1f422c6ea2b26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f49e7dd36dc88342bc2e3bc43fc5770fb43572151945cba3867f9ffd7fa451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b51b34a870f5c4045729aa5ee5d5fb85ebd58fd55a25c5b6effe0ea75ea8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9118772906fa09f9868a34942401b399a842e6718caa3d82b78b2974e1011cca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa0e17b53055bd229c52c53b69e491e0789569e73004101378a34609888bbbc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa0e17b53055bd229c52c53b69e491e0789569e73004101378a34609888bbbc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a99b81e3524dfff792c3e9a274f2460edfe7c29fb27c556fa1c7e427178ccd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termi
nated\\\":{\\\"containerID\\\":\\\"cri-o://96a99b81e3524dfff792c3e9a274f2460edfe7c29fb27c556fa1c7e427178ccd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1cd347949870be709220badbf116b4ed8ff9db3962a03361e42f5ff1c61eb5ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cd347949870be709220badbf116b4ed8ff9db3962a03361e42f5ff1c61eb5ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:43Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:43 crc kubenswrapper[4844]: I0126 12:44:43.454990 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd60ce73390532e974f85d610708534f2ab6bcc38e93c54516cb783aca1bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:43Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:43 crc kubenswrapper[4844]: I0126 12:44:43.466418 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-94bpf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"14600b66-6352-4f5e-9c09-eb2548503555\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1eb448e6788cdb87cd78364d442623294abe1263dbedbf9d15e8ca77c06ced2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-456bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-94bpf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:43Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:43 crc kubenswrapper[4844]: I0126 12:44:43.476994 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5qpr8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"04a4b371-44a9-4805-b60f-6f7ba0fac40b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://916da455e82003f3effd3be11a50a90b25232fc7d11d06285e8902a0a3cfd10e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7ckr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b598bd3381ec5062c126c04857c188ab29afc34c39ec94a2cd95b306cdfd00d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7ckr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5qpr8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:43Z is after 2025-08-24T17:21:41Z" Jan 26 
12:44:43 crc kubenswrapper[4844]: I0126 12:44:43.487904 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gxnj7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c69496f6-7f67-4cca-9c9f-420e5567b165\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxt6m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxt6m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gxnj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:43Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:43 crc kubenswrapper[4844]: I0126 12:44:43.499343 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aecfc1fc-7b8c-42f4-9a7b-058a0acc9534\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64c5d1e4b1fda825b451c8acde0411694fbf7345723d6c3927905a882a79d62e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e33100e90b0d5a23baf7c27076a19dbca18d3e067be6b5a867e122fac37c1136\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64ae899d3a38090964f3d951fd185537ce4ef75f7512342da01f67767d00417b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9207454bb697abf7b64a37e402b50734ae419605cd0938b514ac4f2a74561ce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b40515c0e8056b01e4d48e259ece9389fbb5db6c0d403f2846f28948ce78b0b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64a2d4054650382dca2f56688356aec5fa475b49958e86dd4597a64f77a82f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64a2d4054650382dca2f56688356aec5fa475b49958e86dd4597a64f77a82f81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:43Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:43 crc kubenswrapper[4844]: I0126 12:44:43.513391 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ea4f5b3-1307-4259-8ee3-1de62370d8ef\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://943771df5941e9c5a48c2d8ee946cc9ac1ee7b00855dfe531fbce9f76450dd36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c79791a1a5ceea7564b6466722f4fb48a6729184724efc0ca2498896def357a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e62359ee71a87d75c73a16de09de5ea48d09e4cf2041f434f38f97002a5f8eee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b1fb5af67b83d9ecc627cbfcc14d011b4b11a5c4652a35c1bd1df6f03ab71f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:43Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:43 crc kubenswrapper[4844]: I0126 12:44:43.525689 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"012dd78d-465b-41aa-b845-5cd178650e56\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0a80313396de8bb91760bdf2477da9d233e2387d1ac6addcce62acc4578772c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11638189cf49baf0798a3c7a229b67e05eedf2292d79f884a990a091f21a61c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb28b05d43134d8c4f89d83cd620973c937fd16347910ebf056026f0a3708a92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a49b5118144d900070aa5f3fccea16c73f8e2451f3636fdab388c957d4822e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a49b5118144d900070aa5f3fccea16c73f8e2451f3636fdab388c957d4822e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:43Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:43 crc kubenswrapper[4844]: I0126 12:44:43.537794 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:43Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:43 crc kubenswrapper[4844]: I0126 12:44:43.539655 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:43 crc kubenswrapper[4844]: I0126 12:44:43.539691 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:43 crc kubenswrapper[4844]: I0126 12:44:43.539704 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:43 crc kubenswrapper[4844]: I0126 12:44:43.539717 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:43 crc kubenswrapper[4844]: I0126 12:44:43.539726 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:43Z","lastTransitionTime":"2026-01-26T12:44:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:43 crc kubenswrapper[4844]: I0126 12:44:43.552313 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://708e5cab2c0d22cab1f1e5c02d0f8afbdc0fcdccc9a56b663f365bfa4c7f8709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f61077567553b104f052a133a6227c3a887430e971721e7998259e7e7ea7edf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:43Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:43 crc kubenswrapper[4844]: I0126 12:44:43.565611 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:43Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:43 crc kubenswrapper[4844]: I0126 12:44:43.642381 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:43 crc kubenswrapper[4844]: I0126 12:44:43.642768 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:43 crc kubenswrapper[4844]: I0126 12:44:43.642784 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:43 crc kubenswrapper[4844]: I0126 12:44:43.642825 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:43 crc kubenswrapper[4844]: I0126 12:44:43.642838 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:43Z","lastTransitionTime":"2026-01-26T12:44:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:43 crc kubenswrapper[4844]: I0126 12:44:43.664536 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rlvx4_348a2956-fe61-43b9-858f-ab9c97a2985b/ovnkube-controller/1.log" Jan 26 12:44:43 crc kubenswrapper[4844]: I0126 12:44:43.667030 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" event={"ID":"348a2956-fe61-43b9-858f-ab9c97a2985b","Type":"ContainerStarted","Data":"726bf5201f734836c4fb01a9d5a0cb8897f5ec3142e9b54a6acfe5a82a14df5f"} Jan 26 12:44:43 crc kubenswrapper[4844]: I0126 12:44:43.667521 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" Jan 26 12:44:43 crc kubenswrapper[4844]: I0126 12:44:43.681124 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aecfc1fc-7b8c-42f4-9a7b-058a0acc9534\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64c5d1e4b1fda825b451c8acde0411694fbf7345723d6c3927905a882a79d62e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e33100e90b0d5a23baf7c27076a19dbca18d3e067be6b5a867e122fac37c1136\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64ae899d3
a38090964f3d951fd185537ce4ef75f7512342da01f67767d00417b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9207454bb697abf7b64a37e402b50734ae419605cd0938b514ac4f2a74561ce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b40515c0e8056b01e4d48e259ece9389fbb5db6c0d403f2846f28948ce78b0b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64a2d4054650382dca2f56688356aec5fa475b49958e86dd4597a64f77a82f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64a2d4054650382dca2f56688356aec5fa475b49958e86dd4597a64f77a82f81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:43Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:43 crc kubenswrapper[4844]: I0126 12:44:43.694123 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ea4f5b3-1307-4259-8ee3-1de62370d8ef\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://943771df5941e9c5a48c2d8ee946cc9ac1ee7b00855dfe531fbce9f76450dd36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c79791a1a5ceea7564b6466722f4fb48a6729184724efc0ca2498896def357a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e62359ee71a87d75c73a16de09de5ea48d09e4cf2041f434f38f97002a5f8eee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCo
unt\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b1fb5af67b83d9ecc627cbfcc14d011b4b11a5c4652a35c1bd1df6f03ab71f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:43Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:43 crc kubenswrapper[4844]: I0126 12:44:43.704740 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"012dd78d-465b-41aa-b845-5cd178650e56\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0a80313396de8bb91760bdf2477da9d233e2387d1ac6addcce62acc4578772c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11638189cf49baf0798a3c7a229b67e05eedf2292d79f884a990a091f21a61c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb28b05d43134d8c4f89d83cd620973c937fd16347910ebf056026f0a3708a92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a49b5118144d900070aa5f3fccea16c73f8e2451f3636fdab388c957d4822e5\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a49b5118144d900070aa5f3fccea16c73f8e2451f3636fdab388c957d4822e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:43Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:43 crc kubenswrapper[4844]: I0126 12:44:43.715708 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:43Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:43 crc kubenswrapper[4844]: I0126 12:44:43.727942 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://708e5cab2c0d22cab1f1e5c02d0f8afbdc0fcdccc9a56b663f365bfa4c7f8709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f61077567553b104f052a133a6227c3a887430e971721e7998259e7e7ea7edf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:43Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:43 crc kubenswrapper[4844]: I0126 12:44:43.739344 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:43Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:43 crc kubenswrapper[4844]: I0126 12:44:43.744907 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:43 crc kubenswrapper[4844]: I0126 12:44:43.744943 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:43 crc kubenswrapper[4844]: I0126 12:44:43.744952 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:43 crc kubenswrapper[4844]: I0126 12:44:43.744967 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:43 crc kubenswrapper[4844]: I0126 12:44:43.744976 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:43Z","lastTransitionTime":"2026-01-26T12:44:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:43 crc kubenswrapper[4844]: I0126 12:44:43.752332 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:43Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:43 crc kubenswrapper[4844]: I0126 12:44:43.764130 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c6cdfcbb27c9a8c515f38e43d4813f25d2d3781a322e424e09ab047259bd0e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:43Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:43 crc kubenswrapper[4844]: I0126 12:44:43.776563 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zb9kx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"467433a4-64be-4a14-beb2-657370e9865f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a01598dd996bd69b470e2e4833ea4231bc77f0305a3a7fec3f70b8e2b8f01cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v76sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zb9kx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:43Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:43 crc kubenswrapper[4844]: I0126 12:44:43.786098 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3602fc7-397b-4d73-ab0c-45acc047397b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25282e97c34daf17d2fdac6ba7c074c611ec8df85288f5753bf98a3c1add5afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29xcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c964e0f9d13a855d738b028f8a2bed32fb23f4a05f0f0222a7d24e3222f44b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29xcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-j7r9j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:43Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:43 crc kubenswrapper[4844]: I0126 12:44:43.801483 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f6ttt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0ad2def-b040-48db-be8a-19f66df2c0f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a065fe1dc7d374bbe86c5012d0f224285e08e6b38a8eeb9fcdc76d684162934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://489205a3983f27152122db352c7a35ccdb12f14bb7b4c2bf3642f5c99699d745\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://489205a3983f27152122db352c7a35ccdb12f14bb7b4c2bf3642f5c99699d745\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc3c4c5df7dc10f646a7fdcf0d70daccac7a5bfc3ec9772a4b6d9aa4305dcc5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc3c4c5df7dc10f646a7fdcf0d70daccac7a5bfc3ec9772a4b6d9aa4305dcc5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74838e6328e419da6cc94d4f2781197e94cbf74aaf06d88c6d0e39eda967fede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74838e6328e419da6cc94d4f2781197e94cbf74aaf06d88c6d0e39eda967fede\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4821615afe8834b864a0c33ad5733eef0b5247da50e1e945cab1c75f9828aa1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://4821615afe8834b864a0c33ad5733eef0b5247da50e1e945cab1c75f9828aa1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c800e6a3e1513d6bc5ab5a54c8d918f2c81b5a4d2631fc36f5f7a29fcda3d220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c800e6a3e1513d6bc5ab5a54c8d918f2c81b5a4d2631fc36f5f7a29fcda3d220\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://810435fd1610f5f81dc70f7c0d455c11b414c37e9e392dec8eb3b4668c7820d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://810435fd1610f5f81dc70f7c0d455c11b414c37e9e392dec8eb3b4668c7820d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f6ttt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:43Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:43 crc kubenswrapper[4844]: I0126 12:44:43.819132 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"348a2956-fe61-43b9-858f-ab9c97a2985b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7af767e8f476993661bb886e421e7dd2fbd11c07865d6e56a8595425fe4265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a97ef8aa10641683ab96943f4c1a54452ee34b28c854dd7902fc075a70e7f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/va
r/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de371086349f06d5762291b0be4ba366d69376f96d55c2c781c414d9acaff745\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2cfd5586a3073d55abd3f41fb92679c4f86ea8caeea2d9a8ca42515277750a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a16b2c8da5ad6d42bd5ce77ca31794858e04a05060c5bb0538dd099b1d5dd6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03f4ffe2ba632bdfb8c45133ef267a29fa660e3aa85dfd99934ded60d32
772f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://726bf5201f734836c4fb01a9d5a0cb8897f5ec3142e9b54a6acfe5a82a14df5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5c096d3b202896c2e8ae2acf2cbaf2131e2eba775a4bd481112ebd76d974d84\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T12:44:26Z\\\",\\\"message\\\":\\\"ft-network-diagnostics/network-check-target-xd92c in node crc\\\\nI0126 12:44:26.471001 6272 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]} options:{GoMap:map[iface-id-ver:9d751cbb-f2e2-430d-9754-c882a5e924a5 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {960d98b2-dc64-4e93-a4b6-9b19847af71e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 12:44:26.471052 6272 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nI0126 12:44:26.471060 6272 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0126 12:44:26.471100 6272 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbe92fbc9ce1d23484a9dca7eddb878b59da1fbe669098a82a798d41ee52e61d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://370a02984d12e09b879a5f3af8d9fe26b40d438d558e51c2d352676f29198a02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://370a02984d12e09b879a5f3af8d9fe26b40d438d558e51c2d352676f29198a02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rlvx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:43Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:43 crc kubenswrapper[4844]: I0126 12:44:43.829937 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7wd9k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"046bb01b-89ef-40e9-bbbd-83b5f2d2cf96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e97a7a3de02627680f94659705ad21fb73a76a1c11f14ebff8ec48ef204ea753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h4zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7wd9k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:43Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:43 crc kubenswrapper[4844]: I0126 12:44:43.847326 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:43 crc kubenswrapper[4844]: I0126 12:44:43.847378 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:43 crc kubenswrapper[4844]: I0126 12:44:43.847387 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:43 crc kubenswrapper[4844]: I0126 12:44:43.847403 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:43 crc kubenswrapper[4844]: I0126 12:44:43.847411 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:43Z","lastTransitionTime":"2026-01-26T12:44:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:43 crc kubenswrapper[4844]: I0126 12:44:43.851556 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2c6313f-dd44-4db9-a53a-63c8a42efe6c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b82658e4e6ddbf4e3a16f9d569c5cef6a683d401d79b7fa7f55aad8c70e8254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66ffcb99cad7e5aff2bbe4bdf3c0a2365dd91fb40b2382e05ab1f422c6ea2b26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f49e7dd36dc88342bc2e3bc43fc5770fb43572151945cba3867f9ffd7fa451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b51b34a870f5c4045729aa5ee5d5fb85ebd58fd55a25c5b6effe0ea75ea8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9118772906fa09f9868a34942401b399a842e6718caa3d82b78b2974e1011cca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa0e17b53055bd229c52c53b69e491e0789569e73004101378a34609888bbbc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa0e17b53055bd229c52c53b69e491e0789569e73004101378a34609888bbbc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a99b81e3524dfff792c3e9a274f2460edfe7c29fb27c556fa1c7e427178ccd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termi
nated\\\":{\\\"containerID\\\":\\\"cri-o://96a99b81e3524dfff792c3e9a274f2460edfe7c29fb27c556fa1c7e427178ccd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1cd347949870be709220badbf116b4ed8ff9db3962a03361e42f5ff1c61eb5ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cd347949870be709220badbf116b4ed8ff9db3962a03361e42f5ff1c61eb5ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:43Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:43 crc kubenswrapper[4844]: I0126 12:44:43.865406 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd60ce73390532e974f85d610708534f2ab6bcc38e93c54516cb783aca1bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:43Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:43 crc kubenswrapper[4844]: I0126 12:44:43.878787 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-94bpf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"14600b66-6352-4f5e-9c09-eb2548503555\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1eb448e6788cdb87cd78364d442623294abe1263dbedbf9d15e8ca77c06ced2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-456bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-94bpf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:43Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:43 crc kubenswrapper[4844]: I0126 12:44:43.891649 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5qpr8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"04a4b371-44a9-4805-b60f-6f7ba0fac40b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://916da455e82003f3effd3be11a50a90b25232fc7d11d06285e8902a0a3cfd10e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7ckr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b598bd3381ec5062c126c04857c188ab29afc34c39ec94a2cd95b306cdfd00d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7ckr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5qpr8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:43Z is after 2025-08-24T17:21:41Z" Jan 26 
12:44:43 crc kubenswrapper[4844]: I0126 12:44:43.905678 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gxnj7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c69496f6-7f67-4cca-9c9f-420e5567b165\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxt6m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxt6m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gxnj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:43Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:43 crc kubenswrapper[4844]: I0126 12:44:43.950478 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:43 crc kubenswrapper[4844]: I0126 12:44:43.950560 4844 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:43 crc kubenswrapper[4844]: I0126 12:44:43.950582 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:43 crc kubenswrapper[4844]: I0126 12:44:43.950662 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:43 crc kubenswrapper[4844]: I0126 12:44:43.950698 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:43Z","lastTransitionTime":"2026-01-26T12:44:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:44 crc kubenswrapper[4844]: I0126 12:44:44.053708 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:44 crc kubenswrapper[4844]: I0126 12:44:44.053782 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:44 crc kubenswrapper[4844]: I0126 12:44:44.053796 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:44 crc kubenswrapper[4844]: I0126 12:44:44.053828 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:44 crc kubenswrapper[4844]: I0126 12:44:44.053844 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:44Z","lastTransitionTime":"2026-01-26T12:44:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:44 crc kubenswrapper[4844]: I0126 12:44:44.162049 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:44 crc kubenswrapper[4844]: I0126 12:44:44.162098 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:44 crc kubenswrapper[4844]: I0126 12:44:44.162111 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:44 crc kubenswrapper[4844]: I0126 12:44:44.162125 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:44 crc kubenswrapper[4844]: I0126 12:44:44.162135 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:44Z","lastTransitionTime":"2026-01-26T12:44:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:44 crc kubenswrapper[4844]: I0126 12:44:44.265507 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:44 crc kubenswrapper[4844]: I0126 12:44:44.265569 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:44 crc kubenswrapper[4844]: I0126 12:44:44.265587 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:44 crc kubenswrapper[4844]: I0126 12:44:44.265660 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:44 crc kubenswrapper[4844]: I0126 12:44:44.265682 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:44Z","lastTransitionTime":"2026-01-26T12:44:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:44 crc kubenswrapper[4844]: I0126 12:44:44.304413 4844 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 22:48:28.856359029 +0000 UTC Jan 26 12:44:44 crc kubenswrapper[4844]: I0126 12:44:44.312834 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gxnj7" Jan 26 12:44:44 crc kubenswrapper[4844]: E0126 12:44:44.313038 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gxnj7" podUID="c69496f6-7f67-4cca-9c9f-420e5567b165" Jan 26 12:44:44 crc kubenswrapper[4844]: I0126 12:44:44.368473 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:44 crc kubenswrapper[4844]: I0126 12:44:44.368520 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:44 crc kubenswrapper[4844]: I0126 12:44:44.368531 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:44 crc kubenswrapper[4844]: I0126 12:44:44.368545 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:44 crc kubenswrapper[4844]: I0126 12:44:44.368555 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:44Z","lastTransitionTime":"2026-01-26T12:44:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:44 crc kubenswrapper[4844]: I0126 12:44:44.477086 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:44 crc kubenswrapper[4844]: I0126 12:44:44.477168 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:44 crc kubenswrapper[4844]: I0126 12:44:44.477187 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:44 crc kubenswrapper[4844]: I0126 12:44:44.477214 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:44 crc kubenswrapper[4844]: I0126 12:44:44.477232 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:44Z","lastTransitionTime":"2026-01-26T12:44:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:44 crc kubenswrapper[4844]: I0126 12:44:44.580990 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:44 crc kubenswrapper[4844]: I0126 12:44:44.581039 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:44 crc kubenswrapper[4844]: I0126 12:44:44.581053 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:44 crc kubenswrapper[4844]: I0126 12:44:44.581067 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:44 crc kubenswrapper[4844]: I0126 12:44:44.581076 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:44Z","lastTransitionTime":"2026-01-26T12:44:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:44 crc kubenswrapper[4844]: I0126 12:44:44.673417 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rlvx4_348a2956-fe61-43b9-858f-ab9c97a2985b/ovnkube-controller/2.log" Jan 26 12:44:44 crc kubenswrapper[4844]: I0126 12:44:44.674101 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rlvx4_348a2956-fe61-43b9-858f-ab9c97a2985b/ovnkube-controller/1.log" Jan 26 12:44:44 crc kubenswrapper[4844]: I0126 12:44:44.677072 4844 generic.go:334] "Generic (PLEG): container finished" podID="348a2956-fe61-43b9-858f-ab9c97a2985b" containerID="726bf5201f734836c4fb01a9d5a0cb8897f5ec3142e9b54a6acfe5a82a14df5f" exitCode=1 Jan 26 12:44:44 crc kubenswrapper[4844]: I0126 12:44:44.677111 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" event={"ID":"348a2956-fe61-43b9-858f-ab9c97a2985b","Type":"ContainerDied","Data":"726bf5201f734836c4fb01a9d5a0cb8897f5ec3142e9b54a6acfe5a82a14df5f"} Jan 26 12:44:44 crc kubenswrapper[4844]: I0126 12:44:44.677146 4844 scope.go:117] "RemoveContainer" containerID="d5c096d3b202896c2e8ae2acf2cbaf2131e2eba775a4bd481112ebd76d974d84" Jan 26 12:44:44 crc kubenswrapper[4844]: I0126 12:44:44.677945 4844 scope.go:117] "RemoveContainer" containerID="726bf5201f734836c4fb01a9d5a0cb8897f5ec3142e9b54a6acfe5a82a14df5f" Jan 26 12:44:44 crc kubenswrapper[4844]: E0126 12:44:44.678124 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-rlvx4_openshift-ovn-kubernetes(348a2956-fe61-43b9-858f-ab9c97a2985b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" podUID="348a2956-fe61-43b9-858f-ab9c97a2985b" Jan 26 12:44:44 crc kubenswrapper[4844]: I0126 12:44:44.685338 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:44 crc kubenswrapper[4844]: I0126 12:44:44.685381 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:44 crc kubenswrapper[4844]: I0126 12:44:44.685393 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:44 crc kubenswrapper[4844]: I0126 12:44:44.685412 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:44 crc kubenswrapper[4844]: I0126 12:44:44.685425 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:44Z","lastTransitionTime":"2026-01-26T12:44:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:44 crc kubenswrapper[4844]: I0126 12:44:44.692741 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5qpr8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04a4b371-44a9-4805-b60f-6f7ba0fac40b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://916da455e82003f3effd3be11a50a90b25232fc7d11d06285e8902a0a3cfd10e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7ckr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b598bd3381ec5062c126c04857c188ab29afc34c39ec94a2cd95b306cdfd00d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7ckr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5qpr8\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:44Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:44 crc kubenswrapper[4844]: I0126 12:44:44.709039 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gxnj7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c69496f6-7f67-4cca-9c9f-420e5567b165\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxt6m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxt6m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gxnj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:44Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:44 crc 
kubenswrapper[4844]: I0126 12:44:44.728927 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2c6313f-dd44-4db9-a53a-63c8a42efe6c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b82658e4e6ddbf4e3a16f9d569c5cef6a683d401d79b7fa7f55aad8c70e8254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66ffcb99cad7e5aff2bbe4bdf3c0a2365dd91fb40b2382e05ab1f422c6ea2b26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f49e7dd36dc88342bc2e3bc43fc5770fb43572151945cba3867f9ffd7fa451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\
\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b51b34a870f5c4045729aa5ee5d5fb85ebd58fd55a25c5b6effe0ea75ea8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9118772906fa09f9868a34942401b399a842e6718caa3d82b78b2974e1011cca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa0e17b53055bd229c52c53b69e491e0789569e73004101378a34609888bbbc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa0e17b53055bd229c52c53b69e491e0789569e73004101378a34609888bbbc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a99b81e3524dfff792c3e9a274f2460edfe7c29fb27c556fa1c7e427178ccd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a99b81e3524dfff792c3e9a274f2460edfe7c29fb27c556fa1c7e427178ccd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1cd347949870be709220badbf116b4ed8ff9db3962a03361e42f5ff1c61eb5ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cd347949870be709220badbf116b4ed8ff9db3962a03361e42f5ff1c61eb5ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:44Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:44 crc kubenswrapper[4844]: I0126 12:44:44.746988 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd60ce73390532e974f85d610708534f2ab6bcc38e93c54516cb783aca1bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:44Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:44 crc kubenswrapper[4844]: I0126 12:44:44.759307 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-94bpf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"14600b66-6352-4f5e-9c09-eb2548503555\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1eb448e6788cdb87cd78364d442623294abe1263dbedbf9d15e8ca77c06ced2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-456bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-94bpf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:44Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:44 crc kubenswrapper[4844]: I0126 12:44:44.773512 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:44Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:44 crc kubenswrapper[4844]: I0126 12:44:44.789966 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://708e5cab2c0d22cab1f1e5c02d0f8afbdc0fcdccc9a56b663f365bfa4c7f8709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f61077567553b104f052a133a6227c3a887430e971721e7998259e7e7ea7edf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:44Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:44 crc kubenswrapper[4844]: I0126 12:44:44.790786 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:44 crc kubenswrapper[4844]: I0126 12:44:44.790834 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:44 crc kubenswrapper[4844]: I0126 12:44:44.790854 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:44 crc kubenswrapper[4844]: I0126 12:44:44.790877 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:44 crc kubenswrapper[4844]: I0126 12:44:44.790903 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:44Z","lastTransitionTime":"2026-01-26T12:44:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:44 crc kubenswrapper[4844]: I0126 12:44:44.803235 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:44Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:44 crc kubenswrapper[4844]: I0126 12:44:44.819860 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aecfc1fc-7b8c-42f4-9a7b-058a0acc9534\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64c5d1e4b1fda825b451c8acde0411694fbf7345723d6c3927905a882a79d62e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e33100e90b0d5a23baf7c27076a19dbca18d3e067be6b5a867e122fac37c1136\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64ae899d3a38090964f3d951fd185537ce4ef75f7512342da01f67767d00417b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9207454bb697abf7b64a37e402b50734ae419605cd0938b514ac4f2a74561ce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b40515c0e8056b01e4d48e259ece9389fbb5db6c0d403f2846f28948ce78b0b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64a2d4054650382dca2f56688356aec5fa475b49958e86dd4597a64f77a82f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64a2d4054650382dca2f56688356aec5fa475b49958e86dd4597a64f77a82f81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:44Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:44 crc kubenswrapper[4844]: I0126 12:44:44.838711 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ea4f5b3-1307-4259-8ee3-1de62370d8ef\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://943771df5941e9c5a48c2d8ee946cc9ac1ee7b00855dfe531fbce9f76450dd36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c79791a1a5ceea7564b6466722f4fb48a6729184724efc0ca2498896def357a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e62359ee71a87d75c73a16de09de5ea48d09e4cf2041f434f38f97002a5f8eee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b1fb5af67b83d9ecc627cbfcc14d011b4b11a5c4652a35c1bd1df6f03ab71f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:44Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:44 crc kubenswrapper[4844]: I0126 12:44:44.849257 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"012dd78d-465b-41aa-b845-5cd178650e56\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0a80313396de8bb91760bdf2477da9d233e2387d1ac6addcce62acc4578772c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11638189cf49baf0798a3c7a229b67e05eedf2292d79f884a990a091f21a61c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb28b05d43134d8c4f89d83cd620973c937fd16347910ebf056026f0a3708a92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a49b5118144d900070aa5f3fccea16c73f8e2451f3636fdab388c957d4822e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a49b5118144d900070aa5f3fccea16c73f8e2451f3636fdab388c957d4822e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:44Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:44 crc kubenswrapper[4844]: I0126 12:44:44.861994 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:44Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:44 crc kubenswrapper[4844]: I0126 12:44:44.883115 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c6cdfcbb27c9a8c515f38e43d4813f25d2d3781a322e424e09ab047259bd0e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:44Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:44 crc kubenswrapper[4844]: I0126 12:44:44.893482 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:44 crc kubenswrapper[4844]: I0126 12:44:44.893530 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:44 crc kubenswrapper[4844]: I0126 12:44:44.893548 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:44 crc kubenswrapper[4844]: I0126 12:44:44.893573 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:44 crc kubenswrapper[4844]: I0126 12:44:44.893591 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:44Z","lastTransitionTime":"2026-01-26T12:44:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:44 crc kubenswrapper[4844]: I0126 12:44:44.898374 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f6ttt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0ad2def-b040-48db-be8a-19f66df2c0f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a065fe1dc7d374bbe86c5012d0f224285e08e6b38a8eeb9fcdc76d684162934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://489205a3983f27152122db352c7a35ccdb12f14bb7b4c2bf3642f5c99699d745\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://489205a3983f27152122db352c7a35ccdb12f14bb7b4c2bf3642f5c99699d745\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc3c4c5df7dc10f646a7fdcf0d70daccac7a5bfc3ec9772a4b6d9aa4305dcc5c\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc3c4c5df7dc10f646a7fdcf0d70daccac7a5bfc3ec9772a4b6d9aa4305dcc5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74838e6328e419da6cc94d4f2781197e94cbf74aaf06d88c6d0e39eda967fede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74838e6328e419da6cc94d4f2781197e94cbf74aaf06d88c6d0e39eda967fede\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4821615afe8834b864a0c33ad5733eef0b5247da50e1e945cab1c75f9828aa1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4821615afe8834b864a0c33ad5733eef0b5247da50e1e945cab1c75f9828aa1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c800e6a3e1513d6bc5ab5a54c8d918f2c81b5a4d2631fc36f5f7a29fcda3d220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c800e6a3e1513d6bc5ab5a54c8d918f2c81b5a4d2631fc36f5f7a29fcda3d220\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://810435fd1610f5f81dc70f7c0d455c11b414c37e9e392dec8eb3b4668c7820d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://810435fd1610f5f81dc70f7c0d455c11b414c37e9e392dec8eb3b4668c7820d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f6ttt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:44Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:44 crc kubenswrapper[4844]: I0126 12:44:44.929574 4844 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"348a2956-fe61-43b9-858f-ab9c97a2985b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7af767e8f476993661bb886e421e7dd2fbd11c07865d6e56a8595425fe4265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a97ef8aa10641683ab96943f4c1a54452ee34b28c854dd7902fc075a70e7f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de371086349f06d5762291b0be4ba366d69376f96d55c2c781c414d9acaff745\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36c
dd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2cfd5586a3073d55abd3f41fb92679c4f86ea8caeea2d9a8ca42515277750a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a16b2c8da5ad6d42bd5ce77ca31794858e04a05060c5bb0538dd099b1d5dd6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03f4ffe2ba632bdfb8c45133ef267a29fa660e3aa85dfd99934ded60d32772f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-con
troller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://726bf5201f734836c4fb01a9d5a0cb8897f5ec3142e9b54a6acfe5a82a14df5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5c096d3b202896c2e8ae2acf2cbaf2131e2eba775a4bd481112ebd76d974d84\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T12:44:26Z\\\",\\\"message\\\":\\\"ft-network-diagnostics/network-check-target-xd92c in node crc\\\\nI0126 12:44:26.471001 6272 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]} options:{GoMap:map[iface-id-ver:9d751cbb-f2e2-430d-9754-c882a5e924a5 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {960d98b2-dc64-4e93-a4b6-9b19847af71e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 12:44:26.471052 6272 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nI0126 12:44:26.471060 6272 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0126 12:44:26.471100 6272 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726bf5201f734836c4fb01a9d5a0cb8897f5ec3142e9b54a6acfe5a82a14df5f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T12:44:44Z\\\",\\\"message\\\":\\\"none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.93:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d71b38eb-32af-4c0f-9490-7c317c111e3a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 12:44:43.613387 6501 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nI0126 12:44:43.613343 6501 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-apiserver/apiserver]} name:Service_openshift-kube-apiserver/apiserver_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.93:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d71b38eb-32af-4c0f-9490-7c317c111e3a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 12:44:43.613431 6501 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0126 12:44:43.613493 6501 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbe92fbc9ce1d23484a9dca7eddb878b59da1fbe669098a82a798d41ee52e61d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://370a02984d12e09b879a5f3af8d9fe26b40d438d558e51c2d352676f29198a02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://370a02984d12e09b879a5f3af8d9fe26b40d438d558e51c2d352676f29198a02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rlvx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:44Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:44 crc kubenswrapper[4844]: I0126 12:44:44.944053 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7wd9k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"046bb01b-89ef-40e9-bbbd-83b5f2d2cf96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e97a7a3de02627680f94659705ad21fb73a76a1c11f14ebff8ec48ef204ea753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h4zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.16
8.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7wd9k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:44Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:44 crc kubenswrapper[4844]: I0126 12:44:44.962406 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zb9kx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"467433a4-64be-4a14-beb2-657370e9865f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a01598dd996bd69b470e2e4833ea4231bc77f0305a3a7fec3f70b8e2b8f01cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v76sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zb9kx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:44Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:44 crc kubenswrapper[4844]: I0126 12:44:44.979095 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3602fc7-397b-4d73-ab0c-45acc047397b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25282e97c34daf17d2fdac6ba7c074c611ec8df85288f5753bf98a3c1add5afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29xcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c964e0f9d13a855d738b028f8a2bed32fb23f4a05f0f0222a7d24e3222f44b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c
28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29xcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j7r9j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:44Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:44 crc kubenswrapper[4844]: I0126 12:44:44.997012 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:44 crc kubenswrapper[4844]: I0126 12:44:44.997060 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:44 crc kubenswrapper[4844]: I0126 12:44:44.997077 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:44 crc kubenswrapper[4844]: I0126 12:44:44.997099 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:44 crc kubenswrapper[4844]: I0126 12:44:44.997116 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:44Z","lastTransitionTime":"2026-01-26T12:44:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:45 crc kubenswrapper[4844]: I0126 12:44:45.099507 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:45 crc kubenswrapper[4844]: I0126 12:44:45.099650 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:45 crc kubenswrapper[4844]: I0126 12:44:45.099687 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:45 crc kubenswrapper[4844]: I0126 12:44:45.099717 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:45 crc kubenswrapper[4844]: I0126 12:44:45.099735 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:45Z","lastTransitionTime":"2026-01-26T12:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:45 crc kubenswrapper[4844]: I0126 12:44:45.202183 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:45 crc kubenswrapper[4844]: I0126 12:44:45.202431 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:45 crc kubenswrapper[4844]: I0126 12:44:45.202530 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:45 crc kubenswrapper[4844]: I0126 12:44:45.202631 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:45 crc kubenswrapper[4844]: I0126 12:44:45.202721 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:45Z","lastTransitionTime":"2026-01-26T12:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:45 crc kubenswrapper[4844]: I0126 12:44:45.304915 4844 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 01:00:31.353258343 +0000 UTC Jan 26 12:44:45 crc kubenswrapper[4844]: I0126 12:44:45.306179 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:45 crc kubenswrapper[4844]: I0126 12:44:45.306239 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:45 crc kubenswrapper[4844]: I0126 12:44:45.306252 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:45 crc kubenswrapper[4844]: I0126 12:44:45.306270 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:45 crc kubenswrapper[4844]: I0126 12:44:45.306286 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:45Z","lastTransitionTime":"2026-01-26T12:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:45 crc kubenswrapper[4844]: I0126 12:44:45.313006 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:44:45 crc kubenswrapper[4844]: E0126 12:44:45.313138 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 12:44:45 crc kubenswrapper[4844]: I0126 12:44:45.313179 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:44:45 crc kubenswrapper[4844]: I0126 12:44:45.313220 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:44:45 crc kubenswrapper[4844]: E0126 12:44:45.313254 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 12:44:45 crc kubenswrapper[4844]: E0126 12:44:45.313381 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 12:44:45 crc kubenswrapper[4844]: I0126 12:44:45.409545 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:45 crc kubenswrapper[4844]: I0126 12:44:45.409585 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:45 crc kubenswrapper[4844]: I0126 12:44:45.409609 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:45 crc kubenswrapper[4844]: I0126 12:44:45.409626 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:45 crc kubenswrapper[4844]: I0126 12:44:45.409637 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:45Z","lastTransitionTime":"2026-01-26T12:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:45 crc kubenswrapper[4844]: I0126 12:44:45.515253 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:45 crc kubenswrapper[4844]: I0126 12:44:45.515290 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:45 crc kubenswrapper[4844]: I0126 12:44:45.515300 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:45 crc kubenswrapper[4844]: I0126 12:44:45.515315 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:45 crc kubenswrapper[4844]: I0126 12:44:45.515325 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:45Z","lastTransitionTime":"2026-01-26T12:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:45 crc kubenswrapper[4844]: I0126 12:44:45.617828 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:45 crc kubenswrapper[4844]: I0126 12:44:45.617868 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:45 crc kubenswrapper[4844]: I0126 12:44:45.617877 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:45 crc kubenswrapper[4844]: I0126 12:44:45.617890 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:45 crc kubenswrapper[4844]: I0126 12:44:45.617899 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:45Z","lastTransitionTime":"2026-01-26T12:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:45 crc kubenswrapper[4844]: I0126 12:44:45.681831 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rlvx4_348a2956-fe61-43b9-858f-ab9c97a2985b/ovnkube-controller/2.log" Jan 26 12:44:45 crc kubenswrapper[4844]: I0126 12:44:45.685690 4844 scope.go:117] "RemoveContainer" containerID="726bf5201f734836c4fb01a9d5a0cb8897f5ec3142e9b54a6acfe5a82a14df5f" Jan 26 12:44:45 crc kubenswrapper[4844]: E0126 12:44:45.685891 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-rlvx4_openshift-ovn-kubernetes(348a2956-fe61-43b9-858f-ab9c97a2985b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" podUID="348a2956-fe61-43b9-858f-ab9c97a2985b" Jan 26 12:44:45 crc kubenswrapper[4844]: I0126 12:44:45.696790 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3602fc7-397b-4d73-ab0c-45acc047397b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25282e97c34daf17d2fdac6ba7c074c611ec8df85288f5753bf98a3c1add5afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29xcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c964e0f9d13a855d738b028f8a2bed32fb23f4a05f0f0222a7d24e3222f44b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\"
:true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29xcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j7r9j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:45Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:45 crc kubenswrapper[4844]: I0126 12:44:45.712660 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f6ttt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0ad2def-b040-48db-be8a-19f66df2c0f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a065fe1dc7d374bbe86c5012d0f224285e08e6b38a8eeb9fcdc76d684162934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://489205a3983f27152122db352c7a35ccdb12f14bb7b4c2bf3642f5c99699d745\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"e
gress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://489205a3983f27152122db352c7a35ccdb12f14bb7b4c2bf3642f5c99699d745\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc3c4c5df7dc10f646a7fdcf0d70daccac7a5bfc3ec9772a4b6d9aa4305dcc5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc3c4c5df7dc10f646a7fdcf0d70daccac7a5bfc3ec9772a4b6d9aa4305dcc5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74838e6328e419da6cc94d4f2781197e94cbf74aaf06d88c6d0e39eda967fede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74838e6328e419da6cc94d4f2781197e94cbf74aaf06d88c6d0e39eda967fede\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acces
s-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4821615afe8834b864a0c33ad5733eef0b5247da50e1e945cab1c75f9828aa1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4821615afe8834b864a0c33ad5733eef0b5247da50e1e945cab1c75f9828aa1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c800e6a3e1513d6bc5ab5a54c8d918f2c81b5a4d2631fc36f5f7a29fcda3d220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c800e6a3e1513d6bc5ab5a54c8d918f2c81b5a4d2631fc36f5f7a29fcda3d220\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://810435fd1610f5f81dc70f7c0d455c11b414c37e9e392dec8eb3b4668c7820d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://810435fd1610f5f81dc70f7c0d455c11b414c37e9e392dec8eb3b4668c7820d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":
\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f6ttt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:45Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:45 crc kubenswrapper[4844]: I0126 12:44:45.720345 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:45 crc kubenswrapper[4844]: I0126 12:44:45.720384 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:45 crc kubenswrapper[4844]: I0126 12:44:45.720395 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:45 crc kubenswrapper[4844]: I0126 12:44:45.720412 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:45 crc kubenswrapper[4844]: I0126 12:44:45.720424 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:45Z","lastTransitionTime":"2026-01-26T12:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:45 crc kubenswrapper[4844]: I0126 12:44:45.730442 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"348a2956-fe61-43b9-858f-ab9c97a2985b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7af767e8f476993661bb886e421e7dd2fbd11c07865d6e56a8595425fe4265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a97ef8aa10641683ab96943f4c1a54452ee34b28c854dd7902fc075a70e7f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://de371086349f06d5762291b0be4ba366d69376f96d55c2c781c414d9acaff745\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2cfd5586a3073d55abd3f41fb92679c4f86ea8caeea2d9a8ca42515277750a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a16b2c8da5ad6d42bd5ce77ca31794858e04a05060c5bb0538dd099b1d5dd6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03f4ffe2ba632bdfb8c45133ef267a29fa660e3aa85dfd99934ded60d32772f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://726bf5201f734836c4fb01a9d5a0cb8897f5ec3142e9b54a6acfe5a82a14df5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726bf5201f734836c4fb01a9d5a0cb8897f5ec3142e9b54a6acfe5a82a14df5f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T12:44:44Z\\\",\\\"message\\\":\\\"none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.93:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d71b38eb-32af-4c0f-9490-7c317c111e3a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 12:44:43.613387 6501 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nI0126 12:44:43.613343 6501 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-apiserver/apiserver]} name:Service_openshift-kube-apiserver/apiserver_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.93:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d71b38eb-32af-4c0f-9490-7c317c111e3a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 12:44:43.613431 6501 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0126 12:44:43.613493 6501 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-rlvx4_openshift-ovn-kubernetes(348a2956-fe61-43b9-858f-ab9c97a2985b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbe92fbc9ce1d23484a9dca7eddb878b59da1fbe669098a82a798d41ee52e61d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recurs
iveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://370a02984d12e09b879a5f3af8d9fe26b40d438d558e51c2d352676f29198a02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://370a02984d12e09b879a5f3af8d9fe26b40d438d558e51c2d352676f29198a02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rlvx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:45Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:45 crc kubenswrapper[4844]: I0126 12:44:45.742431 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7wd9k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"046bb01b-89ef-40e9-bbbd-83b5f2d2cf96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e97a7a3de02627680f94659705ad21fb73a76a1c11f14ebff8ec48ef204ea753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h4zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7wd9k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:45Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:45 crc kubenswrapper[4844]: I0126 12:44:45.760149 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zb9kx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"467433a4-64be-4a14-beb2-657370e9865f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a01598dd996bd69b470e2e4833ea4231bc77f0305a3a7fec3f70b8e2b8f01cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v76sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zb9kx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:45Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:45 crc kubenswrapper[4844]: I0126 12:44:45.771080 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-94bpf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14600b66-6352-4f5e-9c09-eb2548503555\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1eb448e6788cdb87cd78364d442623294abe1263dbedbf9d15e8ca77c06ced2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-456bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-94bpf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:45Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:45 crc kubenswrapper[4844]: I0126 12:44:45.781498 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5qpr8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"04a4b371-44a9-4805-b60f-6f7ba0fac40b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://916da455e82003f3effd3be11a50a90b25232fc7d11d06285e8902a0a3cfd10e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7ckr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b598bd3381ec5062c126c04857c188ab29afc34c39ec94a2cd95b306cdfd00d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7ckr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5qpr8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:45Z is after 2025-08-24T17:21:41Z" Jan 26 
12:44:45 crc kubenswrapper[4844]: I0126 12:44:45.791858 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gxnj7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c69496f6-7f67-4cca-9c9f-420e5567b165\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxt6m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxt6m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gxnj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:45Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:45 crc kubenswrapper[4844]: I0126 12:44:45.814099 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2c6313f-dd44-4db9-a53a-63c8a42efe6c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b82658e4e6ddbf4e3a16f9d569c5cef6a683d401d79b7fa7f55aad8c70e8254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66ffcb99cad7e5aff2bbe4bdf3c0a2365dd91fb40b2382e05ab1f422c6ea2b26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f49e7dd36dc88342bc2e3bc43fc5770fb43572151945cba3867f9ffd7fa451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b51b34a870f5c4045729aa5ee5d5fb85ebd58
fd55a25c5b6effe0ea75ea8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9118772906fa09f9868a34942401b399a842e6718caa3d82b78b2974e1011cca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa0e17b53055bd229c52c53b69e491e0789569e73004101378a34609888bbbc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa0e17b53055bd229c52c53b69e491e0789569e73004101378a34609888bbbc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a99b81e3524dfff792c3e9a274f2460edfe7c29fb27c556fa1c7e427178ccd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a99b81e3524dfff792c3e9a274f2460edfe7c29fb27c556fa1c7e427178ccd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1cd347949870be709220badbf116b4ed8ff9db3962a03361e42f5ff1c61eb5ea\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cd347949870be709220badbf116b4ed8ff9db3962a03361e42f5ff1c61eb5ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:45Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:45 crc kubenswrapper[4844]: I0126 12:44:45.823105 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:45 crc kubenswrapper[4844]: I0126 12:44:45.823137 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:45 crc kubenswrapper[4844]: I0126 12:44:45.823145 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:45 crc kubenswrapper[4844]: I0126 12:44:45.823160 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:45 crc kubenswrapper[4844]: I0126 12:44:45.823172 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:45Z","lastTransitionTime":"2026-01-26T12:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:45 crc kubenswrapper[4844]: I0126 12:44:45.830017 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd60ce73390532e974f85d610708534f2ab6bcc38e93c54516cb783aca1bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:45Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:45 crc kubenswrapper[4844]: I0126 12:44:45.842736 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"012dd78d-465b-41aa-b845-5cd178650e56\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0a80313396de8bb91760bdf2477da9d233e2387d1ac6addcce62acc4578772c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11638189cf49baf0798a3c7a229b67e05eedf2292d79f884a990a091f21a61c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb28b05d43134d8c4f89d83cd620973c937fd16347910ebf056026f0a3708a92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a49b5118144d900070aa5f3fccea16c73f8e2451f3636fdab388c957d4822e5\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a49b5118144d900070aa5f3fccea16c73f8e2451f3636fdab388c957d4822e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:45Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:45 crc kubenswrapper[4844]: I0126 12:44:45.854503 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:45Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:45 crc kubenswrapper[4844]: I0126 12:44:45.865464 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://708e5cab2c0d22cab1f1e5c02d0f8afbdc0fcdccc9a56b663f365bfa4c7f8709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f61077567553b104f052a133a6227c3a887430e971721e7998259e7e7ea7edf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:45Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:45 crc kubenswrapper[4844]: I0126 12:44:45.878155 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:45Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:45 crc kubenswrapper[4844]: I0126 12:44:45.891238 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aecfc1fc-7b8c-42f4-9a7b-058a0acc9534\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64c5d1e4b1fda825b451c8acde0411694fbf7345723d6c3927905a882a79d62e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e33100e90b0d5a23baf7c27076a19dbca18d3e067be6b5a867e122fac37c1136\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64ae899d3a38090964f3d951fd185537ce4ef75f7512342da01f67767d00417b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9207454bb697abf7b64a37e402b50734ae419605cd0938b514ac4f2a74561ce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b40515c0e8056b01e4d48e259ece9389fbb5db6c0d403f2846f28948ce78b0b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64a2d4054650382dca2f56688356aec5fa475b49958e86dd4597a64f77a82f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64a2d4054650382dca2f56688356aec5fa475b49958e86dd4597a64f77a82f81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\
\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:45Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:45 crc kubenswrapper[4844]: I0126 12:44:45.906015 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ea4f5b3-1307-4259-8ee3-1de62370d8ef\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://943771df5941e9c5a48c2d8ee946cc9ac1ee7b00855dfe531fbce9f76450dd36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c79791a1a5ceea7564b6466722f4fb48a6729184724efc0ca2498896def357a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e62359ee71a87d75c73a16de09de5ea48d09e4cf2041f434f38f97002a5f8eee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578b
c18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b1fb5af67b83d9ecc627cbfcc14d011b4b11a5c4652a35c1bd1df6f03ab71f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:45Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:45 crc kubenswrapper[4844]: I0126 12:44:45.919955 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:45Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:45 crc kubenswrapper[4844]: I0126 12:44:45.928211 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:45 crc kubenswrapper[4844]: I0126 12:44:45.928466 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:45 crc kubenswrapper[4844]: I0126 12:44:45.928482 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:45 crc kubenswrapper[4844]: I0126 12:44:45.928505 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:45 crc kubenswrapper[4844]: I0126 12:44:45.928520 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:45Z","lastTransitionTime":"2026-01-26T12:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:45 crc kubenswrapper[4844]: I0126 12:44:45.934311 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c6cdfcbb27c9a8c515f38e43d4813f25d2d3781a322e424e09ab047259bd0e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:45Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:46 crc kubenswrapper[4844]: I0126 12:44:46.030883 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:46 crc kubenswrapper[4844]: I0126 12:44:46.030924 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:46 crc kubenswrapper[4844]: I0126 12:44:46.030933 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:46 crc kubenswrapper[4844]: I0126 12:44:46.030948 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:46 crc kubenswrapper[4844]: I0126 12:44:46.030959 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:46Z","lastTransitionTime":"2026-01-26T12:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:46 crc kubenswrapper[4844]: I0126 12:44:46.070121 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:46 crc kubenswrapper[4844]: I0126 12:44:46.070174 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:46 crc kubenswrapper[4844]: I0126 12:44:46.070186 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:46 crc kubenswrapper[4844]: I0126 12:44:46.070203 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:46 crc kubenswrapper[4844]: I0126 12:44:46.070214 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:46Z","lastTransitionTime":"2026-01-26T12:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:46 crc kubenswrapper[4844]: E0126 12:44:46.084234 4844 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:44:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:44:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:44:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:44:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8ec9310b-463d-4f1d-a480-c21f33e8b459\\\",\\\"systemUUID\\\":\\\"4eb778d6-9226-440d-bd27-0b6f19659b0d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:46Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:46 crc kubenswrapper[4844]: I0126 12:44:46.088393 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:46 crc kubenswrapper[4844]: I0126 12:44:46.088443 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 12:44:46 crc kubenswrapper[4844]: I0126 12:44:46.088455 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:46 crc kubenswrapper[4844]: I0126 12:44:46.088471 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:46 crc kubenswrapper[4844]: I0126 12:44:46.088485 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:46Z","lastTransitionTime":"2026-01-26T12:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:46 crc kubenswrapper[4844]: E0126 12:44:46.104470 4844 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:44:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:44:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:44:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:44:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8ec9310b-463d-4f1d-a480-c21f33e8b459\\\",\\\"systemUUID\\\":\\\"4eb778d6-9226-440d-bd27-0b6f19659b0d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:46Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:46 crc kubenswrapper[4844]: I0126 12:44:46.108338 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:46 crc kubenswrapper[4844]: I0126 12:44:46.108370 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 12:44:46 crc kubenswrapper[4844]: I0126 12:44:46.108378 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:46 crc kubenswrapper[4844]: I0126 12:44:46.108392 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:46 crc kubenswrapper[4844]: I0126 12:44:46.108402 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:46Z","lastTransitionTime":"2026-01-26T12:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:46 crc kubenswrapper[4844]: E0126 12:44:46.120638 4844 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:44:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:44:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:44:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:44:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8ec9310b-463d-4f1d-a480-c21f33e8b459\\\",\\\"systemUUID\\\":\\\"4eb778d6-9226-440d-bd27-0b6f19659b0d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:46Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:46 crc kubenswrapper[4844]: I0126 12:44:46.125122 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:46 crc kubenswrapper[4844]: I0126 12:44:46.125172 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 12:44:46 crc kubenswrapper[4844]: I0126 12:44:46.125182 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:46 crc kubenswrapper[4844]: I0126 12:44:46.125196 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:46 crc kubenswrapper[4844]: I0126 12:44:46.125206 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:46Z","lastTransitionTime":"2026-01-26T12:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:46 crc kubenswrapper[4844]: E0126 12:44:46.136147 4844 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:44:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:44:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:44:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:44:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8ec9310b-463d-4f1d-a480-c21f33e8b459\\\",\\\"systemUUID\\\":\\\"4eb778d6-9226-440d-bd27-0b6f19659b0d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:46Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:46 crc kubenswrapper[4844]: I0126 12:44:46.140009 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:46 crc kubenswrapper[4844]: I0126 12:44:46.140053 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 12:44:46 crc kubenswrapper[4844]: I0126 12:44:46.140063 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:46 crc kubenswrapper[4844]: I0126 12:44:46.140077 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:46 crc kubenswrapper[4844]: I0126 12:44:46.140086 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:46Z","lastTransitionTime":"2026-01-26T12:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:46 crc kubenswrapper[4844]: E0126 12:44:46.152315 4844 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:44:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:44:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:44:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:44:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8ec9310b-463d-4f1d-a480-c21f33e8b459\\\",\\\"systemUUID\\\":\\\"4eb778d6-9226-440d-bd27-0b6f19659b0d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:46Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:46 crc kubenswrapper[4844]: E0126 12:44:46.152470 4844 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 12:44:46 crc kubenswrapper[4844]: I0126 12:44:46.154635 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 26 12:44:46 crc kubenswrapper[4844]: I0126 12:44:46.154668 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:46 crc kubenswrapper[4844]: I0126 12:44:46.154681 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:46 crc kubenswrapper[4844]: I0126 12:44:46.154701 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:46 crc kubenswrapper[4844]: I0126 12:44:46.154718 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:46Z","lastTransitionTime":"2026-01-26T12:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:46 crc kubenswrapper[4844]: I0126 12:44:46.257168 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:46 crc kubenswrapper[4844]: I0126 12:44:46.257199 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:46 crc kubenswrapper[4844]: I0126 12:44:46.257207 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:46 crc kubenswrapper[4844]: I0126 12:44:46.257221 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:46 crc kubenswrapper[4844]: I0126 12:44:46.257229 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:46Z","lastTransitionTime":"2026-01-26T12:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:46 crc kubenswrapper[4844]: I0126 12:44:46.305424 4844 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 10:56:38.050235949 +0000 UTC Jan 26 12:44:46 crc kubenswrapper[4844]: I0126 12:44:46.312753 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gxnj7" Jan 26 12:44:46 crc kubenswrapper[4844]: E0126 12:44:46.312868 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gxnj7" podUID="c69496f6-7f67-4cca-9c9f-420e5567b165" Jan 26 12:44:46 crc kubenswrapper[4844]: I0126 12:44:46.359986 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:46 crc kubenswrapper[4844]: I0126 12:44:46.360038 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:46 crc kubenswrapper[4844]: I0126 12:44:46.360051 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:46 crc kubenswrapper[4844]: I0126 12:44:46.360067 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:46 crc kubenswrapper[4844]: I0126 12:44:46.360077 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:46Z","lastTransitionTime":"2026-01-26T12:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:46 crc kubenswrapper[4844]: I0126 12:44:46.462932 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:46 crc kubenswrapper[4844]: I0126 12:44:46.462979 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:46 crc kubenswrapper[4844]: I0126 12:44:46.462990 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:46 crc kubenswrapper[4844]: I0126 12:44:46.463007 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:46 crc kubenswrapper[4844]: I0126 12:44:46.463019 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:46Z","lastTransitionTime":"2026-01-26T12:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:46 crc kubenswrapper[4844]: I0126 12:44:46.564771 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:46 crc kubenswrapper[4844]: I0126 12:44:46.564816 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:46 crc kubenswrapper[4844]: I0126 12:44:46.564828 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:46 crc kubenswrapper[4844]: I0126 12:44:46.564845 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:46 crc kubenswrapper[4844]: I0126 12:44:46.564858 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:46Z","lastTransitionTime":"2026-01-26T12:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:46 crc kubenswrapper[4844]: I0126 12:44:46.667930 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:46 crc kubenswrapper[4844]: I0126 12:44:46.667988 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:46 crc kubenswrapper[4844]: I0126 12:44:46.667997 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:46 crc kubenswrapper[4844]: I0126 12:44:46.668011 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:46 crc kubenswrapper[4844]: I0126 12:44:46.668020 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:46Z","lastTransitionTime":"2026-01-26T12:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:46 crc kubenswrapper[4844]: I0126 12:44:46.770088 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:46 crc kubenswrapper[4844]: I0126 12:44:46.770130 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:46 crc kubenswrapper[4844]: I0126 12:44:46.770138 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:46 crc kubenswrapper[4844]: I0126 12:44:46.770153 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:46 crc kubenswrapper[4844]: I0126 12:44:46.770161 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:46Z","lastTransitionTime":"2026-01-26T12:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:46 crc kubenswrapper[4844]: I0126 12:44:46.872735 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:46 crc kubenswrapper[4844]: I0126 12:44:46.872783 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:46 crc kubenswrapper[4844]: I0126 12:44:46.872795 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:46 crc kubenswrapper[4844]: I0126 12:44:46.872811 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:46 crc kubenswrapper[4844]: I0126 12:44:46.872823 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:46Z","lastTransitionTime":"2026-01-26T12:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:46 crc kubenswrapper[4844]: I0126 12:44:46.975120 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:46 crc kubenswrapper[4844]: I0126 12:44:46.975187 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:46 crc kubenswrapper[4844]: I0126 12:44:46.975206 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:46 crc kubenswrapper[4844]: I0126 12:44:46.975230 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:46 crc kubenswrapper[4844]: I0126 12:44:46.975248 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:46Z","lastTransitionTime":"2026-01-26T12:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:47 crc kubenswrapper[4844]: I0126 12:44:47.077787 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:47 crc kubenswrapper[4844]: I0126 12:44:47.077817 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:47 crc kubenswrapper[4844]: I0126 12:44:47.077829 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:47 crc kubenswrapper[4844]: I0126 12:44:47.077849 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:47 crc kubenswrapper[4844]: I0126 12:44:47.077859 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:47Z","lastTransitionTime":"2026-01-26T12:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:47 crc kubenswrapper[4844]: I0126 12:44:47.180348 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:47 crc kubenswrapper[4844]: I0126 12:44:47.180409 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:47 crc kubenswrapper[4844]: I0126 12:44:47.180430 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:47 crc kubenswrapper[4844]: I0126 12:44:47.180460 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:47 crc kubenswrapper[4844]: I0126 12:44:47.180495 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:47Z","lastTransitionTime":"2026-01-26T12:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:47 crc kubenswrapper[4844]: I0126 12:44:47.282957 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:47 crc kubenswrapper[4844]: I0126 12:44:47.283016 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:47 crc kubenswrapper[4844]: I0126 12:44:47.283034 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:47 crc kubenswrapper[4844]: I0126 12:44:47.283061 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:47 crc kubenswrapper[4844]: I0126 12:44:47.283078 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:47Z","lastTransitionTime":"2026-01-26T12:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:47 crc kubenswrapper[4844]: I0126 12:44:47.305660 4844 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 08:58:18.490756309 +0000 UTC Jan 26 12:44:47 crc kubenswrapper[4844]: I0126 12:44:47.313092 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:44:47 crc kubenswrapper[4844]: I0126 12:44:47.313141 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:44:47 crc kubenswrapper[4844]: I0126 12:44:47.313297 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:44:47 crc kubenswrapper[4844]: E0126 12:44:47.313284 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 12:44:47 crc kubenswrapper[4844]: E0126 12:44:47.313431 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 12:44:47 crc kubenswrapper[4844]: E0126 12:44:47.313540 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 12:44:47 crc kubenswrapper[4844]: I0126 12:44:47.385661 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:47 crc kubenswrapper[4844]: I0126 12:44:47.385767 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:47 crc kubenswrapper[4844]: I0126 12:44:47.385788 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:47 crc kubenswrapper[4844]: I0126 12:44:47.385844 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:47 crc kubenswrapper[4844]: I0126 12:44:47.385865 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:47Z","lastTransitionTime":"2026-01-26T12:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:47 crc kubenswrapper[4844]: I0126 12:44:47.488903 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:47 crc kubenswrapper[4844]: I0126 12:44:47.488971 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:47 crc kubenswrapper[4844]: I0126 12:44:47.488988 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:47 crc kubenswrapper[4844]: I0126 12:44:47.489016 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:47 crc kubenswrapper[4844]: I0126 12:44:47.489036 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:47Z","lastTransitionTime":"2026-01-26T12:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:47 crc kubenswrapper[4844]: I0126 12:44:47.591091 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:47 crc kubenswrapper[4844]: I0126 12:44:47.591131 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:47 crc kubenswrapper[4844]: I0126 12:44:47.591140 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:47 crc kubenswrapper[4844]: I0126 12:44:47.591155 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:47 crc kubenswrapper[4844]: I0126 12:44:47.591165 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:47Z","lastTransitionTime":"2026-01-26T12:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:47 crc kubenswrapper[4844]: I0126 12:44:47.693319 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:47 crc kubenswrapper[4844]: I0126 12:44:47.693352 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:47 crc kubenswrapper[4844]: I0126 12:44:47.693360 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:47 crc kubenswrapper[4844]: I0126 12:44:47.693390 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:47 crc kubenswrapper[4844]: I0126 12:44:47.693401 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:47Z","lastTransitionTime":"2026-01-26T12:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:47 crc kubenswrapper[4844]: I0126 12:44:47.796755 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:47 crc kubenswrapper[4844]: I0126 12:44:47.796809 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:47 crc kubenswrapper[4844]: I0126 12:44:47.796820 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:47 crc kubenswrapper[4844]: I0126 12:44:47.796835 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:47 crc kubenswrapper[4844]: I0126 12:44:47.796846 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:47Z","lastTransitionTime":"2026-01-26T12:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:47 crc kubenswrapper[4844]: I0126 12:44:47.899633 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:47 crc kubenswrapper[4844]: I0126 12:44:47.899674 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:47 crc kubenswrapper[4844]: I0126 12:44:47.899686 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:47 crc kubenswrapper[4844]: I0126 12:44:47.899702 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:47 crc kubenswrapper[4844]: I0126 12:44:47.899714 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:47Z","lastTransitionTime":"2026-01-26T12:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:48 crc kubenswrapper[4844]: I0126 12:44:48.002320 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:48 crc kubenswrapper[4844]: I0126 12:44:48.002364 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:48 crc kubenswrapper[4844]: I0126 12:44:48.002400 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:48 crc kubenswrapper[4844]: I0126 12:44:48.002424 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:48 crc kubenswrapper[4844]: I0126 12:44:48.002433 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:48Z","lastTransitionTime":"2026-01-26T12:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:48 crc kubenswrapper[4844]: I0126 12:44:48.105122 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:48 crc kubenswrapper[4844]: I0126 12:44:48.105189 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:48 crc kubenswrapper[4844]: I0126 12:44:48.105203 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:48 crc kubenswrapper[4844]: I0126 12:44:48.105227 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:48 crc kubenswrapper[4844]: I0126 12:44:48.105240 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:48Z","lastTransitionTime":"2026-01-26T12:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:48 crc kubenswrapper[4844]: I0126 12:44:48.208223 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:48 crc kubenswrapper[4844]: I0126 12:44:48.208448 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:48 crc kubenswrapper[4844]: I0126 12:44:48.208523 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:48 crc kubenswrapper[4844]: I0126 12:44:48.208676 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:48 crc kubenswrapper[4844]: I0126 12:44:48.208762 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:48Z","lastTransitionTime":"2026-01-26T12:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:48 crc kubenswrapper[4844]: I0126 12:44:48.306070 4844 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 18:47:56.742492278 +0000 UTC Jan 26 12:44:48 crc kubenswrapper[4844]: I0126 12:44:48.311725 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:48 crc kubenswrapper[4844]: I0126 12:44:48.311770 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:48 crc kubenswrapper[4844]: I0126 12:44:48.311786 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:48 crc kubenswrapper[4844]: I0126 12:44:48.311811 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:48 crc kubenswrapper[4844]: I0126 12:44:48.311827 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:48Z","lastTransitionTime":"2026-01-26T12:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:48 crc kubenswrapper[4844]: I0126 12:44:48.312752 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gxnj7" Jan 26 12:44:48 crc kubenswrapper[4844]: E0126 12:44:48.312918 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gxnj7" podUID="c69496f6-7f67-4cca-9c9f-420e5567b165" Jan 26 12:44:48 crc kubenswrapper[4844]: I0126 12:44:48.415027 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:48 crc kubenswrapper[4844]: I0126 12:44:48.415101 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:48 crc kubenswrapper[4844]: I0126 12:44:48.415117 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:48 crc kubenswrapper[4844]: I0126 12:44:48.415147 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:48 crc kubenswrapper[4844]: I0126 12:44:48.415163 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:48Z","lastTransitionTime":"2026-01-26T12:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:48 crc kubenswrapper[4844]: I0126 12:44:48.518044 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:48 crc kubenswrapper[4844]: I0126 12:44:48.518106 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:48 crc kubenswrapper[4844]: I0126 12:44:48.518129 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:48 crc kubenswrapper[4844]: I0126 12:44:48.518150 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:48 crc kubenswrapper[4844]: I0126 12:44:48.518165 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:48Z","lastTransitionTime":"2026-01-26T12:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:48 crc kubenswrapper[4844]: I0126 12:44:48.620575 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:48 crc kubenswrapper[4844]: I0126 12:44:48.620665 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:48 crc kubenswrapper[4844]: I0126 12:44:48.620678 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:48 crc kubenswrapper[4844]: I0126 12:44:48.620774 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:48 crc kubenswrapper[4844]: I0126 12:44:48.620826 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:48Z","lastTransitionTime":"2026-01-26T12:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:48 crc kubenswrapper[4844]: I0126 12:44:48.724678 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:48 crc kubenswrapper[4844]: I0126 12:44:48.724762 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:48 crc kubenswrapper[4844]: I0126 12:44:48.724773 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:48 crc kubenswrapper[4844]: I0126 12:44:48.724794 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:48 crc kubenswrapper[4844]: I0126 12:44:48.724809 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:48Z","lastTransitionTime":"2026-01-26T12:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:48 crc kubenswrapper[4844]: I0126 12:44:48.828160 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:48 crc kubenswrapper[4844]: I0126 12:44:48.828221 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:48 crc kubenswrapper[4844]: I0126 12:44:48.828233 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:48 crc kubenswrapper[4844]: I0126 12:44:48.828254 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:48 crc kubenswrapper[4844]: I0126 12:44:48.828266 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:48Z","lastTransitionTime":"2026-01-26T12:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:48 crc kubenswrapper[4844]: I0126 12:44:48.931358 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:48 crc kubenswrapper[4844]: I0126 12:44:48.931448 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:48 crc kubenswrapper[4844]: I0126 12:44:48.931466 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:48 crc kubenswrapper[4844]: I0126 12:44:48.931493 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:48 crc kubenswrapper[4844]: I0126 12:44:48.931514 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:48Z","lastTransitionTime":"2026-01-26T12:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:49 crc kubenswrapper[4844]: I0126 12:44:49.035402 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:49 crc kubenswrapper[4844]: I0126 12:44:49.035460 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:49 crc kubenswrapper[4844]: I0126 12:44:49.035472 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:49 crc kubenswrapper[4844]: I0126 12:44:49.035491 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:49 crc kubenswrapper[4844]: I0126 12:44:49.035503 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:49Z","lastTransitionTime":"2026-01-26T12:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:49 crc kubenswrapper[4844]: I0126 12:44:49.138949 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:49 crc kubenswrapper[4844]: I0126 12:44:49.139023 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:49 crc kubenswrapper[4844]: I0126 12:44:49.139041 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:49 crc kubenswrapper[4844]: I0126 12:44:49.139067 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:49 crc kubenswrapper[4844]: I0126 12:44:49.139087 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:49Z","lastTransitionTime":"2026-01-26T12:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:49 crc kubenswrapper[4844]: I0126 12:44:49.241507 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:49 crc kubenswrapper[4844]: I0126 12:44:49.241551 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:49 crc kubenswrapper[4844]: I0126 12:44:49.241560 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:49 crc kubenswrapper[4844]: I0126 12:44:49.241574 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:49 crc kubenswrapper[4844]: I0126 12:44:49.241582 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:49Z","lastTransitionTime":"2026-01-26T12:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:49 crc kubenswrapper[4844]: I0126 12:44:49.307656 4844 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 02:43:37.80345379 +0000 UTC Jan 26 12:44:49 crc kubenswrapper[4844]: I0126 12:44:49.313009 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:44:49 crc kubenswrapper[4844]: I0126 12:44:49.313015 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:44:49 crc kubenswrapper[4844]: E0126 12:44:49.313113 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 12:44:49 crc kubenswrapper[4844]: I0126 12:44:49.313265 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:44:49 crc kubenswrapper[4844]: E0126 12:44:49.313446 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 12:44:49 crc kubenswrapper[4844]: E0126 12:44:49.313566 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 12:44:49 crc kubenswrapper[4844]: I0126 12:44:49.344668 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:49 crc kubenswrapper[4844]: I0126 12:44:49.344883 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:49 crc kubenswrapper[4844]: I0126 12:44:49.344945 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:49 crc kubenswrapper[4844]: I0126 12:44:49.345005 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:49 crc kubenswrapper[4844]: I0126 12:44:49.345099 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:49Z","lastTransitionTime":"2026-01-26T12:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:49 crc kubenswrapper[4844]: I0126 12:44:49.448025 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:49 crc kubenswrapper[4844]: I0126 12:44:49.448065 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:49 crc kubenswrapper[4844]: I0126 12:44:49.448077 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:49 crc kubenswrapper[4844]: I0126 12:44:49.448094 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:49 crc kubenswrapper[4844]: I0126 12:44:49.448108 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:49Z","lastTransitionTime":"2026-01-26T12:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:49 crc kubenswrapper[4844]: I0126 12:44:49.550111 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:49 crc kubenswrapper[4844]: I0126 12:44:49.550135 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:49 crc kubenswrapper[4844]: I0126 12:44:49.550148 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:49 crc kubenswrapper[4844]: I0126 12:44:49.550160 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:49 crc kubenswrapper[4844]: I0126 12:44:49.550169 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:49Z","lastTransitionTime":"2026-01-26T12:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:49 crc kubenswrapper[4844]: I0126 12:44:49.652866 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:49 crc kubenswrapper[4844]: I0126 12:44:49.653115 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:49 crc kubenswrapper[4844]: I0126 12:44:49.653188 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:49 crc kubenswrapper[4844]: I0126 12:44:49.653260 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:49 crc kubenswrapper[4844]: I0126 12:44:49.653322 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:49Z","lastTransitionTime":"2026-01-26T12:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:49 crc kubenswrapper[4844]: I0126 12:44:49.755649 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:49 crc kubenswrapper[4844]: I0126 12:44:49.755692 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:49 crc kubenswrapper[4844]: I0126 12:44:49.755704 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:49 crc kubenswrapper[4844]: I0126 12:44:49.755717 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:49 crc kubenswrapper[4844]: I0126 12:44:49.755725 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:49Z","lastTransitionTime":"2026-01-26T12:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:49 crc kubenswrapper[4844]: I0126 12:44:49.858115 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:49 crc kubenswrapper[4844]: I0126 12:44:49.858156 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:49 crc kubenswrapper[4844]: I0126 12:44:49.858167 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:49 crc kubenswrapper[4844]: I0126 12:44:49.858181 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:49 crc kubenswrapper[4844]: I0126 12:44:49.858193 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:49Z","lastTransitionTime":"2026-01-26T12:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:49 crc kubenswrapper[4844]: I0126 12:44:49.960113 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:49 crc kubenswrapper[4844]: I0126 12:44:49.960444 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:49 crc kubenswrapper[4844]: I0126 12:44:49.960533 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:49 crc kubenswrapper[4844]: I0126 12:44:49.960664 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:49 crc kubenswrapper[4844]: I0126 12:44:49.960774 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:49Z","lastTransitionTime":"2026-01-26T12:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:50 crc kubenswrapper[4844]: I0126 12:44:50.063502 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:50 crc kubenswrapper[4844]: I0126 12:44:50.063751 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:50 crc kubenswrapper[4844]: I0126 12:44:50.063836 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:50 crc kubenswrapper[4844]: I0126 12:44:50.063912 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:50 crc kubenswrapper[4844]: I0126 12:44:50.064028 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:50Z","lastTransitionTime":"2026-01-26T12:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:50 crc kubenswrapper[4844]: I0126 12:44:50.165941 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:50 crc kubenswrapper[4844]: I0126 12:44:50.165976 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:50 crc kubenswrapper[4844]: I0126 12:44:50.165984 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:50 crc kubenswrapper[4844]: I0126 12:44:50.165998 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:50 crc kubenswrapper[4844]: I0126 12:44:50.166006 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:50Z","lastTransitionTime":"2026-01-26T12:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:50 crc kubenswrapper[4844]: I0126 12:44:50.268873 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:50 crc kubenswrapper[4844]: I0126 12:44:50.268933 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:50 crc kubenswrapper[4844]: I0126 12:44:50.268953 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:50 crc kubenswrapper[4844]: I0126 12:44:50.268978 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:50 crc kubenswrapper[4844]: I0126 12:44:50.268995 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:50Z","lastTransitionTime":"2026-01-26T12:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:50 crc kubenswrapper[4844]: I0126 12:44:50.307930 4844 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 15:59:29.908825187 +0000 UTC Jan 26 12:44:50 crc kubenswrapper[4844]: I0126 12:44:50.312282 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gxnj7" Jan 26 12:44:50 crc kubenswrapper[4844]: E0126 12:44:50.312429 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gxnj7" podUID="c69496f6-7f67-4cca-9c9f-420e5567b165" Jan 26 12:44:50 crc kubenswrapper[4844]: I0126 12:44:50.373424 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:50 crc kubenswrapper[4844]: I0126 12:44:50.373509 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:50 crc kubenswrapper[4844]: I0126 12:44:50.373550 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:50 crc kubenswrapper[4844]: I0126 12:44:50.373583 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:50 crc kubenswrapper[4844]: I0126 12:44:50.373650 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:50Z","lastTransitionTime":"2026-01-26T12:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:50 crc kubenswrapper[4844]: I0126 12:44:50.476494 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:50 crc kubenswrapper[4844]: I0126 12:44:50.476558 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:50 crc kubenswrapper[4844]: I0126 12:44:50.476575 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:50 crc kubenswrapper[4844]: I0126 12:44:50.476631 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:50 crc kubenswrapper[4844]: I0126 12:44:50.476650 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:50Z","lastTransitionTime":"2026-01-26T12:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:50 crc kubenswrapper[4844]: I0126 12:44:50.579480 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:50 crc kubenswrapper[4844]: I0126 12:44:50.579541 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:50 crc kubenswrapper[4844]: I0126 12:44:50.579556 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:50 crc kubenswrapper[4844]: I0126 12:44:50.579576 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:50 crc kubenswrapper[4844]: I0126 12:44:50.579593 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:50Z","lastTransitionTime":"2026-01-26T12:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:50 crc kubenswrapper[4844]: I0126 12:44:50.682286 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:50 crc kubenswrapper[4844]: I0126 12:44:50.682341 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:50 crc kubenswrapper[4844]: I0126 12:44:50.682355 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:50 crc kubenswrapper[4844]: I0126 12:44:50.682376 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:50 crc kubenswrapper[4844]: I0126 12:44:50.682390 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:50Z","lastTransitionTime":"2026-01-26T12:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:50 crc kubenswrapper[4844]: I0126 12:44:50.785218 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:50 crc kubenswrapper[4844]: I0126 12:44:50.785267 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:50 crc kubenswrapper[4844]: I0126 12:44:50.785284 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:50 crc kubenswrapper[4844]: I0126 12:44:50.785306 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:50 crc kubenswrapper[4844]: I0126 12:44:50.785324 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:50Z","lastTransitionTime":"2026-01-26T12:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:50 crc kubenswrapper[4844]: I0126 12:44:50.889303 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:50 crc kubenswrapper[4844]: I0126 12:44:50.889383 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:50 crc kubenswrapper[4844]: I0126 12:44:50.889411 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:50 crc kubenswrapper[4844]: I0126 12:44:50.889443 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:50 crc kubenswrapper[4844]: I0126 12:44:50.889465 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:50Z","lastTransitionTime":"2026-01-26T12:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:50 crc kubenswrapper[4844]: I0126 12:44:50.992570 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:50 crc kubenswrapper[4844]: I0126 12:44:50.992692 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:50 crc kubenswrapper[4844]: I0126 12:44:50.992748 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:50 crc kubenswrapper[4844]: I0126 12:44:50.992774 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:50 crc kubenswrapper[4844]: I0126 12:44:50.992829 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:50Z","lastTransitionTime":"2026-01-26T12:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:51 crc kubenswrapper[4844]: I0126 12:44:51.095448 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:51 crc kubenswrapper[4844]: I0126 12:44:51.095491 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:51 crc kubenswrapper[4844]: I0126 12:44:51.095502 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:51 crc kubenswrapper[4844]: I0126 12:44:51.095516 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:51 crc kubenswrapper[4844]: I0126 12:44:51.095528 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:51Z","lastTransitionTime":"2026-01-26T12:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:51 crc kubenswrapper[4844]: I0126 12:44:51.198701 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:51 crc kubenswrapper[4844]: I0126 12:44:51.198812 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:51 crc kubenswrapper[4844]: I0126 12:44:51.198832 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:51 crc kubenswrapper[4844]: I0126 12:44:51.198859 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:51 crc kubenswrapper[4844]: I0126 12:44:51.198880 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:51Z","lastTransitionTime":"2026-01-26T12:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:51 crc kubenswrapper[4844]: I0126 12:44:51.302555 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:51 crc kubenswrapper[4844]: I0126 12:44:51.302610 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:51 crc kubenswrapper[4844]: I0126 12:44:51.302619 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:51 crc kubenswrapper[4844]: I0126 12:44:51.302633 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:51 crc kubenswrapper[4844]: I0126 12:44:51.302641 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:51Z","lastTransitionTime":"2026-01-26T12:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:51 crc kubenswrapper[4844]: I0126 12:44:51.308830 4844 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 05:29:52.62975627 +0000 UTC Jan 26 12:44:51 crc kubenswrapper[4844]: I0126 12:44:51.312241 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:44:51 crc kubenswrapper[4844]: I0126 12:44:51.312292 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:44:51 crc kubenswrapper[4844]: I0126 12:44:51.312314 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:44:51 crc kubenswrapper[4844]: E0126 12:44:51.312538 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 12:44:51 crc kubenswrapper[4844]: E0126 12:44:51.312735 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 12:44:51 crc kubenswrapper[4844]: E0126 12:44:51.312839 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 12:44:51 crc kubenswrapper[4844]: I0126 12:44:51.405931 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:51 crc kubenswrapper[4844]: I0126 12:44:51.405990 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:51 crc kubenswrapper[4844]: I0126 12:44:51.406010 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:51 crc kubenswrapper[4844]: I0126 12:44:51.406040 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:51 crc kubenswrapper[4844]: I0126 12:44:51.406062 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:51Z","lastTransitionTime":"2026-01-26T12:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:51 crc kubenswrapper[4844]: I0126 12:44:51.508530 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:51 crc kubenswrapper[4844]: I0126 12:44:51.508636 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:51 crc kubenswrapper[4844]: I0126 12:44:51.508656 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:51 crc kubenswrapper[4844]: I0126 12:44:51.508683 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:51 crc kubenswrapper[4844]: I0126 12:44:51.508701 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:51Z","lastTransitionTime":"2026-01-26T12:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:51 crc kubenswrapper[4844]: I0126 12:44:51.611465 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:51 crc kubenswrapper[4844]: I0126 12:44:51.611532 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:51 crc kubenswrapper[4844]: I0126 12:44:51.611555 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:51 crc kubenswrapper[4844]: I0126 12:44:51.611583 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:51 crc kubenswrapper[4844]: I0126 12:44:51.611645 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:51Z","lastTransitionTime":"2026-01-26T12:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:51 crc kubenswrapper[4844]: I0126 12:44:51.713994 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:51 crc kubenswrapper[4844]: I0126 12:44:51.714068 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:51 crc kubenswrapper[4844]: I0126 12:44:51.714092 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:51 crc kubenswrapper[4844]: I0126 12:44:51.714117 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:51 crc kubenswrapper[4844]: I0126 12:44:51.714135 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:51Z","lastTransitionTime":"2026-01-26T12:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:51 crc kubenswrapper[4844]: I0126 12:44:51.817781 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:51 crc kubenswrapper[4844]: I0126 12:44:51.817850 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:51 crc kubenswrapper[4844]: I0126 12:44:51.817872 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:51 crc kubenswrapper[4844]: I0126 12:44:51.817897 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:51 crc kubenswrapper[4844]: I0126 12:44:51.817917 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:51Z","lastTransitionTime":"2026-01-26T12:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:51 crc kubenswrapper[4844]: I0126 12:44:51.921206 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:51 crc kubenswrapper[4844]: I0126 12:44:51.921311 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:51 crc kubenswrapper[4844]: I0126 12:44:51.921363 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:51 crc kubenswrapper[4844]: I0126 12:44:51.921394 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:51 crc kubenswrapper[4844]: I0126 12:44:51.921414 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:51Z","lastTransitionTime":"2026-01-26T12:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:52 crc kubenswrapper[4844]: I0126 12:44:52.024768 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:52 crc kubenswrapper[4844]: I0126 12:44:52.024823 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:52 crc kubenswrapper[4844]: I0126 12:44:52.024839 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:52 crc kubenswrapper[4844]: I0126 12:44:52.024861 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:52 crc kubenswrapper[4844]: I0126 12:44:52.024878 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:52Z","lastTransitionTime":"2026-01-26T12:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:52 crc kubenswrapper[4844]: I0126 12:44:52.128558 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:52 crc kubenswrapper[4844]: I0126 12:44:52.128674 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:52 crc kubenswrapper[4844]: I0126 12:44:52.128698 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:52 crc kubenswrapper[4844]: I0126 12:44:52.128727 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:52 crc kubenswrapper[4844]: I0126 12:44:52.128748 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:52Z","lastTransitionTime":"2026-01-26T12:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:52 crc kubenswrapper[4844]: I0126 12:44:52.231773 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:52 crc kubenswrapper[4844]: I0126 12:44:52.231841 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:52 crc kubenswrapper[4844]: I0126 12:44:52.231857 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:52 crc kubenswrapper[4844]: I0126 12:44:52.231882 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:52 crc kubenswrapper[4844]: I0126 12:44:52.231899 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:52Z","lastTransitionTime":"2026-01-26T12:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:52 crc kubenswrapper[4844]: I0126 12:44:52.309017 4844 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 00:41:00.977263705 +0000 UTC Jan 26 12:44:52 crc kubenswrapper[4844]: I0126 12:44:52.312392 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gxnj7" Jan 26 12:44:52 crc kubenswrapper[4844]: E0126 12:44:52.312570 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gxnj7" podUID="c69496f6-7f67-4cca-9c9f-420e5567b165" Jan 26 12:44:52 crc kubenswrapper[4844]: I0126 12:44:52.335121 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:52 crc kubenswrapper[4844]: I0126 12:44:52.335246 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:52 crc kubenswrapper[4844]: I0126 12:44:52.335301 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:52 crc kubenswrapper[4844]: I0126 12:44:52.335329 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:52 crc kubenswrapper[4844]: I0126 12:44:52.335348 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:52Z","lastTransitionTime":"2026-01-26T12:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:52 crc kubenswrapper[4844]: I0126 12:44:52.438370 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:52 crc kubenswrapper[4844]: I0126 12:44:52.438512 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:52 crc kubenswrapper[4844]: I0126 12:44:52.438533 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:52 crc kubenswrapper[4844]: I0126 12:44:52.438552 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:52 crc kubenswrapper[4844]: I0126 12:44:52.438563 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:52Z","lastTransitionTime":"2026-01-26T12:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:52 crc kubenswrapper[4844]: I0126 12:44:52.548156 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:52 crc kubenswrapper[4844]: I0126 12:44:52.548192 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:52 crc kubenswrapper[4844]: I0126 12:44:52.548206 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:52 crc kubenswrapper[4844]: I0126 12:44:52.548223 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:52 crc kubenswrapper[4844]: I0126 12:44:52.548237 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:52Z","lastTransitionTime":"2026-01-26T12:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:52 crc kubenswrapper[4844]: I0126 12:44:52.651090 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:52 crc kubenswrapper[4844]: I0126 12:44:52.651136 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:52 crc kubenswrapper[4844]: I0126 12:44:52.651147 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:52 crc kubenswrapper[4844]: I0126 12:44:52.651165 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:52 crc kubenswrapper[4844]: I0126 12:44:52.651177 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:52Z","lastTransitionTime":"2026-01-26T12:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:52 crc kubenswrapper[4844]: I0126 12:44:52.754196 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:52 crc kubenswrapper[4844]: I0126 12:44:52.754261 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:52 crc kubenswrapper[4844]: I0126 12:44:52.754279 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:52 crc kubenswrapper[4844]: I0126 12:44:52.754304 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:52 crc kubenswrapper[4844]: I0126 12:44:52.754326 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:52Z","lastTransitionTime":"2026-01-26T12:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:52 crc kubenswrapper[4844]: I0126 12:44:52.857795 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:52 crc kubenswrapper[4844]: I0126 12:44:52.857884 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:52 crc kubenswrapper[4844]: I0126 12:44:52.857902 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:52 crc kubenswrapper[4844]: I0126 12:44:52.857957 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:52 crc kubenswrapper[4844]: I0126 12:44:52.857975 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:52Z","lastTransitionTime":"2026-01-26T12:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:52 crc kubenswrapper[4844]: I0126 12:44:52.960407 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:52 crc kubenswrapper[4844]: I0126 12:44:52.960545 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:52 crc kubenswrapper[4844]: I0126 12:44:52.960572 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:52 crc kubenswrapper[4844]: I0126 12:44:52.960618 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:52 crc kubenswrapper[4844]: I0126 12:44:52.960636 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:52Z","lastTransitionTime":"2026-01-26T12:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:53 crc kubenswrapper[4844]: I0126 12:44:53.064168 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:53 crc kubenswrapper[4844]: I0126 12:44:53.064220 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:53 crc kubenswrapper[4844]: I0126 12:44:53.064235 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:53 crc kubenswrapper[4844]: I0126 12:44:53.064258 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:53 crc kubenswrapper[4844]: I0126 12:44:53.064273 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:53Z","lastTransitionTime":"2026-01-26T12:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:53 crc kubenswrapper[4844]: I0126 12:44:53.167890 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:53 crc kubenswrapper[4844]: I0126 12:44:53.167928 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:53 crc kubenswrapper[4844]: I0126 12:44:53.167941 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:53 crc kubenswrapper[4844]: I0126 12:44:53.167958 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:53 crc kubenswrapper[4844]: I0126 12:44:53.167970 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:53Z","lastTransitionTime":"2026-01-26T12:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:53 crc kubenswrapper[4844]: I0126 12:44:53.270823 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:53 crc kubenswrapper[4844]: I0126 12:44:53.270896 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:53 crc kubenswrapper[4844]: I0126 12:44:53.270925 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:53 crc kubenswrapper[4844]: I0126 12:44:53.270953 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:53 crc kubenswrapper[4844]: I0126 12:44:53.270971 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:53Z","lastTransitionTime":"2026-01-26T12:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:53 crc kubenswrapper[4844]: I0126 12:44:53.309387 4844 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 14:44:23.062500244 +0000 UTC Jan 26 12:44:53 crc kubenswrapper[4844]: I0126 12:44:53.312931 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:44:53 crc kubenswrapper[4844]: I0126 12:44:53.313007 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:44:53 crc kubenswrapper[4844]: E0126 12:44:53.313134 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 12:44:53 crc kubenswrapper[4844]: I0126 12:44:53.313170 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:44:53 crc kubenswrapper[4844]: E0126 12:44:53.313336 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 12:44:53 crc kubenswrapper[4844]: E0126 12:44:53.313448 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 12:44:53 crc kubenswrapper[4844]: I0126 12:44:53.338437 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:53Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:53 crc kubenswrapper[4844]: I0126 12:44:53.359587 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c6cdfcbb27c9a8c515f38e43d4813f25d2d3781a322e424e09ab047259bd0e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:53Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:53 crc kubenswrapper[4844]: I0126 12:44:53.374348 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:53 
crc kubenswrapper[4844]: I0126 12:44:53.374466 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:53 crc kubenswrapper[4844]: I0126 12:44:53.374486 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:53 crc kubenswrapper[4844]: I0126 12:44:53.375189 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:53 crc kubenswrapper[4844]: I0126 12:44:53.375726 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:53Z","lastTransitionTime":"2026-01-26T12:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:53 crc kubenswrapper[4844]: I0126 12:44:53.378845 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3602fc7-397b-4d73-ab0c-45acc047397b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25282e97c34daf17d2fdac6ba7c074c611ec8df85288f5753bf98a3c1add5afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29xcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c964e0f9d13a855d738b028f8a2bed32fb23f4a05f0f0222a7d24e3222f44b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"na
me\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29xcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j7r9j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:53Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:53 crc kubenswrapper[4844]: I0126 12:44:53.405707 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f6ttt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0ad2def-b040-48db-be8a-19f66df2c0f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a065fe1dc7d374bbe86c5012d0f224285e08e6b38a8eeb9fcdc76d684162934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://489205a3983f27152122db352c7a35ccdb12f14bb7b4c2bf3642f5c99699d745\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f
0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://489205a3983f27152122db352c7a35ccdb12f14bb7b4c2bf3642f5c99699d745\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc3c4c5df7dc10f646a7fdcf0d70daccac7a5bfc3ec9772a4b6d9aa4305dcc5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc3c4c5df7dc10f646a7fdcf0d70daccac7a5bfc3ec9772a4b6d9aa4305dcc5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74838e6328e419da6cc94d4f2781197e94cbf74aaf06d88c6d0e39eda967fede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74838e6328e419da6cc94d4f2781197e94cbf74aaf06d88c6d0e39eda967fede\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mount
Path\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4821615afe8834b864a0c33ad5733eef0b5247da50e1e945cab1c75f9828aa1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4821615afe8834b864a0c33ad5733eef0b5247da50e1e945cab1c75f9828aa1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c800e6a3e1513d6bc5ab5a54c8d918f2c81b5a4d2631fc36f5f7a29fcda3d220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c800e6a3e1513d6bc5ab5a54c8d918f2c81b5a4d2631fc36f5f7a29fcda3d220\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://810435fd1610f5f81dc70f7c0d455c11b414c37e9e392dec8eb3b4668c7820d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://810435fd1610f5f81dc70f7c0d455c11b414c37e9e392dec8eb3b4668c7820d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12
:44:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f6ttt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:53Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:53 crc kubenswrapper[4844]: I0126 12:44:53.437467 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"348a2956-fe61-43b9-858f-ab9c97a2985b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7af767e8f476993661bb886e421e7dd2fbd11c07865d6e56a8595425fe4265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a97ef8aa10641683ab96943f4c1a54452ee34b28c854dd7902fc075a70e7f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de371086349f06d5762291b0be4ba366d69376f96d55c2c781c414d9acaff745\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2cfd5586a3073d55abd3f41fb92679c4f86ea8caeea2d9a8ca42515277750a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a16b2c8da5ad6d42bd5ce77ca31794858e04a05060c5bb0538dd099b1d5dd6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03f4ffe2ba632bdfb8c45133ef267a29fa660e3aa85dfd99934ded60d32772f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://726bf5201f734836c4fb01a9d5a0cb8897f5ec31
42e9b54a6acfe5a82a14df5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726bf5201f734836c4fb01a9d5a0cb8897f5ec3142e9b54a6acfe5a82a14df5f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T12:44:44Z\\\",\\\"message\\\":\\\"none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.93:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d71b38eb-32af-4c0f-9490-7c317c111e3a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 12:44:43.613387 6501 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nI0126 12:44:43.613343 6501 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-apiserver/apiserver]} name:Service_openshift-kube-apiserver/apiserver_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.93:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d71b38eb-32af-4c0f-9490-7c317c111e3a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 12:44:43.613431 6501 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0126 12:44:43.613493 6501 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-rlvx4_openshift-ovn-kubernetes(348a2956-fe61-43b9-858f-ab9c97a2985b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbe92fbc9ce1d23484a9dca7eddb878b59da1fbe669098a82a798d41ee52e61d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://370a02984d12e09b879a5f3af8d9fe26b40d438d558e51c2d352676f29198a02\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://370a02984d12e09b879a5f3af8d9fe26b40d438d558e51c2d352676f29198a02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rlvx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:53Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:53 crc kubenswrapper[4844]: I0126 12:44:53.450404 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7wd9k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"046bb01b-89ef-40e9-bbbd-83b5f2d2cf96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e97a7a3de02627680f94659705ad21fb73a76a1c11f14ebff8ec48ef204ea753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h4zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7wd9k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:53Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:53 crc kubenswrapper[4844]: I0126 12:44:53.466948 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zb9kx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"467433a4-64be-4a14-beb2-657370e9865f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a01598dd996bd69b470e2e4833ea4231bc77f0305a3a7fec3f70b8e2b8f01cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\
\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v76sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zb9kx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:53Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:53 crc kubenswrapper[4844]: I0126 12:44:53.478480 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:53 crc kubenswrapper[4844]: I0126 12:44:53.478541 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:53 crc kubenswrapper[4844]: I0126 12:44:53.478565 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:53 crc kubenswrapper[4844]: I0126 12:44:53.478630 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:53 crc kubenswrapper[4844]: I0126 12:44:53.478663 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:53Z","lastTransitionTime":"2026-01-26T12:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:53 crc kubenswrapper[4844]: I0126 12:44:53.483018 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-94bpf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14600b66-6352-4f5e-9c09-eb2548503555\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1eb448e6788cdb87cd78364d442623294abe1263dbedbf9d15e8ca77c06ced2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-456bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-94bpf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:53Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:53 crc kubenswrapper[4844]: I0126 12:44:53.499097 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5qpr8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"04a4b371-44a9-4805-b60f-6f7ba0fac40b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://916da455e82003f3effd3be11a50a90b25232fc7d11d06285e8902a0a3cfd10e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7ckr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b598bd3381ec5062c126c04857c188ab29afc34c39ec94a2cd95b306cdfd00d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7ckr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5qpr8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:53Z is after 2025-08-24T17:21:41Z" Jan 26 
12:44:53 crc kubenswrapper[4844]: I0126 12:44:53.515903 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gxnj7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c69496f6-7f67-4cca-9c9f-420e5567b165\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxt6m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxt6m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gxnj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:53Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:53 crc kubenswrapper[4844]: I0126 12:44:53.553276 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2c6313f-dd44-4db9-a53a-63c8a42efe6c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b82658e4e6ddbf4e3a16f9d569c5cef6a683d401d79b7fa7f55aad8c70e8254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66ffcb99cad7e5aff2bbe4bdf3c0a2365dd91fb40b2382e05ab1f422c6ea2b26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f49e7dd36dc88342bc2e3bc43fc5770fb43572151945cba3867f9ffd7fa451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b51b34a870f5c4045729aa5ee5d5fb85ebd58
fd55a25c5b6effe0ea75ea8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9118772906fa09f9868a34942401b399a842e6718caa3d82b78b2974e1011cca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa0e17b53055bd229c52c53b69e491e0789569e73004101378a34609888bbbc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa0e17b53055bd229c52c53b69e491e0789569e73004101378a34609888bbbc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a99b81e3524dfff792c3e9a274f2460edfe7c29fb27c556fa1c7e427178ccd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a99b81e3524dfff792c3e9a274f2460edfe7c29fb27c556fa1c7e427178ccd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1cd347949870be709220badbf116b4ed8ff9db3962a03361e42f5ff1c61eb5ea\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cd347949870be709220badbf116b4ed8ff9db3962a03361e42f5ff1c61eb5ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:53Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:53 crc kubenswrapper[4844]: I0126 12:44:53.570978 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd60ce73390532e974f85d610708534f2ab6bcc38e93c54516cb783aca1bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:53Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:53 crc kubenswrapper[4844]: I0126 12:44:53.581889 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:53 crc kubenswrapper[4844]: I0126 12:44:53.581938 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:53 crc kubenswrapper[4844]: I0126 12:44:53.581952 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:53 crc kubenswrapper[4844]: I0126 12:44:53.581980 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:53 crc kubenswrapper[4844]: I0126 12:44:53.582005 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:53Z","lastTransitionTime":"2026-01-26T12:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:53 crc kubenswrapper[4844]: I0126 12:44:53.585944 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"012dd78d-465b-41aa-b845-5cd178650e56\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0a80313396de8bb91760bdf2477da9d233e2387d1ac6addcce62acc4578772c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11638189cf49baf0798a3c7a229b67e05eedf2292d79f884a990a091f21a61c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb28b05d43134d8c4f89d83cd620973c937fd16347910ebf056026f0a3708a92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a49b5118144d900070aa5f3fccea16c73f8e2451f3636fdab388c957d4822e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a49b5118144d900070aa5f3fccea16c73f8e2451f3636fdab388c957d4822e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:53Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:53 crc kubenswrapper[4844]: I0126 12:44:53.606378 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:53Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:53 crc kubenswrapper[4844]: I0126 12:44:53.626360 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://708e5cab2c0d22cab1f1e5c02d0f8afbdc0fcdccc9a56b663f365bfa4c7f8709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f61077567553b104f052a133a6227c3a887430e971721e7998259e7e7ea7edf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:53Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:53 crc kubenswrapper[4844]: I0126 12:44:53.641361 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:53Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:53 crc kubenswrapper[4844]: I0126 12:44:53.657200 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aecfc1fc-7b8c-42f4-9a7b-058a0acc9534\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64c5d1e4b1fda825b451c8acde0411694fbf7345723d6c3927905a882a79d62e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e33100e90b0d5a23baf7c27076a19dbca18d3e067be6b5a867e122fac37c1136\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64ae899d3a38090964f3d951fd185537ce4ef75f7512342da01f67767d00417b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9207454bb697abf7b64a37e402b50734ae419605cd0938b514ac4f2a74561ce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b40515c0e8056b01e4d48e259ece9389fbb5db6c0d403f2846f28948ce78b0b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64a2d4054650382dca2f56688356aec5fa475b49958e86dd4597a64f77a82f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64a2d4054650382dca2f56688356aec5fa475b49958e86dd4597a64f77a82f81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:53Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:53 crc kubenswrapper[4844]: I0126 12:44:53.673539 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ea4f5b3-1307-4259-8ee3-1de62370d8ef\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://943771df5941e9c5a48c2d8ee946cc9ac1ee7b00855dfe531fbce9f76450dd36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c79791a1a5ceea7564b6466722f4fb48a6729184724efc0ca2498896def357a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e62359ee71a87d75c73a16de09de5ea48d09e4cf2041f434f38f97002a5f8eee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b1fb5af67b83d9ecc627cbfcc14d011b4b11a5c4652a35c1bd1df6f03ab71f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:53Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:53 crc kubenswrapper[4844]: I0126 12:44:53.684556 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:53 crc kubenswrapper[4844]: I0126 12:44:53.684649 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:53 crc kubenswrapper[4844]: I0126 12:44:53.684670 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:53 crc kubenswrapper[4844]: I0126 12:44:53.684695 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:53 crc kubenswrapper[4844]: I0126 12:44:53.684714 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:53Z","lastTransitionTime":"2026-01-26T12:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:53 crc kubenswrapper[4844]: I0126 12:44:53.787812 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:53 crc kubenswrapper[4844]: I0126 12:44:53.787883 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:53 crc kubenswrapper[4844]: I0126 12:44:53.787894 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:53 crc kubenswrapper[4844]: I0126 12:44:53.787910 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:53 crc kubenswrapper[4844]: I0126 12:44:53.787924 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:53Z","lastTransitionTime":"2026-01-26T12:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:53 crc kubenswrapper[4844]: I0126 12:44:53.891197 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:53 crc kubenswrapper[4844]: I0126 12:44:53.891263 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:53 crc kubenswrapper[4844]: I0126 12:44:53.891284 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:53 crc kubenswrapper[4844]: I0126 12:44:53.891312 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:53 crc kubenswrapper[4844]: I0126 12:44:53.891332 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:53Z","lastTransitionTime":"2026-01-26T12:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:53 crc kubenswrapper[4844]: I0126 12:44:53.994245 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:53 crc kubenswrapper[4844]: I0126 12:44:53.994383 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:53 crc kubenswrapper[4844]: I0126 12:44:53.994409 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:53 crc kubenswrapper[4844]: I0126 12:44:53.994437 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:53 crc kubenswrapper[4844]: I0126 12:44:53.994458 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:53Z","lastTransitionTime":"2026-01-26T12:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:54 crc kubenswrapper[4844]: I0126 12:44:54.098215 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:54 crc kubenswrapper[4844]: I0126 12:44:54.098621 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:54 crc kubenswrapper[4844]: I0126 12:44:54.098773 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:54 crc kubenswrapper[4844]: I0126 12:44:54.098911 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:54 crc kubenswrapper[4844]: I0126 12:44:54.099047 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:54Z","lastTransitionTime":"2026-01-26T12:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:54 crc kubenswrapper[4844]: I0126 12:44:54.202944 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:54 crc kubenswrapper[4844]: I0126 12:44:54.203009 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:54 crc kubenswrapper[4844]: I0126 12:44:54.203030 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:54 crc kubenswrapper[4844]: I0126 12:44:54.203059 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:54 crc kubenswrapper[4844]: I0126 12:44:54.203084 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:54Z","lastTransitionTime":"2026-01-26T12:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:54 crc kubenswrapper[4844]: I0126 12:44:54.250884 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c69496f6-7f67-4cca-9c9f-420e5567b165-metrics-certs\") pod \"network-metrics-daemon-gxnj7\" (UID: \"c69496f6-7f67-4cca-9c9f-420e5567b165\") " pod="openshift-multus/network-metrics-daemon-gxnj7" Jan 26 12:44:54 crc kubenswrapper[4844]: E0126 12:44:54.251085 4844 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 12:44:54 crc kubenswrapper[4844]: E0126 12:44:54.251209 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c69496f6-7f67-4cca-9c9f-420e5567b165-metrics-certs podName:c69496f6-7f67-4cca-9c9f-420e5567b165 nodeName:}" failed. No retries permitted until 2026-01-26 12:45:26.251175688 +0000 UTC m=+103.184543330 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c69496f6-7f67-4cca-9c9f-420e5567b165-metrics-certs") pod "network-metrics-daemon-gxnj7" (UID: "c69496f6-7f67-4cca-9c9f-420e5567b165") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 12:44:54 crc kubenswrapper[4844]: I0126 12:44:54.305581 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:54 crc kubenswrapper[4844]: I0126 12:44:54.305701 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:54 crc kubenswrapper[4844]: I0126 12:44:54.305725 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:54 crc kubenswrapper[4844]: I0126 12:44:54.305754 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:54 crc kubenswrapper[4844]: I0126 12:44:54.305779 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:54Z","lastTransitionTime":"2026-01-26T12:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:54 crc kubenswrapper[4844]: I0126 12:44:54.309951 4844 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 18:37:30.845330586 +0000 UTC Jan 26 12:44:54 crc kubenswrapper[4844]: I0126 12:44:54.312348 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gxnj7" Jan 26 12:44:54 crc kubenswrapper[4844]: E0126 12:44:54.312585 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gxnj7" podUID="c69496f6-7f67-4cca-9c9f-420e5567b165" Jan 26 12:44:54 crc kubenswrapper[4844]: I0126 12:44:54.408211 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:54 crc kubenswrapper[4844]: I0126 12:44:54.408275 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:54 crc kubenswrapper[4844]: I0126 12:44:54.408287 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:54 crc kubenswrapper[4844]: I0126 12:44:54.408302 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:54 crc kubenswrapper[4844]: I0126 12:44:54.408311 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:54Z","lastTransitionTime":"2026-01-26T12:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:54 crc kubenswrapper[4844]: I0126 12:44:54.511249 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:54 crc kubenswrapper[4844]: I0126 12:44:54.511314 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:54 crc kubenswrapper[4844]: I0126 12:44:54.511332 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:54 crc kubenswrapper[4844]: I0126 12:44:54.511356 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:54 crc kubenswrapper[4844]: I0126 12:44:54.511372 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:54Z","lastTransitionTime":"2026-01-26T12:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:54 crc kubenswrapper[4844]: I0126 12:44:54.614546 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:54 crc kubenswrapper[4844]: I0126 12:44:54.614635 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:54 crc kubenswrapper[4844]: I0126 12:44:54.614652 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:54 crc kubenswrapper[4844]: I0126 12:44:54.614670 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:54 crc kubenswrapper[4844]: I0126 12:44:54.614682 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:54Z","lastTransitionTime":"2026-01-26T12:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:54 crc kubenswrapper[4844]: I0126 12:44:54.716882 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:54 crc kubenswrapper[4844]: I0126 12:44:54.716947 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:54 crc kubenswrapper[4844]: I0126 12:44:54.716970 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:54 crc kubenswrapper[4844]: I0126 12:44:54.717000 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:54 crc kubenswrapper[4844]: I0126 12:44:54.717023 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:54Z","lastTransitionTime":"2026-01-26T12:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:54 crc kubenswrapper[4844]: I0126 12:44:54.820231 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:54 crc kubenswrapper[4844]: I0126 12:44:54.820312 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:54 crc kubenswrapper[4844]: I0126 12:44:54.820330 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:54 crc kubenswrapper[4844]: I0126 12:44:54.820358 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:54 crc kubenswrapper[4844]: I0126 12:44:54.820376 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:54Z","lastTransitionTime":"2026-01-26T12:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:54 crc kubenswrapper[4844]: I0126 12:44:54.923852 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:54 crc kubenswrapper[4844]: I0126 12:44:54.923900 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:54 crc kubenswrapper[4844]: I0126 12:44:54.923909 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:54 crc kubenswrapper[4844]: I0126 12:44:54.923924 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:54 crc kubenswrapper[4844]: I0126 12:44:54.923933 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:54Z","lastTransitionTime":"2026-01-26T12:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:55 crc kubenswrapper[4844]: I0126 12:44:55.026813 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:55 crc kubenswrapper[4844]: I0126 12:44:55.026849 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:55 crc kubenswrapper[4844]: I0126 12:44:55.026858 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:55 crc kubenswrapper[4844]: I0126 12:44:55.026871 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:55 crc kubenswrapper[4844]: I0126 12:44:55.026880 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:55Z","lastTransitionTime":"2026-01-26T12:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:55 crc kubenswrapper[4844]: I0126 12:44:55.129067 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:55 crc kubenswrapper[4844]: I0126 12:44:55.129132 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:55 crc kubenswrapper[4844]: I0126 12:44:55.129155 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:55 crc kubenswrapper[4844]: I0126 12:44:55.129178 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:55 crc kubenswrapper[4844]: I0126 12:44:55.129193 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:55Z","lastTransitionTime":"2026-01-26T12:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:55 crc kubenswrapper[4844]: I0126 12:44:55.232151 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:55 crc kubenswrapper[4844]: I0126 12:44:55.232195 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:55 crc kubenswrapper[4844]: I0126 12:44:55.232207 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:55 crc kubenswrapper[4844]: I0126 12:44:55.232225 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:55 crc kubenswrapper[4844]: I0126 12:44:55.232235 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:55Z","lastTransitionTime":"2026-01-26T12:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:55 crc kubenswrapper[4844]: I0126 12:44:55.310586 4844 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 20:04:20.122407915 +0000 UTC Jan 26 12:44:55 crc kubenswrapper[4844]: I0126 12:44:55.313063 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:44:55 crc kubenswrapper[4844]: I0126 12:44:55.313107 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:44:55 crc kubenswrapper[4844]: I0126 12:44:55.313156 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:44:55 crc kubenswrapper[4844]: E0126 12:44:55.313343 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 12:44:55 crc kubenswrapper[4844]: E0126 12:44:55.313480 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 12:44:55 crc kubenswrapper[4844]: E0126 12:44:55.313571 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 12:44:55 crc kubenswrapper[4844]: I0126 12:44:55.334656 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:55 crc kubenswrapper[4844]: I0126 12:44:55.334698 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:55 crc kubenswrapper[4844]: I0126 12:44:55.334708 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:55 crc kubenswrapper[4844]: I0126 12:44:55.334723 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:55 crc kubenswrapper[4844]: I0126 12:44:55.334734 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:55Z","lastTransitionTime":"2026-01-26T12:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:55 crc kubenswrapper[4844]: I0126 12:44:55.437437 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:55 crc kubenswrapper[4844]: I0126 12:44:55.437493 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:55 crc kubenswrapper[4844]: I0126 12:44:55.437508 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:55 crc kubenswrapper[4844]: I0126 12:44:55.437527 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:55 crc kubenswrapper[4844]: I0126 12:44:55.437540 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:55Z","lastTransitionTime":"2026-01-26T12:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:55 crc kubenswrapper[4844]: I0126 12:44:55.540527 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:55 crc kubenswrapper[4844]: I0126 12:44:55.540783 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:55 crc kubenswrapper[4844]: I0126 12:44:55.540799 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:55 crc kubenswrapper[4844]: I0126 12:44:55.540814 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:55 crc kubenswrapper[4844]: I0126 12:44:55.540828 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:55Z","lastTransitionTime":"2026-01-26T12:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:55 crc kubenswrapper[4844]: I0126 12:44:55.643655 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:55 crc kubenswrapper[4844]: I0126 12:44:55.643705 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:55 crc kubenswrapper[4844]: I0126 12:44:55.643713 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:55 crc kubenswrapper[4844]: I0126 12:44:55.643730 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:55 crc kubenswrapper[4844]: I0126 12:44:55.643743 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:55Z","lastTransitionTime":"2026-01-26T12:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:55 crc kubenswrapper[4844]: I0126 12:44:55.747267 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:55 crc kubenswrapper[4844]: I0126 12:44:55.747322 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:55 crc kubenswrapper[4844]: I0126 12:44:55.747335 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:55 crc kubenswrapper[4844]: I0126 12:44:55.747354 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:55 crc kubenswrapper[4844]: I0126 12:44:55.747366 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:55Z","lastTransitionTime":"2026-01-26T12:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:55 crc kubenswrapper[4844]: I0126 12:44:55.849956 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:55 crc kubenswrapper[4844]: I0126 12:44:55.850029 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:55 crc kubenswrapper[4844]: I0126 12:44:55.850053 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:55 crc kubenswrapper[4844]: I0126 12:44:55.850081 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:55 crc kubenswrapper[4844]: I0126 12:44:55.850105 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:55Z","lastTransitionTime":"2026-01-26T12:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:55 crc kubenswrapper[4844]: I0126 12:44:55.952856 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:55 crc kubenswrapper[4844]: I0126 12:44:55.952922 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:55 crc kubenswrapper[4844]: I0126 12:44:55.952940 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:55 crc kubenswrapper[4844]: I0126 12:44:55.952964 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:55 crc kubenswrapper[4844]: I0126 12:44:55.952984 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:55Z","lastTransitionTime":"2026-01-26T12:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:56 crc kubenswrapper[4844]: I0126 12:44:56.056035 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:56 crc kubenswrapper[4844]: I0126 12:44:56.056080 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:56 crc kubenswrapper[4844]: I0126 12:44:56.056091 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:56 crc kubenswrapper[4844]: I0126 12:44:56.056108 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:56 crc kubenswrapper[4844]: I0126 12:44:56.056118 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:56Z","lastTransitionTime":"2026-01-26T12:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:56 crc kubenswrapper[4844]: I0126 12:44:56.159521 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:56 crc kubenswrapper[4844]: I0126 12:44:56.159628 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:56 crc kubenswrapper[4844]: I0126 12:44:56.159660 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:56 crc kubenswrapper[4844]: I0126 12:44:56.159690 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:56 crc kubenswrapper[4844]: I0126 12:44:56.159713 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:56Z","lastTransitionTime":"2026-01-26T12:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:56 crc kubenswrapper[4844]: I0126 12:44:56.263230 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:56 crc kubenswrapper[4844]: I0126 12:44:56.263327 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:56 crc kubenswrapper[4844]: I0126 12:44:56.263353 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:56 crc kubenswrapper[4844]: I0126 12:44:56.263387 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:56 crc kubenswrapper[4844]: I0126 12:44:56.263412 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:56Z","lastTransitionTime":"2026-01-26T12:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:56 crc kubenswrapper[4844]: I0126 12:44:56.310939 4844 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 16:12:34.854575524 +0000 UTC Jan 26 12:44:56 crc kubenswrapper[4844]: I0126 12:44:56.312218 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gxnj7" Jan 26 12:44:56 crc kubenswrapper[4844]: E0126 12:44:56.312417 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gxnj7" podUID="c69496f6-7f67-4cca-9c9f-420e5567b165" Jan 26 12:44:56 crc kubenswrapper[4844]: I0126 12:44:56.366931 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:56 crc kubenswrapper[4844]: I0126 12:44:56.366992 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:56 crc kubenswrapper[4844]: I0126 12:44:56.367015 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:56 crc kubenswrapper[4844]: I0126 12:44:56.367039 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:56 crc kubenswrapper[4844]: I0126 12:44:56.367056 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:56Z","lastTransitionTime":"2026-01-26T12:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:56 crc kubenswrapper[4844]: I0126 12:44:56.449020 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:56 crc kubenswrapper[4844]: I0126 12:44:56.449095 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:56 crc kubenswrapper[4844]: I0126 12:44:56.449111 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:56 crc kubenswrapper[4844]: I0126 12:44:56.449133 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:56 crc kubenswrapper[4844]: I0126 12:44:56.449149 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:56Z","lastTransitionTime":"2026-01-26T12:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:56 crc kubenswrapper[4844]: E0126 12:44:56.466428 4844 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:44:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:44:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:44:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:44:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8ec9310b-463d-4f1d-a480-c21f33e8b459\\\",\\\"systemUUID\\\":\\\"4eb778d6-9226-440d-bd27-0b6f19659b0d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:56Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:56 crc kubenswrapper[4844]: I0126 12:44:56.470664 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:56 crc kubenswrapper[4844]: I0126 12:44:56.470718 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 12:44:56 crc kubenswrapper[4844]: I0126 12:44:56.470735 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:56 crc kubenswrapper[4844]: I0126 12:44:56.470756 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:56 crc kubenswrapper[4844]: I0126 12:44:56.470771 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:56Z","lastTransitionTime":"2026-01-26T12:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:56 crc kubenswrapper[4844]: E0126 12:44:56.486196 4844 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:44:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:44:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:44:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:44:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8ec9310b-463d-4f1d-a480-c21f33e8b459\\\",\\\"systemUUID\\\":\\\"4eb778d6-9226-440d-bd27-0b6f19659b0d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:56Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:56 crc kubenswrapper[4844]: I0126 12:44:56.489924 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:56 crc kubenswrapper[4844]: I0126 12:44:56.489956 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 12:44:56 crc kubenswrapper[4844]: I0126 12:44:56.489970 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:56 crc kubenswrapper[4844]: I0126 12:44:56.489986 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:56 crc kubenswrapper[4844]: I0126 12:44:56.489999 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:56Z","lastTransitionTime":"2026-01-26T12:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:56 crc kubenswrapper[4844]: E0126 12:44:56.502676 4844 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:44:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:44:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:44:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:44:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8ec9310b-463d-4f1d-a480-c21f33e8b459\\\",\\\"systemUUID\\\":\\\"4eb778d6-9226-440d-bd27-0b6f19659b0d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:56Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:56 crc kubenswrapper[4844]: I0126 12:44:56.506187 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:56 crc kubenswrapper[4844]: I0126 12:44:56.506257 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 12:44:56 crc kubenswrapper[4844]: I0126 12:44:56.506282 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:56 crc kubenswrapper[4844]: I0126 12:44:56.506311 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:56 crc kubenswrapper[4844]: I0126 12:44:56.506334 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:56Z","lastTransitionTime":"2026-01-26T12:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:56 crc kubenswrapper[4844]: E0126 12:44:56.522155 4844 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:44:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:44:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:44:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:44:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8ec9310b-463d-4f1d-a480-c21f33e8b459\\\",\\\"systemUUID\\\":\\\"4eb778d6-9226-440d-bd27-0b6f19659b0d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:56Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:56 crc kubenswrapper[4844]: I0126 12:44:56.525878 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:56 crc kubenswrapper[4844]: I0126 12:44:56.525926 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 12:44:56 crc kubenswrapper[4844]: I0126 12:44:56.525937 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:56 crc kubenswrapper[4844]: I0126 12:44:56.525953 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:56 crc kubenswrapper[4844]: I0126 12:44:56.525965 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:56Z","lastTransitionTime":"2026-01-26T12:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:56 crc kubenswrapper[4844]: E0126 12:44:56.537477 4844 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:44:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:44:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:44:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:44:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8ec9310b-463d-4f1d-a480-c21f33e8b459\\\",\\\"systemUUID\\\":\\\"4eb778d6-9226-440d-bd27-0b6f19659b0d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:56Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:56 crc kubenswrapper[4844]: E0126 12:44:56.537684 4844 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 12:44:56 crc kubenswrapper[4844]: I0126 12:44:56.539299 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 26 12:44:56 crc kubenswrapper[4844]: I0126 12:44:56.539347 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:56 crc kubenswrapper[4844]: I0126 12:44:56.539362 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:56 crc kubenswrapper[4844]: I0126 12:44:56.539384 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:56 crc kubenswrapper[4844]: I0126 12:44:56.539400 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:56Z","lastTransitionTime":"2026-01-26T12:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:56 crc kubenswrapper[4844]: I0126 12:44:56.642207 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:56 crc kubenswrapper[4844]: I0126 12:44:56.642247 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:56 crc kubenswrapper[4844]: I0126 12:44:56.642258 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:56 crc kubenswrapper[4844]: I0126 12:44:56.642273 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:56 crc kubenswrapper[4844]: I0126 12:44:56.642284 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:56Z","lastTransitionTime":"2026-01-26T12:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:56 crc kubenswrapper[4844]: I0126 12:44:56.744763 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:56 crc kubenswrapper[4844]: I0126 12:44:56.744826 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:56 crc kubenswrapper[4844]: I0126 12:44:56.744849 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:56 crc kubenswrapper[4844]: I0126 12:44:56.744880 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:56 crc kubenswrapper[4844]: I0126 12:44:56.744903 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:56Z","lastTransitionTime":"2026-01-26T12:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:56 crc kubenswrapper[4844]: I0126 12:44:56.847954 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:56 crc kubenswrapper[4844]: I0126 12:44:56.848025 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:56 crc kubenswrapper[4844]: I0126 12:44:56.848045 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:56 crc kubenswrapper[4844]: I0126 12:44:56.848071 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:56 crc kubenswrapper[4844]: I0126 12:44:56.848093 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:56Z","lastTransitionTime":"2026-01-26T12:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:56 crc kubenswrapper[4844]: I0126 12:44:56.951296 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:56 crc kubenswrapper[4844]: I0126 12:44:56.951369 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:56 crc kubenswrapper[4844]: I0126 12:44:56.951384 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:56 crc kubenswrapper[4844]: I0126 12:44:56.951411 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:56 crc kubenswrapper[4844]: I0126 12:44:56.951430 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:56Z","lastTransitionTime":"2026-01-26T12:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:57 crc kubenswrapper[4844]: I0126 12:44:57.055137 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:57 crc kubenswrapper[4844]: I0126 12:44:57.055194 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:57 crc kubenswrapper[4844]: I0126 12:44:57.055212 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:57 crc kubenswrapper[4844]: I0126 12:44:57.055237 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:57 crc kubenswrapper[4844]: I0126 12:44:57.055258 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:57Z","lastTransitionTime":"2026-01-26T12:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:57 crc kubenswrapper[4844]: I0126 12:44:57.158414 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:57 crc kubenswrapper[4844]: I0126 12:44:57.158497 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:57 crc kubenswrapper[4844]: I0126 12:44:57.158520 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:57 crc kubenswrapper[4844]: I0126 12:44:57.158550 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:57 crc kubenswrapper[4844]: I0126 12:44:57.158576 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:57Z","lastTransitionTime":"2026-01-26T12:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:57 crc kubenswrapper[4844]: I0126 12:44:57.262825 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:57 crc kubenswrapper[4844]: I0126 12:44:57.262888 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:57 crc kubenswrapper[4844]: I0126 12:44:57.262904 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:57 crc kubenswrapper[4844]: I0126 12:44:57.262930 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:57 crc kubenswrapper[4844]: I0126 12:44:57.262949 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:57Z","lastTransitionTime":"2026-01-26T12:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:57 crc kubenswrapper[4844]: I0126 12:44:57.311870 4844 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 14:25:44.063763769 +0000 UTC Jan 26 12:44:57 crc kubenswrapper[4844]: I0126 12:44:57.312159 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:44:57 crc kubenswrapper[4844]: E0126 12:44:57.312343 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 12:44:57 crc kubenswrapper[4844]: I0126 12:44:57.312411 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:44:57 crc kubenswrapper[4844]: I0126 12:44:57.312464 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:44:57 crc kubenswrapper[4844]: E0126 12:44:57.312731 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 12:44:57 crc kubenswrapper[4844]: E0126 12:44:57.312895 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 12:44:57 crc kubenswrapper[4844]: I0126 12:44:57.314743 4844 scope.go:117] "RemoveContainer" containerID="726bf5201f734836c4fb01a9d5a0cb8897f5ec3142e9b54a6acfe5a82a14df5f" Jan 26 12:44:57 crc kubenswrapper[4844]: E0126 12:44:57.315503 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-rlvx4_openshift-ovn-kubernetes(348a2956-fe61-43b9-858f-ab9c97a2985b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" podUID="348a2956-fe61-43b9-858f-ab9c97a2985b" Jan 26 12:44:57 crc kubenswrapper[4844]: I0126 12:44:57.366139 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:57 crc kubenswrapper[4844]: I0126 12:44:57.366189 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:57 crc kubenswrapper[4844]: I0126 12:44:57.366201 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:57 crc kubenswrapper[4844]: I0126 12:44:57.366218 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:57 crc kubenswrapper[4844]: I0126 12:44:57.366228 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:57Z","lastTransitionTime":"2026-01-26T12:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:57 crc kubenswrapper[4844]: I0126 12:44:57.469804 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:57 crc kubenswrapper[4844]: I0126 12:44:57.469865 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:57 crc kubenswrapper[4844]: I0126 12:44:57.469882 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:57 crc kubenswrapper[4844]: I0126 12:44:57.469906 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:57 crc kubenswrapper[4844]: I0126 12:44:57.469925 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:57Z","lastTransitionTime":"2026-01-26T12:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:57 crc kubenswrapper[4844]: I0126 12:44:57.572885 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:57 crc kubenswrapper[4844]: I0126 12:44:57.572937 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:57 crc kubenswrapper[4844]: I0126 12:44:57.572951 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:57 crc kubenswrapper[4844]: I0126 12:44:57.572972 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:57 crc kubenswrapper[4844]: I0126 12:44:57.572988 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:57Z","lastTransitionTime":"2026-01-26T12:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:57 crc kubenswrapper[4844]: I0126 12:44:57.677015 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:57 crc kubenswrapper[4844]: I0126 12:44:57.677079 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:57 crc kubenswrapper[4844]: I0126 12:44:57.677090 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:57 crc kubenswrapper[4844]: I0126 12:44:57.677157 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:57 crc kubenswrapper[4844]: I0126 12:44:57.677180 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:57Z","lastTransitionTime":"2026-01-26T12:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:57 crc kubenswrapper[4844]: I0126 12:44:57.779720 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:57 crc kubenswrapper[4844]: I0126 12:44:57.779787 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:57 crc kubenswrapper[4844]: I0126 12:44:57.779810 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:57 crc kubenswrapper[4844]: I0126 12:44:57.779840 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:57 crc kubenswrapper[4844]: I0126 12:44:57.779860 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:57Z","lastTransitionTime":"2026-01-26T12:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:57 crc kubenswrapper[4844]: I0126 12:44:57.883042 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:57 crc kubenswrapper[4844]: I0126 12:44:57.883096 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:57 crc kubenswrapper[4844]: I0126 12:44:57.883111 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:57 crc kubenswrapper[4844]: I0126 12:44:57.883137 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:57 crc kubenswrapper[4844]: I0126 12:44:57.883156 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:57Z","lastTransitionTime":"2026-01-26T12:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:57 crc kubenswrapper[4844]: I0126 12:44:57.985477 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:57 crc kubenswrapper[4844]: I0126 12:44:57.985515 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:57 crc kubenswrapper[4844]: I0126 12:44:57.985524 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:57 crc kubenswrapper[4844]: I0126 12:44:57.985538 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:57 crc kubenswrapper[4844]: I0126 12:44:57.985546 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:57Z","lastTransitionTime":"2026-01-26T12:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:58 crc kubenswrapper[4844]: I0126 12:44:58.088670 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:58 crc kubenswrapper[4844]: I0126 12:44:58.088746 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:58 crc kubenswrapper[4844]: I0126 12:44:58.088755 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:58 crc kubenswrapper[4844]: I0126 12:44:58.088791 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:58 crc kubenswrapper[4844]: I0126 12:44:58.088802 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:58Z","lastTransitionTime":"2026-01-26T12:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:58 crc kubenswrapper[4844]: I0126 12:44:58.192175 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:58 crc kubenswrapper[4844]: I0126 12:44:58.192218 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:58 crc kubenswrapper[4844]: I0126 12:44:58.192229 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:58 crc kubenswrapper[4844]: I0126 12:44:58.192244 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:58 crc kubenswrapper[4844]: I0126 12:44:58.192255 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:58Z","lastTransitionTime":"2026-01-26T12:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:58 crc kubenswrapper[4844]: I0126 12:44:58.296353 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:58 crc kubenswrapper[4844]: I0126 12:44:58.296410 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:58 crc kubenswrapper[4844]: I0126 12:44:58.296422 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:58 crc kubenswrapper[4844]: I0126 12:44:58.296438 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:58 crc kubenswrapper[4844]: I0126 12:44:58.296449 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:58Z","lastTransitionTime":"2026-01-26T12:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:58 crc kubenswrapper[4844]: I0126 12:44:58.312854 4844 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 05:13:00.781688033 +0000 UTC Jan 26 12:44:58 crc kubenswrapper[4844]: I0126 12:44:58.312935 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gxnj7" Jan 26 12:44:58 crc kubenswrapper[4844]: E0126 12:44:58.313084 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gxnj7" podUID="c69496f6-7f67-4cca-9c9f-420e5567b165" Jan 26 12:44:58 crc kubenswrapper[4844]: I0126 12:44:58.399513 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:58 crc kubenswrapper[4844]: I0126 12:44:58.399568 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:58 crc kubenswrapper[4844]: I0126 12:44:58.399586 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:58 crc kubenswrapper[4844]: I0126 12:44:58.399630 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:58 crc kubenswrapper[4844]: I0126 12:44:58.399640 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:58Z","lastTransitionTime":"2026-01-26T12:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:58 crc kubenswrapper[4844]: I0126 12:44:58.502179 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:58 crc kubenswrapper[4844]: I0126 12:44:58.502222 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:58 crc kubenswrapper[4844]: I0126 12:44:58.502233 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:58 crc kubenswrapper[4844]: I0126 12:44:58.502253 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:58 crc kubenswrapper[4844]: I0126 12:44:58.502265 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:58Z","lastTransitionTime":"2026-01-26T12:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:58 crc kubenswrapper[4844]: I0126 12:44:58.605049 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:58 crc kubenswrapper[4844]: I0126 12:44:58.605124 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:58 crc kubenswrapper[4844]: I0126 12:44:58.605140 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:58 crc kubenswrapper[4844]: I0126 12:44:58.605157 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:58 crc kubenswrapper[4844]: I0126 12:44:58.605171 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:58Z","lastTransitionTime":"2026-01-26T12:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:58 crc kubenswrapper[4844]: I0126 12:44:58.707703 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:58 crc kubenswrapper[4844]: I0126 12:44:58.707742 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:58 crc kubenswrapper[4844]: I0126 12:44:58.707752 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:58 crc kubenswrapper[4844]: I0126 12:44:58.707766 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:58 crc kubenswrapper[4844]: I0126 12:44:58.707777 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:58Z","lastTransitionTime":"2026-01-26T12:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:58 crc kubenswrapper[4844]: I0126 12:44:58.731901 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-zb9kx_467433a4-64be-4a14-beb2-657370e9865f/kube-multus/0.log" Jan 26 12:44:58 crc kubenswrapper[4844]: I0126 12:44:58.732003 4844 generic.go:334] "Generic (PLEG): container finished" podID="467433a4-64be-4a14-beb2-657370e9865f" containerID="9a01598dd996bd69b470e2e4833ea4231bc77f0305a3a7fec3f70b8e2b8f01cb" exitCode=1 Jan 26 12:44:58 crc kubenswrapper[4844]: I0126 12:44:58.732057 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-zb9kx" event={"ID":"467433a4-64be-4a14-beb2-657370e9865f","Type":"ContainerDied","Data":"9a01598dd996bd69b470e2e4833ea4231bc77f0305a3a7fec3f70b8e2b8f01cb"} Jan 26 12:44:58 crc kubenswrapper[4844]: I0126 12:44:58.732783 4844 scope.go:117] "RemoveContainer" containerID="9a01598dd996bd69b470e2e4833ea4231bc77f0305a3a7fec3f70b8e2b8f01cb" Jan 26 12:44:58 crc kubenswrapper[4844]: I0126 12:44:58.760856 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2c6313f-dd44-4db9-a53a-63c8a42efe6c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b82658e4e6ddbf4e3a16f9d569c5cef6a683d401d79b7fa7f55aad8c70e8254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66ffcb99cad7e5aff2bbe4bdf3c0a2365dd91fb40b2382e05ab1f422c6ea2b26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"star
tedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f49e7dd36dc88342bc2e3bc43fc5770fb43572151945cba3867f9ffd7fa451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b51b34a870f5c4045729aa5ee5d5fb85ebd58fd55a25c5b6effe0ea75ea8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9118772906fa09f9868a34942401b399a842e6718caa3d82b78b2974e1011cca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa0e17b53055bd229c52c53b69e491e0789569e73004101378a34609888bbbc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa0e17b53055bd229c52c53b69e491e0789569e730
04101378a34609888bbbc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a99b81e3524dfff792c3e9a274f2460edfe7c29fb27c556fa1c7e427178ccd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a99b81e3524dfff792c3e9a274f2460edfe7c29fb27c556fa1c7e427178ccd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1cd347949870be709220badbf116b4ed8ff9db3962a03361e42f5ff1c61eb5ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cd347949870be709220badbf116b4ed8ff9db3962a03361e42f5ff1c61eb5ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:58Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:58 crc kubenswrapper[4844]: I0126 12:44:58.776691 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd60ce73390532e974f85d610708534f2ab6bcc38e93c54516cb783aca1bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:58Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:58 crc kubenswrapper[4844]: I0126 12:44:58.788326 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-94bpf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"14600b66-6352-4f5e-9c09-eb2548503555\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1eb448e6788cdb87cd78364d442623294abe1263dbedbf9d15e8ca77c06ced2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-456bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-94bpf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:58Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:58 crc kubenswrapper[4844]: I0126 12:44:58.801798 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5qpr8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"04a4b371-44a9-4805-b60f-6f7ba0fac40b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://916da455e82003f3effd3be11a50a90b25232fc7d11d06285e8902a0a3cfd10e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7ckr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b598bd3381ec5062c126c04857c188ab29afc34c39ec94a2cd95b306cdfd00d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7ckr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5qpr8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:58Z is after 2025-08-24T17:21:41Z" Jan 26 
12:44:58 crc kubenswrapper[4844]: I0126 12:44:58.822528 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:58 crc kubenswrapper[4844]: I0126 12:44:58.823038 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:58 crc kubenswrapper[4844]: I0126 12:44:58.823051 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:58 crc kubenswrapper[4844]: I0126 12:44:58.823090 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:58 crc kubenswrapper[4844]: I0126 12:44:58.823101 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:58Z","lastTransitionTime":"2026-01-26T12:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:58 crc kubenswrapper[4844]: I0126 12:44:58.843915 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gxnj7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c69496f6-7f67-4cca-9c9f-420e5567b165\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxt6m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxt6m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gxnj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:58Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:58 crc kubenswrapper[4844]: I0126 12:44:58.861728 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:58Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:58 crc kubenswrapper[4844]: I0126 12:44:58.876205 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aecfc1fc-7b8c-42f4-9a7b-058a0acc9534\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64c5d1e4b1fda825b451c8acde0411694fbf7345723d6c3927905a882a79d62e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e33100e90b0d5a23baf7c27076a19dbca18d3e067be6b5a867e122fac37c1136\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64ae899d3a38090964f3d951fd185537ce4ef75f7512342da01f67767d00417b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9207454bb697abf7b64a37e402b50734ae419605cd0938b514ac4f2a74561ce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b40515c0e8056b01e4d48e259ece9389fbb5db6c0d403f2846f28948ce78b0b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64a2d4054650382dca2f56688356aec5fa475b49958e86dd4597a64f77a82f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64a2d4054650382dca2f56688356aec5fa475b49958e86dd4597a64f77a82f81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\
\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:58Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:58 crc kubenswrapper[4844]: I0126 12:44:58.888151 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ea4f5b3-1307-4259-8ee3-1de62370d8ef\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://943771df5941e9c5a48c2d8ee946cc9ac1ee7b00855dfe531fbce9f76450dd36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c79791a1a5ceea7564b6466722f4fb48a6729184724efc0ca2498896def357a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e62359ee71a87d75c73a16de09de5ea48d09e4cf2041f434f38f97002a5f8eee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578b
c18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b1fb5af67b83d9ecc627cbfcc14d011b4b11a5c4652a35c1bd1df6f03ab71f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:58Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:58 crc kubenswrapper[4844]: I0126 12:44:58.900622 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"012dd78d-465b-41aa-b845-5cd178650e56\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0a80313396de8bb91760bdf2477da9d233e2387d1ac6addcce62acc4578772c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11638189cf49baf0798a3c7a229b67e05eedf2292d79f884a990a091f21a61c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb28b05d43134d8c4f89d83cd620973c937fd16347910ebf056026f0a3708a92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a49b5118144d900070aa5f3fccea16c73f8e2451f3636fdab388c957d4822e5\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a49b5118144d900070aa5f3fccea16c73f8e2451f3636fdab388c957d4822e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:58Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:58 crc kubenswrapper[4844]: I0126 12:44:58.912401 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:58Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:58 crc kubenswrapper[4844]: I0126 12:44:58.925658 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:58 crc kubenswrapper[4844]: I0126 12:44:58.925696 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:58 crc kubenswrapper[4844]: I0126 12:44:58.925707 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:58 crc kubenswrapper[4844]: I0126 12:44:58.925719 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:58 crc kubenswrapper[4844]: I0126 12:44:58.925730 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:58Z","lastTransitionTime":"2026-01-26T12:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:58 crc kubenswrapper[4844]: I0126 12:44:58.926789 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://708e5cab2c0d22cab1f1e5c02d0f8afbdc0fcdccc9a56b663f365bfa4c7f8709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f61077567553b104f052a133a6227c3a887430e971721e7998259e7e7ea7edf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:58Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:58 crc kubenswrapper[4844]: I0126 12:44:58.937805 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:58Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:58 crc kubenswrapper[4844]: I0126 12:44:58.948258 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c6cdfcbb27c9a8c515f38e43d4813f25d2d3781a322e424e09ab047259bd0e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:58Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:58 crc kubenswrapper[4844]: I0126 12:44:58.957731 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7wd9k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"046bb01b-89ef-40e9-bbbd-83b5f2d2cf96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e97a7a3de02627680f94659705ad21fb73a76a1c11f14ebff8ec48ef204ea753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h4zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7wd9k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:58Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:58 crc kubenswrapper[4844]: I0126 12:44:58.971422 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zb9kx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"467433a4-64be-4a14-beb2-657370e9865f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a01598dd996bd69b470e2e4833ea4231bc77f0305a3a7fec3f70b8e2b8f01cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a01598dd996bd69b470e2e4833ea4231bc77f0305a3a7fec3f70b8e2b8f01cb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T12:44:57Z\\\",\\\"message\\\":\\\"2026-01-26T12:44:11+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_3f62d99b-10f3-4489-9ddd-fa2f775e6b8e\\\\n2026-01-26T12:44:11+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_3f62d99b-10f3-4489-9ddd-fa2f775e6b8e to /host/opt/cni/bin/\\\\n2026-01-26T12:44:11Z [verbose] multus-daemon started\\\\n2026-01-26T12:44:11Z [verbose] Readiness Indicator file check\\\\n2026-01-26T12:44:56Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the 
condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v76sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zb9kx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:58Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:58 crc kubenswrapper[4844]: I0126 12:44:58.985090 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3602fc7-397b-4d73-ab0c-45acc047397b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25282e97c34daf17d2fdac6ba7c074c611ec8df85288f5753bf98a3c1add5afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29xcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c964e0f9d13a855d738b028f8a2bed32fb23f4a05f0f0222a7d24e3222f44b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29xcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j7r9j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:58Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:59 crc kubenswrapper[4844]: I0126 12:44:59.000712 4844 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f6ttt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0ad2def-b040-48db-be8a-19f66df2c0f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a065fe1dc7d374bbe86c5012d0f224285e08e6b38a8eeb9fcdc76d684162934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://489205a3983f27152122db352c7a35ccdb12f14bb7b4c2bf3642f5c99699d745\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://489205a3983f27152122db352c7a35ccdb12f14bb7b4c2bf3642f5c99699d745\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc3c4c5df7dc10f646a7fdcf0d70daccac7a5bfc3ec9772a4b6d9aa4305dcc5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc3c4c5df7dc10f646a7fdcf0d70daccac7a5bfc3ec9772a4b6d9aa4305dcc5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74838e6328e419da6cc94d4f2781197e94cbf74aaf06d88c6d0e39eda967fede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74838e6328e419da6cc94d4f2781197e94cbf74aaf06d88c6d0e39eda967fede\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4821615afe8834b864a0c33ad5733eef0b5247da50e1e945cab1c75f9828aa1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4821615afe8834b864a0c33ad5733eef0b5247da50e1e945cab1c75f9828aa1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c800e6a3e1513d6bc5ab5a54c8d918f2c81b5a4d2631fc36f5f7a29fcda3d220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c800e6a3e1513d6bc5ab5a54c8d918f2c81b5a4d2631fc36f5f7a29fcda3d220\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://810435fd1610f5f81dc70f7c0d455c11b414c37e9e392dec8eb3b4668c7820d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://810435fd1610f5f81dc70f7c0d455c11b414c37e9e392dec8eb3b4668c7820d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f6ttt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:58Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:59 crc kubenswrapper[4844]: I0126 12:44:59.020377 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"348a2956-fe61-43b9-858f-ab9c97a2985b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7af767e8f476993661bb886e421e7dd2fbd11c07865d6e56a8595425fe4265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a97ef8aa10641683ab96943f4c1a54452ee34b28c854dd7902fc075a70e7f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de371086349f06d5762291b0be4ba366d69376f96d55c2c781c414d9acaff745\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2cfd5586a3073d55abd3f41fb92679c4f86ea8caeea2d9a8ca42515277750a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a16b2c8da5ad6d42bd5ce77ca31794858e04a05060c5bb0538dd099b1d5dd6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03f4ffe2ba632bdfb8c45133ef267a29fa660e3aa85dfd99934ded60d32772f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://726bf5201f734836c4fb01a9d5a0cb8897f5ec3142e9b54a6acfe5a82a14df5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726bf5201f734836c4fb01a9d5a0cb8897f5ec3142e9b54a6acfe5a82a14df5f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T12:44:44Z\\\",\\\"message\\\":\\\"none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.93:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d71b38eb-32af-4c0f-9490-7c317c111e3a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 12:44:43.613387 6501 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nI0126 12:44:43.613343 6501 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-apiserver/apiserver]} name:Service_openshift-kube-apiserver/apiserver_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.93:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d71b38eb-32af-4c0f-9490-7c317c111e3a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 12:44:43.613431 6501 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0126 12:44:43.613493 6501 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-rlvx4_openshift-ovn-kubernetes(348a2956-fe61-43b9-858f-ab9c97a2985b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbe92fbc9ce1d23484a9dca7eddb878b59da1fbe669098a82a798d41ee52e61d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://370a02984d12e09b879a5f3af8d9fe26b40d438d558e51c2d352676f29198a02\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://370a02984d12e09b879a5f3af8d9fe26b40d438d558e51c2d352676f29198a02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rlvx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:59Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:59 crc kubenswrapper[4844]: I0126 12:44:59.028031 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:59 crc kubenswrapper[4844]: I0126 12:44:59.028080 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:59 crc kubenswrapper[4844]: I0126 12:44:59.028091 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:59 crc kubenswrapper[4844]: I0126 12:44:59.028106 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:59 crc kubenswrapper[4844]: I0126 12:44:59.028119 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:59Z","lastTransitionTime":"2026-01-26T12:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:59 crc kubenswrapper[4844]: I0126 12:44:59.131204 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:59 crc kubenswrapper[4844]: I0126 12:44:59.131270 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:59 crc kubenswrapper[4844]: I0126 12:44:59.131295 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:59 crc kubenswrapper[4844]: I0126 12:44:59.131324 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:59 crc kubenswrapper[4844]: I0126 12:44:59.131346 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:59Z","lastTransitionTime":"2026-01-26T12:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:59 crc kubenswrapper[4844]: I0126 12:44:59.233776 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:59 crc kubenswrapper[4844]: I0126 12:44:59.233852 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:59 crc kubenswrapper[4844]: I0126 12:44:59.233880 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:59 crc kubenswrapper[4844]: I0126 12:44:59.233907 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:59 crc kubenswrapper[4844]: I0126 12:44:59.233924 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:59Z","lastTransitionTime":"2026-01-26T12:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:59 crc kubenswrapper[4844]: I0126 12:44:59.312186 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:44:59 crc kubenswrapper[4844]: I0126 12:44:59.312259 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:44:59 crc kubenswrapper[4844]: E0126 12:44:59.312389 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 12:44:59 crc kubenswrapper[4844]: I0126 12:44:59.312450 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:44:59 crc kubenswrapper[4844]: E0126 12:44:59.312548 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 12:44:59 crc kubenswrapper[4844]: E0126 12:44:59.312636 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 12:44:59 crc kubenswrapper[4844]: I0126 12:44:59.313261 4844 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 11:26:58.96798191 +0000 UTC Jan 26 12:44:59 crc kubenswrapper[4844]: I0126 12:44:59.335577 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:59 crc kubenswrapper[4844]: I0126 12:44:59.335639 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:59 crc kubenswrapper[4844]: I0126 12:44:59.335656 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:59 crc kubenswrapper[4844]: I0126 12:44:59.335672 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:59 crc kubenswrapper[4844]: I0126 12:44:59.335684 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:59Z","lastTransitionTime":"2026-01-26T12:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:59 crc kubenswrapper[4844]: I0126 12:44:59.438182 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:59 crc kubenswrapper[4844]: I0126 12:44:59.438246 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:59 crc kubenswrapper[4844]: I0126 12:44:59.438256 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:59 crc kubenswrapper[4844]: I0126 12:44:59.438277 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:59 crc kubenswrapper[4844]: I0126 12:44:59.438289 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:59Z","lastTransitionTime":"2026-01-26T12:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:59 crc kubenswrapper[4844]: I0126 12:44:59.540802 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:59 crc kubenswrapper[4844]: I0126 12:44:59.540874 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:59 crc kubenswrapper[4844]: I0126 12:44:59.540890 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:59 crc kubenswrapper[4844]: I0126 12:44:59.540916 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:59 crc kubenswrapper[4844]: I0126 12:44:59.540936 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:59Z","lastTransitionTime":"2026-01-26T12:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:59 crc kubenswrapper[4844]: I0126 12:44:59.643478 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:59 crc kubenswrapper[4844]: I0126 12:44:59.643526 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:59 crc kubenswrapper[4844]: I0126 12:44:59.643538 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:59 crc kubenswrapper[4844]: I0126 12:44:59.643554 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:59 crc kubenswrapper[4844]: I0126 12:44:59.643566 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:59Z","lastTransitionTime":"2026-01-26T12:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:44:59 crc kubenswrapper[4844]: I0126 12:44:59.736481 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-zb9kx_467433a4-64be-4a14-beb2-657370e9865f/kube-multus/0.log" Jan 26 12:44:59 crc kubenswrapper[4844]: I0126 12:44:59.737525 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-zb9kx" event={"ID":"467433a4-64be-4a14-beb2-657370e9865f","Type":"ContainerStarted","Data":"9be6a90cf1d7f75bb43391968d164c8726b7626d7dc649cd85f10c4d13424ab9"} Jan 26 12:44:59 crc kubenswrapper[4844]: I0126 12:44:59.745290 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:59 crc kubenswrapper[4844]: I0126 12:44:59.745330 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:59 crc kubenswrapper[4844]: I0126 12:44:59.745346 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:59 crc kubenswrapper[4844]: I0126 12:44:59.745367 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:59 crc kubenswrapper[4844]: I0126 12:44:59.745380 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:59Z","lastTransitionTime":"2026-01-26T12:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:59 crc kubenswrapper[4844]: I0126 12:44:59.755555 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2c6313f-dd44-4db9-a53a-63c8a42efe6c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b82658e4e6ddbf4e3a16f9d569c5cef6a683d401d79b7fa7f55aad8c70e8254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66ffcb99cad7e5aff2bbe4bdf3c0a2365dd91fb40b2382e05ab1f422c6ea2b26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f49e7dd36dc88342bc2e3bc43fc5770fb43572151945cba3867f9ffd7fa451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b51b34a870f5c4045729aa5ee5d5fb85ebd58
fd55a25c5b6effe0ea75ea8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9118772906fa09f9868a34942401b399a842e6718caa3d82b78b2974e1011cca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa0e17b53055bd229c52c53b69e491e0789569e73004101378a34609888bbbc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa0e17b53055bd229c52c53b69e491e0789569e73004101378a34609888bbbc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a99b81e3524dfff792c3e9a274f2460edfe7c29fb27c556fa1c7e427178ccd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a99b81e3524dfff792c3e9a274f2460edfe7c29fb27c556fa1c7e427178ccd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1cd347949870be709220badbf116b4ed8ff9db3962a03361e42f5ff1c61eb5ea\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cd347949870be709220badbf116b4ed8ff9db3962a03361e42f5ff1c61eb5ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:59Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:59 crc kubenswrapper[4844]: I0126 12:44:59.767783 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd60ce73390532e974f85d610708534f2ab6bcc38e93c54516cb783aca1bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:59Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:59 crc kubenswrapper[4844]: I0126 12:44:59.778430 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-94bpf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14600b66-6352-4f5e-9c09-eb2548503555\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1eb448e6788cdb87cd78364d442623294abe1263dbedbf9d15e8ca77c06ced2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-456bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-94bpf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:59Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:59 crc kubenswrapper[4844]: I0126 12:44:59.788361 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5qpr8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"04a4b371-44a9-4805-b60f-6f7ba0fac40b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://916da455e82003f3effd3be11a50a90b25232fc7d11d06285e8902a0a3cfd10e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7ckr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b598bd3381ec5062c126c04857c188ab29afc34c39ec94a2cd95b306cdfd00d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7ckr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5qpr8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:59Z is after 2025-08-24T17:21:41Z" Jan 26 
12:44:59 crc kubenswrapper[4844]: I0126 12:44:59.798949 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gxnj7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c69496f6-7f67-4cca-9c9f-420e5567b165\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxt6m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxt6m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gxnj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:59Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:59 crc kubenswrapper[4844]: I0126 12:44:59.812192 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aecfc1fc-7b8c-42f4-9a7b-058a0acc9534\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64c5d1e4b1fda825b451c8acde0411694fbf7345723d6c3927905a882a79d62e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e33100e90b0d5a23baf7c27076a19dbca18d3e067be6b5a867e122fac37c1136\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64ae899d3a38090964f3d951fd185537ce4ef75f7512342da01f67767d00417b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9207454bb697abf7b64a37e402b50734ae419605cd0938b514ac4f2a74561ce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b40515c0e8056b01e4d48e259ece9389fbb5db6c0d403f2846f28948ce78b0b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64a2d4054650382dca2f56688356aec5fa475b49958e86dd4597a64f77a82f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64a2d4054650382dca2f56688356aec5fa475b49958e86dd4597a64f77a82f81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:59Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:59 crc kubenswrapper[4844]: I0126 12:44:59.827180 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ea4f5b3-1307-4259-8ee3-1de62370d8ef\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://943771df5941e9c5a48c2d8ee946cc9ac1ee7b00855dfe531fbce9f76450dd36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c79791a1a5ceea7564b6466722f4fb48a6729184724efc0ca2498896def357a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e62359ee71a87d75c73a16de09de5ea48d09e4cf2041f434f38f97002a5f8eee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b1fb5af67b83d9ecc627cbfcc14d011b4b11a5c4652a35c1bd1df6f03ab71f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:59Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:59 crc kubenswrapper[4844]: I0126 12:44:59.840116 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"012dd78d-465b-41aa-b845-5cd178650e56\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0a80313396de8bb91760bdf2477da9d233e2387d1ac6addcce62acc4578772c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11638189cf49baf0798a3c7a229b67e05eedf2292d79f884a990a091f21a61c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb28b05d43134d8c4f89d83cd620973c937fd16347910ebf056026f0a3708a92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a49b5118144d900070aa5f3fccea16c73f8e2451f3636fdab388c957d4822e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a49b5118144d900070aa5f3fccea16c73f8e2451f3636fdab388c957d4822e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:59Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:59 crc kubenswrapper[4844]: I0126 12:44:59.848076 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:59 crc kubenswrapper[4844]: I0126 12:44:59.848154 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:59 crc kubenswrapper[4844]: I0126 12:44:59.848165 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:59 crc kubenswrapper[4844]: I0126 12:44:59.848179 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:59 crc kubenswrapper[4844]: I0126 
12:44:59.848189 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:59Z","lastTransitionTime":"2026-01-26T12:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:59 crc kubenswrapper[4844]: I0126 12:44:59.855229 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:59Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:59 crc kubenswrapper[4844]: I0126 12:44:59.869154 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://708e5cab2c0d22cab1f1e5c02d0f8afbdc0fcdccc9a56b663f365bfa4c7f8709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f61077567553b104f052a133a6227c3a887430e971721e7998259e7e7ea7edf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:59Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:59 crc kubenswrapper[4844]: I0126 12:44:59.880891 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:59Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:59 crc kubenswrapper[4844]: I0126 12:44:59.894930 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:59Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:59 crc kubenswrapper[4844]: I0126 12:44:59.906041 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c6cdfcbb27c9a8c515f38e43d4813f25d2d3781a322e424e09ab047259bd0e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:59Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:59 crc kubenswrapper[4844]: I0126 12:44:59.917722 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zb9kx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"467433a4-64be-4a14-beb2-657370e9865f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9be6a90cf1d7f75bb43391968d164c8726b7626d7dc649cd85f10c4d13424ab9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a01598dd996bd69b470e2e4833ea4231bc77f0305a3a7fec3f70b8e2b8f01cb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T12:44:57Z\\\",\\\"message\\\":\\\"2026-01-26T12:44:11+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_3f62d99b-10f3-4489-9ddd-fa2f775e6b8e\\\\n2026-01-26T12:44:11+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_3f62d99b-10f3-4489-9ddd-fa2f775e6b8e to /host/opt/cni/bin/\\\\n2026-01-26T12:44:11Z [verbose] multus-daemon started\\\\n2026-01-26T12:44:11Z [verbose] Readiness Indicator file check\\\\n2026-01-26T12:44:56Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v76sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zb9kx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:59Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:59 crc kubenswrapper[4844]: I0126 12:44:59.929692 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3602fc7-397b-4d73-ab0c-45acc047397b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25282e97c34daf17d2fdac6ba7c074c611ec8df85288f5753bf98a3c1add5afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29xcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c964e0f9d13a855d738b028f8a2bed32fb23f4a05f0f0222a7d24e3222f44b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29xcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j7r9j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:59Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:59 crc kubenswrapper[4844]: I0126 12:44:59.943506 4844 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f6ttt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0ad2def-b040-48db-be8a-19f66df2c0f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a065fe1dc7d374bbe86c5012d0f224285e08e6b38a8eeb9fcdc76d684162934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://489205a3983f27152122db352c7a35ccdb12f14bb7b4c2bf3642f5c99699d745\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://489205a3983f27152122db352c7a35ccdb12f14bb7b4c2bf3642f5c99699d745\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc3c4c5df7dc10f646a7fdcf0d70daccac7a5bfc3ec9772a4b6d9aa4305dcc5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc3c4c5df7dc10f646a7fdcf0d70daccac7a5bfc3ec9772a4b6d9aa4305dcc5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74838e6328e419da6cc94d4f2781197e94cbf74aaf06d88c6d0e39eda967fede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74838e6328e419da6cc94d4f2781197e94cbf74aaf06d88c6d0e39eda967fede\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4821615afe8834b864a0c33ad5733eef0b5247da50e1e945cab1c75f9828aa1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4821615afe8834b864a0c33ad5733eef0b5247da50e1e945cab1c75f9828aa1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c800e6a3e1513d6bc5ab5a54c8d918f2c81b5a4d2631fc36f5f7a29fcda3d220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c800e6a3e1513d6bc5ab5a54c8d918f2c81b5a4d2631fc36f5f7a29fcda3d220\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://810435fd1610f5f81dc70f7c0d455c11b414c37e9e392dec8eb3b4668c7820d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://810435fd1610f5f81dc70f7c0d455c11b414c37e9e392dec8eb3b4668c7820d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f6ttt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:59Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:59 crc kubenswrapper[4844]: I0126 12:44:59.950513 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:44:59 crc kubenswrapper[4844]: I0126 12:44:59.950637 4844 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:44:59 crc kubenswrapper[4844]: I0126 12:44:59.950663 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:44:59 crc kubenswrapper[4844]: I0126 12:44:59.950691 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:44:59 crc kubenswrapper[4844]: I0126 12:44:59.950708 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:44:59Z","lastTransitionTime":"2026-01-26T12:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:44:59 crc kubenswrapper[4844]: I0126 12:44:59.961834 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"348a2956-fe61-43b9-858f-ab9c97a2985b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7af767e8f476993661bb886e421e7dd2fbd11c07865d6e56a8595425fe4265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a97ef8aa10641683ab96943f4c1a54452ee34b28c854dd7902fc075a70e7f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de371086349f06d5762291b0be4ba366d69376f96d55c2c781c414d9acaff745\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2cfd5586a3073d55abd3f41fb92679c4f86ea8caeea2d9a8ca42515277750a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a16b2c8da5ad6d42bd5ce77ca31794858e04a05060c5bb0538dd099b1d5dd6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03f4ffe2ba632bdfb8c45133ef267a29fa660e3aa85dfd99934ded60d32772f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://726bf5201f734836c4fb01a9d5a0cb8897f5ec31
42e9b54a6acfe5a82a14df5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726bf5201f734836c4fb01a9d5a0cb8897f5ec3142e9b54a6acfe5a82a14df5f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T12:44:44Z\\\",\\\"message\\\":\\\"none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.93:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d71b38eb-32af-4c0f-9490-7c317c111e3a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 12:44:43.613387 6501 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nI0126 12:44:43.613343 6501 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-apiserver/apiserver]} name:Service_openshift-kube-apiserver/apiserver_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.93:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d71b38eb-32af-4c0f-9490-7c317c111e3a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 12:44:43.613431 6501 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0126 12:44:43.613493 6501 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-rlvx4_openshift-ovn-kubernetes(348a2956-fe61-43b9-858f-ab9c97a2985b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbe92fbc9ce1d23484a9dca7eddb878b59da1fbe669098a82a798d41ee52e61d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://370a02984d12e09b879a5f3af8d9fe26b40d438d558e51c2d352676f29198a02\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://370a02984d12e09b879a5f3af8d9fe26b40d438d558e51c2d352676f29198a02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rlvx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:59Z is after 2025-08-24T17:21:41Z" Jan 26 12:44:59 crc kubenswrapper[4844]: I0126 12:44:59.973960 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7wd9k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"046bb01b-89ef-40e9-bbbd-83b5f2d2cf96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e97a7a3de02627680f94659705ad21fb73a76a1c11f14ebff8ec48ef204ea753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h4zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7wd9k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:44:59Z is after 2025-08-24T17:21:41Z" Jan 26 12:45:00 crc kubenswrapper[4844]: I0126 12:45:00.053297 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:00 crc kubenswrapper[4844]: I0126 12:45:00.053330 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:00 crc kubenswrapper[4844]: I0126 12:45:00.053340 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:00 crc kubenswrapper[4844]: I0126 12:45:00.053354 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:00 crc kubenswrapper[4844]: I0126 12:45:00.053363 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:00Z","lastTransitionTime":"2026-01-26T12:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:00 crc kubenswrapper[4844]: I0126 12:45:00.155495 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:00 crc kubenswrapper[4844]: I0126 12:45:00.155680 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:00 crc kubenswrapper[4844]: I0126 12:45:00.155705 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:00 crc kubenswrapper[4844]: I0126 12:45:00.155726 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:00 crc kubenswrapper[4844]: I0126 12:45:00.155738 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:00Z","lastTransitionTime":"2026-01-26T12:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:45:00 crc kubenswrapper[4844]: I0126 12:45:00.257912 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:00 crc kubenswrapper[4844]: I0126 12:45:00.257960 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:00 crc kubenswrapper[4844]: I0126 12:45:00.257973 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:00 crc kubenswrapper[4844]: I0126 12:45:00.257990 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:00 crc kubenswrapper[4844]: I0126 12:45:00.258002 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:00Z","lastTransitionTime":"2026-01-26T12:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:00 crc kubenswrapper[4844]: I0126 12:45:00.312235 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gxnj7" Jan 26 12:45:00 crc kubenswrapper[4844]: E0126 12:45:00.312390 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gxnj7" podUID="c69496f6-7f67-4cca-9c9f-420e5567b165" Jan 26 12:45:00 crc kubenswrapper[4844]: I0126 12:45:00.314393 4844 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 16:43:57.568026267 +0000 UTC Jan 26 12:45:00 crc kubenswrapper[4844]: I0126 12:45:00.360313 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:00 crc kubenswrapper[4844]: I0126 12:45:00.360357 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:00 crc kubenswrapper[4844]: I0126 12:45:00.360367 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:00 crc kubenswrapper[4844]: I0126 12:45:00.360383 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:00 crc kubenswrapper[4844]: I0126 12:45:00.360396 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:00Z","lastTransitionTime":"2026-01-26T12:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:45:00 crc kubenswrapper[4844]: I0126 12:45:00.462838 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:00 crc kubenswrapper[4844]: I0126 12:45:00.462889 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:00 crc kubenswrapper[4844]: I0126 12:45:00.462901 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:00 crc kubenswrapper[4844]: I0126 12:45:00.462916 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:00 crc kubenswrapper[4844]: I0126 12:45:00.462925 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:00Z","lastTransitionTime":"2026-01-26T12:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:00 crc kubenswrapper[4844]: I0126 12:45:00.565279 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:00 crc kubenswrapper[4844]: I0126 12:45:00.565316 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:00 crc kubenswrapper[4844]: I0126 12:45:00.565327 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:00 crc kubenswrapper[4844]: I0126 12:45:00.565342 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:00 crc kubenswrapper[4844]: I0126 12:45:00.565355 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:00Z","lastTransitionTime":"2026-01-26T12:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:00 crc kubenswrapper[4844]: I0126 12:45:00.668266 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:00 crc kubenswrapper[4844]: I0126 12:45:00.668331 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:00 crc kubenswrapper[4844]: I0126 12:45:00.668347 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:00 crc kubenswrapper[4844]: I0126 12:45:00.668365 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:00 crc kubenswrapper[4844]: I0126 12:45:00.668379 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:00Z","lastTransitionTime":"2026-01-26T12:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:45:00 crc kubenswrapper[4844]: I0126 12:45:00.770426 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:00 crc kubenswrapper[4844]: I0126 12:45:00.770468 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:00 crc kubenswrapper[4844]: I0126 12:45:00.770478 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:00 crc kubenswrapper[4844]: I0126 12:45:00.770495 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:00 crc kubenswrapper[4844]: I0126 12:45:00.770507 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:00Z","lastTransitionTime":"2026-01-26T12:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:00 crc kubenswrapper[4844]: I0126 12:45:00.873065 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:00 crc kubenswrapper[4844]: I0126 12:45:00.873126 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:00 crc kubenswrapper[4844]: I0126 12:45:00.873146 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:00 crc kubenswrapper[4844]: I0126 12:45:00.873176 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:00 crc kubenswrapper[4844]: I0126 12:45:00.873197 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:00Z","lastTransitionTime":"2026-01-26T12:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:00 crc kubenswrapper[4844]: I0126 12:45:00.979134 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:00 crc kubenswrapper[4844]: I0126 12:45:00.979187 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:00 crc kubenswrapper[4844]: I0126 12:45:00.979200 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:00 crc kubenswrapper[4844]: I0126 12:45:00.979219 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:00 crc kubenswrapper[4844]: I0126 12:45:00.979231 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:00Z","lastTransitionTime":"2026-01-26T12:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:45:01 crc kubenswrapper[4844]: I0126 12:45:01.082384 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:01 crc kubenswrapper[4844]: I0126 12:45:01.082440 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:01 crc kubenswrapper[4844]: I0126 12:45:01.082450 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:01 crc kubenswrapper[4844]: I0126 12:45:01.082465 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:01 crc kubenswrapper[4844]: I0126 12:45:01.082477 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:01Z","lastTransitionTime":"2026-01-26T12:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:01 crc kubenswrapper[4844]: I0126 12:45:01.185376 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:01 crc kubenswrapper[4844]: I0126 12:45:01.185439 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:01 crc kubenswrapper[4844]: I0126 12:45:01.185456 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:01 crc kubenswrapper[4844]: I0126 12:45:01.185472 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:01 crc kubenswrapper[4844]: I0126 12:45:01.185482 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:01Z","lastTransitionTime":"2026-01-26T12:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:01 crc kubenswrapper[4844]: I0126 12:45:01.287918 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:01 crc kubenswrapper[4844]: I0126 12:45:01.287957 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:01 crc kubenswrapper[4844]: I0126 12:45:01.287965 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:01 crc kubenswrapper[4844]: I0126 12:45:01.287979 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:01 crc kubenswrapper[4844]: I0126 12:45:01.287988 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:01Z","lastTransitionTime":"2026-01-26T12:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:45:01 crc kubenswrapper[4844]: I0126 12:45:01.312795 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:45:01 crc kubenswrapper[4844]: E0126 12:45:01.313004 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 12:45:01 crc kubenswrapper[4844]: I0126 12:45:01.313038 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:45:01 crc kubenswrapper[4844]: E0126 12:45:01.313151 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 12:45:01 crc kubenswrapper[4844]: I0126 12:45:01.312826 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:45:01 crc kubenswrapper[4844]: E0126 12:45:01.313221 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 12:45:01 crc kubenswrapper[4844]: I0126 12:45:01.314982 4844 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 19:36:57.664129272 +0000 UTC Jan 26 12:45:01 crc kubenswrapper[4844]: I0126 12:45:01.390933 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:01 crc kubenswrapper[4844]: I0126 12:45:01.390987 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:01 crc kubenswrapper[4844]: I0126 12:45:01.390998 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:01 crc kubenswrapper[4844]: I0126 12:45:01.391015 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:01 crc kubenswrapper[4844]: I0126 12:45:01.391027 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:01Z","lastTransitionTime":"2026-01-26T12:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:45:01 crc kubenswrapper[4844]: I0126 12:45:01.493559 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:01 crc kubenswrapper[4844]: I0126 12:45:01.493624 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:01 crc kubenswrapper[4844]: I0126 12:45:01.493635 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:01 crc kubenswrapper[4844]: I0126 12:45:01.493652 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:01 crc kubenswrapper[4844]: I0126 12:45:01.493663 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:01Z","lastTransitionTime":"2026-01-26T12:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:01 crc kubenswrapper[4844]: I0126 12:45:01.596558 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:01 crc kubenswrapper[4844]: I0126 12:45:01.596689 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:01 crc kubenswrapper[4844]: I0126 12:45:01.596722 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:01 crc kubenswrapper[4844]: I0126 12:45:01.596749 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:01 crc kubenswrapper[4844]: I0126 12:45:01.596766 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:01Z","lastTransitionTime":"2026-01-26T12:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:01 crc kubenswrapper[4844]: I0126 12:45:01.698990 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:01 crc kubenswrapper[4844]: I0126 12:45:01.699049 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:01 crc kubenswrapper[4844]: I0126 12:45:01.699066 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:01 crc kubenswrapper[4844]: I0126 12:45:01.699089 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:01 crc kubenswrapper[4844]: I0126 12:45:01.699106 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:01Z","lastTransitionTime":"2026-01-26T12:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:45:01 crc kubenswrapper[4844]: I0126 12:45:01.801966 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:01 crc kubenswrapper[4844]: I0126 12:45:01.802030 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:01 crc kubenswrapper[4844]: I0126 12:45:01.802048 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:01 crc kubenswrapper[4844]: I0126 12:45:01.802073 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:01 crc kubenswrapper[4844]: I0126 12:45:01.802091 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:01Z","lastTransitionTime":"2026-01-26T12:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:01 crc kubenswrapper[4844]: I0126 12:45:01.905161 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:01 crc kubenswrapper[4844]: I0126 12:45:01.905208 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:01 crc kubenswrapper[4844]: I0126 12:45:01.905220 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:01 crc kubenswrapper[4844]: I0126 12:45:01.905236 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:01 crc kubenswrapper[4844]: I0126 12:45:01.905248 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:01Z","lastTransitionTime":"2026-01-26T12:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:02 crc kubenswrapper[4844]: I0126 12:45:02.007964 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:02 crc kubenswrapper[4844]: I0126 12:45:02.008002 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:02 crc kubenswrapper[4844]: I0126 12:45:02.008012 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:02 crc kubenswrapper[4844]: I0126 12:45:02.008023 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:02 crc kubenswrapper[4844]: I0126 12:45:02.008034 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:02Z","lastTransitionTime":"2026-01-26T12:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:45:02 crc kubenswrapper[4844]: I0126 12:45:02.110432 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:02 crc kubenswrapper[4844]: I0126 12:45:02.110482 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:02 crc kubenswrapper[4844]: I0126 12:45:02.110494 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:02 crc kubenswrapper[4844]: I0126 12:45:02.110512 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:02 crc kubenswrapper[4844]: I0126 12:45:02.110524 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:02Z","lastTransitionTime":"2026-01-26T12:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:02 crc kubenswrapper[4844]: I0126 12:45:02.213242 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:02 crc kubenswrapper[4844]: I0126 12:45:02.213529 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:02 crc kubenswrapper[4844]: I0126 12:45:02.213620 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:02 crc kubenswrapper[4844]: I0126 12:45:02.213698 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:02 crc kubenswrapper[4844]: I0126 12:45:02.213774 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:02Z","lastTransitionTime":"2026-01-26T12:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:02 crc kubenswrapper[4844]: I0126 12:45:02.312290 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gxnj7" Jan 26 12:45:02 crc kubenswrapper[4844]: E0126 12:45:02.312415 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gxnj7" podUID="c69496f6-7f67-4cca-9c9f-420e5567b165" Jan 26 12:45:02 crc kubenswrapper[4844]: I0126 12:45:02.315121 4844 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 14:07:27.854290757 +0000 UTC Jan 26 12:45:02 crc kubenswrapper[4844]: I0126 12:45:02.316027 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:02 crc kubenswrapper[4844]: I0126 12:45:02.316071 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:02 crc kubenswrapper[4844]: I0126 12:45:02.316089 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:02 crc kubenswrapper[4844]: I0126 12:45:02.316106 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:02 crc kubenswrapper[4844]: I0126 12:45:02.316118 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:02Z","lastTransitionTime":"2026-01-26T12:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:02 crc kubenswrapper[4844]: I0126 12:45:02.419240 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:02 crc kubenswrapper[4844]: I0126 12:45:02.419354 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:02 crc kubenswrapper[4844]: I0126 12:45:02.419415 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:02 crc kubenswrapper[4844]: I0126 12:45:02.419443 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:02 crc kubenswrapper[4844]: I0126 12:45:02.419463 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:02Z","lastTransitionTime":"2026-01-26T12:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:45:02 crc kubenswrapper[4844]: I0126 12:45:02.521553 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:02 crc kubenswrapper[4844]: I0126 12:45:02.521593 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:02 crc kubenswrapper[4844]: I0126 12:45:02.521627 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:02 crc kubenswrapper[4844]: I0126 12:45:02.521647 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:02 crc kubenswrapper[4844]: I0126 12:45:02.521657 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:02Z","lastTransitionTime":"2026-01-26T12:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:02 crc kubenswrapper[4844]: I0126 12:45:02.624397 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:02 crc kubenswrapper[4844]: I0126 12:45:02.624463 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:02 crc kubenswrapper[4844]: I0126 12:45:02.624480 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:02 crc kubenswrapper[4844]: I0126 12:45:02.624505 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:02 crc kubenswrapper[4844]: I0126 12:45:02.624528 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:02Z","lastTransitionTime":"2026-01-26T12:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:02 crc kubenswrapper[4844]: I0126 12:45:02.727423 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:02 crc kubenswrapper[4844]: I0126 12:45:02.727481 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:02 crc kubenswrapper[4844]: I0126 12:45:02.727498 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:02 crc kubenswrapper[4844]: I0126 12:45:02.727523 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:02 crc kubenswrapper[4844]: I0126 12:45:02.727541 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:02Z","lastTransitionTime":"2026-01-26T12:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:45:02 crc kubenswrapper[4844]: I0126 12:45:02.830263 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:02 crc kubenswrapper[4844]: I0126 12:45:02.830336 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:02 crc kubenswrapper[4844]: I0126 12:45:02.830356 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:02 crc kubenswrapper[4844]: I0126 12:45:02.830383 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:02 crc kubenswrapper[4844]: I0126 12:45:02.830400 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:02Z","lastTransitionTime":"2026-01-26T12:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:02 crc kubenswrapper[4844]: I0126 12:45:02.933378 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:02 crc kubenswrapper[4844]: I0126 12:45:02.933458 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:02 crc kubenswrapper[4844]: I0126 12:45:02.933478 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:02 crc kubenswrapper[4844]: I0126 12:45:02.933509 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:02 crc kubenswrapper[4844]: I0126 12:45:02.933527 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:02Z","lastTransitionTime":"2026-01-26T12:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:03 crc kubenswrapper[4844]: I0126 12:45:03.036577 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:03 crc kubenswrapper[4844]: I0126 12:45:03.036689 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:03 crc kubenswrapper[4844]: I0126 12:45:03.036716 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:03 crc kubenswrapper[4844]: I0126 12:45:03.036744 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:03 crc kubenswrapper[4844]: I0126 12:45:03.036765 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:03Z","lastTransitionTime":"2026-01-26T12:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:45:03 crc kubenswrapper[4844]: I0126 12:45:03.139051 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:03 crc kubenswrapper[4844]: I0126 12:45:03.139103 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:03 crc kubenswrapper[4844]: I0126 12:45:03.139120 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:03 crc kubenswrapper[4844]: I0126 12:45:03.139144 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:03 crc kubenswrapper[4844]: I0126 12:45:03.139160 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:03Z","lastTransitionTime":"2026-01-26T12:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:03 crc kubenswrapper[4844]: I0126 12:45:03.242339 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:03 crc kubenswrapper[4844]: I0126 12:45:03.242422 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:03 crc kubenswrapper[4844]: I0126 12:45:03.242445 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:03 crc kubenswrapper[4844]: I0126 12:45:03.242474 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:03 crc kubenswrapper[4844]: I0126 12:45:03.242496 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:03Z","lastTransitionTime":"2026-01-26T12:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:03 crc kubenswrapper[4844]: I0126 12:45:03.312955 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:45:03 crc kubenswrapper[4844]: I0126 12:45:03.313027 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:45:03 crc kubenswrapper[4844]: E0126 12:45:03.313243 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 12:45:03 crc kubenswrapper[4844]: I0126 12:45:03.313271 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:45:03 crc kubenswrapper[4844]: E0126 12:45:03.313437 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 12:45:03 crc kubenswrapper[4844]: E0126 12:45:03.313781 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 12:45:03 crc kubenswrapper[4844]: I0126 12:45:03.315295 4844 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 19:41:40.496613488 +0000 UTC Jan 26 12:45:03 crc kubenswrapper[4844]: I0126 12:45:03.332430 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:45:03Z is after 2025-08-24T17:21:41Z" Jan 26 12:45:03 crc kubenswrapper[4844]: I0126 12:45:03.344515 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:03 crc kubenswrapper[4844]: I0126 12:45:03.344588 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:03 crc kubenswrapper[4844]: I0126 12:45:03.344656 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:03 crc kubenswrapper[4844]: I0126 12:45:03.344705 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:03 crc kubenswrapper[4844]: I0126 12:45:03.344737 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:03Z","lastTransitionTime":"2026-01-26T12:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:45:03 crc kubenswrapper[4844]: I0126 12:45:03.352669 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c6cdfcbb27c9a8c515f38e43d4813f25d2d3781a322e424e09ab047259bd0e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:45:03Z is after 2025-08-24T17:21:41Z" Jan 26 12:45:03 crc kubenswrapper[4844]: I0126 12:45:03.368231 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zb9kx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"467433a4-64be-4a14-beb2-657370e9865f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9be6a90cf1d7f75bb43391968d164c8726b7626d7dc649cd85f10c4d13424ab9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a01598dd996bd69b470e2e4833ea4231bc77f0305a3a7fec3f70b8e2b8f01cb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T12:44:57Z\\\",\\\"message\\\":\\\"2026-01-26T12:44:11+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_3f62d99b-10f3-4489-9ddd-fa2f775e6b8e\\\\n2026-01-26T12:44:11+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_3f62d99b-10f3-4489-9ddd-fa2f775e6b8e to /host/opt/cni/bin/\\\\n2026-01-26T12:44:11Z [verbose] multus-daemon started\\\\n2026-01-26T12:44:11Z [verbose] Readiness Indicator file check\\\\n2026-01-26T12:44:56Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v76sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zb9kx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:45:03Z is after 2025-08-24T17:21:41Z" Jan 26 12:45:03 crc kubenswrapper[4844]: I0126 12:45:03.381331 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3602fc7-397b-4d73-ab0c-45acc047397b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25282e97c34daf17d2fdac6ba7c074c611ec8df85288f5753bf98a3c1add5afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29xcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c964e0f9d13a855d738b028f8a2bed32fb23f4a05f0f0222a7d24e3222f44b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29xcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j7r9j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:45:03Z is after 2025-08-24T17:21:41Z" Jan 26 12:45:03 crc kubenswrapper[4844]: I0126 12:45:03.395677 4844 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f6ttt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0ad2def-b040-48db-be8a-19f66df2c0f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a065fe1dc7d374bbe86c5012d0f224285e08e6b38a8eeb9fcdc76d684162934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://489205a3983f27152122db352c7a35ccdb12f14bb7b4c2bf3642f5c99699d745\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://489205a3983f27152122db352c7a35ccdb12f14bb7b4c2bf3642f5c99699d745\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc3c4c5df7dc10f646a7fdcf0d70daccac7a5bfc3ec9772a4b6d9aa4305dcc5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc3c4c5df7dc10f646a7fdcf0d70daccac7a5bfc3ec9772a4b6d9aa4305dcc5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74838e6328e419da6cc94d4f2781197e94cbf74aaf06d88c6d0e39eda967fede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74838e6328e419da6cc94d4f2781197e94cbf74aaf06d88c6d0e39eda967fede\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4821615afe8834b864a0c33ad5733eef0b5247da50e1e945cab1c75f9828aa1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4821615afe8834b864a0c33ad5733eef0b5247da50e1e945cab1c75f9828aa1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c800e6a3e1513d6bc5ab5a54c8d918f2c81b5a4d2631fc36f5f7a29fcda3d220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c800e6a3e1513d6bc5ab5a54c8d918f2c81b5a4d2631fc36f5f7a29fcda3d220\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://810435fd1610f5f81dc70f7c0d455c11b414c37e9e392dec8eb3b4668c7820d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://810435fd1610f5f81dc70f7c0d455c11b414c37e9e392dec8eb3b4668c7820d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f6ttt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:45:03Z is after 2025-08-24T17:21:41Z" Jan 26 12:45:03 crc kubenswrapper[4844]: I0126 12:45:03.413335 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"348a2956-fe61-43b9-858f-ab9c97a2985b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7af767e8f476993661bb886e421e7dd2fbd11c07865d6e56a8595425fe4265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a97ef8aa10641683ab96943f4c1a54452ee34b28c854dd7902fc075a70e7f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de371086349f06d5762291b0be4ba366d69376f96d55c2c781c414d9acaff745\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2cfd5586a3073d55abd3f41fb92679c4f86ea8caeea2d9a8ca42515277750a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a16b2c8da5ad6d42bd5ce77ca31794858e04a05060c5bb0538dd099b1d5dd6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03f4ffe2ba632bdfb8c45133ef267a29fa660e3aa85dfd99934ded60d32772f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://726bf5201f734836c4fb01a9d5a0cb8897f5ec3142e9b54a6acfe5a82a14df5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726bf5201f734836c4fb01a9d5a0cb8897f5ec3142e9b54a6acfe5a82a14df5f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T12:44:44Z\\\",\\\"message\\\":\\\"none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.93:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d71b38eb-32af-4c0f-9490-7c317c111e3a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 12:44:43.613387 6501 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nI0126 12:44:43.613343 6501 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-apiserver/apiserver]} name:Service_openshift-kube-apiserver/apiserver_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.93:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d71b38eb-32af-4c0f-9490-7c317c111e3a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 12:44:43.613431 6501 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0126 12:44:43.613493 6501 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-rlvx4_openshift-ovn-kubernetes(348a2956-fe61-43b9-858f-ab9c97a2985b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbe92fbc9ce1d23484a9dca7eddb878b59da1fbe669098a82a798d41ee52e61d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://370a02984d12e09b879a5f3af8d9fe26b40d438d558e51c2d352676f29198a02\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://370a02984d12e09b879a5f3af8d9fe26b40d438d558e51c2d352676f29198a02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rlvx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:45:03Z is after 2025-08-24T17:21:41Z" Jan 26 12:45:03 crc kubenswrapper[4844]: I0126 12:45:03.422436 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7wd9k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"046bb01b-89ef-40e9-bbbd-83b5f2d2cf96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e97a7a3de02627680f94659705ad21fb73a76a1c11f14ebff8ec48ef204ea753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h4zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7wd9k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:45:03Z is after 2025-08-24T17:21:41Z" Jan 26 12:45:03 crc kubenswrapper[4844]: I0126 12:45:03.439510 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2c6313f-dd44-4db9-a53a-63c8a42efe6c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b82658e4e6ddbf4e3a16f9d569c5cef6a683d401d79b7fa7f55aad8c70e8254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66ffcb99cad7e5aff2bbe4bdf3c0a2365dd91fb40b2382e05ab1f422c6ea2b26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"co
ntainerID\\\":\\\"cri-o://9f49e7dd36dc88342bc2e3bc43fc5770fb43572151945cba3867f9ffd7fa451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b51b34a870f5c4045729aa5ee5d5fb85ebd58fd55a25c5b6effe0ea75ea8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9118772906fa09f9868a34942401b399a842e6718caa3d82b78b2974e1011cca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa0e17b53055bd229c52c53b69e491e0789569e73004101378a34609888bbbc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa0e17b53055bd229c52c53b69e491e0789569e73004101378a34609888bbbc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a99b81e3524dfff792c3e9a274f24
60edfe7c29fb27c556fa1c7e427178ccd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a99b81e3524dfff792c3e9a274f2460edfe7c29fb27c556fa1c7e427178ccd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1cd347949870be709220badbf116b4ed8ff9db3962a03361e42f5ff1c61eb5ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cd347949870be709220badbf116b4ed8ff9db3962a03361e42f5ff1c61eb5ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:45:03Z is after 2025-08-24T17:21:41Z" Jan 26 12:45:03 crc kubenswrapper[4844]: I0126 12:45:03.449693 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:03 crc kubenswrapper[4844]: I0126 12:45:03.449822 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:03 crc kubenswrapper[4844]: I0126 12:45:03.449850 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:03 crc kubenswrapper[4844]: I0126 12:45:03.449866 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:03 crc kubenswrapper[4844]: I0126 12:45:03.449900 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:03Z","lastTransitionTime":"2026-01-26T12:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:45:03 crc kubenswrapper[4844]: I0126 12:45:03.450911 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd60ce73390532e974f85d610708534f2ab6bcc38e93c54516cb783aca1bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:45:03Z is after 2025-08-24T17:21:41Z" Jan 26 12:45:03 crc kubenswrapper[4844]: I0126 12:45:03.460777 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-94bpf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"14600b66-6352-4f5e-9c09-eb2548503555\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1eb448e6788cdb87cd78364d442623294abe1263dbedbf9d15e8ca77c06ced2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-456bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-94bpf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:45:03Z is after 2025-08-24T17:21:41Z" Jan 26 12:45:03 crc kubenswrapper[4844]: I0126 12:45:03.472299 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5qpr8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"04a4b371-44a9-4805-b60f-6f7ba0fac40b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://916da455e82003f3effd3be11a50a90b25232fc7d11d06285e8902a0a3cfd10e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7ckr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b598bd3381ec5062c126c04857c188ab29afc34c39ec94a2cd95b306cdfd00d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7ckr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5qpr8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:45:03Z is after 2025-08-24T17:21:41Z" Jan 26 
12:45:03 crc kubenswrapper[4844]: I0126 12:45:03.483859 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gxnj7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c69496f6-7f67-4cca-9c9f-420e5567b165\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxt6m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxt6m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gxnj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:45:03Z is after 2025-08-24T17:21:41Z" Jan 26 12:45:03 crc kubenswrapper[4844]: I0126 12:45:03.495802 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aecfc1fc-7b8c-42f4-9a7b-058a0acc9534\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64c5d1e4b1fda825b451c8acde0411694fbf7345723d6c3927905a882a79d62e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e33100e90b0d5a23baf7c27076a19dbca18d3e067be6b5a867e122fac37c1136\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64ae899d3a38090964f3d951fd185537ce4ef75f7512342da01f67767d00417b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9207454bb697abf7b64a37e402b50734ae419605cd0938b514ac4f2a74561ce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b40515c0e8056b01e4d48e259ece9389fbb5db6c0d403f2846f28948ce78b0b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64a2d4054650382dca2f56688356aec5fa475b49958e86dd4597a64f77a82f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64a2d4054650382dca2f56688356aec5fa475b49958e86dd4597a64f77a82f81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:45:03Z is after 2025-08-24T17:21:41Z" Jan 26 12:45:03 crc kubenswrapper[4844]: I0126 12:45:03.507080 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ea4f5b3-1307-4259-8ee3-1de62370d8ef\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://943771df5941e9c5a48c2d8ee946cc9ac1ee7b00855dfe531fbce9f76450dd36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c79791a1a5ceea7564b6466722f4fb48a6729184724efc0ca2498896def357a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e62359ee71a87d75c73a16de09de5ea48d09e4cf2041f434f38f97002a5f8eee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b1fb5af67b83d9ecc627cbfcc14d011b4b11a5c4652a35c1bd1df6f03ab71f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:45:03Z is after 2025-08-24T17:21:41Z" Jan 26 12:45:03 crc kubenswrapper[4844]: I0126 12:45:03.522979 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"012dd78d-465b-41aa-b845-5cd178650e56\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0a80313396de8bb91760bdf2477da9d233e2387d1ac6addcce62acc4578772c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11638189cf49baf0798a3c7a229b67e05eedf2292d79f884a990a091f21a61c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb28b05d43134d8c4f89d83cd620973c937fd16347910ebf056026f0a3708a92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a49b5118144d900070aa5f3fccea16c73f8e2451f3636fdab388c957d4822e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a49b5118144d900070aa5f3fccea16c73f8e2451f3636fdab388c957d4822e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:45:03Z is after 2025-08-24T17:21:41Z" Jan 26 12:45:03 crc kubenswrapper[4844]: I0126 12:45:03.537192 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:45:03Z is after 2025-08-24T17:21:41Z" Jan 26 12:45:03 crc kubenswrapper[4844]: I0126 12:45:03.549261 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://708e5cab2c0d22cab1f1e5c02d0f8afbdc0fcdccc9a56b663f365bfa4c7f8709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f61077567553b104f052a133a6227c3a887430e971721e7998259e7e7ea7edf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:45:03Z is after 2025-08-24T17:21:41Z" Jan 26 12:45:03 crc kubenswrapper[4844]: I0126 12:45:03.552092 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:03 crc kubenswrapper[4844]: I0126 12:45:03.552268 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:03 crc kubenswrapper[4844]: I0126 12:45:03.552391 4844 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 26 12:45:03 crc kubenswrapper[4844]: I0126 12:45:03.552576 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:03 crc kubenswrapper[4844]: I0126 12:45:03.552716 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:03Z","lastTransitionTime":"2026-01-26T12:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:03 crc kubenswrapper[4844]: I0126 12:45:03.563941 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:45:03Z is after 2025-08-24T17:21:41Z" Jan 26 12:45:03 crc kubenswrapper[4844]: I0126 12:45:03.655404 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:03 crc kubenswrapper[4844]: I0126 12:45:03.655703 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:03 crc kubenswrapper[4844]: I0126 12:45:03.655790 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:03 crc kubenswrapper[4844]: I0126 12:45:03.655883 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:03 crc kubenswrapper[4844]: I0126 12:45:03.655981 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:03Z","lastTransitionTime":"2026-01-26T12:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:03 crc kubenswrapper[4844]: I0126 12:45:03.757560 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:03 crc kubenswrapper[4844]: I0126 12:45:03.757660 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:03 crc kubenswrapper[4844]: I0126 12:45:03.757673 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:03 crc kubenswrapper[4844]: I0126 12:45:03.757689 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:03 crc kubenswrapper[4844]: I0126 12:45:03.757700 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:03Z","lastTransitionTime":"2026-01-26T12:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:45:03 crc kubenswrapper[4844]: I0126 12:45:03.860148 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:03 crc kubenswrapper[4844]: I0126 12:45:03.860209 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:03 crc kubenswrapper[4844]: I0126 12:45:03.860226 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:03 crc kubenswrapper[4844]: I0126 12:45:03.860250 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:03 crc kubenswrapper[4844]: I0126 12:45:03.860268 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:03Z","lastTransitionTime":"2026-01-26T12:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:03 crc kubenswrapper[4844]: I0126 12:45:03.963400 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:03 crc kubenswrapper[4844]: I0126 12:45:03.963465 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:03 crc kubenswrapper[4844]: I0126 12:45:03.963487 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:03 crc kubenswrapper[4844]: I0126 12:45:03.963513 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:03 crc kubenswrapper[4844]: I0126 12:45:03.963531 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:03Z","lastTransitionTime":"2026-01-26T12:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:04 crc kubenswrapper[4844]: I0126 12:45:04.067124 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:04 crc kubenswrapper[4844]: I0126 12:45:04.067183 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:04 crc kubenswrapper[4844]: I0126 12:45:04.067195 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:04 crc kubenswrapper[4844]: I0126 12:45:04.067213 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:04 crc kubenswrapper[4844]: I0126 12:45:04.067225 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:04Z","lastTransitionTime":"2026-01-26T12:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:45:04 crc kubenswrapper[4844]: I0126 12:45:04.171171 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:04 crc kubenswrapper[4844]: I0126 12:45:04.171250 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:04 crc kubenswrapper[4844]: I0126 12:45:04.171275 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:04 crc kubenswrapper[4844]: I0126 12:45:04.171306 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:04 crc kubenswrapper[4844]: I0126 12:45:04.171327 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:04Z","lastTransitionTime":"2026-01-26T12:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:04 crc kubenswrapper[4844]: I0126 12:45:04.275514 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:04 crc kubenswrapper[4844]: I0126 12:45:04.275592 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:04 crc kubenswrapper[4844]: I0126 12:45:04.275681 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:04 crc kubenswrapper[4844]: I0126 12:45:04.275717 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:04 crc kubenswrapper[4844]: I0126 12:45:04.275741 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:04Z","lastTransitionTime":"2026-01-26T12:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:04 crc kubenswrapper[4844]: I0126 12:45:04.312269 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gxnj7" Jan 26 12:45:04 crc kubenswrapper[4844]: E0126 12:45:04.312796 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gxnj7" podUID="c69496f6-7f67-4cca-9c9f-420e5567b165" Jan 26 12:45:04 crc kubenswrapper[4844]: I0126 12:45:04.315517 4844 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 20:38:59.748658683 +0000 UTC Jan 26 12:45:04 crc kubenswrapper[4844]: I0126 12:45:04.379040 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:04 crc kubenswrapper[4844]: I0126 12:45:04.379098 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:04 crc kubenswrapper[4844]: I0126 12:45:04.379115 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:04 crc kubenswrapper[4844]: I0126 12:45:04.379137 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:04 crc kubenswrapper[4844]: I0126 12:45:04.379153 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:04Z","lastTransitionTime":"2026-01-26T12:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:04 crc kubenswrapper[4844]: I0126 12:45:04.481132 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:04 crc kubenswrapper[4844]: I0126 12:45:04.481193 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:04 crc kubenswrapper[4844]: I0126 12:45:04.481208 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:04 crc kubenswrapper[4844]: I0126 12:45:04.481231 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:04 crc kubenswrapper[4844]: I0126 12:45:04.481245 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:04Z","lastTransitionTime":"2026-01-26T12:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:45:04 crc kubenswrapper[4844]: I0126 12:45:04.583930 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:04 crc kubenswrapper[4844]: I0126 12:45:04.583979 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:04 crc kubenswrapper[4844]: I0126 12:45:04.583994 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:04 crc kubenswrapper[4844]: I0126 12:45:04.584016 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:04 crc kubenswrapper[4844]: I0126 12:45:04.584031 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:04Z","lastTransitionTime":"2026-01-26T12:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:04 crc kubenswrapper[4844]: I0126 12:45:04.687289 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:04 crc kubenswrapper[4844]: I0126 12:45:04.687347 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:04 crc kubenswrapper[4844]: I0126 12:45:04.687363 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:04 crc kubenswrapper[4844]: I0126 12:45:04.687383 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:04 crc kubenswrapper[4844]: I0126 12:45:04.687397 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:04Z","lastTransitionTime":"2026-01-26T12:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:04 crc kubenswrapper[4844]: I0126 12:45:04.791238 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:04 crc kubenswrapper[4844]: I0126 12:45:04.794133 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:04 crc kubenswrapper[4844]: I0126 12:45:04.794168 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:04 crc kubenswrapper[4844]: I0126 12:45:04.794199 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:04 crc kubenswrapper[4844]: I0126 12:45:04.794219 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:04Z","lastTransitionTime":"2026-01-26T12:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:45:04 crc kubenswrapper[4844]: I0126 12:45:04.897916 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:04 crc kubenswrapper[4844]: I0126 12:45:04.897988 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:04 crc kubenswrapper[4844]: I0126 12:45:04.898007 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:04 crc kubenswrapper[4844]: I0126 12:45:04.898033 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:04 crc kubenswrapper[4844]: I0126 12:45:04.898053 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:04Z","lastTransitionTime":"2026-01-26T12:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:05 crc kubenswrapper[4844]: I0126 12:45:05.001777 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:05 crc kubenswrapper[4844]: I0126 12:45:05.001845 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:05 crc kubenswrapper[4844]: I0126 12:45:05.001863 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:05 crc kubenswrapper[4844]: I0126 12:45:05.001889 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:05 crc kubenswrapper[4844]: I0126 12:45:05.001908 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:05Z","lastTransitionTime":"2026-01-26T12:45:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:05 crc kubenswrapper[4844]: I0126 12:45:05.105046 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:05 crc kubenswrapper[4844]: I0126 12:45:05.105104 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:05 crc kubenswrapper[4844]: I0126 12:45:05.105120 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:05 crc kubenswrapper[4844]: I0126 12:45:05.105144 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:05 crc kubenswrapper[4844]: I0126 12:45:05.105161 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:05Z","lastTransitionTime":"2026-01-26T12:45:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:45:05 crc kubenswrapper[4844]: I0126 12:45:05.208750 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:05 crc kubenswrapper[4844]: I0126 12:45:05.208824 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:05 crc kubenswrapper[4844]: I0126 12:45:05.208844 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:05 crc kubenswrapper[4844]: I0126 12:45:05.208899 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:05 crc kubenswrapper[4844]: I0126 12:45:05.208920 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:05Z","lastTransitionTime":"2026-01-26T12:45:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:05 crc kubenswrapper[4844]: E0126 12:45:05.271197 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:46:09.271175133 +0000 UTC m=+146.204542745 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:45:05 crc kubenswrapper[4844]: I0126 12:45:05.271560 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:45:05 crc kubenswrapper[4844]: I0126 12:45:05.271909 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:45:05 crc kubenswrapper[4844]: E0126 12:45:05.272025 4844 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 12:45:05 crc kubenswrapper[4844]: E0126 12:45:05.272688 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 12:46:09.272675391 +0000 UTC m=+146.206043003 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 12:45:05 crc kubenswrapper[4844]: I0126 12:45:05.272819 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:45:05 crc kubenswrapper[4844]: E0126 12:45:05.272966 4844 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 12:45:05 crc kubenswrapper[4844]: E0126 12:45:05.273152 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 12:46:09.27312878 +0000 UTC m=+146.206496402 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 12:45:05 crc kubenswrapper[4844]: I0126 12:45:05.311570 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:05 crc kubenswrapper[4844]: I0126 12:45:05.311653 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:05 crc kubenswrapper[4844]: I0126 12:45:05.311666 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:05 crc kubenswrapper[4844]: I0126 12:45:05.311684 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:05 crc kubenswrapper[4844]: I0126 12:45:05.311725 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:05Z","lastTransitionTime":"2026-01-26T12:45:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:05 crc kubenswrapper[4844]: I0126 12:45:05.312328 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:45:05 crc kubenswrapper[4844]: I0126 12:45:05.312471 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:45:05 crc kubenswrapper[4844]: E0126 12:45:05.312645 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 12:45:05 crc kubenswrapper[4844]: I0126 12:45:05.312705 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:45:05 crc kubenswrapper[4844]: E0126 12:45:05.312833 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 12:45:05 crc kubenswrapper[4844]: E0126 12:45:05.312761 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 12:45:05 crc kubenswrapper[4844]: I0126 12:45:05.315897 4844 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 01:40:32.58054545 +0000 UTC Jan 26 12:45:05 crc kubenswrapper[4844]: I0126 12:45:05.374353 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:45:05 crc kubenswrapper[4844]: I0126 12:45:05.374435 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:45:05 crc kubenswrapper[4844]: E0126 12:45:05.374790 4844 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 12:45:05 crc kubenswrapper[4844]: E0126 12:45:05.374849 4844 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 12:45:05 crc kubenswrapper[4844]: E0126 12:45:05.374863 4844 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 12:45:05 crc kubenswrapper[4844]: E0126 12:45:05.374925 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 12:46:09.374905269 +0000 UTC m=+146.308272881 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 12:45:05 crc kubenswrapper[4844]: E0126 12:45:05.375177 4844 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 12:45:05 crc kubenswrapper[4844]: E0126 12:45:05.375201 4844 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 12:45:05 crc kubenswrapper[4844]: E0126 12:45:05.375211 4844 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 12:45:05 crc kubenswrapper[4844]: E0126 12:45:05.375239 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 12:46:09.375230346 +0000 UTC m=+146.308598028 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 12:45:05 crc kubenswrapper[4844]: I0126 12:45:05.414524 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:05 crc kubenswrapper[4844]: I0126 12:45:05.414565 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:05 crc kubenswrapper[4844]: I0126 12:45:05.414576 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:05 crc kubenswrapper[4844]: I0126 12:45:05.414591 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:05 crc kubenswrapper[4844]: I0126 12:45:05.414619 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:05Z","lastTransitionTime":"2026-01-26T12:45:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:05 crc kubenswrapper[4844]: I0126 12:45:05.517505 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:05 crc kubenswrapper[4844]: I0126 12:45:05.517540 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:05 crc kubenswrapper[4844]: I0126 12:45:05.517556 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:05 crc kubenswrapper[4844]: I0126 12:45:05.517569 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:05 crc kubenswrapper[4844]: I0126 12:45:05.517579 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:05Z","lastTransitionTime":"2026-01-26T12:45:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:45:05 crc kubenswrapper[4844]: I0126 12:45:05.619911 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:05 crc kubenswrapper[4844]: I0126 12:45:05.619939 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:05 crc kubenswrapper[4844]: I0126 12:45:05.619949 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:05 crc kubenswrapper[4844]: I0126 12:45:05.619961 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:05 crc kubenswrapper[4844]: I0126 12:45:05.619970 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:05Z","lastTransitionTime":"2026-01-26T12:45:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:05 crc kubenswrapper[4844]: I0126 12:45:05.722623 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:05 crc kubenswrapper[4844]: I0126 12:45:05.722649 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:05 crc kubenswrapper[4844]: I0126 12:45:05.722658 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:05 crc kubenswrapper[4844]: I0126 12:45:05.722670 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:05 crc kubenswrapper[4844]: I0126 12:45:05.722679 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:05Z","lastTransitionTime":"2026-01-26T12:45:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:05 crc kubenswrapper[4844]: I0126 12:45:05.825668 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:05 crc kubenswrapper[4844]: I0126 12:45:05.825716 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:05 crc kubenswrapper[4844]: I0126 12:45:05.825733 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:05 crc kubenswrapper[4844]: I0126 12:45:05.825752 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:05 crc kubenswrapper[4844]: I0126 12:45:05.825763 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:05Z","lastTransitionTime":"2026-01-26T12:45:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:45:05 crc kubenswrapper[4844]: I0126 12:45:05.928182 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:05 crc kubenswrapper[4844]: I0126 12:45:05.928422 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:05 crc kubenswrapper[4844]: I0126 12:45:05.928492 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:05 crc kubenswrapper[4844]: I0126 12:45:05.928707 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:05 crc kubenswrapper[4844]: I0126 12:45:05.928814 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:05Z","lastTransitionTime":"2026-01-26T12:45:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:06 crc kubenswrapper[4844]: I0126 12:45:06.032268 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:06 crc kubenswrapper[4844]: I0126 12:45:06.032558 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:06 crc kubenswrapper[4844]: I0126 12:45:06.032642 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:06 crc kubenswrapper[4844]: I0126 12:45:06.032713 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:06 crc kubenswrapper[4844]: I0126 12:45:06.032792 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:06Z","lastTransitionTime":"2026-01-26T12:45:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:06 crc kubenswrapper[4844]: I0126 12:45:06.135366 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:06 crc kubenswrapper[4844]: I0126 12:45:06.135389 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:06 crc kubenswrapper[4844]: I0126 12:45:06.135396 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:06 crc kubenswrapper[4844]: I0126 12:45:06.135408 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:06 crc kubenswrapper[4844]: I0126 12:45:06.135417 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:06Z","lastTransitionTime":"2026-01-26T12:45:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:45:06 crc kubenswrapper[4844]: I0126 12:45:06.238426 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:06 crc kubenswrapper[4844]: I0126 12:45:06.238490 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:06 crc kubenswrapper[4844]: I0126 12:45:06.238508 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:06 crc kubenswrapper[4844]: I0126 12:45:06.238531 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:06 crc kubenswrapper[4844]: I0126 12:45:06.238549 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:06Z","lastTransitionTime":"2026-01-26T12:45:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:06 crc kubenswrapper[4844]: I0126 12:45:06.312154 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gxnj7" Jan 26 12:45:06 crc kubenswrapper[4844]: E0126 12:45:06.312359 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gxnj7" podUID="c69496f6-7f67-4cca-9c9f-420e5567b165" Jan 26 12:45:06 crc kubenswrapper[4844]: I0126 12:45:06.316711 4844 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 01:29:21.959788338 +0000 UTC Jan 26 12:45:06 crc kubenswrapper[4844]: I0126 12:45:06.341145 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:06 crc kubenswrapper[4844]: I0126 12:45:06.341202 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:06 crc kubenswrapper[4844]: I0126 12:45:06.341220 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:06 crc kubenswrapper[4844]: I0126 12:45:06.341244 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:06 crc kubenswrapper[4844]: I0126 12:45:06.341265 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:06Z","lastTransitionTime":"2026-01-26T12:45:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:45:06 crc kubenswrapper[4844]: I0126 12:45:06.444943 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:06 crc kubenswrapper[4844]: I0126 12:45:06.445040 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:06 crc kubenswrapper[4844]: I0126 12:45:06.445058 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:06 crc kubenswrapper[4844]: I0126 12:45:06.445089 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:06 crc kubenswrapper[4844]: I0126 12:45:06.445110 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:06Z","lastTransitionTime":"2026-01-26T12:45:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:06 crc kubenswrapper[4844]: I0126 12:45:06.548208 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:06 crc kubenswrapper[4844]: I0126 12:45:06.548269 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:06 crc kubenswrapper[4844]: I0126 12:45:06.548287 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:06 crc kubenswrapper[4844]: I0126 12:45:06.548311 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:06 crc kubenswrapper[4844]: I0126 12:45:06.548330 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:06Z","lastTransitionTime":"2026-01-26T12:45:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:06 crc kubenswrapper[4844]: I0126 12:45:06.653790 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:06 crc kubenswrapper[4844]: I0126 12:45:06.653861 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:06 crc kubenswrapper[4844]: I0126 12:45:06.653887 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:06 crc kubenswrapper[4844]: I0126 12:45:06.653921 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:06 crc kubenswrapper[4844]: I0126 12:45:06.653944 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:06Z","lastTransitionTime":"2026-01-26T12:45:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:45:06 crc kubenswrapper[4844]: I0126 12:45:06.677070 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:06 crc kubenswrapper[4844]: I0126 12:45:06.677145 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:06 crc kubenswrapper[4844]: I0126 12:45:06.677162 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:06 crc kubenswrapper[4844]: I0126 12:45:06.677185 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:06 crc kubenswrapper[4844]: I0126 12:45:06.677207 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:06Z","lastTransitionTime":"2026-01-26T12:45:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:06 crc kubenswrapper[4844]: E0126 12:45:06.700789 4844 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:45:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:45:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:45:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:45:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:45:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:45:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:45:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:45:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8ec9310b-463d-4f1d-a480-c21f33e8b459\\\",\\\"systemUUID\\\":\\\"4eb778d6-9226-440d-bd27-0b6f19659b0d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:45:06Z is after 2025-08-24T17:21:41Z" Jan 26 12:45:06 crc kubenswrapper[4844]: I0126 12:45:06.706573 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:06 crc kubenswrapper[4844]: I0126 12:45:06.706667 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 12:45:06 crc kubenswrapper[4844]: I0126 12:45:06.706688 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:06 crc kubenswrapper[4844]: I0126 12:45:06.706711 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:06 crc kubenswrapper[4844]: I0126 12:45:06.706730 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:06Z","lastTransitionTime":"2026-01-26T12:45:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:06 crc kubenswrapper[4844]: E0126 12:45:06.728257 4844 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:45:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:45:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:45:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:45:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:45:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:45:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:45:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:45:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8ec9310b-463d-4f1d-a480-c21f33e8b459\\\",\\\"systemUUID\\\":\\\"4eb778d6-9226-440d-bd27-0b6f19659b0d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:45:06Z is after 2025-08-24T17:21:41Z" Jan 26 12:45:06 crc kubenswrapper[4844]: I0126 12:45:06.733903 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:06 crc kubenswrapper[4844]: I0126 12:45:06.733965 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 12:45:06 crc kubenswrapper[4844]: I0126 12:45:06.733988 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:06 crc kubenswrapper[4844]: I0126 12:45:06.734018 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:06 crc kubenswrapper[4844]: I0126 12:45:06.734043 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:06Z","lastTransitionTime":"2026-01-26T12:45:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:06 crc kubenswrapper[4844]: E0126 12:45:06.761105 4844 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:45:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:45:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:45:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:45:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:45:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:45:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:45:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:45:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8ec9310b-463d-4f1d-a480-c21f33e8b459\\\",\\\"systemUUID\\\":\\\"4eb778d6-9226-440d-bd27-0b6f19659b0d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:45:06Z is after 2025-08-24T17:21:41Z" Jan 26 12:45:06 crc kubenswrapper[4844]: I0126 12:45:06.767372 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:06 crc kubenswrapper[4844]: I0126 12:45:06.767463 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 12:45:06 crc kubenswrapper[4844]: I0126 12:45:06.767490 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:06 crc kubenswrapper[4844]: I0126 12:45:06.767529 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:06 crc kubenswrapper[4844]: I0126 12:45:06.767558 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:06Z","lastTransitionTime":"2026-01-26T12:45:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:06 crc kubenswrapper[4844]: E0126 12:45:06.788858 4844 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:45:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:45:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:45:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:45:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:45:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:45:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:45:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:45:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[ ...image list elided; duplicate of the image list in the previous patch attempt... ],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8ec9310b-463d-4f1d-a480-c21f33e8b459\\\",\\\"systemUUID\\\":\\\"4eb778d6-9226-440d-bd27-0b6f19659b0d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:45:06Z is after 2025-08-24T17:21:41Z"
Jan 26 12:45:06 crc kubenswrapper[4844]: I0126 12:45:06.794954 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:06 crc kubenswrapper[4844]: I0126 12:45:06.795021 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 12:45:06 crc kubenswrapper[4844]: I0126 12:45:06.795040 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:06 crc kubenswrapper[4844]: I0126 12:45:06.795067 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:06 crc kubenswrapper[4844]: I0126 12:45:06.795087 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:06Z","lastTransitionTime":"2026-01-26T12:45:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:06 crc kubenswrapper[4844]: E0126 12:45:06.816696 4844 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:45:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:45:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:45:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:45:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:45:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:45:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:45:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:45:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[ ...image list elided; duplicate of the image list in the first patch attempt... ],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8ec9310b-463d-4f1d-a480-c21f33e8b459\\\",\\\"systemUUID\\\":\\\"4eb778d6-9226-440d-bd27-0b6f19659b0d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:45:06Z is after 2025-08-24T17:21:41Z" Jan 26 12:45:06 crc kubenswrapper[4844]: E0126 12:45:06.816921 4844 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Jan 26 12:45:06 crc kubenswrapper[4844]: I0126 12:45:06.818957 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 26 12:45:06 crc kubenswrapper[4844]: I0126 12:45:06.819020 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:06 crc kubenswrapper[4844]: I0126 12:45:06.819038 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:06 crc kubenswrapper[4844]: I0126 12:45:06.819064 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:06 crc kubenswrapper[4844]: I0126 12:45:06.819083 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:06Z","lastTransitionTime":"2026-01-26T12:45:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:06 crc kubenswrapper[4844]: I0126 12:45:06.922881 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:06 crc kubenswrapper[4844]: I0126 12:45:06.922994 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:06 crc kubenswrapper[4844]: I0126 12:45:06.923067 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:06 crc kubenswrapper[4844]: I0126 12:45:06.923099 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:06 crc kubenswrapper[4844]: I0126 12:45:06.923121 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:06Z","lastTransitionTime":"2026-01-26T12:45:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:07 crc kubenswrapper[4844]: I0126 12:45:07.026233 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:07 crc kubenswrapper[4844]: I0126 12:45:07.026568 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:07 crc kubenswrapper[4844]: I0126 12:45:07.026801 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:07 crc kubenswrapper[4844]: I0126 12:45:07.026944 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:07 crc kubenswrapper[4844]: I0126 12:45:07.027121 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:07Z","lastTransitionTime":"2026-01-26T12:45:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:45:07 crc kubenswrapper[4844]: I0126 12:45:07.130231 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:07 crc kubenswrapper[4844]: I0126 12:45:07.130580 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:07 crc kubenswrapper[4844]: I0126 12:45:07.130774 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:07 crc kubenswrapper[4844]: I0126 12:45:07.130939 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:07 crc kubenswrapper[4844]: I0126 12:45:07.131153 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:07Z","lastTransitionTime":"2026-01-26T12:45:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:07 crc kubenswrapper[4844]: I0126 12:45:07.234473 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:07 crc kubenswrapper[4844]: I0126 12:45:07.234548 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:07 crc kubenswrapper[4844]: I0126 12:45:07.234569 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:07 crc kubenswrapper[4844]: I0126 12:45:07.234592 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:07 crc kubenswrapper[4844]: I0126 12:45:07.234638 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:07Z","lastTransitionTime":"2026-01-26T12:45:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:07 crc kubenswrapper[4844]: I0126 12:45:07.312195 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:45:07 crc kubenswrapper[4844]: I0126 12:45:07.312284 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:45:07 crc kubenswrapper[4844]: E0126 12:45:07.312399 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 12:45:07 crc kubenswrapper[4844]: I0126 12:45:07.312462 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:45:07 crc kubenswrapper[4844]: E0126 12:45:07.312554 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 12:45:07 crc kubenswrapper[4844]: E0126 12:45:07.312777 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 12:45:07 crc kubenswrapper[4844]: I0126 12:45:07.317521 4844 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 07:35:32.118307328 +0000 UTC Jan 26 12:45:07 crc kubenswrapper[4844]: I0126 12:45:07.337639 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:07 crc kubenswrapper[4844]: I0126 12:45:07.337703 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:07 crc kubenswrapper[4844]: I0126 12:45:07.337721 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:07 crc kubenswrapper[4844]: I0126 12:45:07.337747 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:07 crc kubenswrapper[4844]: I0126 12:45:07.337765 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:07Z","lastTransitionTime":"2026-01-26T12:45:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:45:07 crc kubenswrapper[4844]: I0126 12:45:07.443774 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:07 crc kubenswrapper[4844]: I0126 12:45:07.443879 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:07 crc kubenswrapper[4844]: I0126 12:45:07.443937 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:07 crc kubenswrapper[4844]: I0126 12:45:07.443962 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:07 crc kubenswrapper[4844]: I0126 12:45:07.443980 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:07Z","lastTransitionTime":"2026-01-26T12:45:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:07 crc kubenswrapper[4844]: I0126 12:45:07.546166 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:07 crc kubenswrapper[4844]: I0126 12:45:07.546231 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:07 crc kubenswrapper[4844]: I0126 12:45:07.546249 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:07 crc kubenswrapper[4844]: I0126 12:45:07.546274 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:07 crc kubenswrapper[4844]: I0126 12:45:07.546292 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:07Z","lastTransitionTime":"2026-01-26T12:45:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:07 crc kubenswrapper[4844]: I0126 12:45:07.649035 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:07 crc kubenswrapper[4844]: I0126 12:45:07.649100 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:07 crc kubenswrapper[4844]: I0126 12:45:07.649119 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:07 crc kubenswrapper[4844]: I0126 12:45:07.649143 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:07 crc kubenswrapper[4844]: I0126 12:45:07.649162 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:07Z","lastTransitionTime":"2026-01-26T12:45:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:45:07 crc kubenswrapper[4844]: I0126 12:45:07.752530 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:07 crc kubenswrapper[4844]: I0126 12:45:07.752628 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:07 crc kubenswrapper[4844]: I0126 12:45:07.752647 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:07 crc kubenswrapper[4844]: I0126 12:45:07.752671 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:07 crc kubenswrapper[4844]: I0126 12:45:07.752690 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:07Z","lastTransitionTime":"2026-01-26T12:45:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:07 crc kubenswrapper[4844]: I0126 12:45:07.856319 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:07 crc kubenswrapper[4844]: I0126 12:45:07.856384 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:07 crc kubenswrapper[4844]: I0126 12:45:07.856402 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:07 crc kubenswrapper[4844]: I0126 12:45:07.856426 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:07 crc kubenswrapper[4844]: I0126 12:45:07.856446 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:07Z","lastTransitionTime":"2026-01-26T12:45:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:07 crc kubenswrapper[4844]: I0126 12:45:07.959865 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:07 crc kubenswrapper[4844]: I0126 12:45:07.959951 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:07 crc kubenswrapper[4844]: I0126 12:45:07.959971 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:07 crc kubenswrapper[4844]: I0126 12:45:07.959999 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:07 crc kubenswrapper[4844]: I0126 12:45:07.960019 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:07Z","lastTransitionTime":"2026-01-26T12:45:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:45:08 crc kubenswrapper[4844]: I0126 12:45:08.063790 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:08 crc kubenswrapper[4844]: I0126 12:45:08.063847 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:08 crc kubenswrapper[4844]: I0126 12:45:08.063865 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:08 crc kubenswrapper[4844]: I0126 12:45:08.063889 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:08 crc kubenswrapper[4844]: I0126 12:45:08.063907 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:08Z","lastTransitionTime":"2026-01-26T12:45:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:08 crc kubenswrapper[4844]: I0126 12:45:08.167531 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:08 crc kubenswrapper[4844]: I0126 12:45:08.167665 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:08 crc kubenswrapper[4844]: I0126 12:45:08.167689 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:08 crc kubenswrapper[4844]: I0126 12:45:08.167717 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:08 crc kubenswrapper[4844]: I0126 12:45:08.167740 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:08Z","lastTransitionTime":"2026-01-26T12:45:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:08 crc kubenswrapper[4844]: I0126 12:45:08.270824 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:08 crc kubenswrapper[4844]: I0126 12:45:08.270877 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:08 crc kubenswrapper[4844]: I0126 12:45:08.270889 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:08 crc kubenswrapper[4844]: I0126 12:45:08.270907 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:08 crc kubenswrapper[4844]: I0126 12:45:08.270921 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:08Z","lastTransitionTime":"2026-01-26T12:45:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:45:08 crc kubenswrapper[4844]: I0126 12:45:08.312758 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gxnj7" Jan 26 12:45:08 crc kubenswrapper[4844]: E0126 12:45:08.312927 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gxnj7" podUID="c69496f6-7f67-4cca-9c9f-420e5567b165" Jan 26 12:45:08 crc kubenswrapper[4844]: I0126 12:45:08.318117 4844 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 09:28:12.109196309 +0000 UTC Jan 26 12:45:08 crc kubenswrapper[4844]: I0126 12:45:08.373873 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:08 crc kubenswrapper[4844]: I0126 12:45:08.373958 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:08 crc kubenswrapper[4844]: I0126 12:45:08.373978 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:08 crc kubenswrapper[4844]: I0126 12:45:08.374010 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:08 crc kubenswrapper[4844]: I0126 12:45:08.374030 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:08Z","lastTransitionTime":"2026-01-26T12:45:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:08 crc kubenswrapper[4844]: I0126 12:45:08.477810 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:08 crc kubenswrapper[4844]: I0126 12:45:08.477916 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:08 crc kubenswrapper[4844]: I0126 12:45:08.477939 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:08 crc kubenswrapper[4844]: I0126 12:45:08.477974 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:08 crc kubenswrapper[4844]: I0126 12:45:08.478002 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:08Z","lastTransitionTime":"2026-01-26T12:45:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:45:08 crc kubenswrapper[4844]: I0126 12:45:08.581340 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:08 crc kubenswrapper[4844]: I0126 12:45:08.581442 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:08 crc kubenswrapper[4844]: I0126 12:45:08.581466 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:08 crc kubenswrapper[4844]: I0126 12:45:08.581496 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:08 crc kubenswrapper[4844]: I0126 12:45:08.581519 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:08Z","lastTransitionTime":"2026-01-26T12:45:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:08 crc kubenswrapper[4844]: I0126 12:45:08.684264 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:08 crc kubenswrapper[4844]: I0126 12:45:08.684324 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:08 crc kubenswrapper[4844]: I0126 12:45:08.684341 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:08 crc kubenswrapper[4844]: I0126 12:45:08.684366 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:08 crc kubenswrapper[4844]: I0126 12:45:08.684395 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:08Z","lastTransitionTime":"2026-01-26T12:45:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:08 crc kubenswrapper[4844]: I0126 12:45:08.787267 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:08 crc kubenswrapper[4844]: I0126 12:45:08.787338 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:08 crc kubenswrapper[4844]: I0126 12:45:08.787358 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:08 crc kubenswrapper[4844]: I0126 12:45:08.787381 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:08 crc kubenswrapper[4844]: I0126 12:45:08.787398 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:08Z","lastTransitionTime":"2026-01-26T12:45:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:45:08 crc kubenswrapper[4844]: I0126 12:45:08.891197 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:08 crc kubenswrapper[4844]: I0126 12:45:08.891283 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:08 crc kubenswrapper[4844]: I0126 12:45:08.891303 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:08 crc kubenswrapper[4844]: I0126 12:45:08.891329 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:08 crc kubenswrapper[4844]: I0126 12:45:08.891348 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:08Z","lastTransitionTime":"2026-01-26T12:45:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:08 crc kubenswrapper[4844]: I0126 12:45:08.997381 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:08 crc kubenswrapper[4844]: I0126 12:45:08.997444 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:08 crc kubenswrapper[4844]: I0126 12:45:08.997462 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:08 crc kubenswrapper[4844]: I0126 12:45:08.997489 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:08 crc kubenswrapper[4844]: I0126 12:45:08.997508 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:08Z","lastTransitionTime":"2026-01-26T12:45:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:09 crc kubenswrapper[4844]: I0126 12:45:09.101253 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:09 crc kubenswrapper[4844]: I0126 12:45:09.101337 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:09 crc kubenswrapper[4844]: I0126 12:45:09.101358 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:09 crc kubenswrapper[4844]: I0126 12:45:09.101387 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:09 crc kubenswrapper[4844]: I0126 12:45:09.101409 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:09Z","lastTransitionTime":"2026-01-26T12:45:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:45:09 crc kubenswrapper[4844]: I0126 12:45:09.204716 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:09 crc kubenswrapper[4844]: I0126 12:45:09.204779 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:09 crc kubenswrapper[4844]: I0126 12:45:09.204798 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:09 crc kubenswrapper[4844]: I0126 12:45:09.204822 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:09 crc kubenswrapper[4844]: I0126 12:45:09.204839 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:09Z","lastTransitionTime":"2026-01-26T12:45:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:09 crc kubenswrapper[4844]: I0126 12:45:09.315546 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:45:09 crc kubenswrapper[4844]: I0126 12:45:09.315683 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:45:09 crc kubenswrapper[4844]: E0126 12:45:09.315966 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 12:45:09 crc kubenswrapper[4844]: I0126 12:45:09.316111 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:45:09 crc kubenswrapper[4844]: E0126 12:45:09.316273 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 12:45:09 crc kubenswrapper[4844]: E0126 12:45:09.316363 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 12:45:09 crc kubenswrapper[4844]: I0126 12:45:09.318084 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:09 crc kubenswrapper[4844]: I0126 12:45:09.318120 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:09 crc kubenswrapper[4844]: I0126 12:45:09.318135 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:09 crc kubenswrapper[4844]: I0126 12:45:09.318154 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:09 crc kubenswrapper[4844]: I0126 12:45:09.318170 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:09Z","lastTransitionTime":"2026-01-26T12:45:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:09 crc kubenswrapper[4844]: I0126 12:45:09.318291 4844 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 21:40:52.343567326 +0000 UTC Jan 26 12:45:09 crc kubenswrapper[4844]: I0126 12:45:09.421054 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:09 crc kubenswrapper[4844]: I0126 12:45:09.421101 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:09 crc kubenswrapper[4844]: I0126 12:45:09.421113 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:09 crc kubenswrapper[4844]: I0126 12:45:09.421132 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:09 crc kubenswrapper[4844]: I0126 12:45:09.421144 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:09Z","lastTransitionTime":"2026-01-26T12:45:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:45:09 crc kubenswrapper[4844]: I0126 12:45:09.524390 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:09 crc kubenswrapper[4844]: I0126 12:45:09.524420 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:09 crc kubenswrapper[4844]: I0126 12:45:09.524427 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:09 crc kubenswrapper[4844]: I0126 12:45:09.524440 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:09 crc kubenswrapper[4844]: I0126 12:45:09.524451 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:09Z","lastTransitionTime":"2026-01-26T12:45:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:09 crc kubenswrapper[4844]: I0126 12:45:09.626586 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:09 crc kubenswrapper[4844]: I0126 12:45:09.626960 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:09 crc kubenswrapper[4844]: I0126 12:45:09.627194 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:09 crc kubenswrapper[4844]: I0126 12:45:09.627380 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:09 crc kubenswrapper[4844]: I0126 12:45:09.627577 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:09Z","lastTransitionTime":"2026-01-26T12:45:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:09 crc kubenswrapper[4844]: I0126 12:45:09.731148 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:09 crc kubenswrapper[4844]: I0126 12:45:09.731188 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:09 crc kubenswrapper[4844]: I0126 12:45:09.731202 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:09 crc kubenswrapper[4844]: I0126 12:45:09.731218 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:09 crc kubenswrapper[4844]: I0126 12:45:09.731228 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:09Z","lastTransitionTime":"2026-01-26T12:45:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:45:09 crc kubenswrapper[4844]: I0126 12:45:09.834282 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:09 crc kubenswrapper[4844]: I0126 12:45:09.834393 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:09 crc kubenswrapper[4844]: I0126 12:45:09.834470 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:09 crc kubenswrapper[4844]: I0126 12:45:09.834545 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:09 crc kubenswrapper[4844]: I0126 12:45:09.834575 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:09Z","lastTransitionTime":"2026-01-26T12:45:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:09 crc kubenswrapper[4844]: I0126 12:45:09.936993 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:09 crc kubenswrapper[4844]: I0126 12:45:09.937057 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:09 crc kubenswrapper[4844]: I0126 12:45:09.937079 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:09 crc kubenswrapper[4844]: I0126 12:45:09.937111 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:09 crc kubenswrapper[4844]: I0126 12:45:09.937133 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:09Z","lastTransitionTime":"2026-01-26T12:45:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:10 crc kubenswrapper[4844]: I0126 12:45:10.041639 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:10 crc kubenswrapper[4844]: I0126 12:45:10.041692 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:10 crc kubenswrapper[4844]: I0126 12:45:10.041714 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:10 crc kubenswrapper[4844]: I0126 12:45:10.041741 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:10 crc kubenswrapper[4844]: I0126 12:45:10.041760 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:10Z","lastTransitionTime":"2026-01-26T12:45:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:45:10 crc kubenswrapper[4844]: I0126 12:45:10.144386 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:10 crc kubenswrapper[4844]: I0126 12:45:10.144435 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:10 crc kubenswrapper[4844]: I0126 12:45:10.144456 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:10 crc kubenswrapper[4844]: I0126 12:45:10.144481 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:10 crc kubenswrapper[4844]: I0126 12:45:10.144497 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:10Z","lastTransitionTime":"2026-01-26T12:45:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:10 crc kubenswrapper[4844]: I0126 12:45:10.247080 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:10 crc kubenswrapper[4844]: I0126 12:45:10.247161 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:10 crc kubenswrapper[4844]: I0126 12:45:10.247183 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:10 crc kubenswrapper[4844]: I0126 12:45:10.247211 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:10 crc kubenswrapper[4844]: I0126 12:45:10.247232 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:10Z","lastTransitionTime":"2026-01-26T12:45:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:10 crc kubenswrapper[4844]: I0126 12:45:10.312944 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gxnj7" Jan 26 12:45:10 crc kubenswrapper[4844]: E0126 12:45:10.313154 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gxnj7" podUID="c69496f6-7f67-4cca-9c9f-420e5567b165" Jan 26 12:45:10 crc kubenswrapper[4844]: I0126 12:45:10.319313 4844 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 04:36:16.324274953 +0000 UTC Jan 26 12:45:10 crc kubenswrapper[4844]: I0126 12:45:10.350933 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:10 crc kubenswrapper[4844]: I0126 12:45:10.350971 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:10 crc kubenswrapper[4844]: I0126 12:45:10.350989 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:10 crc kubenswrapper[4844]: I0126 12:45:10.351012 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:10 crc kubenswrapper[4844]: I0126 12:45:10.351033 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:10Z","lastTransitionTime":"2026-01-26T12:45:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:10 crc kubenswrapper[4844]: I0126 12:45:10.453654 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:10 crc kubenswrapper[4844]: I0126 12:45:10.453710 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:10 crc kubenswrapper[4844]: I0126 12:45:10.453724 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:10 crc kubenswrapper[4844]: I0126 12:45:10.453741 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:10 crc kubenswrapper[4844]: I0126 12:45:10.453753 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:10Z","lastTransitionTime":"2026-01-26T12:45:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:45:10 crc kubenswrapper[4844]: I0126 12:45:10.556664 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:10 crc kubenswrapper[4844]: I0126 12:45:10.556725 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:10 crc kubenswrapper[4844]: I0126 12:45:10.556740 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:10 crc kubenswrapper[4844]: I0126 12:45:10.556764 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:10 crc kubenswrapper[4844]: I0126 12:45:10.556785 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:10Z","lastTransitionTime":"2026-01-26T12:45:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:10 crc kubenswrapper[4844]: I0126 12:45:10.660086 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:10 crc kubenswrapper[4844]: I0126 12:45:10.660144 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:10 crc kubenswrapper[4844]: I0126 12:45:10.660156 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:10 crc kubenswrapper[4844]: I0126 12:45:10.660179 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:10 crc kubenswrapper[4844]: I0126 12:45:10.660192 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:10Z","lastTransitionTime":"2026-01-26T12:45:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:10 crc kubenswrapper[4844]: I0126 12:45:10.763000 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:10 crc kubenswrapper[4844]: I0126 12:45:10.763106 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:10 crc kubenswrapper[4844]: I0126 12:45:10.763355 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:10 crc kubenswrapper[4844]: I0126 12:45:10.763408 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:10 crc kubenswrapper[4844]: I0126 12:45:10.763429 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:10Z","lastTransitionTime":"2026-01-26T12:45:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:45:10 crc kubenswrapper[4844]: I0126 12:45:10.866393 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:10 crc kubenswrapper[4844]: I0126 12:45:10.866492 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:10 crc kubenswrapper[4844]: I0126 12:45:10.866514 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:10 crc kubenswrapper[4844]: I0126 12:45:10.866540 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:10 crc kubenswrapper[4844]: I0126 12:45:10.866557 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:10Z","lastTransitionTime":"2026-01-26T12:45:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:10 crc kubenswrapper[4844]: I0126 12:45:10.969949 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:10 crc kubenswrapper[4844]: I0126 12:45:10.970314 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:10 crc kubenswrapper[4844]: I0126 12:45:10.970451 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:10 crc kubenswrapper[4844]: I0126 12:45:10.970631 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:10 crc kubenswrapper[4844]: I0126 12:45:10.970766 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:10Z","lastTransitionTime":"2026-01-26T12:45:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:11 crc kubenswrapper[4844]: I0126 12:45:11.076776 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:11 crc kubenswrapper[4844]: I0126 12:45:11.076838 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:11 crc kubenswrapper[4844]: I0126 12:45:11.076856 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:11 crc kubenswrapper[4844]: I0126 12:45:11.076886 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:11 crc kubenswrapper[4844]: I0126 12:45:11.076905 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:11Z","lastTransitionTime":"2026-01-26T12:45:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:45:11 crc kubenswrapper[4844]: I0126 12:45:11.180256 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:11 crc kubenswrapper[4844]: I0126 12:45:11.180327 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:11 crc kubenswrapper[4844]: I0126 12:45:11.180347 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:11 crc kubenswrapper[4844]: I0126 12:45:11.180376 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:11 crc kubenswrapper[4844]: I0126 12:45:11.180395 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:11Z","lastTransitionTime":"2026-01-26T12:45:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:11 crc kubenswrapper[4844]: I0126 12:45:11.283183 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:11 crc kubenswrapper[4844]: I0126 12:45:11.283262 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:11 crc kubenswrapper[4844]: I0126 12:45:11.283280 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:11 crc kubenswrapper[4844]: I0126 12:45:11.283308 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:11 crc kubenswrapper[4844]: I0126 12:45:11.283327 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:11Z","lastTransitionTime":"2026-01-26T12:45:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:11 crc kubenswrapper[4844]: I0126 12:45:11.312903 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:45:11 crc kubenswrapper[4844]: I0126 12:45:11.312961 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:45:11 crc kubenswrapper[4844]: I0126 12:45:11.313076 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:45:11 crc kubenswrapper[4844]: E0126 12:45:11.313285 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 12:45:11 crc kubenswrapper[4844]: E0126 12:45:11.313448 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 12:45:11 crc kubenswrapper[4844]: E0126 12:45:11.313731 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 12:45:11 crc kubenswrapper[4844]: I0126 12:45:11.319819 4844 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 02:35:17.291540769 +0000 UTC Jan 26 12:45:11 crc kubenswrapper[4844]: I0126 12:45:11.387760 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:11 crc kubenswrapper[4844]: I0126 12:45:11.387826 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:11 crc kubenswrapper[4844]: I0126 12:45:11.387857 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:11 crc kubenswrapper[4844]: I0126 12:45:11.387889 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:11 crc kubenswrapper[4844]: I0126 12:45:11.387913 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:11Z","lastTransitionTime":"2026-01-26T12:45:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:45:11 crc kubenswrapper[4844]: I0126 12:45:11.490733 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:11 crc kubenswrapper[4844]: I0126 12:45:11.490773 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:11 crc kubenswrapper[4844]: I0126 12:45:11.490783 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:11 crc kubenswrapper[4844]: I0126 12:45:11.490801 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:11 crc kubenswrapper[4844]: I0126 12:45:11.490812 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:11Z","lastTransitionTime":"2026-01-26T12:45:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:11 crc kubenswrapper[4844]: I0126 12:45:11.593819 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:11 crc kubenswrapper[4844]: I0126 12:45:11.593869 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:11 crc kubenswrapper[4844]: I0126 12:45:11.593888 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:11 crc kubenswrapper[4844]: I0126 12:45:11.593911 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:11 crc kubenswrapper[4844]: I0126 12:45:11.593928 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:11Z","lastTransitionTime":"2026-01-26T12:45:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:11 crc kubenswrapper[4844]: I0126 12:45:11.697087 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:11 crc kubenswrapper[4844]: I0126 12:45:11.697130 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:11 crc kubenswrapper[4844]: I0126 12:45:11.697142 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:11 crc kubenswrapper[4844]: I0126 12:45:11.697158 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:11 crc kubenswrapper[4844]: I0126 12:45:11.697170 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:11Z","lastTransitionTime":"2026-01-26T12:45:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:45:11 crc kubenswrapper[4844]: I0126 12:45:11.813729 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:11 crc kubenswrapper[4844]: I0126 12:45:11.813778 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:11 crc kubenswrapper[4844]: I0126 12:45:11.813788 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:11 crc kubenswrapper[4844]: I0126 12:45:11.813805 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:11 crc kubenswrapper[4844]: I0126 12:45:11.813816 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:11Z","lastTransitionTime":"2026-01-26T12:45:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:11 crc kubenswrapper[4844]: I0126 12:45:11.916474 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:11 crc kubenswrapper[4844]: I0126 12:45:11.916551 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:11 crc kubenswrapper[4844]: I0126 12:45:11.916563 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:11 crc kubenswrapper[4844]: I0126 12:45:11.916584 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:11 crc kubenswrapper[4844]: I0126 12:45:11.916626 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:11Z","lastTransitionTime":"2026-01-26T12:45:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:12 crc kubenswrapper[4844]: I0126 12:45:12.019250 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:12 crc kubenswrapper[4844]: I0126 12:45:12.019295 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:12 crc kubenswrapper[4844]: I0126 12:45:12.019307 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:12 crc kubenswrapper[4844]: I0126 12:45:12.019324 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:12 crc kubenswrapper[4844]: I0126 12:45:12.019337 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:12Z","lastTransitionTime":"2026-01-26T12:45:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:45:12 crc kubenswrapper[4844]: I0126 12:45:12.121583 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:12 crc kubenswrapper[4844]: I0126 12:45:12.121690 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:12 crc kubenswrapper[4844]: I0126 12:45:12.121702 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:12 crc kubenswrapper[4844]: I0126 12:45:12.121719 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:12 crc kubenswrapper[4844]: I0126 12:45:12.121730 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:12Z","lastTransitionTime":"2026-01-26T12:45:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:12 crc kubenswrapper[4844]: I0126 12:45:12.224655 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:12 crc kubenswrapper[4844]: I0126 12:45:12.224688 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:12 crc kubenswrapper[4844]: I0126 12:45:12.224696 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:12 crc kubenswrapper[4844]: I0126 12:45:12.224711 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:12 crc kubenswrapper[4844]: I0126 12:45:12.224720 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:12Z","lastTransitionTime":"2026-01-26T12:45:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:12 crc kubenswrapper[4844]: I0126 12:45:12.313088 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gxnj7" Jan 26 12:45:12 crc kubenswrapper[4844]: E0126 12:45:12.313217 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gxnj7" podUID="c69496f6-7f67-4cca-9c9f-420e5567b165" Jan 26 12:45:12 crc kubenswrapper[4844]: I0126 12:45:12.313944 4844 scope.go:117] "RemoveContainer" containerID="726bf5201f734836c4fb01a9d5a0cb8897f5ec3142e9b54a6acfe5a82a14df5f" Jan 26 12:45:12 crc kubenswrapper[4844]: I0126 12:45:12.320730 4844 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 21:53:55.964294314 +0000 UTC Jan 26 12:45:12 crc kubenswrapper[4844]: I0126 12:45:12.328741 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:12 crc kubenswrapper[4844]: I0126 12:45:12.328906 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:12 crc kubenswrapper[4844]: I0126 12:45:12.328981 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:12 crc kubenswrapper[4844]: I0126 12:45:12.329056 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:12 crc kubenswrapper[4844]: I0126 12:45:12.329129 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:12Z","lastTransitionTime":"2026-01-26T12:45:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:12 crc kubenswrapper[4844]: I0126 12:45:12.432108 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:12 crc kubenswrapper[4844]: I0126 12:45:12.432391 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:12 crc kubenswrapper[4844]: I0126 12:45:12.432402 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:12 crc kubenswrapper[4844]: I0126 12:45:12.432416 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:12 crc kubenswrapper[4844]: I0126 12:45:12.432426 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:12Z","lastTransitionTime":"2026-01-26T12:45:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:45:12 crc kubenswrapper[4844]: I0126 12:45:12.534413 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:12 crc kubenswrapper[4844]: I0126 12:45:12.534456 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:12 crc kubenswrapper[4844]: I0126 12:45:12.534467 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:12 crc kubenswrapper[4844]: I0126 12:45:12.534485 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:12 crc kubenswrapper[4844]: I0126 12:45:12.534499 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:12Z","lastTransitionTime":"2026-01-26T12:45:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:12 crc kubenswrapper[4844]: I0126 12:45:12.637940 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:12 crc kubenswrapper[4844]: I0126 12:45:12.638018 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:12 crc kubenswrapper[4844]: I0126 12:45:12.638040 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:12 crc kubenswrapper[4844]: I0126 12:45:12.638069 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:12 crc kubenswrapper[4844]: I0126 12:45:12.638092 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:12Z","lastTransitionTime":"2026-01-26T12:45:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:12 crc kubenswrapper[4844]: I0126 12:45:12.740346 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:12 crc kubenswrapper[4844]: I0126 12:45:12.740389 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:12 crc kubenswrapper[4844]: I0126 12:45:12.740404 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:12 crc kubenswrapper[4844]: I0126 12:45:12.740425 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:12 crc kubenswrapper[4844]: I0126 12:45:12.740437 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:12Z","lastTransitionTime":"2026-01-26T12:45:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:45:12 crc kubenswrapper[4844]: I0126 12:45:12.843138 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:12 crc kubenswrapper[4844]: I0126 12:45:12.843179 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:12 crc kubenswrapper[4844]: I0126 12:45:12.843187 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:12 crc kubenswrapper[4844]: I0126 12:45:12.843201 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:12 crc kubenswrapper[4844]: I0126 12:45:12.843212 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:12Z","lastTransitionTime":"2026-01-26T12:45:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:12 crc kubenswrapper[4844]: I0126 12:45:12.950590 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:12 crc kubenswrapper[4844]: I0126 12:45:12.950704 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:12 crc kubenswrapper[4844]: I0126 12:45:12.950720 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:12 crc kubenswrapper[4844]: I0126 12:45:12.950749 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:12 crc kubenswrapper[4844]: I0126 12:45:12.950768 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:12Z","lastTransitionTime":"2026-01-26T12:45:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:13 crc kubenswrapper[4844]: I0126 12:45:13.054754 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:13 crc kubenswrapper[4844]: I0126 12:45:13.054799 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:13 crc kubenswrapper[4844]: I0126 12:45:13.054812 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:13 crc kubenswrapper[4844]: I0126 12:45:13.054829 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:13 crc kubenswrapper[4844]: I0126 12:45:13.054841 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:13Z","lastTransitionTime":"2026-01-26T12:45:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:45:13 crc kubenswrapper[4844]: I0126 12:45:13.158223 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:13 crc kubenswrapper[4844]: I0126 12:45:13.158299 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:13 crc kubenswrapper[4844]: I0126 12:45:13.158313 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:13 crc kubenswrapper[4844]: I0126 12:45:13.158337 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:13 crc kubenswrapper[4844]: I0126 12:45:13.158354 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:13Z","lastTransitionTime":"2026-01-26T12:45:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:13 crc kubenswrapper[4844]: I0126 12:45:13.261736 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:13 crc kubenswrapper[4844]: I0126 12:45:13.261805 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:13 crc kubenswrapper[4844]: I0126 12:45:13.261822 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:13 crc kubenswrapper[4844]: I0126 12:45:13.261846 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:13 crc kubenswrapper[4844]: I0126 12:45:13.261864 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:13Z","lastTransitionTime":"2026-01-26T12:45:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:13 crc kubenswrapper[4844]: I0126 12:45:13.313835 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:45:13 crc kubenswrapper[4844]: I0126 12:45:13.313860 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:45:13 crc kubenswrapper[4844]: E0126 12:45:13.313948 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 12:45:13 crc kubenswrapper[4844]: I0126 12:45:13.314025 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:45:13 crc kubenswrapper[4844]: E0126 12:45:13.314168 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 12:45:13 crc kubenswrapper[4844]: E0126 12:45:13.314301 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 12:45:13 crc kubenswrapper[4844]: I0126 12:45:13.321717 4844 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 11:38:58.021224884 +0000 UTC Jan 26 12:45:13 crc kubenswrapper[4844]: I0126 12:45:13.335069 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ea4f5b3-1307-4259-8ee3-1de62370d8ef\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://943771df5941e9c5a48c2d8ee946cc9ac1ee7b00855dfe531fbce9f76450dd36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c79791a1a5ceea7564b6466722f4fb48a6729184724efc0ca2498896def357a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4
ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e62359ee71a87d75c73a16de09de5ea48d09e4cf2041f434f38f97002a5f8eee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b1fb5af67b83d9ecc627cbfcc14d011b4b11a5c4652a35c1bd1df6f03ab71f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:45:13Z is after 2025-08-24T17:21:41Z" Jan 26 12:45:13 crc kubenswrapper[4844]: I0126 12:45:13.348112 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"012dd78d-465b-41aa-b845-5cd178650e56\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0a80313396de8bb91760bdf2477da9d233e2387d1ac6addcce62acc4578772c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11638189cf49baf0798a3c7a229b67e05eedf2292d79f884a990a091f21a61c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb28b05d43134d8c4f89d83cd620973c937fd16347910ebf056026f0a3708a92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a49b5118144d900070aa5f3fccea16c73f8e2451f3636fdab388c957d4822e5\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a49b5118144d900070aa5f3fccea16c73f8e2451f3636fdab388c957d4822e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:45:13Z is after 2025-08-24T17:21:41Z" Jan 26 12:45:13 crc kubenswrapper[4844]: I0126 12:45:13.364579 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:13 crc kubenswrapper[4844]: I0126 12:45:13.364645 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:13 crc kubenswrapper[4844]: I0126 12:45:13.364657 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:13 crc kubenswrapper[4844]: I0126 12:45:13.364674 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:13 crc kubenswrapper[4844]: I0126 12:45:13.364687 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:13Z","lastTransitionTime":"2026-01-26T12:45:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:45:13 crc kubenswrapper[4844]: I0126 12:45:13.367534 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:45:13Z is after 2025-08-24T17:21:41Z" Jan 26 12:45:13 crc kubenswrapper[4844]: I0126 12:45:13.385812 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://708e5cab2c0d22cab1f1e5c02d0f8afbdc0fcdccc9a56b663f365bfa4c7f8709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f61077567553b104f052a133a6227c3a887430e971721e7998259e7e7ea7edf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:45:13Z is after 2025-08-24T17:21:41Z" Jan 26 12:45:13 crc kubenswrapper[4844]: I0126 12:45:13.407045 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:45:13Z is after 2025-08-24T17:21:41Z" Jan 26 12:45:13 crc kubenswrapper[4844]: I0126 12:45:13.422669 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aecfc1fc-7b8c-42f4-9a7b-058a0acc9534\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64c5d1e4b1fda825b451c8acde0411694fbf7345723d6c3927905a882a79d62e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e33100e90b0d5a23baf7c27076a19dbca18d3e067be6b5a867e122fac37c1136\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64ae899d3a38090964f3d951fd185537ce4ef75f7512342da01f67767d00417b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9207454bb697abf7b64a37e402b50734ae419605cd0938b514ac4f2a74561ce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b40515c0e8056b01e4d48e259ece9389fbb5db6c0d403f2846f28948ce78b0b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64a2d4054650382dca2f56688356aec5fa475b49958e86dd4597a64f77a82f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64a2d4054650382dca2f56688356aec5fa475b49958e86dd4597a64f77a82f81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:45:13Z is after 2025-08-24T17:21:41Z" Jan 26 12:45:13 crc kubenswrapper[4844]: I0126 12:45:13.436770 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c6cdfcbb27c9a8c515f38e43d4813f25d2d3781a322e424e09ab047259bd0e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:45:13Z is after 2025-08-24T17:21:41Z" Jan 26 12:45:13 crc kubenswrapper[4844]: I0126 12:45:13.447522 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:45:13Z is after 2025-08-24T17:21:41Z" Jan 26 12:45:13 crc kubenswrapper[4844]: I0126 12:45:13.460343 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zb9kx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"467433a4-64be-4a14-beb2-657370e9865f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9be6a90cf1d7f75bb43391968d164c8726b7626d7dc649cd85f10c4d13424ab9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a01598dd996bd69b470e2e4833ea4231bc77f0305a3a7fec3f70b8e2b8f01cb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T12:44:57Z\\\",\\\"message\\\":\\\"2026-01-26T12:44:11+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_3f62d99b-10f3-4489-9ddd-fa2f775e6b8e\\\\n2026-01-26T12:44:11+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_3f62d99b-10f3-4489-9ddd-fa2f775e6b8e to /host/opt/cni/bin/\\\\n2026-01-26T12:44:11Z [verbose] multus-daemon started\\\\n2026-01-26T12:44:11Z [verbose] Readiness Indicator file check\\\\n2026-01-26T12:44:56Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v76sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zb9kx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:45:13Z is after 2025-08-24T17:21:41Z" Jan 26 12:45:13 crc kubenswrapper[4844]: I0126 12:45:13.466752 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:13 crc kubenswrapper[4844]: I0126 12:45:13.466785 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:13 crc kubenswrapper[4844]: I0126 12:45:13.466795 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:13 crc kubenswrapper[4844]: I0126 12:45:13.466819 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:13 crc kubenswrapper[4844]: I0126 12:45:13.466834 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:13Z","lastTransitionTime":"2026-01-26T12:45:13Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:13 crc kubenswrapper[4844]: I0126 12:45:13.470279 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3602fc7-397b-4d73-ab0c-45acc047397b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25282e97c34daf17d2fdac6ba7c074c611ec8df85288f5753bf98a3c1add5afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29xcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c964e0f9d13a855d738b028f8a2bed32fb23f4a05f0f0222a7d24e3222f44b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29xcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-j7r9j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:45:13Z is after 2025-08-24T17:21:41Z" Jan 26 12:45:13 crc kubenswrapper[4844]: I0126 12:45:13.484119 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f6ttt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0ad2def-b040-48db-be8a-19f66df2c0f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a065fe1dc7d374bbe86c5012d0f224285e08e6b38a8eeb9fcdc76d684162934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://489205a3983f27152122db352c7a35ccdb12f14bb7b4c2bf3642f5c99699d745\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://489205a3983f27152122db352c7a35ccdb12f14bb7b4c2bf3642f5c99699d745\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc3c4c5df7dc10f646a7fdcf0d70daccac7a5bfc3ec9772a4b6d9aa4305dcc5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc3c4c5df7dc10f646a7fdcf0d70daccac7a5bfc3ec9772a4b6d9aa4305dcc5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74838e6328e419da6cc94d4f2781197e94cbf74aaf06d88c6d0e39eda967fede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74838e6328e419da6cc94d4f2781197e94cbf74aaf06d88c6d0e39eda967fede\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4821615afe8834b864a0c33ad5733eef0b5247da50e1e945cab1c75f9828aa1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://4821615afe8834b864a0c33ad5733eef0b5247da50e1e945cab1c75f9828aa1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c800e6a3e1513d6bc5ab5a54c8d918f2c81b5a4d2631fc36f5f7a29fcda3d220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c800e6a3e1513d6bc5ab5a54c8d918f2c81b5a4d2631fc36f5f7a29fcda3d220\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://810435fd1610f5f81dc70f7c0d455c11b414c37e9e392dec8eb3b4668c7820d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://810435fd1610f5f81dc70f7c0d455c11b414c37e9e392dec8eb3b4668c7820d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f6ttt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:45:13Z is after 2025-08-24T17:21:41Z" Jan 26 12:45:13 crc kubenswrapper[4844]: I0126 12:45:13.503974 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"348a2956-fe61-43b9-858f-ab9c97a2985b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7af767e8f476993661bb886e421e7dd2fbd11c07865d6e56a8595425fe4265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a97ef8aa10641683ab96943f4c1a54452ee34b28c854dd7902fc075a70e7f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/va
r/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de371086349f06d5762291b0be4ba366d69376f96d55c2c781c414d9acaff745\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2cfd5586a3073d55abd3f41fb92679c4f86ea8caeea2d9a8ca42515277750a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a16b2c8da5ad6d42bd5ce77ca31794858e04a05060c5bb0538dd099b1d5dd6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03f4ffe2ba632bdfb8c45133ef267a29fa660e3aa85dfd99934ded60d32
772f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://726bf5201f734836c4fb01a9d5a0cb8897f5ec3142e9b54a6acfe5a82a14df5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726bf5201f734836c4fb01a9d5a0cb8897f5ec3142e9b54a6acfe5a82a14df5f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T12:44:44Z\\\",\\\"message\\\":\\\"none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.93:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d71b38eb-32af-4c0f-9490-7c317c111e3a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 12:44:43.613387 6501 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nI0126 12:44:43.613343 6501 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-apiserver/apiserver]} name:Service_openshift-kube-apiserver/apiserver_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.93:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d71b38eb-32af-4c0f-9490-7c317c111e3a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 12:44:43.613431 6501 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0126 12:44:43.613493 6501 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-rlvx4_openshift-ovn-kubernetes(348a2956-fe61-43b9-858f-ab9c97a2985b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbe92fbc9ce1d23484a9dca7eddb878b59da1fbe669098a82a798d41ee52e61d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recurs
iveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://370a02984d12e09b879a5f3af8d9fe26b40d438d558e51c2d352676f29198a02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://370a02984d12e09b879a5f3af8d9fe26b40d438d558e51c2d352676f29198a02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rlvx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:45:13Z is after 2025-08-24T17:21:41Z" Jan 26 12:45:13 crc kubenswrapper[4844]: I0126 12:45:13.513535 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7wd9k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"046bb01b-89ef-40e9-bbbd-83b5f2d2cf96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e97a7a3de02627680f94659705ad21fb73a76a1c11f14ebff8ec48ef204ea753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h4zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7wd9k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:45:13Z is after 2025-08-24T17:21:41Z" Jan 26 12:45:13 crc kubenswrapper[4844]: I0126 12:45:13.525276 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd60ce73390532e974f85d610708534f2ab6bcc38e93c54516cb783aca1bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:45:13Z is after 2025-08-24T17:21:41Z" Jan 26 12:45:13 crc kubenswrapper[4844]: I0126 12:45:13.537194 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-94bpf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"14600b66-6352-4f5e-9c09-eb2548503555\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1eb448e6788cdb87cd78364d442623294abe1263dbedbf9d15e8ca77c06ced2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-456bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-94bpf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:45:13Z is after 2025-08-24T17:21:41Z" Jan 26 12:45:13 crc kubenswrapper[4844]: I0126 12:45:13.549890 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5qpr8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"04a4b371-44a9-4805-b60f-6f7ba0fac40b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://916da455e82003f3effd3be11a50a90b25232fc7d11d06285e8902a0a3cfd10e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7ckr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b598bd3381ec5062c126c04857c188ab29afc34c39ec94a2cd95b306cdfd00d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7ckr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5qpr8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:45:13Z is after 2025-08-24T17:21:41Z" Jan 26 
12:45:13 crc kubenswrapper[4844]: I0126 12:45:13.563390 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gxnj7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c69496f6-7f67-4cca-9c9f-420e5567b165\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxt6m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxt6m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gxnj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:45:13Z is after 2025-08-24T17:21:41Z" Jan 26 12:45:13 crc kubenswrapper[4844]: I0126 12:45:13.568883 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:13 crc kubenswrapper[4844]: I0126 12:45:13.568920 4844 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:13 crc kubenswrapper[4844]: I0126 12:45:13.568930 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:13 crc kubenswrapper[4844]: I0126 12:45:13.568946 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:13 crc kubenswrapper[4844]: I0126 12:45:13.568957 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:13Z","lastTransitionTime":"2026-01-26T12:45:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:13 crc kubenswrapper[4844]: I0126 12:45:13.592546 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2c6313f-dd44-4db9-a53a-63c8a42efe6c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b82658e4e6ddbf4e3a16f9d569c5cef6a683d401d79b7fa7f55aad8c70e8254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66ffcb99cad7e5aff2bbe4bdf3c0a2365dd91fb40b2382e05ab1f422c6ea2b26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernet
es/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f49e7dd36dc88342bc2e3bc43fc5770fb43572151945cba3867f9ffd7fa451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b51b34a870f5c4045729aa5ee5d5fb85ebd58fd55a25c5b6effe0ea75ea8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9118772906fa09f9868a34942401b399a842e6718caa3d82b78b2974e1011cca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa0e17b53055bd229c52c53b69e491e0789569e73004101378a34609888bbbc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa0e17b53055bd229c52c53b69e491e0789569e73004101378a34609888bbbc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a99b81e3524dfff792c3e9a274f2460edfe7c29fb27c556fa1c7e427178ccd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a99b81e3524dfff792c3e9a274f2460edfe7c29fb27c556fa1c7e427178ccd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1cd347949870be709220badbf116b4ed8ff9db3962a03361e42f5ff1c61eb5ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cd347949870be709220badbf116b4ed8ff9db3962a03361e42f5ff1c61eb5ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:45:13Z is after 2025-08-24T17:21:41Z" Jan 26 12:45:13 crc kubenswrapper[4844]: I0126 12:45:13.672060 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:13 crc kubenswrapper[4844]: I0126 12:45:13.672105 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:13 crc kubenswrapper[4844]: I0126 12:45:13.672118 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:13 crc kubenswrapper[4844]: I0126 12:45:13.672136 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:13 crc kubenswrapper[4844]: I0126 12:45:13.672149 4844 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:13Z","lastTransitionTime":"2026-01-26T12:45:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:13 crc kubenswrapper[4844]: I0126 12:45:13.773944 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:13 crc kubenswrapper[4844]: I0126 12:45:13.773999 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:13 crc kubenswrapper[4844]: I0126 12:45:13.774010 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:13 crc kubenswrapper[4844]: I0126 12:45:13.774026 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:13 crc kubenswrapper[4844]: I0126 12:45:13.774035 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:13Z","lastTransitionTime":"2026-01-26T12:45:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:13 crc kubenswrapper[4844]: I0126 12:45:13.787169 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rlvx4_348a2956-fe61-43b9-858f-ab9c97a2985b/ovnkube-controller/2.log" Jan 26 12:45:13 crc kubenswrapper[4844]: I0126 12:45:13.795281 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" event={"ID":"348a2956-fe61-43b9-858f-ab9c97a2985b","Type":"ContainerStarted","Data":"564c6d757ba4ae6a21621d0fcfa3e8b5b5ef8cafd31af1ddc27d1826da04e046"} Jan 26 12:45:13 crc kubenswrapper[4844]: I0126 12:45:13.796974 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" Jan 26 12:45:13 crc kubenswrapper[4844]: I0126 12:45:13.808701 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7wd9k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"046bb01b-89ef-40e9-bbbd-83b5f2d2cf96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e97a7a3de02627680f94659705ad21fb73a76a1c11f14ebff8ec48ef204ea753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h4zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7wd9k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:45:13Z is after 2025-08-24T17:21:41Z" Jan 26 12:45:13 crc kubenswrapper[4844]: I0126 12:45:13.822853 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zb9kx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"467433a4-64be-4a14-beb2-657370e9865f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9be6a90cf1d7f75bb43391968d164c8726b7626d7dc649cd85f10c4d13424ab9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a01598dd996bd69b470e2e4833ea4231bc77f0305a3a7fec3f70b8e2b8f01cb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T12:44:57Z\\\",\\\"message\\\":\\\"2026-01-26T12:44:11+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_3f62d99b-10f3-4489-9ddd-fa2f775e6b8e\\\\n2026-01-26T12:44:11+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_3f62d99b-10f3-4489-9ddd-fa2f775e6b8e to /host/opt/cni/bin/\\\\n2026-01-26T12:44:11Z [verbose] multus-daemon started\\\\n2026-01-26T12:44:11Z [verbose] Readiness Indicator file check\\\\n2026-01-26T12:44:56Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v76sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zb9kx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:45:13Z is after 2025-08-24T17:21:41Z" Jan 26 12:45:13 crc kubenswrapper[4844]: I0126 12:45:13.836191 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3602fc7-397b-4d73-ab0c-45acc047397b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25282e97c34daf17d2fdac6ba7c074c611ec8df85288f5753bf98a3c1add5afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29xcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c964e0f9d13a855d738b028f8a2bed32fb23f4a05f0f0222a7d24e3222f44b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29xcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j7r9j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:45:13Z is after 2025-08-24T17:21:41Z" Jan 26 12:45:13 crc kubenswrapper[4844]: I0126 12:45:13.851667 4844 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f6ttt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0ad2def-b040-48db-be8a-19f66df2c0f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a065fe1dc7d374bbe86c5012d0f224285e08e6b38a8eeb9fcdc76d684162934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://489205a3983f27152122db352c7a35ccdb12f14bb7b4c2bf3642f5c99699d745\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://489205a3983f27152122db352c7a35ccdb12f14bb7b4c2bf3642f5c99699d745\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc3c4c5df7dc10f646a7fdcf0d70daccac7a5bfc3ec9772a4b6d9aa4305dcc5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc3c4c5df7dc10f646a7fdcf0d70daccac7a5bfc3ec9772a4b6d9aa4305dcc5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74838e6328e419da6cc94d4f2781197e94cbf74aaf06d88c6d0e39eda967fede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74838e6328e419da6cc94d4f2781197e94cbf74aaf06d88c6d0e39eda967fede\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4821615afe8834b864a0c33ad5733eef0b5247da50e1e945cab1c75f9828aa1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4821615afe8834b864a0c33ad5733eef0b5247da50e1e945cab1c75f9828aa1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c800e6a3e1513d6bc5ab5a54c8d918f2c81b5a4d2631fc36f5f7a29fcda3d220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c800e6a3e1513d6bc5ab5a54c8d918f2c81b5a4d2631fc36f5f7a29fcda3d220\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://810435fd1610f5f81dc70f7c0d455c11b414c37e9e392dec8eb3b4668c7820d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://810435fd1610f5f81dc70f7c0d455c11b414c37e9e392dec8eb3b4668c7820d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f6ttt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:45:13Z is after 2025-08-24T17:21:41Z" Jan 26 12:45:13 crc kubenswrapper[4844]: I0126 12:45:13.871676 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"348a2956-fe61-43b9-858f-ab9c97a2985b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7af767e8f476993661bb886e421e7dd2fbd11c07865d6e56a8595425fe4265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a97ef8aa10641683ab96943f4c1a54452ee34b28c854dd7902fc075a70e7f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de371086349f06d5762291b0be4ba366d69376f96d55c2c781c414d9acaff745\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2cfd5586a3073d55abd3f41fb92679c4f86ea8caeea2d9a8ca42515277750a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a16b2c8da5ad6d42bd5ce77ca31794858e04a05060c5bb0538dd099b1d5dd6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03f4ffe2ba632bdfb8c45133ef267a29fa660e3aa85dfd99934ded60d32772f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://564c6d757ba4ae6a21621d0fcfa3e8b5b5ef8cafd31af1ddc27d1826da04e046\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726bf5201f734836c4fb01a9d5a0cb8897f5ec3142e9b54a6acfe5a82a14df5f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T12:44:44Z\\\",\\\"message\\\":\\\"none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.93:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d71b38eb-32af-4c0f-9490-7c317c111e3a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 12:44:43.613387 6501 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nI0126 12:44:43.613343 6501 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-apiserver/apiserver]} name:Service_openshift-kube-apiserver/apiserver_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.93:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d71b38eb-32af-4c0f-9490-7c317c111e3a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 12:44:43.613431 6501 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0126 12:44:43.613493 6501 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbe92fbc9ce1d23484a9dca7eddb878b59da1fbe669098a82a798d41ee52e61d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://370a02984d12e09b879a5f3af8d9fe26b40d438d558e51c2d352676f29198a02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://370a02984d12e09b879a5f3af8d9fe26b40d438d558e51c2d352676f29198a02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rlvx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:45:13Z is after 2025-08-24T17:21:41Z" Jan 26 12:45:13 crc kubenswrapper[4844]: I0126 12:45:13.875666 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:13 crc kubenswrapper[4844]: I0126 12:45:13.875707 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:13 crc kubenswrapper[4844]: I0126 12:45:13.875717 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:13 crc kubenswrapper[4844]: I0126 12:45:13.875732 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:13 crc kubenswrapper[4844]: I0126 12:45:13.875741 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:13Z","lastTransitionTime":"2026-01-26T12:45:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:45:13 crc kubenswrapper[4844]: I0126 12:45:13.889433 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2c6313f-dd44-4db9-a53a-63c8a42efe6c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b82658e4e6ddbf4e3a16f9d569c5cef6a683d401d79b7fa7f55aad8c70e8254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66ffcb99cad7e5aff2bbe4bdf3c0a2365dd91fb40b2382e05ab1f422c6ea2b26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f49e7dd36dc88342bc2e3bc43fc5770fb43572151945cba3867f9ffd7fa451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b51b34a870f5c4045729aa5ee5d5fb85ebd58fd55a25c5b6effe0ea75ea8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9118772906fa09f9868a34942401b399a842e6718caa3d82b78b2974e1011cca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa0e17b53055bd229c52c53b69e491e0789569e73004101378a34609888bbbc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa0e17b53055bd229c52c53b69e491e0789569e73004101378a34609888bbbc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a99b81e3524dfff792c3e9a274f2460edfe7c29fb27c556fa1c7e427178ccd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a99b81e3524dfff792c3e9a274f2460edfe7c29fb27c556fa1c7e427178ccd\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1cd347949870be709220badbf116b4ed8ff9db3962a03361e42f5ff1c61eb5ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cd347949870be709220badbf116b4ed8ff9db3962a03361e42f5ff1c61eb5ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:45:13Z is after 2025-08-24T17:21:41Z" Jan 26 12:45:13 crc kubenswrapper[4844]: I0126 12:45:13.902898 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd60ce73390532e974f85d610708534f2ab6bcc38e93c54516cb783aca1bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:45:13Z is after 2025-08-24T17:21:41Z" Jan 26 12:45:13 crc kubenswrapper[4844]: I0126 12:45:13.913528 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-94bpf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"14600b66-6352-4f5e-9c09-eb2548503555\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1eb448e6788cdb87cd78364d442623294abe1263dbedbf9d15e8ca77c06ced2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-456bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-94bpf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:45:13Z is after 2025-08-24T17:21:41Z" Jan 26 12:45:13 crc kubenswrapper[4844]: I0126 12:45:13.926935 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5qpr8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"04a4b371-44a9-4805-b60f-6f7ba0fac40b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://916da455e82003f3effd3be11a50a90b25232fc7d11d06285e8902a0a3cfd10e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7ckr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b598bd3381ec5062c126c04857c188ab29afc34c39ec94a2cd95b306cdfd00d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7ckr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5qpr8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:45:13Z is after 2025-08-24T17:21:41Z" Jan 26 
12:45:13 crc kubenswrapper[4844]: I0126 12:45:13.939042 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gxnj7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c69496f6-7f67-4cca-9c9f-420e5567b165\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxt6m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxt6m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gxnj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:45:13Z is after 2025-08-24T17:21:41Z" Jan 26 12:45:13 crc kubenswrapper[4844]: I0126 12:45:13.951075 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:45:13Z is after 2025-08-24T17:21:41Z" Jan 26 12:45:13 crc kubenswrapper[4844]: I0126 12:45:13.963499 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aecfc1fc-7b8c-42f4-9a7b-058a0acc9534\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64c5d1e4b1fda825b451c8acde0411694fbf7345723d6c3927905a882a79d62e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e33100e90b0d5a23baf7c27076a19dbca18d3e067be6b5a867e122fac37c1136\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64ae899d3a38090964f3d951fd185537ce4ef75f7512342da01f67767d00417b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9207454bb697abf7b64a37e402b50734ae419605cd0938b514ac4f2a74561ce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b40515c0e8056b01e4d48e259ece9389fbb5db6c0d403f2846f28948ce78b0b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64a2d4054650382dca2f56688356aec5fa475b49958e86dd4597a64f77a82f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64a2d4054650382dca2f56688356aec5fa475b49958e86dd4597a64f77a82f81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:45:13Z is after 2025-08-24T17:21:41Z" Jan 26 12:45:13 crc kubenswrapper[4844]: I0126 12:45:13.975122 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ea4f5b3-1307-4259-8ee3-1de62370d8ef\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://943771df5941e9c5a48c2d8ee946cc9ac1ee7b00855dfe531fbce9f76450dd36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c79791a1a5ceea7564b6466722f4fb48a6729184724efc0ca2498896def357a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e62359ee71a87d75c73a16de09de5ea48d09e4cf2041f434f38f97002a5f8eee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b1fb5af67b83d9ecc627cbfcc14d011b4b11a5c4652a35c1bd1df6f03ab71f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:45:13Z is after 2025-08-24T17:21:41Z" Jan 26 12:45:13 crc kubenswrapper[4844]: I0126 12:45:13.978128 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:13 crc kubenswrapper[4844]: I0126 12:45:13.978185 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:13 crc kubenswrapper[4844]: I0126 12:45:13.978204 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:13 crc kubenswrapper[4844]: I0126 12:45:13.978227 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:13 crc kubenswrapper[4844]: I0126 12:45:13.978248 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:13Z","lastTransitionTime":"2026-01-26T12:45:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:45:13 crc kubenswrapper[4844]: I0126 12:45:13.985156 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"012dd78d-465b-41aa-b845-5cd178650e56\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0a80313396de8bb91760bdf2477da9d233e2387d1ac6addcce62acc4578772c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11638189cf49baf0798a3c7a229b67e05eedf2292d79f884a990a091f21a61c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb28b05d43134d8c4f89d83cd620973c937fd16347910ebf056026f0a3708a92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a49b5118144d900070aa5f3fccea16c73f8e2451f3636fdab388c957d4822e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a49b5118144d900070aa5f3fccea16c73f8e2451f3636fdab388c957d4822e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:45:13Z is after 2025-08-24T17:21:41Z" Jan 26 12:45:13 crc kubenswrapper[4844]: I0126 12:45:13.995722 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:45:13Z is after 2025-08-24T17:21:41Z" Jan 26 12:45:14 crc kubenswrapper[4844]: I0126 12:45:14.006119 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://708e5cab2c0d22cab1f1e5c02d0f8afbdc0fcdccc9a56b663f365bfa4c7f8709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f61077567553b104f052a133a6227c3a887430e971721e7998259e7e7ea7edf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:45:14Z is after 2025-08-24T17:21:41Z" Jan 26 12:45:14 crc kubenswrapper[4844]: I0126 12:45:14.019498 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:45:14Z is after 2025-08-24T17:21:41Z" Jan 26 12:45:14 crc kubenswrapper[4844]: I0126 12:45:14.029343 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c6cdfcbb27c9a8c515f38e43d4813f25d2d3781a322e424e09ab047259bd0e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:45:14Z is after 2025-08-24T17:21:41Z" Jan 26 12:45:14 crc kubenswrapper[4844]: I0126 12:45:14.081086 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:14 
crc kubenswrapper[4844]: I0126 12:45:14.081171 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:14 crc kubenswrapper[4844]: I0126 12:45:14.081194 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:14 crc kubenswrapper[4844]: I0126 12:45:14.081224 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:14 crc kubenswrapper[4844]: I0126 12:45:14.081249 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:14Z","lastTransitionTime":"2026-01-26T12:45:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:14 crc kubenswrapper[4844]: I0126 12:45:14.185961 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:14 crc kubenswrapper[4844]: I0126 12:45:14.186070 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:14 crc kubenswrapper[4844]: I0126 12:45:14.186091 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:14 crc kubenswrapper[4844]: I0126 12:45:14.186119 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:14 crc kubenswrapper[4844]: I0126 12:45:14.186138 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:14Z","lastTransitionTime":"2026-01-26T12:45:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:14 crc kubenswrapper[4844]: I0126 12:45:14.289013 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:14 crc kubenswrapper[4844]: I0126 12:45:14.289086 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:14 crc kubenswrapper[4844]: I0126 12:45:14.289105 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:14 crc kubenswrapper[4844]: I0126 12:45:14.289130 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:14 crc kubenswrapper[4844]: I0126 12:45:14.289146 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:14Z","lastTransitionTime":"2026-01-26T12:45:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:14 crc kubenswrapper[4844]: I0126 12:45:14.312413 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-gxnj7" Jan 26 12:45:14 crc kubenswrapper[4844]: E0126 12:45:14.312708 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gxnj7" podUID="c69496f6-7f67-4cca-9c9f-420e5567b165" Jan 26 12:45:14 crc kubenswrapper[4844]: I0126 12:45:14.322649 4844 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 14:47:17.898203853 +0000 UTC Jan 26 12:45:14 crc kubenswrapper[4844]: I0126 12:45:14.392027 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:14 crc kubenswrapper[4844]: I0126 12:45:14.392073 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:14 crc kubenswrapper[4844]: I0126 12:45:14.392088 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:14 crc kubenswrapper[4844]: I0126 12:45:14.392104 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:14 crc kubenswrapper[4844]: I0126 12:45:14.392114 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:14Z","lastTransitionTime":"2026-01-26T12:45:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:14 crc kubenswrapper[4844]: I0126 12:45:14.494165 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:14 crc kubenswrapper[4844]: I0126 12:45:14.494220 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:14 crc kubenswrapper[4844]: I0126 12:45:14.494231 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:14 crc kubenswrapper[4844]: I0126 12:45:14.494250 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:14 crc kubenswrapper[4844]: I0126 12:45:14.494267 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:14Z","lastTransitionTime":"2026-01-26T12:45:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:45:14 crc kubenswrapper[4844]: I0126 12:45:14.597470 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:14 crc kubenswrapper[4844]: I0126 12:45:14.597546 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:14 crc kubenswrapper[4844]: I0126 12:45:14.597560 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:14 crc kubenswrapper[4844]: I0126 12:45:14.597630 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:14 crc kubenswrapper[4844]: I0126 12:45:14.597644 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:14Z","lastTransitionTime":"2026-01-26T12:45:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:14 crc kubenswrapper[4844]: I0126 12:45:14.700293 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:14 crc kubenswrapper[4844]: I0126 12:45:14.700339 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:14 crc kubenswrapper[4844]: I0126 12:45:14.700347 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:14 crc kubenswrapper[4844]: I0126 12:45:14.700360 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:14 crc kubenswrapper[4844]: I0126 12:45:14.700370 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:14Z","lastTransitionTime":"2026-01-26T12:45:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:45:14 crc kubenswrapper[4844]: I0126 12:45:14.800733 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rlvx4_348a2956-fe61-43b9-858f-ab9c97a2985b/ovnkube-controller/3.log" Jan 26 12:45:14 crc kubenswrapper[4844]: I0126 12:45:14.801720 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rlvx4_348a2956-fe61-43b9-858f-ab9c97a2985b/ovnkube-controller/2.log" Jan 26 12:45:14 crc kubenswrapper[4844]: I0126 12:45:14.802628 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:14 crc kubenswrapper[4844]: I0126 12:45:14.802670 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:14 crc kubenswrapper[4844]: I0126 12:45:14.802682 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:14 crc kubenswrapper[4844]: I0126 12:45:14.802700 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:14 crc kubenswrapper[4844]: I0126 12:45:14.802712 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:14Z","lastTransitionTime":"2026-01-26T12:45:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:14 crc kubenswrapper[4844]: I0126 12:45:14.805037 4844 generic.go:334] "Generic (PLEG): container finished" podID="348a2956-fe61-43b9-858f-ab9c97a2985b" containerID="564c6d757ba4ae6a21621d0fcfa3e8b5b5ef8cafd31af1ddc27d1826da04e046" exitCode=1 Jan 26 12:45:14 crc kubenswrapper[4844]: I0126 12:45:14.805090 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" event={"ID":"348a2956-fe61-43b9-858f-ab9c97a2985b","Type":"ContainerDied","Data":"564c6d757ba4ae6a21621d0fcfa3e8b5b5ef8cafd31af1ddc27d1826da04e046"} Jan 26 12:45:14 crc kubenswrapper[4844]: I0126 12:45:14.805127 4844 scope.go:117] "RemoveContainer" containerID="726bf5201f734836c4fb01a9d5a0cb8897f5ec3142e9b54a6acfe5a82a14df5f" Jan 26 12:45:14 crc kubenswrapper[4844]: I0126 12:45:14.806498 4844 scope.go:117] "RemoveContainer" containerID="564c6d757ba4ae6a21621d0fcfa3e8b5b5ef8cafd31af1ddc27d1826da04e046" Jan 26 12:45:14 crc kubenswrapper[4844]: E0126 12:45:14.806852 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-rlvx4_openshift-ovn-kubernetes(348a2956-fe61-43b9-858f-ab9c97a2985b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" podUID="348a2956-fe61-43b9-858f-ab9c97a2985b" Jan 26 12:45:14 crc kubenswrapper[4844]: I0126 12:45:14.826504 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:45:14Z is after 2025-08-24T17:21:41Z" Jan 26 12:45:14 crc kubenswrapper[4844]: I0126 12:45:14.841287 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c6cdfcbb27c9a8c515f38e43d4813f25d2d3781a322e424e09ab047259bd0e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:45:14Z is after 2025-08-24T17:21:41Z" Jan 26 12:45:14 crc kubenswrapper[4844]: I0126 12:45:14.857918 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zb9kx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"467433a4-64be-4a14-beb2-657370e9865f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9be6a90cf1d7f75bb43391968d164c8726b7626d7dc649cd85f10c4d13424ab9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a01598dd996bd69b470e2e4833ea4231bc77f0305a3a7fec3f70b8e2b8f01cb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T12:44:57Z\\\",\\\"message\\\":\\\"2026-01-26T12:44:11+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_3f62d99b-10f3-4489-9ddd-fa2f775e6b8e\\\\n2026-01-26T12:44:11+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_3f62d99b-10f3-4489-9ddd-fa2f775e6b8e to /host/opt/cni/bin/\\\\n2026-01-26T12:44:11Z [verbose] multus-daemon started\\\\n2026-01-26T12:44:11Z [verbose] Readiness Indicator file check\\\\n2026-01-26T12:44:56Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v76sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zb9kx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:45:14Z is after 2025-08-24T17:21:41Z" Jan 26 12:45:14 crc kubenswrapper[4844]: I0126 12:45:14.870796 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3602fc7-397b-4d73-ab0c-45acc047397b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25282e97c34daf17d2fdac6ba7c074c611ec8df85288f5753bf98a3c1add5afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29xcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c964e0f9d13a855d738b028f8a2bed32fb23f4a05f0f0222a7d24e3222f44b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29xcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j7r9j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:45:14Z is after 2025-08-24T17:21:41Z" Jan 26 12:45:14 crc kubenswrapper[4844]: I0126 12:45:14.889568 4844 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f6ttt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0ad2def-b040-48db-be8a-19f66df2c0f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a065fe1dc7d374bbe86c5012d0f224285e08e6b38a8eeb9fcdc76d684162934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://489205a3983f27152122db352c7a35ccdb12f14bb7b4c2bf3642f5c99699d745\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://489205a3983f27152122db352c7a35ccdb12f14bb7b4c2bf3642f5c99699d745\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc3c4c5df7dc10f646a7fdcf0d70daccac7a5bfc3ec9772a4b6d9aa4305dcc5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc3c4c5df7dc10f646a7fdcf0d70daccac7a5bfc3ec9772a4b6d9aa4305dcc5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74838e6328e419da6cc94d4f2781197e94cbf74aaf06d88c6d0e39eda967fede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74838e6328e419da6cc94d4f2781197e94cbf74aaf06d88c6d0e39eda967fede\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4821615afe8834b864a0c33ad5733eef0b5247da50e1e945cab1c75f9828aa1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4821615afe8834b864a0c33ad5733eef0b5247da50e1e945cab1c75f9828aa1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c800e6a3e1513d6bc5ab5a54c8d918f2c81b5a4d2631fc36f5f7a29fcda3d220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c800e6a3e1513d6bc5ab5a54c8d918f2c81b5a4d2631fc36f5f7a29fcda3d220\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://810435fd1610f5f81dc70f7c0d455c11b414c37e9e392dec8eb3b4668c7820d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://810435fd1610f5f81dc70f7c0d455c11b414c37e9e392dec8eb3b4668c7820d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f6ttt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:45:14Z is after 2025-08-24T17:21:41Z" Jan 26 12:45:14 crc kubenswrapper[4844]: I0126 12:45:14.905432 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:14 crc kubenswrapper[4844]: I0126 12:45:14.905470 4844 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:14 crc kubenswrapper[4844]: I0126 12:45:14.905483 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:14 crc kubenswrapper[4844]: I0126 12:45:14.905497 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:14 crc kubenswrapper[4844]: I0126 12:45:14.905507 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:14Z","lastTransitionTime":"2026-01-26T12:45:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:14 crc kubenswrapper[4844]: I0126 12:45:14.912839 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"348a2956-fe61-43b9-858f-ab9c97a2985b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7af767e8f476993661bb886e421e7dd2fbd11c07865d6e56a8595425fe4265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a97ef8aa10641683ab96943f4c1a54452ee34b28c854dd7902fc075a70e7f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de371086349f06d5762291b0be4ba366d69376f96d55c2c781c414d9acaff745\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2cfd5586a3073d55abd3f41fb92679c4f86ea8caeea2d9a8ca42515277750a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a16b2c8da5ad6d42bd5ce77ca31794858e04a05060c5bb0538dd099b1d5dd6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03f4ffe2ba632bdfb8c45133ef267a29fa660e3aa85dfd99934ded60d32772f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://564c6d757ba4ae6a21621d0fcfa3e8b5b5ef8caf
d31af1ddc27d1826da04e046\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726bf5201f734836c4fb01a9d5a0cb8897f5ec3142e9b54a6acfe5a82a14df5f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T12:44:44Z\\\",\\\"message\\\":\\\"none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.93:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d71b38eb-32af-4c0f-9490-7c317c111e3a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 12:44:43.613387 6501 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nI0126 12:44:43.613343 6501 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-apiserver/apiserver]} name:Service_openshift-kube-apiserver/apiserver_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.93:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d71b38eb-32af-4c0f-9490-7c317c111e3a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 12:44:43.613431 6501 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0126 12:44:43.613493 6501 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://564c6d757ba4ae6a21621d0fcfa3e8b5b5ef8cafd31af1ddc27d1826da04e046\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T12:45:14Z\\\",\\\"message\\\":\\\"icePort{Name:metrics,Protocol:TCP,Port:9001,TargetPort:{0 9001 },NodePort:0,AppProtocol:nil,},ServicePort{Name:health,Protocol:TCP,Port:8798,TargetPort:{0 8798 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: machine-config-daemon,},ClusterIP:10.217.4.43,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.43],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0126 12:45:13.850442 6798 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-node-rlvx4 in node crc\\\\nF0126 12:45:13.850442 6798 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin 
network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: fai\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T12:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbe92fbc9ce1d23484a9dca7eddb878b59da1fbe669098a82a798d41ee52e61d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",
\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://370a02984d12e09b879a5f3af8d9fe26b40d438d558e51c2d352676f29198a02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://370a02984d12e09b879a5f3af8d9fe26b40d438d558e51c2d352676f29198a02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rlvx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:45:14Z is after 2025-08-24T17:21:41Z" Jan 26 12:45:14 crc kubenswrapper[4844]: I0126 12:45:14.924379 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7wd9k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"046bb01b-89ef-40e9-bbbd-83b5f2d2cf96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e97a7a3de02627680f94659705ad21fb73a76a1c11f14ebff8ec48ef204ea753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h4zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7wd9k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:45:14Z is after 2025-08-24T17:21:41Z" Jan 26 12:45:14 crc kubenswrapper[4844]: I0126 12:45:14.948410 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2c6313f-dd44-4db9-a53a-63c8a42efe6c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b82658e4e6ddbf4e3a16f9d569c5cef6a683d401d79b7fa7f55aad8c70e8254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66ffcb99cad7e5aff2bbe4bdf3c0a2365dd91fb40b2382e05ab1f422c6ea2b26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f49e7dd36dc88342bc2e3bc43fc5770fb43572151945cba3867f9ffd7fa451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b51b34a870f5c4045729aa5ee5d5fb85ebd58
fd55a25c5b6effe0ea75ea8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9118772906fa09f9868a34942401b399a842e6718caa3d82b78b2974e1011cca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa0e17b53055bd229c52c53b69e491e0789569e73004101378a34609888bbbc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa0e17b53055bd229c52c53b69e491e0789569e73004101378a34609888bbbc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a99b81e3524dfff792c3e9a274f2460edfe7c29fb27c556fa1c7e427178ccd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a99b81e3524dfff792c3e9a274f2460edfe7c29fb27c556fa1c7e427178ccd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1cd347949870be709220badbf116b4ed8ff9db3962a03361e42f5ff1c61eb5ea\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cd347949870be709220badbf116b4ed8ff9db3962a03361e42f5ff1c61eb5ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:45:14Z is after 2025-08-24T17:21:41Z" Jan 26 12:45:14 crc kubenswrapper[4844]: I0126 12:45:14.963870 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd60ce73390532e974f85d610708534f2ab6bcc38e93c54516cb783aca1bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:45:14Z is after 2025-08-24T17:21:41Z" Jan 26 12:45:14 crc kubenswrapper[4844]: I0126 12:45:14.981122 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-94bpf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14600b66-6352-4f5e-9c09-eb2548503555\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1eb448e6788cdb87cd78364d442623294abe1263dbedbf9d15e8ca77c06ced2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-456bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-94bpf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:45:14Z is after 2025-08-24T17:21:41Z" Jan 26 12:45:14 crc kubenswrapper[4844]: I0126 12:45:14.995755 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5qpr8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"04a4b371-44a9-4805-b60f-6f7ba0fac40b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://916da455e82003f3effd3be11a50a90b25232fc7d11d06285e8902a0a3cfd10e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7ckr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b598bd3381ec5062c126c04857c188ab29afc34c39ec94a2cd95b306cdfd00d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7ckr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5qpr8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:45:14Z is after 2025-08-24T17:21:41Z" Jan 26 
12:45:15 crc kubenswrapper[4844]: I0126 12:45:15.008769 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:15 crc kubenswrapper[4844]: I0126 12:45:15.008811 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:15 crc kubenswrapper[4844]: I0126 12:45:15.008852 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:15 crc kubenswrapper[4844]: I0126 12:45:15.008870 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:15 crc kubenswrapper[4844]: I0126 12:45:15.008884 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:15Z","lastTransitionTime":"2026-01-26T12:45:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:15 crc kubenswrapper[4844]: I0126 12:45:15.009934 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gxnj7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c69496f6-7f67-4cca-9c9f-420e5567b165\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxt6m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxt6m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gxnj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:45:15Z is after 2025-08-24T17:21:41Z" Jan 26 12:45:15 crc kubenswrapper[4844]: I0126 12:45:15.021168 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aecfc1fc-7b8c-42f4-9a7b-058a0acc9534\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64c5d1e4b1fda825b451c8acde0411694fbf7345723d6c3927905a882a79d62e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e33100e90b0d5a23baf7c27076a19dbca18d3e067be6b5a867e122fac37c1136\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64ae899d3a38090964f3d951fd185537ce4ef75f7512342da01f67767d00417b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9207454bb697abf7b64a37e402b50734ae419605cd0938b514ac4f2a74561ce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b40515c0e8056b01e4d48e259ece9389fbb5db6c0d403f2846f28948ce78b0b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64a2d4054650382dca2f56688356aec5fa475b49958e86dd4597a64f77a82f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64a2d4054650382dca2f56688356aec5fa475b49958e86dd4597a64f77a82f81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:45:15Z is after 2025-08-24T17:21:41Z" Jan 26 12:45:15 crc kubenswrapper[4844]: I0126 12:45:15.036037 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ea4f5b3-1307-4259-8ee3-1de62370d8ef\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://943771df5941e9c5a48c2d8ee946cc9ac1ee7b00855dfe531fbce9f76450dd36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c79791a1a5ceea7564b6466722f4fb48a6729184724efc0ca2498896def357a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e62359ee71a87d75c73a16de09de5ea48d09e4cf2041f434f38f97002a5f8eee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b1fb5af67b83d9ecc627cbfcc14d011b4b11a5c4652a35c1bd1df6f03ab71f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:45:15Z is after 2025-08-24T17:21:41Z" Jan 26 12:45:15 crc kubenswrapper[4844]: I0126 12:45:15.047187 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"012dd78d-465b-41aa-b845-5cd178650e56\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0a80313396de8bb91760bdf2477da9d233e2387d1ac6addcce62acc4578772c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11638189cf49baf0798a3c7a229b67e05eedf2292d79f884a990a091f21a61c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb28b05d43134d8c4f89d83cd620973c937fd16347910ebf056026f0a3708a92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a49b5118144d900070aa5f3fccea16c73f8e2451f3636fdab388c957d4822e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a49b5118144d900070aa5f3fccea16c73f8e2451f3636fdab388c957d4822e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:45:15Z is after 2025-08-24T17:21:41Z" Jan 26 12:45:15 crc kubenswrapper[4844]: I0126 12:45:15.057398 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:45:15Z is after 2025-08-24T17:21:41Z" Jan 26 12:45:15 crc kubenswrapper[4844]: I0126 12:45:15.068949 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://708e5cab2c0d22cab1f1e5c02d0f8afbdc0fcdccc9a56b663f365bfa4c7f8709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f61077567553b104f052a133a6227c3a887430e971721e7998259e7e7ea7edf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:45:15Z is after 2025-08-24T17:21:41Z" Jan 26 12:45:15 crc kubenswrapper[4844]: I0126 12:45:15.080980 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:45:15Z is after 2025-08-24T17:21:41Z" Jan 26 12:45:15 crc kubenswrapper[4844]: I0126 12:45:15.110971 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:15 crc kubenswrapper[4844]: I0126 12:45:15.111004 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:15 crc kubenswrapper[4844]: I0126 12:45:15.111015 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:15 crc kubenswrapper[4844]: I0126 12:45:15.111031 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:15 crc kubenswrapper[4844]: I0126 12:45:15.111043 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:15Z","lastTransitionTime":"2026-01-26T12:45:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:45:15 crc kubenswrapper[4844]: I0126 12:45:15.213337 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:15 crc kubenswrapper[4844]: I0126 12:45:15.213381 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:15 crc kubenswrapper[4844]: I0126 12:45:15.213393 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:15 crc kubenswrapper[4844]: I0126 12:45:15.213410 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:15 crc kubenswrapper[4844]: I0126 12:45:15.213423 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:15Z","lastTransitionTime":"2026-01-26T12:45:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:15 crc kubenswrapper[4844]: I0126 12:45:15.312213 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:45:15 crc kubenswrapper[4844]: I0126 12:45:15.312274 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:45:15 crc kubenswrapper[4844]: E0126 12:45:15.312380 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 12:45:15 crc kubenswrapper[4844]: E0126 12:45:15.312651 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 12:45:15 crc kubenswrapper[4844]: I0126 12:45:15.312869 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:45:15 crc kubenswrapper[4844]: E0126 12:45:15.312994 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 12:45:15 crc kubenswrapper[4844]: I0126 12:45:15.315661 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:15 crc kubenswrapper[4844]: I0126 12:45:15.315734 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:15 crc kubenswrapper[4844]: I0126 12:45:15.315758 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:15 crc kubenswrapper[4844]: I0126 12:45:15.315786 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:15 crc kubenswrapper[4844]: I0126 12:45:15.315810 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:15Z","lastTransitionTime":"2026-01-26T12:45:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:15 crc kubenswrapper[4844]: I0126 12:45:15.322777 4844 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 16:37:39.183534591 +0000 UTC Jan 26 12:45:15 crc kubenswrapper[4844]: I0126 12:45:15.326163 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 26 12:45:15 crc kubenswrapper[4844]: I0126 12:45:15.418889 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:15 crc kubenswrapper[4844]: I0126 12:45:15.418963 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:15 crc kubenswrapper[4844]: I0126 12:45:15.418976 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:15 crc kubenswrapper[4844]: I0126 12:45:15.419060 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:15 crc kubenswrapper[4844]: I0126 12:45:15.419080 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:15Z","lastTransitionTime":"2026-01-26T12:45:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:45:15 crc kubenswrapper[4844]: I0126 12:45:15.522110 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:15 crc kubenswrapper[4844]: I0126 12:45:15.522169 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:15 crc kubenswrapper[4844]: I0126 12:45:15.522186 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:15 crc kubenswrapper[4844]: I0126 12:45:15.522209 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:15 crc kubenswrapper[4844]: I0126 12:45:15.522225 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:15Z","lastTransitionTime":"2026-01-26T12:45:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:15 crc kubenswrapper[4844]: I0126 12:45:15.624906 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:15 crc kubenswrapper[4844]: I0126 12:45:15.625003 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:15 crc kubenswrapper[4844]: I0126 12:45:15.625016 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:15 crc kubenswrapper[4844]: I0126 12:45:15.625036 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:15 crc kubenswrapper[4844]: I0126 12:45:15.625049 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:15Z","lastTransitionTime":"2026-01-26T12:45:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:15 crc kubenswrapper[4844]: I0126 12:45:15.727540 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:15 crc kubenswrapper[4844]: I0126 12:45:15.727903 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:15 crc kubenswrapper[4844]: I0126 12:45:15.728006 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:15 crc kubenswrapper[4844]: I0126 12:45:15.728097 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:15 crc kubenswrapper[4844]: I0126 12:45:15.728182 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:15Z","lastTransitionTime":"2026-01-26T12:45:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:45:15 crc kubenswrapper[4844]: I0126 12:45:15.808874 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rlvx4_348a2956-fe61-43b9-858f-ab9c97a2985b/ovnkube-controller/3.log" Jan 26 12:45:15 crc kubenswrapper[4844]: I0126 12:45:15.812400 4844 scope.go:117] "RemoveContainer" containerID="564c6d757ba4ae6a21621d0fcfa3e8b5b5ef8cafd31af1ddc27d1826da04e046" Jan 26 12:45:15 crc kubenswrapper[4844]: E0126 12:45:15.812541 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-rlvx4_openshift-ovn-kubernetes(348a2956-fe61-43b9-858f-ab9c97a2985b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" podUID="348a2956-fe61-43b9-858f-ab9c97a2985b" Jan 26 12:45:15 crc kubenswrapper[4844]: I0126 12:45:15.825783 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"012dd78d-465b-41aa-b845-5cd178650e56\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0a80313396de8bb91760bdf2477da9d233e2387d1ac6addcce62acc4578772c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11638189cf49baf0798a3c7a229b67e05eedf2292d79f884a990a091f21a61c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuberne
tes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb28b05d43134d8c4f89d83cd620973c937fd16347910ebf056026f0a3708a92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a49b5118144d900070aa5f3fccea16c73f8e2451f3636fdab388c957d4822e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a49b5118144d900070aa5f3fccea16c73f8e2451f3636fdab388c957d4822e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:45:15Z is after 2025-08-24T17:21:41Z" Jan 26 12:45:15 crc kubenswrapper[4844]: I0126 12:45:15.830616 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:15 crc kubenswrapper[4844]: I0126 12:45:15.830665 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:15 crc kubenswrapper[4844]: I0126 12:45:15.830676 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:15 crc kubenswrapper[4844]: I0126 12:45:15.830692 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:15 crc kubenswrapper[4844]: I0126 12:45:15.830703 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:15Z","lastTransitionTime":"2026-01-26T12:45:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:45:15 crc kubenswrapper[4844]: I0126 12:45:15.837498 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:45:15Z is after 2025-08-24T17:21:41Z" Jan 26 12:45:15 crc kubenswrapper[4844]: I0126 12:45:15.851006 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://708e5cab2c0d22cab1f1e5c02d0f8afbdc0fcdccc9a56b663f365bfa4c7f8709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f61077567553b104f052a133a6227c3a887430e971721e7998259e7e7ea7edf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:45:15Z is after 2025-08-24T17:21:41Z" Jan 26 12:45:15 crc kubenswrapper[4844]: I0126 12:45:15.863471 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:45:15Z is after 2025-08-24T17:21:41Z" Jan 26 12:45:15 crc kubenswrapper[4844]: I0126 12:45:15.876649 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aecfc1fc-7b8c-42f4-9a7b-058a0acc9534\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64c5d1e4b1fda825b451c8acde0411694fbf7345723d6c3927905a882a79d62e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e33100e90b0d5a23baf7c27076a19dbca18d3e067be6b5a867e122fac37c1136\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64ae899d3a38090964f3d951fd185537ce4ef75f7512342da01f67767d00417b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9207454bb697abf7b64a37e402b50734ae419605cd0938b514ac4f2a74561ce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b40515c0e8056b01e4d48e259ece9389fbb5db6c0d403f2846f28948ce78b0b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64a2d4054650382dca2f56688356aec5fa475b49958e86dd4597a64f77a82f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64a2d4054650382dca2f56688356aec5fa475b49958e86dd4597a64f77a82f81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:45:15Z is after 2025-08-24T17:21:41Z" Jan 26 12:45:15 crc kubenswrapper[4844]: I0126 12:45:15.886532 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ea4f5b3-1307-4259-8ee3-1de62370d8ef\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://943771df5941e9c5a48c2d8ee946cc9ac1ee7b00855dfe531fbce9f76450dd36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c79791a1a5ceea7564b6466722f4fb48a6729184724efc0ca2498896def357a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e62359ee71a87d75c73a16de09de5ea48d09e4cf2041f434f38f97002a5f8eee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b1fb5af67b83d9ecc627cbfcc14d011b4b11a5c4652a35c1bd1df6f03ab71f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:45:15Z is after 2025-08-24T17:21:41Z" Jan 26 12:45:15 crc kubenswrapper[4844]: I0126 12:45:15.897854 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:45:15Z is after 2025-08-24T17:21:41Z" Jan 26 12:45:15 crc kubenswrapper[4844]: I0126 12:45:15.908385 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c6cdfcbb27c9a8c515f38e43d4813f25d2d3781a322e424e09ab047259bd0e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:45:15Z is after 2025-08-24T17:21:41Z" Jan 26 12:45:15 crc kubenswrapper[4844]: I0126 12:45:15.918875 4844 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3602fc7-397b-4d73-ab0c-45acc047397b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25282e97c34daf17d2fdac6ba7c074c611ec8df85288f5753bf98a3c1add5afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29xcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c964e0f9d13a855d738b028f8a2bed32fb23f4a05f0f0222a7d24e3222f44b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29xcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j7r9j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:45:15Z is after 2025-08-24T17:21:41Z" Jan 26 
12:45:15 crc kubenswrapper[4844]: I0126 12:45:15.933320 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:15 crc kubenswrapper[4844]: I0126 12:45:15.933355 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:15 crc kubenswrapper[4844]: I0126 12:45:15.933366 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:15 crc kubenswrapper[4844]: I0126 12:45:15.933379 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:15 crc kubenswrapper[4844]: I0126 12:45:15.933391 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:15Z","lastTransitionTime":"2026-01-26T12:45:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:15 crc kubenswrapper[4844]: I0126 12:45:15.936895 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f6ttt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0ad2def-b040-48db-be8a-19f66df2c0f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a065fe1dc7d374bbe86c5012d0f224285e08e6b38a8eeb9fcdc76d684162934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://489205a3983f27152122db352c7a35ccdb12f14bb7b4c2bf3642f5c99699d745\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://489205a3983f27152122db352c7a35ccdb12f14bb7b4c2bf3642f5c99699d745\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc3c4c5df7dc10f646a7fdcf0d70daccac7a5bfc3ec9772a4b6d9aa4305dcc5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc3c4c5df7dc10f646a7fdcf0d70daccac7a5bfc3ec9772a4b6d9aa4305dcc5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74838e6328e419da6cc94d4f2781197e94cbf74aaf06d88c6d0e39eda967fede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74838e6328e419da6cc94d4f2781197e94cbf74aaf06d88c6d0e39eda967fede\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readO
nly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4821615afe8834b864a0c33ad5733eef0b5247da50e1e945cab1c75f9828aa1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4821615afe8834b864a0c33ad5733eef0b5247da50e1e945cab1c75f9828aa1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c800e6a3e1513d6bc5ab5a54c8d918f2c81b5a4d2631fc36f5f7a29fcda3d220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c800e6a3e1513d6bc5ab5a54c8d918f2c81b5a4d2631fc36f5f7a29fcda3d220\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://810435fd1610f5f81dc70f7c0d455c11b414c37e9e392dec8eb3b4668c7820d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://810435fd1610f5f81dc70f7c0d455c11b414c37e9e392dec8eb3b4668c7820d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:15Z\\\",\
\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f6ttt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:45:15Z is after 2025-08-24T17:21:41Z" Jan 26 12:45:15 crc kubenswrapper[4844]: I0126 12:45:15.952445 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"348a2956-fe61-43b9-858f-ab9c97a2985b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7af767e8f476993661bb886e421e7dd2fbd11c07865d6e56a8595425fe4265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a97ef8aa10641683ab96943f4c1a54452ee34b28c854dd7902fc075a70e7f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de371086349f06d5762291b0be4ba366d69376f96d55c2c781c414d9acaff745\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2cfd5586a3073d55abd3f41fb92679c4f86ea8caeea2d9a8ca42515277750a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a16b2c8da5ad6d42bd5ce77ca31794858e04a05060c5bb0538dd099b1d5dd6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03f4ffe2ba632bdfb8c45133ef267a29fa660e3aa85dfd99934ded60d32772f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://564c6d757ba4ae6a21621d0fcfa3e8b5b5ef8caf
d31af1ddc27d1826da04e046\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://564c6d757ba4ae6a21621d0fcfa3e8b5b5ef8cafd31af1ddc27d1826da04e046\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T12:45:14Z\\\",\\\"message\\\":\\\"icePort{Name:metrics,Protocol:TCP,Port:9001,TargetPort:{0 9001 },NodePort:0,AppProtocol:nil,},ServicePort{Name:health,Protocol:TCP,Port:8798,TargetPort:{0 8798 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: machine-config-daemon,},ClusterIP:10.217.4.43,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.43],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0126 12:45:13.850442 6798 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-node-rlvx4 in node crc\\\\nF0126 12:45:13.850442 6798 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: fai\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T12:45:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-rlvx4_openshift-ovn-kubernetes(348a2956-fe61-43b9-858f-ab9c97a2985b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbe92fbc9ce1d23484a9dca7eddb878b59da1fbe669098a82a798d41ee52e61d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://370a02984d12e09b879a5f3af8d9fe26b40d438d558e51c2d352676f29198a02\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://370a02984d12e09b879a5f3af8d9fe26b40d438d558e51c2d352676f29198a02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rlvx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:45:15Z is after 2025-08-24T17:21:41Z" Jan 26 12:45:15 crc kubenswrapper[4844]: I0126 12:45:15.961530 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7wd9k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"046bb01b-89ef-40e9-bbbd-83b5f2d2cf96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e97a7a3de02627680f94659705ad21fb73a76a1c11f14ebff8ec48ef204ea753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h4zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7wd9k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:45:15Z is after 2025-08-24T17:21:41Z" Jan 26 12:45:15 crc kubenswrapper[4844]: I0126 12:45:15.971009 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6466f12c-90ec-431e-aa05-adb2a00d96c0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b15b21f6c49117b7ab33013296dbf71ea8dd0556818a8a4da0a48fcdcbf9094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79c1bc1eecc04502fc5b42134d0ce860de5998e1ea84234bc1720b18c9507786\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79c1bc1eecc04502fc5b42134d0ce860de5998e1ea84234bc1720b18c9507786\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Runni
ng\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:45:15Z is after 2025-08-24T17:21:41Z" Jan 26 12:45:15 crc kubenswrapper[4844]: I0126 12:45:15.981547 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zb9kx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"467433a4-64be-4a14-beb2-657370e9865f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9be6a90cf1d7f75bb43391968d164c8726b7626d7dc649cd85f10c4d13424ab9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a01598dd996bd69b470e2e4833ea4231bc77f0305a3a7fec3f70b8e2b8f01cb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T12:44:57Z\\\",\\\"message\\\":\\\"2026-01-26T12:44:11+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_3f62d99b-10f3-4489-9ddd-fa2f775e6b8e\\\\n2026-01-26T12:44:11+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_3f62d99b-10f3-4489-9ddd-fa2f775e6b8e to /host/opt/cni/bin/\\\\n2026-01-26T12:44:11Z [verbose] multus-daemon started\\\\n2026-01-26T12:44:11Z [verbose] Readiness Indicator file check\\\\n2026-01-26T12:44:56Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v76sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zb9kx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:45:15Z is after 2025-08-24T17:21:41Z" Jan 26 12:45:15 crc kubenswrapper[4844]: I0126 12:45:15.991328 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-94bpf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"14600b66-6352-4f5e-9c09-eb2548503555\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1eb448e6788cdb87cd78364d442623294abe1263dbedbf9d15e8ca77c06ced2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-456bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-94bpf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:45:15Z is after 2025-08-24T17:21:41Z" Jan 26 12:45:16 crc kubenswrapper[4844]: I0126 12:45:16.001647 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5qpr8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"04a4b371-44a9-4805-b60f-6f7ba0fac40b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://916da455e82003f3effd3be11a50a90b25232fc7d11d06285e8902a0a3cfd10e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7ckr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b598bd3381ec5062c126c04857c188ab29afc34c39ec94a2cd95b306cdfd00d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7ckr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5qpr8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:45:16Z is after 2025-08-24T17:21:41Z" Jan 26 
12:45:16 crc kubenswrapper[4844]: I0126 12:45:16.011399 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gxnj7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c69496f6-7f67-4cca-9c9f-420e5567b165\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxt6m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxt6m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gxnj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:45:16Z is after 2025-08-24T17:21:41Z" Jan 26 12:45:16 crc kubenswrapper[4844]: I0126 12:45:16.033781 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2c6313f-dd44-4db9-a53a-63c8a42efe6c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b82658e4e6ddbf4e3a16f9d569c5cef6a683d401d79b7fa7f55aad8c70e8254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66ffcb99cad7e5aff2bbe4bdf3c0a2365dd91fb40b2382e05ab1f422c6ea2b26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f49e7dd36dc88342bc2e3bc43fc5770fb43572151945cba3867f9ffd7fa451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b51b34a870f5c4045729aa5ee5d5fb85ebd58
fd55a25c5b6effe0ea75ea8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9118772906fa09f9868a34942401b399a842e6718caa3d82b78b2974e1011cca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa0e17b53055bd229c52c53b69e491e0789569e73004101378a34609888bbbc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa0e17b53055bd229c52c53b69e491e0789569e73004101378a34609888bbbc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a99b81e3524dfff792c3e9a274f2460edfe7c29fb27c556fa1c7e427178ccd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a99b81e3524dfff792c3e9a274f2460edfe7c29fb27c556fa1c7e427178ccd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1cd347949870be709220badbf116b4ed8ff9db3962a03361e42f5ff1c61eb5ea\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cd347949870be709220badbf116b4ed8ff9db3962a03361e42f5ff1c61eb5ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:45:16Z is after 2025-08-24T17:21:41Z" Jan 26 12:45:16 crc kubenswrapper[4844]: I0126 12:45:16.035367 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:16 crc kubenswrapper[4844]: I0126 12:45:16.035451 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:16 crc kubenswrapper[4844]: I0126 12:45:16.035468 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:16 crc kubenswrapper[4844]: I0126 12:45:16.035493 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:16 crc kubenswrapper[4844]: I0126 12:45:16.035509 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:16Z","lastTransitionTime":"2026-01-26T12:45:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:45:16 crc kubenswrapper[4844]: I0126 12:45:16.048500 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd60ce73390532e974f85d610708534f2ab6bcc38e93c54516cb783aca1bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:45:16Z is after 2025-08-24T17:21:41Z" Jan 26 12:45:16 crc kubenswrapper[4844]: I0126 12:45:16.138792 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:16 crc kubenswrapper[4844]: I0126 12:45:16.138840 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:16 crc kubenswrapper[4844]: I0126 12:45:16.138864 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:16 crc kubenswrapper[4844]: I0126 12:45:16.138887 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:16 crc kubenswrapper[4844]: I0126 12:45:16.138903 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:16Z","lastTransitionTime":"2026-01-26T12:45:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:45:16 crc kubenswrapper[4844]: I0126 12:45:16.241539 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:16 crc kubenswrapper[4844]: I0126 12:45:16.241638 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:16 crc kubenswrapper[4844]: I0126 12:45:16.241664 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:16 crc kubenswrapper[4844]: I0126 12:45:16.241693 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:16 crc kubenswrapper[4844]: I0126 12:45:16.241713 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:16Z","lastTransitionTime":"2026-01-26T12:45:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:16 crc kubenswrapper[4844]: I0126 12:45:16.313142 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gxnj7" Jan 26 12:45:16 crc kubenswrapper[4844]: E0126 12:45:16.313308 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gxnj7" podUID="c69496f6-7f67-4cca-9c9f-420e5567b165" Jan 26 12:45:16 crc kubenswrapper[4844]: I0126 12:45:16.323500 4844 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 06:42:07.780301935 +0000 UTC Jan 26 12:45:16 crc kubenswrapper[4844]: I0126 12:45:16.345008 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:16 crc kubenswrapper[4844]: I0126 12:45:16.345065 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:16 crc kubenswrapper[4844]: I0126 12:45:16.345078 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:16 crc kubenswrapper[4844]: I0126 12:45:16.345096 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:16 crc kubenswrapper[4844]: I0126 12:45:16.345109 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:16Z","lastTransitionTime":"2026-01-26T12:45:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:45:16 crc kubenswrapper[4844]: I0126 12:45:16.447836 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:16 crc kubenswrapper[4844]: I0126 12:45:16.447929 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:16 crc kubenswrapper[4844]: I0126 12:45:16.447942 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:16 crc kubenswrapper[4844]: I0126 12:45:16.447964 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:16 crc kubenswrapper[4844]: I0126 12:45:16.447976 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:16Z","lastTransitionTime":"2026-01-26T12:45:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:16 crc kubenswrapper[4844]: I0126 12:45:16.551173 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:16 crc kubenswrapper[4844]: I0126 12:45:16.551227 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:16 crc kubenswrapper[4844]: I0126 12:45:16.551241 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:16 crc kubenswrapper[4844]: I0126 12:45:16.551264 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:16 crc kubenswrapper[4844]: I0126 12:45:16.551282 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:16Z","lastTransitionTime":"2026-01-26T12:45:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:16 crc kubenswrapper[4844]: I0126 12:45:16.654486 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:16 crc kubenswrapper[4844]: I0126 12:45:16.654561 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:16 crc kubenswrapper[4844]: I0126 12:45:16.654584 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:16 crc kubenswrapper[4844]: I0126 12:45:16.654671 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:16 crc kubenswrapper[4844]: I0126 12:45:16.654773 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:16Z","lastTransitionTime":"2026-01-26T12:45:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:45:16 crc kubenswrapper[4844]: I0126 12:45:16.756917 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:16 crc kubenswrapper[4844]: I0126 12:45:16.756986 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:16 crc kubenswrapper[4844]: I0126 12:45:16.757000 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:16 crc kubenswrapper[4844]: I0126 12:45:16.757020 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:16 crc kubenswrapper[4844]: I0126 12:45:16.757037 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:16Z","lastTransitionTime":"2026-01-26T12:45:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:16 crc kubenswrapper[4844]: I0126 12:45:16.859689 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:16 crc kubenswrapper[4844]: I0126 12:45:16.860013 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:16 crc kubenswrapper[4844]: I0126 12:45:16.860032 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:16 crc kubenswrapper[4844]: I0126 12:45:16.860055 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:16 crc kubenswrapper[4844]: I0126 12:45:16.860114 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:16Z","lastTransitionTime":"2026-01-26T12:45:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:16 crc kubenswrapper[4844]: I0126 12:45:16.861448 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:16 crc kubenswrapper[4844]: I0126 12:45:16.861516 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:16 crc kubenswrapper[4844]: I0126 12:45:16.861539 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:16 crc kubenswrapper[4844]: I0126 12:45:16.861568 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:16 crc kubenswrapper[4844]: I0126 12:45:16.861589 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:16Z","lastTransitionTime":"2026-01-26T12:45:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:45:16 crc kubenswrapper[4844]: E0126 12:45:16.881715 4844 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:45:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:45:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:45:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:45:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:45:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:45:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:45:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:45:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8ec9310b-463d-4f1d-a480-c21f33e8b459\\\",\\\"systemUUID\\\":\\\"4eb778d6-9226-440d-bd27-0b6f19659b0d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:45:16Z is after 2025-08-24T17:21:41Z" Jan 26 12:45:16 crc kubenswrapper[4844]: I0126 12:45:16.886124 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:16 crc kubenswrapper[4844]: I0126 12:45:16.886164 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 12:45:16 crc kubenswrapper[4844]: I0126 12:45:16.886177 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:16 crc kubenswrapper[4844]: I0126 12:45:16.886194 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:16 crc kubenswrapper[4844]: I0126 12:45:16.886205 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:16Z","lastTransitionTime":"2026-01-26T12:45:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:16 crc kubenswrapper[4844]: E0126 12:45:16.903180 4844 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:45:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:45:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:45:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:45:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:45:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:45:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:45:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:45:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8ec9310b-463d-4f1d-a480-c21f33e8b459\\\",\\\"systemUUID\\\":\\\"4eb778d6-9226-440d-bd27-0b6f19659b0d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:45:16Z is after 2025-08-24T17:21:41Z" Jan 26 12:45:16 crc kubenswrapper[4844]: I0126 12:45:16.908154 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:16 crc kubenswrapper[4844]: I0126 12:45:16.908190 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 12:45:16 crc kubenswrapper[4844]: I0126 12:45:16.908204 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:16 crc kubenswrapper[4844]: I0126 12:45:16.908218 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:16 crc kubenswrapper[4844]: I0126 12:45:16.908229 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:16Z","lastTransitionTime":"2026-01-26T12:45:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:16 crc kubenswrapper[4844]: E0126 12:45:16.925365 4844 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:45:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:45:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:45:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:45:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:45:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:45:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:45:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:45:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8ec9310b-463d-4f1d-a480-c21f33e8b459\\\",\\\"systemUUID\\\":\\\"4eb778d6-9226-440d-bd27-0b6f19659b0d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:45:16Z is after 2025-08-24T17:21:41Z" Jan 26 12:45:16 crc kubenswrapper[4844]: I0126 12:45:16.931394 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:16 crc kubenswrapper[4844]: I0126 12:45:16.931429 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 12:45:16 crc kubenswrapper[4844]: I0126 12:45:16.931439 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:16 crc kubenswrapper[4844]: I0126 12:45:16.931454 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:16 crc kubenswrapper[4844]: I0126 12:45:16.931464 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:16Z","lastTransitionTime":"2026-01-26T12:45:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:16 crc kubenswrapper[4844]: E0126 12:45:16.950680 4844 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:45:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:45:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:45:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:45:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:45:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:45:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:45:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:45:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8ec9310b-463d-4f1d-a480-c21f33e8b459\\\",\\\"systemUUID\\\":\\\"4eb778d6-9226-440d-bd27-0b6f19659b0d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:45:16Z is after 2025-08-24T17:21:41Z" Jan 26 12:45:16 crc kubenswrapper[4844]: I0126 12:45:16.955743 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:16 crc kubenswrapper[4844]: I0126 12:45:16.955811 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 12:45:16 crc kubenswrapper[4844]: I0126 12:45:16.955829 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:16 crc kubenswrapper[4844]: I0126 12:45:16.955853 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:16 crc kubenswrapper[4844]: I0126 12:45:16.955873 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:16Z","lastTransitionTime":"2026-01-26T12:45:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:16 crc kubenswrapper[4844]: E0126 12:45:16.976410 4844 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:45:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:45:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:45:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:45:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:45:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:45:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:45:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T12:45:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8ec9310b-463d-4f1d-a480-c21f33e8b459\\\",\\\"systemUUID\\\":\\\"4eb778d6-9226-440d-bd27-0b6f19659b0d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:45:16Z is after 2025-08-24T17:21:41Z" Jan 26 12:45:16 crc kubenswrapper[4844]: E0126 12:45:16.976742 4844 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 12:45:16 crc kubenswrapper[4844]: I0126 12:45:16.978512 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 26 12:45:16 crc kubenswrapper[4844]: I0126 12:45:16.978568 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:16 crc kubenswrapper[4844]: I0126 12:45:16.978580 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:16 crc kubenswrapper[4844]: I0126 12:45:16.978619 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:16 crc kubenswrapper[4844]: I0126 12:45:16.978633 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:16Z","lastTransitionTime":"2026-01-26T12:45:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:17 crc kubenswrapper[4844]: I0126 12:45:17.081765 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:17 crc kubenswrapper[4844]: I0126 12:45:17.081819 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:17 crc kubenswrapper[4844]: I0126 12:45:17.081835 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:17 crc kubenswrapper[4844]: I0126 12:45:17.081857 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:17 crc kubenswrapper[4844]: I0126 12:45:17.081875 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:17Z","lastTransitionTime":"2026-01-26T12:45:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:17 crc kubenswrapper[4844]: I0126 12:45:17.184695 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:17 crc kubenswrapper[4844]: I0126 12:45:17.184755 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:17 crc kubenswrapper[4844]: I0126 12:45:17.184773 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:17 crc kubenswrapper[4844]: I0126 12:45:17.184800 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:17 crc kubenswrapper[4844]: I0126 12:45:17.184867 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:17Z","lastTransitionTime":"2026-01-26T12:45:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:45:17 crc kubenswrapper[4844]: I0126 12:45:17.288247 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:17 crc kubenswrapper[4844]: I0126 12:45:17.288359 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:17 crc kubenswrapper[4844]: I0126 12:45:17.288378 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:17 crc kubenswrapper[4844]: I0126 12:45:17.288402 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:17 crc kubenswrapper[4844]: I0126 12:45:17.288420 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:17Z","lastTransitionTime":"2026-01-26T12:45:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:17 crc kubenswrapper[4844]: I0126 12:45:17.313213 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:45:17 crc kubenswrapper[4844]: I0126 12:45:17.313288 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:45:17 crc kubenswrapper[4844]: I0126 12:45:17.313494 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:45:17 crc kubenswrapper[4844]: E0126 12:45:17.313759 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 12:45:17 crc kubenswrapper[4844]: E0126 12:45:17.314057 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 12:45:17 crc kubenswrapper[4844]: E0126 12:45:17.314181 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 12:45:17 crc kubenswrapper[4844]: I0126 12:45:17.324672 4844 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 06:58:57.411276833 +0000 UTC Jan 26 12:45:17 crc kubenswrapper[4844]: I0126 12:45:17.392569 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:17 crc kubenswrapper[4844]: I0126 12:45:17.392640 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:17 crc kubenswrapper[4844]: I0126 12:45:17.392660 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:17 crc kubenswrapper[4844]: I0126 12:45:17.392680 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:17 crc kubenswrapper[4844]: I0126 12:45:17.392695 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:17Z","lastTransitionTime":"2026-01-26T12:45:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:17 crc kubenswrapper[4844]: I0126 12:45:17.495894 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:17 crc kubenswrapper[4844]: I0126 12:45:17.495955 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:17 crc kubenswrapper[4844]: I0126 12:45:17.495972 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:17 crc kubenswrapper[4844]: I0126 12:45:17.495996 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:17 crc kubenswrapper[4844]: I0126 12:45:17.496014 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:17Z","lastTransitionTime":"2026-01-26T12:45:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:45:17 crc kubenswrapper[4844]: I0126 12:45:17.599074 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:17 crc kubenswrapper[4844]: I0126 12:45:17.599129 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:17 crc kubenswrapper[4844]: I0126 12:45:17.599145 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:17 crc kubenswrapper[4844]: I0126 12:45:17.599192 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:17 crc kubenswrapper[4844]: I0126 12:45:17.599210 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:17Z","lastTransitionTime":"2026-01-26T12:45:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:17 crc kubenswrapper[4844]: I0126 12:45:17.702057 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:17 crc kubenswrapper[4844]: I0126 12:45:17.702150 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:17 crc kubenswrapper[4844]: I0126 12:45:17.702176 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:17 crc kubenswrapper[4844]: I0126 12:45:17.702208 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:17 crc kubenswrapper[4844]: I0126 12:45:17.702230 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:17Z","lastTransitionTime":"2026-01-26T12:45:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:17 crc kubenswrapper[4844]: I0126 12:45:17.810634 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:17 crc kubenswrapper[4844]: I0126 12:45:17.810705 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:17 crc kubenswrapper[4844]: I0126 12:45:17.810725 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:17 crc kubenswrapper[4844]: I0126 12:45:17.810797 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:17 crc kubenswrapper[4844]: I0126 12:45:17.810822 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:17Z","lastTransitionTime":"2026-01-26T12:45:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:45:17 crc kubenswrapper[4844]: I0126 12:45:17.914564 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:17 crc kubenswrapper[4844]: I0126 12:45:17.914692 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:17 crc kubenswrapper[4844]: I0126 12:45:17.914714 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:17 crc kubenswrapper[4844]: I0126 12:45:17.914738 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:17 crc kubenswrapper[4844]: I0126 12:45:17.914756 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:17Z","lastTransitionTime":"2026-01-26T12:45:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:18 crc kubenswrapper[4844]: I0126 12:45:18.017106 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:18 crc kubenswrapper[4844]: I0126 12:45:18.017149 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:18 crc kubenswrapper[4844]: I0126 12:45:18.017160 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:18 crc kubenswrapper[4844]: I0126 12:45:18.017175 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:18 crc kubenswrapper[4844]: I0126 12:45:18.017187 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:18Z","lastTransitionTime":"2026-01-26T12:45:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:18 crc kubenswrapper[4844]: I0126 12:45:18.120468 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:18 crc kubenswrapper[4844]: I0126 12:45:18.120558 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:18 crc kubenswrapper[4844]: I0126 12:45:18.120575 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:18 crc kubenswrapper[4844]: I0126 12:45:18.120634 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:18 crc kubenswrapper[4844]: I0126 12:45:18.120652 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:18Z","lastTransitionTime":"2026-01-26T12:45:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:45:18 crc kubenswrapper[4844]: I0126 12:45:18.223749 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:18 crc kubenswrapper[4844]: I0126 12:45:18.223825 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:18 crc kubenswrapper[4844]: I0126 12:45:18.223844 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:18 crc kubenswrapper[4844]: I0126 12:45:18.223872 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:18 crc kubenswrapper[4844]: I0126 12:45:18.223893 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:18Z","lastTransitionTime":"2026-01-26T12:45:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:18 crc kubenswrapper[4844]: I0126 12:45:18.313063 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gxnj7" Jan 26 12:45:18 crc kubenswrapper[4844]: E0126 12:45:18.313317 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gxnj7" podUID="c69496f6-7f67-4cca-9c9f-420e5567b165" Jan 26 12:45:18 crc kubenswrapper[4844]: I0126 12:45:18.325436 4844 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 18:23:21.705632702 +0000 UTC Jan 26 12:45:18 crc kubenswrapper[4844]: I0126 12:45:18.327579 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:18 crc kubenswrapper[4844]: I0126 12:45:18.327691 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:18 crc kubenswrapper[4844]: I0126 12:45:18.327889 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:18 crc kubenswrapper[4844]: I0126 12:45:18.327928 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:18 crc kubenswrapper[4844]: I0126 12:45:18.327952 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:18Z","lastTransitionTime":"2026-01-26T12:45:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:45:18 crc kubenswrapper[4844]: I0126 12:45:18.430614 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:18 crc kubenswrapper[4844]: I0126 12:45:18.430641 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:18 crc kubenswrapper[4844]: I0126 12:45:18.430648 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:18 crc kubenswrapper[4844]: I0126 12:45:18.430660 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:18 crc kubenswrapper[4844]: I0126 12:45:18.430668 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:18Z","lastTransitionTime":"2026-01-26T12:45:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:18 crc kubenswrapper[4844]: I0126 12:45:18.533030 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:18 crc kubenswrapper[4844]: I0126 12:45:18.533064 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:18 crc kubenswrapper[4844]: I0126 12:45:18.533074 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:18 crc kubenswrapper[4844]: I0126 12:45:18.533089 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:18 crc kubenswrapper[4844]: I0126 12:45:18.533100 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:18Z","lastTransitionTime":"2026-01-26T12:45:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:18 crc kubenswrapper[4844]: I0126 12:45:18.635993 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:18 crc kubenswrapper[4844]: I0126 12:45:18.636136 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:18 crc kubenswrapper[4844]: I0126 12:45:18.636157 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:18 crc kubenswrapper[4844]: I0126 12:45:18.636186 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:18 crc kubenswrapper[4844]: I0126 12:45:18.636208 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:18Z","lastTransitionTime":"2026-01-26T12:45:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:45:18 crc kubenswrapper[4844]: I0126 12:45:18.739553 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:18 crc kubenswrapper[4844]: I0126 12:45:18.739652 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:18 crc kubenswrapper[4844]: I0126 12:45:18.739669 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:18 crc kubenswrapper[4844]: I0126 12:45:18.739691 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:18 crc kubenswrapper[4844]: I0126 12:45:18.739703 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:18Z","lastTransitionTime":"2026-01-26T12:45:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:18 crc kubenswrapper[4844]: I0126 12:45:18.843442 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:18 crc kubenswrapper[4844]: I0126 12:45:18.843493 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:18 crc kubenswrapper[4844]: I0126 12:45:18.843504 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:18 crc kubenswrapper[4844]: I0126 12:45:18.843523 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:18 crc kubenswrapper[4844]: I0126 12:45:18.843537 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:18Z","lastTransitionTime":"2026-01-26T12:45:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:18 crc kubenswrapper[4844]: I0126 12:45:18.947152 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:18 crc kubenswrapper[4844]: I0126 12:45:18.947224 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:18 crc kubenswrapper[4844]: I0126 12:45:18.947237 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:18 crc kubenswrapper[4844]: I0126 12:45:18.947266 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:18 crc kubenswrapper[4844]: I0126 12:45:18.947285 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:18Z","lastTransitionTime":"2026-01-26T12:45:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:45:19 crc kubenswrapper[4844]: I0126 12:45:19.049250 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:19 crc kubenswrapper[4844]: I0126 12:45:19.049302 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:19 crc kubenswrapper[4844]: I0126 12:45:19.049317 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:19 crc kubenswrapper[4844]: I0126 12:45:19.049337 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:19 crc kubenswrapper[4844]: I0126 12:45:19.049352 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:19Z","lastTransitionTime":"2026-01-26T12:45:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:19 crc kubenswrapper[4844]: I0126 12:45:19.151951 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:19 crc kubenswrapper[4844]: I0126 12:45:19.151983 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:19 crc kubenswrapper[4844]: I0126 12:45:19.151991 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:19 crc kubenswrapper[4844]: I0126 12:45:19.152003 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:19 crc kubenswrapper[4844]: I0126 12:45:19.152012 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:19Z","lastTransitionTime":"2026-01-26T12:45:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:19 crc kubenswrapper[4844]: I0126 12:45:19.254704 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:19 crc kubenswrapper[4844]: I0126 12:45:19.254747 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:19 crc kubenswrapper[4844]: I0126 12:45:19.254756 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:19 crc kubenswrapper[4844]: I0126 12:45:19.254769 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:19 crc kubenswrapper[4844]: I0126 12:45:19.254778 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:19Z","lastTransitionTime":"2026-01-26T12:45:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:45:19 crc kubenswrapper[4844]: I0126 12:45:19.312773 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:45:19 crc kubenswrapper[4844]: I0126 12:45:19.312794 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:45:19 crc kubenswrapper[4844]: I0126 12:45:19.312870 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:45:19 crc kubenswrapper[4844]: E0126 12:45:19.313080 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 12:45:19 crc kubenswrapper[4844]: E0126 12:45:19.313150 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 12:45:19 crc kubenswrapper[4844]: E0126 12:45:19.313245 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 12:45:19 crc kubenswrapper[4844]: I0126 12:45:19.325551 4844 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 16:23:39.293863413 +0000 UTC Jan 26 12:45:19 crc kubenswrapper[4844]: I0126 12:45:19.358432 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:19 crc kubenswrapper[4844]: I0126 12:45:19.358485 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:19 crc kubenswrapper[4844]: I0126 12:45:19.358502 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:19 crc kubenswrapper[4844]: I0126 12:45:19.358528 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:19 crc kubenswrapper[4844]: I0126 12:45:19.358547 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:19Z","lastTransitionTime":"2026-01-26T12:45:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:45:19 crc kubenswrapper[4844]: I0126 12:45:19.461360 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:19 crc kubenswrapper[4844]: I0126 12:45:19.461404 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:19 crc kubenswrapper[4844]: I0126 12:45:19.461440 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:19 crc kubenswrapper[4844]: I0126 12:45:19.461458 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:19 crc kubenswrapper[4844]: I0126 12:45:19.461470 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:19Z","lastTransitionTime":"2026-01-26T12:45:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:19 crc kubenswrapper[4844]: I0126 12:45:19.564007 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:19 crc kubenswrapper[4844]: I0126 12:45:19.564058 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:19 crc kubenswrapper[4844]: I0126 12:45:19.564073 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:19 crc kubenswrapper[4844]: I0126 12:45:19.564094 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:19 crc kubenswrapper[4844]: I0126 12:45:19.564108 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:19Z","lastTransitionTime":"2026-01-26T12:45:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:19 crc kubenswrapper[4844]: I0126 12:45:19.666800 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:19 crc kubenswrapper[4844]: I0126 12:45:19.666862 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:19 crc kubenswrapper[4844]: I0126 12:45:19.666878 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:19 crc kubenswrapper[4844]: I0126 12:45:19.666901 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:19 crc kubenswrapper[4844]: I0126 12:45:19.666921 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:19Z","lastTransitionTime":"2026-01-26T12:45:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:45:19 crc kubenswrapper[4844]: I0126 12:45:19.769538 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:19 crc kubenswrapper[4844]: I0126 12:45:19.769677 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:19 crc kubenswrapper[4844]: I0126 12:45:19.769695 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:19 crc kubenswrapper[4844]: I0126 12:45:19.769717 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:19 crc kubenswrapper[4844]: I0126 12:45:19.769734 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:19Z","lastTransitionTime":"2026-01-26T12:45:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:19 crc kubenswrapper[4844]: I0126 12:45:19.872874 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:19 crc kubenswrapper[4844]: I0126 12:45:19.872920 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:19 crc kubenswrapper[4844]: I0126 12:45:19.872930 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:19 crc kubenswrapper[4844]: I0126 12:45:19.872943 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:19 crc kubenswrapper[4844]: I0126 12:45:19.872953 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:19Z","lastTransitionTime":"2026-01-26T12:45:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:19 crc kubenswrapper[4844]: I0126 12:45:19.975361 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:19 crc kubenswrapper[4844]: I0126 12:45:19.975406 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:19 crc kubenswrapper[4844]: I0126 12:45:19.975421 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:19 crc kubenswrapper[4844]: I0126 12:45:19.975440 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:19 crc kubenswrapper[4844]: I0126 12:45:19.975456 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:19Z","lastTransitionTime":"2026-01-26T12:45:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:45:20 crc kubenswrapper[4844]: I0126 12:45:20.077560 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:20 crc kubenswrapper[4844]: I0126 12:45:20.077589 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:20 crc kubenswrapper[4844]: I0126 12:45:20.077613 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:20 crc kubenswrapper[4844]: I0126 12:45:20.077625 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:20 crc kubenswrapper[4844]: I0126 12:45:20.077633 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:20Z","lastTransitionTime":"2026-01-26T12:45:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:20 crc kubenswrapper[4844]: I0126 12:45:20.180163 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:20 crc kubenswrapper[4844]: I0126 12:45:20.180206 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:20 crc kubenswrapper[4844]: I0126 12:45:20.180220 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:20 crc kubenswrapper[4844]: I0126 12:45:20.180236 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:20 crc kubenswrapper[4844]: I0126 12:45:20.180247 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:20Z","lastTransitionTime":"2026-01-26T12:45:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:20 crc kubenswrapper[4844]: I0126 12:45:20.282502 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:20 crc kubenswrapper[4844]: I0126 12:45:20.282557 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:20 crc kubenswrapper[4844]: I0126 12:45:20.282579 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:20 crc kubenswrapper[4844]: I0126 12:45:20.282651 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:20 crc kubenswrapper[4844]: I0126 12:45:20.282681 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:20Z","lastTransitionTime":"2026-01-26T12:45:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:45:20 crc kubenswrapper[4844]: I0126 12:45:20.312946 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gxnj7" Jan 26 12:45:20 crc kubenswrapper[4844]: E0126 12:45:20.313212 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gxnj7" podUID="c69496f6-7f67-4cca-9c9f-420e5567b165" Jan 26 12:45:20 crc kubenswrapper[4844]: I0126 12:45:20.326368 4844 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 04:04:42.561355261 +0000 UTC Jan 26 12:45:20 crc kubenswrapper[4844]: I0126 12:45:20.385986 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:20 crc kubenswrapper[4844]: I0126 12:45:20.386065 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:20 crc kubenswrapper[4844]: I0126 12:45:20.386087 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:20 crc kubenswrapper[4844]: I0126 12:45:20.386112 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:20 crc kubenswrapper[4844]: I0126 12:45:20.386132 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:20Z","lastTransitionTime":"2026-01-26T12:45:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:20 crc kubenswrapper[4844]: I0126 12:45:20.493647 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:20 crc kubenswrapper[4844]: I0126 12:45:20.493692 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:20 crc kubenswrapper[4844]: I0126 12:45:20.493704 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:20 crc kubenswrapper[4844]: I0126 12:45:20.493721 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:20 crc kubenswrapper[4844]: I0126 12:45:20.493733 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:20Z","lastTransitionTime":"2026-01-26T12:45:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:45:20 crc kubenswrapper[4844]: I0126 12:45:20.596317 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:20 crc kubenswrapper[4844]: I0126 12:45:20.596375 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:20 crc kubenswrapper[4844]: I0126 12:45:20.596385 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:20 crc kubenswrapper[4844]: I0126 12:45:20.596400 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:20 crc kubenswrapper[4844]: I0126 12:45:20.596412 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:20Z","lastTransitionTime":"2026-01-26T12:45:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:20 crc kubenswrapper[4844]: I0126 12:45:20.699589 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:20 crc kubenswrapper[4844]: I0126 12:45:20.699686 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:20 crc kubenswrapper[4844]: I0126 12:45:20.699701 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:20 crc kubenswrapper[4844]: I0126 12:45:20.699720 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:20 crc kubenswrapper[4844]: I0126 12:45:20.699735 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:20Z","lastTransitionTime":"2026-01-26T12:45:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:20 crc kubenswrapper[4844]: I0126 12:45:20.802985 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:20 crc kubenswrapper[4844]: I0126 12:45:20.803055 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:20 crc kubenswrapper[4844]: I0126 12:45:20.803073 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:20 crc kubenswrapper[4844]: I0126 12:45:20.803096 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:20 crc kubenswrapper[4844]: I0126 12:45:20.803115 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:20Z","lastTransitionTime":"2026-01-26T12:45:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:45:20 crc kubenswrapper[4844]: I0126 12:45:20.905760 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:20 crc kubenswrapper[4844]: I0126 12:45:20.905813 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:20 crc kubenswrapper[4844]: I0126 12:45:20.905828 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:20 crc kubenswrapper[4844]: I0126 12:45:20.905846 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:20 crc kubenswrapper[4844]: I0126 12:45:20.905861 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:20Z","lastTransitionTime":"2026-01-26T12:45:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:21 crc kubenswrapper[4844]: I0126 12:45:21.007973 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:21 crc kubenswrapper[4844]: I0126 12:45:21.008008 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:21 crc kubenswrapper[4844]: I0126 12:45:21.008016 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:21 crc kubenswrapper[4844]: I0126 12:45:21.008029 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:21 crc kubenswrapper[4844]: I0126 12:45:21.008038 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:21Z","lastTransitionTime":"2026-01-26T12:45:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:21 crc kubenswrapper[4844]: I0126 12:45:21.110496 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:21 crc kubenswrapper[4844]: I0126 12:45:21.110558 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:21 crc kubenswrapper[4844]: I0126 12:45:21.110574 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:21 crc kubenswrapper[4844]: I0126 12:45:21.110652 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:21 crc kubenswrapper[4844]: I0126 12:45:21.110670 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:21Z","lastTransitionTime":"2026-01-26T12:45:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:45:21 crc kubenswrapper[4844]: I0126 12:45:21.213815 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:21 crc kubenswrapper[4844]: I0126 12:45:21.213885 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:21 crc kubenswrapper[4844]: I0126 12:45:21.213904 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:21 crc kubenswrapper[4844]: I0126 12:45:21.213928 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:21 crc kubenswrapper[4844]: I0126 12:45:21.213942 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:21Z","lastTransitionTime":"2026-01-26T12:45:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:21 crc kubenswrapper[4844]: I0126 12:45:21.313053 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:45:21 crc kubenswrapper[4844]: I0126 12:45:21.313127 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:45:21 crc kubenswrapper[4844]: I0126 12:45:21.313149 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:45:21 crc kubenswrapper[4844]: E0126 12:45:21.313274 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 12:45:21 crc kubenswrapper[4844]: E0126 12:45:21.313333 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 12:45:21 crc kubenswrapper[4844]: E0126 12:45:21.313404 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 12:45:21 crc kubenswrapper[4844]: I0126 12:45:21.315946 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:21 crc kubenswrapper[4844]: I0126 12:45:21.315979 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:21 crc kubenswrapper[4844]: I0126 12:45:21.315990 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:21 crc kubenswrapper[4844]: I0126 12:45:21.316005 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:21 crc kubenswrapper[4844]: I0126 12:45:21.316017 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:21Z","lastTransitionTime":"2026-01-26T12:45:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:21 crc kubenswrapper[4844]: I0126 12:45:21.327422 4844 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 12:15:00.186645962 +0000 UTC Jan 26 12:45:21 crc kubenswrapper[4844]: I0126 12:45:21.418958 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:21 crc kubenswrapper[4844]: I0126 12:45:21.419024 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:21 crc kubenswrapper[4844]: I0126 12:45:21.419048 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:21 crc kubenswrapper[4844]: I0126 12:45:21.419076 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:21 crc kubenswrapper[4844]: I0126 12:45:21.419097 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:21Z","lastTransitionTime":"2026-01-26T12:45:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:45:21 crc kubenswrapper[4844]: I0126 12:45:21.522057 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:21 crc kubenswrapper[4844]: I0126 12:45:21.522126 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:21 crc kubenswrapper[4844]: I0126 12:45:21.522148 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:21 crc kubenswrapper[4844]: I0126 12:45:21.522180 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:21 crc kubenswrapper[4844]: I0126 12:45:21.522204 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:21Z","lastTransitionTime":"2026-01-26T12:45:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:21 crc kubenswrapper[4844]: I0126 12:45:21.624767 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:21 crc kubenswrapper[4844]: I0126 12:45:21.624815 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:21 crc kubenswrapper[4844]: I0126 12:45:21.624826 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:21 crc kubenswrapper[4844]: I0126 12:45:21.624845 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:21 crc kubenswrapper[4844]: I0126 12:45:21.624856 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:21Z","lastTransitionTime":"2026-01-26T12:45:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:21 crc kubenswrapper[4844]: I0126 12:45:21.727401 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:21 crc kubenswrapper[4844]: I0126 12:45:21.727482 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:21 crc kubenswrapper[4844]: I0126 12:45:21.727507 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:21 crc kubenswrapper[4844]: I0126 12:45:21.727538 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:21 crc kubenswrapper[4844]: I0126 12:45:21.727560 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:21Z","lastTransitionTime":"2026-01-26T12:45:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:45:21 crc kubenswrapper[4844]: I0126 12:45:21.830258 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:21 crc kubenswrapper[4844]: I0126 12:45:21.830325 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:21 crc kubenswrapper[4844]: I0126 12:45:21.830344 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:21 crc kubenswrapper[4844]: I0126 12:45:21.830367 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:21 crc kubenswrapper[4844]: I0126 12:45:21.830385 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:21Z","lastTransitionTime":"2026-01-26T12:45:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:21 crc kubenswrapper[4844]: I0126 12:45:21.932638 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:21 crc kubenswrapper[4844]: I0126 12:45:21.932689 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:21 crc kubenswrapper[4844]: I0126 12:45:21.932698 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:21 crc kubenswrapper[4844]: I0126 12:45:21.932713 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:21 crc kubenswrapper[4844]: I0126 12:45:21.932726 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:21Z","lastTransitionTime":"2026-01-26T12:45:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:22 crc kubenswrapper[4844]: I0126 12:45:22.034885 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:22 crc kubenswrapper[4844]: I0126 12:45:22.034943 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:22 crc kubenswrapper[4844]: I0126 12:45:22.035001 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:22 crc kubenswrapper[4844]: I0126 12:45:22.035042 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:22 crc kubenswrapper[4844]: I0126 12:45:22.035072 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:22Z","lastTransitionTime":"2026-01-26T12:45:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:45:22 crc kubenswrapper[4844]: I0126 12:45:22.137540 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:22 crc kubenswrapper[4844]: I0126 12:45:22.137588 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:22 crc kubenswrapper[4844]: I0126 12:45:22.137624 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:22 crc kubenswrapper[4844]: I0126 12:45:22.137645 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:22 crc kubenswrapper[4844]: I0126 12:45:22.137658 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:22Z","lastTransitionTime":"2026-01-26T12:45:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:22 crc kubenswrapper[4844]: I0126 12:45:22.241676 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:22 crc kubenswrapper[4844]: I0126 12:45:22.241730 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:22 crc kubenswrapper[4844]: I0126 12:45:22.241750 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:22 crc kubenswrapper[4844]: I0126 12:45:22.241785 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:22 crc kubenswrapper[4844]: I0126 12:45:22.241807 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:22Z","lastTransitionTime":"2026-01-26T12:45:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:22 crc kubenswrapper[4844]: I0126 12:45:22.312771 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gxnj7" Jan 26 12:45:22 crc kubenswrapper[4844]: E0126 12:45:22.313498 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gxnj7" podUID="c69496f6-7f67-4cca-9c9f-420e5567b165" Jan 26 12:45:22 crc kubenswrapper[4844]: I0126 12:45:22.328250 4844 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 16:52:36.316953971 +0000 UTC Jan 26 12:45:22 crc kubenswrapper[4844]: I0126 12:45:22.344950 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:22 crc kubenswrapper[4844]: I0126 12:45:22.344988 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:22 crc kubenswrapper[4844]: I0126 12:45:22.344998 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:22 crc kubenswrapper[4844]: I0126 12:45:22.345015 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:22 crc kubenswrapper[4844]: I0126 12:45:22.345026 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:22Z","lastTransitionTime":"2026-01-26T12:45:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:22 crc kubenswrapper[4844]: I0126 12:45:22.447905 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:22 crc kubenswrapper[4844]: I0126 12:45:22.447949 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:22 crc kubenswrapper[4844]: I0126 12:45:22.447963 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:22 crc kubenswrapper[4844]: I0126 12:45:22.447979 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:22 crc kubenswrapper[4844]: I0126 12:45:22.447991 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:22Z","lastTransitionTime":"2026-01-26T12:45:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:45:22 crc kubenswrapper[4844]: I0126 12:45:22.550015 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:22 crc kubenswrapper[4844]: I0126 12:45:22.550077 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:22 crc kubenswrapper[4844]: I0126 12:45:22.550094 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:22 crc kubenswrapper[4844]: I0126 12:45:22.550116 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:22 crc kubenswrapper[4844]: I0126 12:45:22.550132 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:22Z","lastTransitionTime":"2026-01-26T12:45:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:22 crc kubenswrapper[4844]: I0126 12:45:22.653447 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:22 crc kubenswrapper[4844]: I0126 12:45:22.653495 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:22 crc kubenswrapper[4844]: I0126 12:45:22.653510 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:22 crc kubenswrapper[4844]: I0126 12:45:22.653531 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:22 crc kubenswrapper[4844]: I0126 12:45:22.653548 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:22Z","lastTransitionTime":"2026-01-26T12:45:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:22 crc kubenswrapper[4844]: I0126 12:45:22.756040 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:22 crc kubenswrapper[4844]: I0126 12:45:22.756067 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:22 crc kubenswrapper[4844]: I0126 12:45:22.756074 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:22 crc kubenswrapper[4844]: I0126 12:45:22.756086 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:22 crc kubenswrapper[4844]: I0126 12:45:22.756094 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:22Z","lastTransitionTime":"2026-01-26T12:45:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:45:22 crc kubenswrapper[4844]: I0126 12:45:22.858473 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:22 crc kubenswrapper[4844]: I0126 12:45:22.858508 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:22 crc kubenswrapper[4844]: I0126 12:45:22.858517 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:22 crc kubenswrapper[4844]: I0126 12:45:22.858529 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:22 crc kubenswrapper[4844]: I0126 12:45:22.858538 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:22Z","lastTransitionTime":"2026-01-26T12:45:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:22 crc kubenswrapper[4844]: I0126 12:45:22.961834 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:22 crc kubenswrapper[4844]: I0126 12:45:22.961892 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:22 crc kubenswrapper[4844]: I0126 12:45:22.961912 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:22 crc kubenswrapper[4844]: I0126 12:45:22.961935 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:22 crc kubenswrapper[4844]: I0126 12:45:22.961955 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:22Z","lastTransitionTime":"2026-01-26T12:45:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:23 crc kubenswrapper[4844]: I0126 12:45:23.064358 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:23 crc kubenswrapper[4844]: I0126 12:45:23.064431 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:23 crc kubenswrapper[4844]: I0126 12:45:23.064456 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:23 crc kubenswrapper[4844]: I0126 12:45:23.064487 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:23 crc kubenswrapper[4844]: I0126 12:45:23.064577 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:23Z","lastTransitionTime":"2026-01-26T12:45:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:45:23 crc kubenswrapper[4844]: I0126 12:45:23.167362 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:23 crc kubenswrapper[4844]: I0126 12:45:23.167459 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:23 crc kubenswrapper[4844]: I0126 12:45:23.167483 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:23 crc kubenswrapper[4844]: I0126 12:45:23.167509 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:23 crc kubenswrapper[4844]: I0126 12:45:23.167526 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:23Z","lastTransitionTime":"2026-01-26T12:45:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:23 crc kubenswrapper[4844]: I0126 12:45:23.271226 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:23 crc kubenswrapper[4844]: I0126 12:45:23.271293 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:23 crc kubenswrapper[4844]: I0126 12:45:23.271315 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:23 crc kubenswrapper[4844]: I0126 12:45:23.271343 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:23 crc kubenswrapper[4844]: I0126 12:45:23.271365 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:23Z","lastTransitionTime":"2026-01-26T12:45:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:23 crc kubenswrapper[4844]: I0126 12:45:23.313194 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:45:23 crc kubenswrapper[4844]: E0126 12:45:23.313357 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 12:45:23 crc kubenswrapper[4844]: I0126 12:45:23.313825 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:45:23 crc kubenswrapper[4844]: E0126 12:45:23.313937 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 12:45:23 crc kubenswrapper[4844]: I0126 12:45:23.314064 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:45:23 crc kubenswrapper[4844]: E0126 12:45:23.314253 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 12:45:23 crc kubenswrapper[4844]: I0126 12:45:23.329374 4844 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 10:55:53.861996615 +0000 UTC Jan 26 12:45:23 crc kubenswrapper[4844]: I0126 12:45:23.335279 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd60ce73390532e974f85d610708534f2ab6bcc38e93c54516cb783aca1bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:45:23Z is after 2025-08-24T17:21:41Z" Jan 26 12:45:23 crc kubenswrapper[4844]: I0126 12:45:23.350104 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-94bpf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14600b66-6352-4f5e-9c09-eb2548503555\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1eb448e6788cdb87cd78364d442623294abe1263dbedbf9d15e8ca77c06ced2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-456bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-94bpf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:45:23Z is after 2025-08-24T17:21:41Z" Jan 26 12:45:23 crc kubenswrapper[4844]: I0126 12:45:23.365589 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5qpr8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"04a4b371-44a9-4805-b60f-6f7ba0fac40b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://916da455e82003f3effd3be11a50a90b25232fc7d11d06285e8902a0a3cfd10e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7ckr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b598bd3381ec5062c126c04857c188ab29afc34c39ec94a2cd95b306cdfd00d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7ckr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5qpr8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:45:23Z is after 2025-08-24T17:21:41Z" Jan 26 
12:45:23 crc kubenswrapper[4844]: I0126 12:45:23.374727 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:23 crc kubenswrapper[4844]: I0126 12:45:23.374807 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:23 crc kubenswrapper[4844]: I0126 12:45:23.374832 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:23 crc kubenswrapper[4844]: I0126 12:45:23.374861 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:23 crc kubenswrapper[4844]: I0126 12:45:23.374886 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:23Z","lastTransitionTime":"2026-01-26T12:45:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:23 crc kubenswrapper[4844]: I0126 12:45:23.384255 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gxnj7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c69496f6-7f67-4cca-9c9f-420e5567b165\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxt6m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxt6m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gxnj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:45:23Z is after 2025-08-24T17:21:41Z" Jan 26 12:45:23 crc kubenswrapper[4844]: I0126 12:45:23.420992 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2c6313f-dd44-4db9-a53a-63c8a42efe6c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b82658e4e6ddbf4e3a16f9d569c5cef6a683d401d79b7fa7f55aad8c70e8254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66ffcb99cad7e5aff2bbe4bdf3c0a2365dd91fb40b2382e05ab1f422c6ea2b26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f49e7dd36dc88342bc2e3bc43fc5770fb43572151945cba3867f9ffd7fa451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b51b34a870f5c4045729aa5ee5d5fb85ebd58
fd55a25c5b6effe0ea75ea8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9118772906fa09f9868a34942401b399a842e6718caa3d82b78b2974e1011cca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa0e17b53055bd229c52c53b69e491e0789569e73004101378a34609888bbbc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa0e17b53055bd229c52c53b69e491e0789569e73004101378a34609888bbbc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a99b81e3524dfff792c3e9a274f2460edfe7c29fb27c556fa1c7e427178ccd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a99b81e3524dfff792c3e9a274f2460edfe7c29fb27c556fa1c7e427178ccd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1cd347949870be709220badbf116b4ed8ff9db3962a03361e42f5ff1c61eb5ea\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cd347949870be709220badbf116b4ed8ff9db3962a03361e42f5ff1c61eb5ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:45:23Z is after 2025-08-24T17:21:41Z" Jan 26 12:45:23 crc kubenswrapper[4844]: I0126 12:45:23.440976 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ea4f5b3-1307-4259-8ee3-1de62370d8ef\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://943771df5941e9c5a48c2d8ee946cc9ac1ee7b00855dfe531fbce9f76450dd36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c79791a1a5ceea7564b6466722f4fb48a6729184724efc0ca2498896def357a\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e62359ee71a87d75c73a16de09de5ea48d09e4cf2041f434f38f97002a5f8eee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b1fb5af67b83d9ecc627cbfcc14d011b4b11a5c4652a35c1bd1df6f03ab71f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:45:23Z is after 2025-08-24T17:21:41Z" Jan 26 12:45:23 crc kubenswrapper[4844]: I0126 12:45:23.459359 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"012dd78d-465b-41aa-b845-5cd178650e56\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0a80313396de8bb91760bdf2477da9d233e2387d1ac6addcce62acc4578772c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11638189cf49baf0798a3c7a229b67e05eedf2292d79f884a990a091f21a61c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb28b05d43134d8c4f89d83cd620973c937fd16347910ebf056026f0a3708a92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a49b5118144d900070aa5f3fccea16c73f8e2451f3636fdab388c957d4822e5\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a49b5118144d900070aa5f3fccea16c73f8e2451f3636fdab388c957d4822e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:45:23Z is after 2025-08-24T17:21:41Z" Jan 26 12:45:23 crc kubenswrapper[4844]: I0126 12:45:23.478475 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:23 crc kubenswrapper[4844]: I0126 12:45:23.478574 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:23 crc kubenswrapper[4844]: I0126 12:45:23.478661 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:23 crc kubenswrapper[4844]: I0126 12:45:23.478687 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:23 crc kubenswrapper[4844]: I0126 12:45:23.478705 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:23Z","lastTransitionTime":"2026-01-26T12:45:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:45:23 crc kubenswrapper[4844]: I0126 12:45:23.478991 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:45:23Z is after 2025-08-24T17:21:41Z" Jan 26 12:45:23 crc kubenswrapper[4844]: I0126 12:45:23.498343 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://708e5cab2c0d22cab1f1e5c02d0f8afbdc0fcdccc9a56b663f365bfa4c7f8709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f61077567553b104f052a133a6227c3a887430e971721e7998259e7e7ea7edf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:45:23Z is after 2025-08-24T17:21:41Z" Jan 26 12:45:23 crc kubenswrapper[4844]: I0126 12:45:23.518710 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:45:23Z is after 2025-08-24T17:21:41Z" Jan 26 12:45:23 crc kubenswrapper[4844]: I0126 12:45:23.539841 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aecfc1fc-7b8c-42f4-9a7b-058a0acc9534\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64c5d1e4b1fda825b451c8acde0411694fbf7345723d6c3927905a882a79d62e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e33100e90b0d5a23baf7c27076a19dbca18d3e067be6b5a867e122fac37c1136\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64ae899d3a38090964f3d951fd185537ce4ef75f7512342da01f67767d00417b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9207454bb697abf7b64a37e402b50734ae419605cd0938b514ac4f2a74561ce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b40515c0e8056b01e4d48e259ece9389fbb5db6c0d403f2846f28948ce78b0b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64a2d4054650382dca2f56688356aec5fa475b49958e86dd4597a64f77a82f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64a2d4054650382dca2f56688356aec5fa475b49958e86dd4597a64f77a82f81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:45:23Z is after 2025-08-24T17:21:41Z" Jan 26 12:45:23 crc kubenswrapper[4844]: I0126 12:45:23.558847 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c6cdfcbb27c9a8c515f38e43d4813f25d2d3781a322e424e09ab047259bd0e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:45:23Z is after 2025-08-24T17:21:41Z" Jan 26 12:45:23 crc kubenswrapper[4844]: I0126 12:45:23.577141 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:45:23Z is after 2025-08-24T17:21:41Z" Jan 26 12:45:23 crc kubenswrapper[4844]: I0126 12:45:23.582368 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:23 crc kubenswrapper[4844]: I0126 12:45:23.582427 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:23 crc kubenswrapper[4844]: I0126 12:45:23.582445 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:23 crc kubenswrapper[4844]: I0126 12:45:23.582472 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:23 crc kubenswrapper[4844]: I0126 12:45:23.582492 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:23Z","lastTransitionTime":"2026-01-26T12:45:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:45:23 crc kubenswrapper[4844]: I0126 12:45:23.599238 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zb9kx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"467433a4-64be-4a14-beb2-657370e9865f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9be6a90cf1d7f75bb43391968d164c8726b7626d7dc649cd85f10c4d13424ab9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a01598dd996bd69b470e2e4833ea4231bc77f0305a3a7fec3f70b8e2b8f01cb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T12:44:57Z\\\",\\\"message\\\":\\\"2026-01-26T12:44:11+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_3f62d99b-10f3-4489-9ddd-fa2f775e6b8e\\\\n2026-01-26T12:44:11+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_3f62d99b-10f3-4489-9ddd-fa2f775e6b8e to /host/opt/cni/bin/\\\\n2026-01-26T12:44:11Z [verbose] multus-daemon started\\\\n2026-01-26T12:44:11Z [verbose] Readiness Indicator file check\\\\n2026-01-26T12:44:56Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v76sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zb9kx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:45:23Z is after 2025-08-24T17:21:41Z" Jan 26 12:45:23 crc kubenswrapper[4844]: I0126 12:45:23.619950 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3602fc7-397b-4d73-ab0c-45acc047397b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25282e97c34daf17d2fdac6ba7c074c611ec8df85288f5753bf98a3c1add5afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29xcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c964e0f9d13a855d738b028f8a2bed32fb23f4a05f0f0222a7d24e3222f44b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29xcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j7r9j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:45:23Z is after 2025-08-24T17:21:41Z" Jan 26 12:45:23 crc kubenswrapper[4844]: I0126 12:45:23.644486 4844 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f6ttt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0ad2def-b040-48db-be8a-19f66df2c0f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a065fe1dc7d374bbe86c5012d0f224285e08e6b38a8eeb9fcdc76d684162934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://489205a3983f27152122db352c7a35ccdb12f14bb7b4c2bf3642f5c99699d745\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://489205a3983f27152122db352c7a35ccdb12f14bb7b4c2bf3642f5c99699d745\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc3c4c5df7dc10f646a7fdcf0d70daccac7a5bfc3ec9772a4b6d9aa4305dcc5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc3c4c5df7dc10f646a7fdcf0d70daccac7a5bfc3ec9772a4b6d9aa4305dcc5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74838e6328e419da6cc94d4f2781197e94cbf74aaf06d88c6d0e39eda967fede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74838e6328e419da6cc94d4f2781197e94cbf74aaf06d88c6d0e39eda967fede\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4821615afe8834b864a0c33ad5733eef0b5247da50e1e945cab1c75f9828aa1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4821615afe8834b864a0c33ad5733eef0b5247da50e1e945cab1c75f9828aa1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c800e6a3e1513d6bc5ab5a54c8d918f2c81b5a4d2631fc36f5f7a29fcda3d220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c800e6a3e1513d6bc5ab5a54c8d918f2c81b5a4d2631fc36f5f7a29fcda3d220\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://810435fd1610f5f81dc70f7c0d455c11b414c37e9e392dec8eb3b4668c7820d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://810435fd1610f5f81dc70f7c0d455c11b414c37e9e392dec8eb3b4668c7820d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qn7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f6ttt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:45:23Z is after 2025-08-24T17:21:41Z" Jan 26 12:45:23 crc kubenswrapper[4844]: I0126 12:45:23.676879 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"348a2956-fe61-43b9-858f-ab9c97a2985b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7af767e8f476993661bb886e421e7dd2fbd11c07865d6e56a8595425fe4265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a97ef8aa10641683ab96943f4c1a54452ee34b28c854dd7902fc075a70e7f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de371086349f06d5762291b0be4ba366d69376f96d55c2c781c414d9acaff745\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2cfd5586a3073d55abd3f41fb92679c4f86ea8caeea2d9a8ca42515277750a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a16b2c8da5ad6d42bd5ce77ca31794858e04a05060c5bb0538dd099b1d5dd6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03f4ffe2ba632bdfb8c45133ef267a29fa660e3aa85dfd99934ded60d32772f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://564c6d757ba4ae6a21621d0fcfa3e8b5b5ef8cafd31af1ddc27d1826da04e046\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://564c6d757ba4ae6a21621d0fcfa3e8b5b5ef8cafd31af1ddc27d1826da04e046\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T12:45:14Z\\\",\\\"message\\\":\\\"icePort{Name:metrics,Protocol:TCP,Port:9001,TargetPort:{0 9001 },NodePort:0,AppProtocol:nil,},ServicePort{Name:health,Protocol:TCP,Port:8798,TargetPort:{0 8798 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: machine-config-daemon,},ClusterIP:10.217.4.43,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.43],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0126 12:45:13.850442 6798 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-node-rlvx4 in node crc\\\\nF0126 12:45:13.850442 6798 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: fai\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T12:45:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-rlvx4_openshift-ovn-kubernetes(348a2956-fe61-43b9-858f-ab9c97a2985b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbe92fbc9ce1d23484a9dca7eddb878b59da1fbe669098a82a798d41ee52e61d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://370a02984d12e09b879a5f3af8d9fe26b40d438d558e51c2d352676f29198a02\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://370a02984d12e09b879a5f3af8d9fe26b40d438d558e51c2d352676f29198a02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:44:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:44:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvtf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rlvx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:45:23Z is after 2025-08-24T17:21:41Z" Jan 26 12:45:23 crc kubenswrapper[4844]: I0126 12:45:23.684783 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:23 crc kubenswrapper[4844]: I0126 12:45:23.684839 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:23 crc kubenswrapper[4844]: I0126 12:45:23.684859 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:23 crc kubenswrapper[4844]: I0126 12:45:23.684885 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:23 crc kubenswrapper[4844]: I0126 12:45:23.684902 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:23Z","lastTransitionTime":"2026-01-26T12:45:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:45:23 crc kubenswrapper[4844]: I0126 12:45:23.693684 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7wd9k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"046bb01b-89ef-40e9-bbbd-83b5f2d2cf96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e97a7a3de02627680f94659705ad21fb73a76a1c11f14ebff8ec48ef204ea753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h4zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:44:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7wd9k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:45:23Z is after 2025-08-24T17:21:41Z" Jan 26 12:45:23 crc kubenswrapper[4844]: I0126 12:45:23.711131 4844 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6466f12c-90ec-431e-aa05-adb2a00d96c0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T12:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b15b21f6c49117b7ab33013296dbf71ea8dd0556818a8a4da0a48fcdcbf9094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79c1bc1eecc04502fc5b42134d0ce860de5998e1ea84234bc1720b18c9507786\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79c1bc1eecc04502fc5b42134d0ce860de5998e1ea84234bc1720b18c9507786\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T12:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T12:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T12:43:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T12:45:23Z is after 2025-08-24T17:21:41Z" Jan 26 12:45:23 crc kubenswrapper[4844]: I0126 12:45:23.787672 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:23 crc kubenswrapper[4844]: I0126 12:45:23.787738 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 12:45:23 crc kubenswrapper[4844]: I0126 12:45:23.787757 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:23 crc kubenswrapper[4844]: I0126 12:45:23.787782 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:23 crc kubenswrapper[4844]: I0126 12:45:23.787803 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:23Z","lastTransitionTime":"2026-01-26T12:45:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:23 crc kubenswrapper[4844]: I0126 12:45:23.891588 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:23 crc kubenswrapper[4844]: I0126 12:45:23.891645 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:23 crc kubenswrapper[4844]: I0126 12:45:23.891657 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:23 crc kubenswrapper[4844]: I0126 12:45:23.891673 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:23 crc kubenswrapper[4844]: I0126 12:45:23.891686 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:23Z","lastTransitionTime":"2026-01-26T12:45:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:23 crc kubenswrapper[4844]: I0126 12:45:23.994734 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:23 crc kubenswrapper[4844]: I0126 12:45:23.995128 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:23 crc kubenswrapper[4844]: I0126 12:45:23.995280 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:23 crc kubenswrapper[4844]: I0126 12:45:23.995425 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:23 crc kubenswrapper[4844]: I0126 12:45:23.995567 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:23Z","lastTransitionTime":"2026-01-26T12:45:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:45:24 crc kubenswrapper[4844]: I0126 12:45:24.100308 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:24 crc kubenswrapper[4844]: I0126 12:45:24.100361 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:24 crc kubenswrapper[4844]: I0126 12:45:24.100380 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:24 crc kubenswrapper[4844]: I0126 12:45:24.100407 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:24 crc kubenswrapper[4844]: I0126 12:45:24.100429 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:24Z","lastTransitionTime":"2026-01-26T12:45:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:24 crc kubenswrapper[4844]: I0126 12:45:24.203551 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:24 crc kubenswrapper[4844]: I0126 12:45:24.204006 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:24 crc kubenswrapper[4844]: I0126 12:45:24.204154 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:24 crc kubenswrapper[4844]: I0126 12:45:24.204304 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:24 crc kubenswrapper[4844]: I0126 12:45:24.204427 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:24Z","lastTransitionTime":"2026-01-26T12:45:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:24 crc kubenswrapper[4844]: I0126 12:45:24.307857 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:24 crc kubenswrapper[4844]: I0126 12:45:24.307917 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:24 crc kubenswrapper[4844]: I0126 12:45:24.307936 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:24 crc kubenswrapper[4844]: I0126 12:45:24.307959 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:24 crc kubenswrapper[4844]: I0126 12:45:24.307974 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:24Z","lastTransitionTime":"2026-01-26T12:45:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:45:24 crc kubenswrapper[4844]: I0126 12:45:24.312811 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gxnj7" Jan 26 12:45:24 crc kubenswrapper[4844]: E0126 12:45:24.313186 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gxnj7" podUID="c69496f6-7f67-4cca-9c9f-420e5567b165" Jan 26 12:45:24 crc kubenswrapper[4844]: I0126 12:45:24.330309 4844 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 23:12:00.825634793 +0000 UTC Jan 26 12:45:24 crc kubenswrapper[4844]: I0126 12:45:24.411087 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:24 crc kubenswrapper[4844]: I0126 12:45:24.411157 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:24 crc kubenswrapper[4844]: I0126 12:45:24.411173 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:24 crc kubenswrapper[4844]: I0126 12:45:24.411189 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:24 crc kubenswrapper[4844]: I0126 12:45:24.411201 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:24Z","lastTransitionTime":"2026-01-26T12:45:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:24 crc kubenswrapper[4844]: I0126 12:45:24.514579 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:24 crc kubenswrapper[4844]: I0126 12:45:24.514696 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:24 crc kubenswrapper[4844]: I0126 12:45:24.514759 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:24 crc kubenswrapper[4844]: I0126 12:45:24.514791 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:24 crc kubenswrapper[4844]: I0126 12:45:24.514849 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:24Z","lastTransitionTime":"2026-01-26T12:45:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:45:24 crc kubenswrapper[4844]: I0126 12:45:24.618031 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:24 crc kubenswrapper[4844]: I0126 12:45:24.618096 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:24 crc kubenswrapper[4844]: I0126 12:45:24.618115 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:24 crc kubenswrapper[4844]: I0126 12:45:24.618155 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:24 crc kubenswrapper[4844]: I0126 12:45:24.618195 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:24Z","lastTransitionTime":"2026-01-26T12:45:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:24 crc kubenswrapper[4844]: I0126 12:45:24.721616 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:24 crc kubenswrapper[4844]: I0126 12:45:24.721682 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:24 crc kubenswrapper[4844]: I0126 12:45:24.721706 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:24 crc kubenswrapper[4844]: I0126 12:45:24.721736 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:24 crc kubenswrapper[4844]: I0126 12:45:24.721757 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:24Z","lastTransitionTime":"2026-01-26T12:45:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:24 crc kubenswrapper[4844]: I0126 12:45:24.825274 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:24 crc kubenswrapper[4844]: I0126 12:45:24.825370 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:24 crc kubenswrapper[4844]: I0126 12:45:24.825393 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:24 crc kubenswrapper[4844]: I0126 12:45:24.825428 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:24 crc kubenswrapper[4844]: I0126 12:45:24.825450 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:24Z","lastTransitionTime":"2026-01-26T12:45:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:45:24 crc kubenswrapper[4844]: I0126 12:45:24.928233 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:24 crc kubenswrapper[4844]: I0126 12:45:24.928277 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:24 crc kubenswrapper[4844]: I0126 12:45:24.928289 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:24 crc kubenswrapper[4844]: I0126 12:45:24.928305 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:24 crc kubenswrapper[4844]: I0126 12:45:24.928317 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:24Z","lastTransitionTime":"2026-01-26T12:45:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:25 crc kubenswrapper[4844]: I0126 12:45:25.031852 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:25 crc kubenswrapper[4844]: I0126 12:45:25.031920 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:25 crc kubenswrapper[4844]: I0126 12:45:25.031936 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:25 crc kubenswrapper[4844]: I0126 12:45:25.031959 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:25 crc kubenswrapper[4844]: I0126 12:45:25.031980 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:25Z","lastTransitionTime":"2026-01-26T12:45:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:25 crc kubenswrapper[4844]: I0126 12:45:25.135771 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:25 crc kubenswrapper[4844]: I0126 12:45:25.135838 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:25 crc kubenswrapper[4844]: I0126 12:45:25.135860 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:25 crc kubenswrapper[4844]: I0126 12:45:25.135884 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:25 crc kubenswrapper[4844]: I0126 12:45:25.135904 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:25Z","lastTransitionTime":"2026-01-26T12:45:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:45:25 crc kubenswrapper[4844]: I0126 12:45:25.239222 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:25 crc kubenswrapper[4844]: I0126 12:45:25.239275 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:25 crc kubenswrapper[4844]: I0126 12:45:25.239303 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:25 crc kubenswrapper[4844]: I0126 12:45:25.239327 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:25 crc kubenswrapper[4844]: I0126 12:45:25.239341 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:25Z","lastTransitionTime":"2026-01-26T12:45:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:25 crc kubenswrapper[4844]: I0126 12:45:25.313387 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:45:25 crc kubenswrapper[4844]: I0126 12:45:25.313562 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:45:25 crc kubenswrapper[4844]: I0126 12:45:25.313653 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:45:25 crc kubenswrapper[4844]: E0126 12:45:25.313931 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 12:45:25 crc kubenswrapper[4844]: E0126 12:45:25.314082 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 12:45:25 crc kubenswrapper[4844]: E0126 12:45:25.314140 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 12:45:25 crc kubenswrapper[4844]: I0126 12:45:25.331324 4844 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 17:07:25.420540438 +0000 UTC Jan 26 12:45:25 crc kubenswrapper[4844]: I0126 12:45:25.341999 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:25 crc kubenswrapper[4844]: I0126 12:45:25.342062 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:25 crc kubenswrapper[4844]: I0126 12:45:25.342076 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:25 crc kubenswrapper[4844]: I0126 12:45:25.342097 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:25 crc kubenswrapper[4844]: I0126 12:45:25.342112 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:25Z","lastTransitionTime":"2026-01-26T12:45:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:25 crc kubenswrapper[4844]: I0126 12:45:25.445382 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:25 crc kubenswrapper[4844]: I0126 12:45:25.445442 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:25 crc kubenswrapper[4844]: I0126 12:45:25.445454 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:25 crc kubenswrapper[4844]: I0126 12:45:25.445474 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:25 crc kubenswrapper[4844]: I0126 12:45:25.445489 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:25Z","lastTransitionTime":"2026-01-26T12:45:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:45:25 crc kubenswrapper[4844]: I0126 12:45:25.549250 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:25 crc kubenswrapper[4844]: I0126 12:45:25.549330 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:25 crc kubenswrapper[4844]: I0126 12:45:25.549354 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:25 crc kubenswrapper[4844]: I0126 12:45:25.549385 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:25 crc kubenswrapper[4844]: I0126 12:45:25.549408 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:25Z","lastTransitionTime":"2026-01-26T12:45:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:25 crc kubenswrapper[4844]: I0126 12:45:25.652813 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:25 crc kubenswrapper[4844]: I0126 12:45:25.652946 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:25 crc kubenswrapper[4844]: I0126 12:45:25.652963 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:25 crc kubenswrapper[4844]: I0126 12:45:25.652983 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:25 crc kubenswrapper[4844]: I0126 12:45:25.652997 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:25Z","lastTransitionTime":"2026-01-26T12:45:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:25 crc kubenswrapper[4844]: I0126 12:45:25.756320 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:25 crc kubenswrapper[4844]: I0126 12:45:25.756420 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:25 crc kubenswrapper[4844]: I0126 12:45:25.756433 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:25 crc kubenswrapper[4844]: I0126 12:45:25.756450 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:25 crc kubenswrapper[4844]: I0126 12:45:25.756461 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:25Z","lastTransitionTime":"2026-01-26T12:45:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:45:25 crc kubenswrapper[4844]: I0126 12:45:25.858948 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:25 crc kubenswrapper[4844]: I0126 12:45:25.859012 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:25 crc kubenswrapper[4844]: I0126 12:45:25.859035 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:25 crc kubenswrapper[4844]: I0126 12:45:25.859074 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:25 crc kubenswrapper[4844]: I0126 12:45:25.859096 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:25Z","lastTransitionTime":"2026-01-26T12:45:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:25 crc kubenswrapper[4844]: I0126 12:45:25.962869 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:25 crc kubenswrapper[4844]: I0126 12:45:25.962955 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:25 crc kubenswrapper[4844]: I0126 12:45:25.962976 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:25 crc kubenswrapper[4844]: I0126 12:45:25.963006 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:25 crc kubenswrapper[4844]: I0126 12:45:25.963029 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:25Z","lastTransitionTime":"2026-01-26T12:45:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:26 crc kubenswrapper[4844]: I0126 12:45:26.066419 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:26 crc kubenswrapper[4844]: I0126 12:45:26.066481 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:26 crc kubenswrapper[4844]: I0126 12:45:26.066498 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:26 crc kubenswrapper[4844]: I0126 12:45:26.066520 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:26 crc kubenswrapper[4844]: I0126 12:45:26.066537 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:26Z","lastTransitionTime":"2026-01-26T12:45:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:45:26 crc kubenswrapper[4844]: I0126 12:45:26.169817 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:26 crc kubenswrapper[4844]: I0126 12:45:26.169857 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:26 crc kubenswrapper[4844]: I0126 12:45:26.169865 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:26 crc kubenswrapper[4844]: I0126 12:45:26.169882 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:26 crc kubenswrapper[4844]: I0126 12:45:26.169894 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:26Z","lastTransitionTime":"2026-01-26T12:45:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:26 crc kubenswrapper[4844]: I0126 12:45:26.272587 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:26 crc kubenswrapper[4844]: I0126 12:45:26.272665 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:26 crc kubenswrapper[4844]: I0126 12:45:26.272676 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:26 crc kubenswrapper[4844]: I0126 12:45:26.272721 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:26 crc kubenswrapper[4844]: I0126 12:45:26.272740 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:26Z","lastTransitionTime":"2026-01-26T12:45:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:26 crc kubenswrapper[4844]: I0126 12:45:26.312760 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gxnj7" Jan 26 12:45:26 crc kubenswrapper[4844]: E0126 12:45:26.312882 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gxnj7" podUID="c69496f6-7f67-4cca-9c9f-420e5567b165" Jan 26 12:45:26 crc kubenswrapper[4844]: I0126 12:45:26.315273 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c69496f6-7f67-4cca-9c9f-420e5567b165-metrics-certs\") pod \"network-metrics-daemon-gxnj7\" (UID: \"c69496f6-7f67-4cca-9c9f-420e5567b165\") " pod="openshift-multus/network-metrics-daemon-gxnj7" Jan 26 12:45:26 crc kubenswrapper[4844]: E0126 12:45:26.315450 4844 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 12:45:26 crc kubenswrapper[4844]: E0126 12:45:26.315511 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c69496f6-7f67-4cca-9c9f-420e5567b165-metrics-certs podName:c69496f6-7f67-4cca-9c9f-420e5567b165 nodeName:}" failed. No retries permitted until 2026-01-26 12:46:30.315496227 +0000 UTC m=+167.248863829 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c69496f6-7f67-4cca-9c9f-420e5567b165-metrics-certs") pod "network-metrics-daemon-gxnj7" (UID: "c69496f6-7f67-4cca-9c9f-420e5567b165") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 12:45:26 crc kubenswrapper[4844]: I0126 12:45:26.332301 4844 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 08:47:19.33294797 +0000 UTC Jan 26 12:45:26 crc kubenswrapper[4844]: I0126 12:45:26.375251 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:26 crc kubenswrapper[4844]: I0126 12:45:26.375297 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:26 crc kubenswrapper[4844]: I0126 12:45:26.375308 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:26 crc kubenswrapper[4844]: I0126 12:45:26.375324 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:26 crc kubenswrapper[4844]: I0126 12:45:26.375342 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:26Z","lastTransitionTime":"2026-01-26T12:45:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:45:26 crc kubenswrapper[4844]: I0126 12:45:26.478948 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:26 crc kubenswrapper[4844]: I0126 12:45:26.479038 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:26 crc kubenswrapper[4844]: I0126 12:45:26.479058 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:26 crc kubenswrapper[4844]: I0126 12:45:26.479084 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:26 crc kubenswrapper[4844]: I0126 12:45:26.479103 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:26Z","lastTransitionTime":"2026-01-26T12:45:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:26 crc kubenswrapper[4844]: I0126 12:45:26.582677 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:26 crc kubenswrapper[4844]: I0126 12:45:26.582768 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:26 crc kubenswrapper[4844]: I0126 12:45:26.582786 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:26 crc kubenswrapper[4844]: I0126 12:45:26.582819 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:26 crc kubenswrapper[4844]: I0126 12:45:26.582840 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:26Z","lastTransitionTime":"2026-01-26T12:45:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:26 crc kubenswrapper[4844]: I0126 12:45:26.687333 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:26 crc kubenswrapper[4844]: I0126 12:45:26.687433 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:26 crc kubenswrapper[4844]: I0126 12:45:26.687490 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:26 crc kubenswrapper[4844]: I0126 12:45:26.687535 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:26 crc kubenswrapper[4844]: I0126 12:45:26.687563 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:26Z","lastTransitionTime":"2026-01-26T12:45:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:45:26 crc kubenswrapper[4844]: I0126 12:45:26.791072 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:26 crc kubenswrapper[4844]: I0126 12:45:26.791318 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:26 crc kubenswrapper[4844]: I0126 12:45:26.791336 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:26 crc kubenswrapper[4844]: I0126 12:45:26.791361 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:26 crc kubenswrapper[4844]: I0126 12:45:26.791379 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:26Z","lastTransitionTime":"2026-01-26T12:45:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:26 crc kubenswrapper[4844]: I0126 12:45:26.895468 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:26 crc kubenswrapper[4844]: I0126 12:45:26.895513 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:26 crc kubenswrapper[4844]: I0126 12:45:26.895526 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:26 crc kubenswrapper[4844]: I0126 12:45:26.895545 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:26 crc kubenswrapper[4844]: I0126 12:45:26.895558 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:26Z","lastTransitionTime":"2026-01-26T12:45:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:26 crc kubenswrapper[4844]: I0126 12:45:26.999700 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:26 crc kubenswrapper[4844]: I0126 12:45:26.999782 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:26 crc kubenswrapper[4844]: I0126 12:45:26.999800 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:26 crc kubenswrapper[4844]: I0126 12:45:26.999826 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:27 crc kubenswrapper[4844]: I0126 12:45:26.999844 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:26Z","lastTransitionTime":"2026-01-26T12:45:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 12:45:27 crc kubenswrapper[4844]: I0126 12:45:27.038907 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 12:45:27 crc kubenswrapper[4844]: I0126 12:45:27.038969 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 12:45:27 crc kubenswrapper[4844]: I0126 12:45:27.038987 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 12:45:27 crc kubenswrapper[4844]: I0126 12:45:27.039013 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 12:45:27 crc kubenswrapper[4844]: I0126 12:45:27.039032 4844 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T12:45:27Z","lastTransitionTime":"2026-01-26T12:45:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 12:45:27 crc kubenswrapper[4844]: I0126 12:45:27.134147 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-xhbd2"] Jan 26 12:45:27 crc kubenswrapper[4844]: I0126 12:45:27.134575 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xhbd2" Jan 26 12:45:27 crc kubenswrapper[4844]: I0126 12:45:27.142997 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 26 12:45:27 crc kubenswrapper[4844]: I0126 12:45:27.143056 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 26 12:45:27 crc kubenswrapper[4844]: I0126 12:45:27.143264 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 26 12:45:27 crc kubenswrapper[4844]: I0126 12:45:27.147359 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 26 12:45:27 crc kubenswrapper[4844]: I0126 12:45:27.187933 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=86.187902798 podStartE2EDuration="1m26.187902798s" podCreationTimestamp="2026-01-26 12:44:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:45:27.186231456 +0000 UTC m=+104.119599098" watchObservedRunningTime="2026-01-26 12:45:27.187902798 +0000 UTC m=+104.121270440" Jan 26 12:45:27 crc kubenswrapper[4844]: I0126 12:45:27.225587 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/ecb63734-1b43-45c0-a744-1207fb4d86f9-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-xhbd2\" (UID: \"ecb63734-1b43-45c0-a744-1207fb4d86f9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xhbd2" Jan 26 12:45:27 crc kubenswrapper[4844]: I0126 12:45:27.225951 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ecb63734-1b43-45c0-a744-1207fb4d86f9-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-xhbd2\" (UID: \"ecb63734-1b43-45c0-a744-1207fb4d86f9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xhbd2" Jan 26 12:45:27 crc kubenswrapper[4844]: I0126 12:45:27.226147 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ecb63734-1b43-45c0-a744-1207fb4d86f9-service-ca\") pod \"cluster-version-operator-5c965bbfc6-xhbd2\" (UID: \"ecb63734-1b43-45c0-a744-1207fb4d86f9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xhbd2" Jan 26 12:45:27 crc kubenswrapper[4844]: I0126 12:45:27.226246 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/ecb63734-1b43-45c0-a744-1207fb4d86f9-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-xhbd2\" (UID: \"ecb63734-1b43-45c0-a744-1207fb4d86f9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xhbd2" Jan 26 12:45:27 crc kubenswrapper[4844]: I0126 12:45:27.226283 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ecb63734-1b43-45c0-a744-1207fb4d86f9-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-xhbd2\" (UID: \"ecb63734-1b43-45c0-a744-1207fb4d86f9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xhbd2" Jan 26 12:45:27 crc kubenswrapper[4844]: I0126 12:45:27.235465 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-94bpf" podStartSLOduration=80.235438915 podStartE2EDuration="1m20.235438915s" podCreationTimestamp="2026-01-26 12:44:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:45:27.235372724 +0000 UTC m=+104.168740386" watchObservedRunningTime="2026-01-26 12:45:27.235438915 +0000 UTC m=+104.168806537" Jan 26 12:45:27 crc kubenswrapper[4844]: I0126 12:45:27.255786 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5qpr8" podStartSLOduration=80.255753933 podStartE2EDuration="1m20.255753933s" podCreationTimestamp="2026-01-26 12:44:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:45:27.254529831 +0000 UTC m=+104.187897463" watchObservedRunningTime="2026-01-26 12:45:27.255753933 +0000 UTC m=+104.189121575" Jan 26 12:45:27 crc kubenswrapper[4844]: I0126 12:45:27.298227 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=86.298177155 podStartE2EDuration="1m26.298177155s" podCreationTimestamp="2026-01-26 12:44:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:45:27.298131324 +0000 UTC m=+104.231498966" watchObservedRunningTime="2026-01-26 12:45:27.298177155 +0000 UTC m=+104.231544847" Jan 26 12:45:27 crc kubenswrapper[4844]: I0126 12:45:27.312147 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:45:27 crc kubenswrapper[4844]: E0126 12:45:27.312272 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 12:45:27 crc kubenswrapper[4844]: I0126 12:45:27.312304 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:45:27 crc kubenswrapper[4844]: I0126 12:45:27.312334 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:45:27 crc kubenswrapper[4844]: E0126 12:45:27.312384 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 12:45:27 crc kubenswrapper[4844]: E0126 12:45:27.312428 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 12:45:27 crc kubenswrapper[4844]: I0126 12:45:27.312994 4844 scope.go:117] "RemoveContainer" containerID="564c6d757ba4ae6a21621d0fcfa3e8b5b5ef8cafd31af1ddc27d1826da04e046" Jan 26 12:45:27 crc kubenswrapper[4844]: E0126 12:45:27.313125 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-rlvx4_openshift-ovn-kubernetes(348a2956-fe61-43b9-858f-ab9c97a2985b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" podUID="348a2956-fe61-43b9-858f-ab9c97a2985b" Jan 26 12:45:27 crc kubenswrapper[4844]: I0126 12:45:27.319875 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=79.31984564 podStartE2EDuration="1m19.31984564s" podCreationTimestamp="2026-01-26 12:44:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:45:27.319301989 +0000 UTC m=+104.252669641" watchObservedRunningTime="2026-01-26 12:45:27.31984564 +0000 UTC m=+104.253213262" Jan 26 12:45:27 crc kubenswrapper[4844]: I0126 12:45:27.327487 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ecb63734-1b43-45c0-a744-1207fb4d86f9-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-xhbd2\" (UID: \"ecb63734-1b43-45c0-a744-1207fb4d86f9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xhbd2" Jan 26 12:45:27 crc kubenswrapper[4844]: I0126 12:45:27.327532 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ecb63734-1b43-45c0-a744-1207fb4d86f9-service-ca\") pod \"cluster-version-operator-5c965bbfc6-xhbd2\" (UID: \"ecb63734-1b43-45c0-a744-1207fb4d86f9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xhbd2" Jan 26 12:45:27 crc kubenswrapper[4844]: I0126 12:45:27.327558 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/ecb63734-1b43-45c0-a744-1207fb4d86f9-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-xhbd2\" (UID: \"ecb63734-1b43-45c0-a744-1207fb4d86f9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xhbd2" Jan 26 12:45:27 crc kubenswrapper[4844]: I0126 12:45:27.327576 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ecb63734-1b43-45c0-a744-1207fb4d86f9-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-xhbd2\" (UID: \"ecb63734-1b43-45c0-a744-1207fb4d86f9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xhbd2" Jan 26 12:45:27 crc kubenswrapper[4844]: I0126 12:45:27.327620 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/ecb63734-1b43-45c0-a744-1207fb4d86f9-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-xhbd2\" (UID: \"ecb63734-1b43-45c0-a744-1207fb4d86f9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xhbd2" Jan 26 12:45:27 crc kubenswrapper[4844]: I0126 12:45:27.327693 4844 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/ecb63734-1b43-45c0-a744-1207fb4d86f9-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-xhbd2\" (UID: \"ecb63734-1b43-45c0-a744-1207fb4d86f9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xhbd2" Jan 26 12:45:27 crc kubenswrapper[4844]: I0126 12:45:27.328656 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ecb63734-1b43-45c0-a744-1207fb4d86f9-service-ca\") pod \"cluster-version-operator-5c965bbfc6-xhbd2\" (UID: \"ecb63734-1b43-45c0-a744-1207fb4d86f9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xhbd2" Jan 26 12:45:27 crc kubenswrapper[4844]: I0126 12:45:27.328720 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/ecb63734-1b43-45c0-a744-1207fb4d86f9-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-xhbd2\" (UID: \"ecb63734-1b43-45c0-a744-1207fb4d86f9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xhbd2" Jan 26 12:45:27 crc kubenswrapper[4844]: I0126 12:45:27.332617 4844 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 07:52:06.116419002 +0000 UTC Jan 26 12:45:27 crc kubenswrapper[4844]: I0126 12:45:27.332706 4844 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 26 12:45:27 crc kubenswrapper[4844]: I0126 12:45:27.335554 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ecb63734-1b43-45c0-a744-1207fb4d86f9-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-xhbd2\" (UID: \"ecb63734-1b43-45c0-a744-1207fb4d86f9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xhbd2" Jan 26 12:45:27 crc kubenswrapper[4844]: I0126 12:45:27.351730 4844 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 26 12:45:27 crc kubenswrapper[4844]: I0126 12:45:27.358307 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ecb63734-1b43-45c0-a744-1207fb4d86f9-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-xhbd2\" (UID: \"ecb63734-1b43-45c0-a744-1207fb4d86f9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xhbd2" Jan 26 12:45:27 crc kubenswrapper[4844]: I0126 12:45:27.384499 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=56.384477276 podStartE2EDuration="56.384477276s" podCreationTimestamp="2026-01-26 12:44:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:45:27.341074646 +0000 UTC m=+104.274442258" watchObservedRunningTime="2026-01-26 12:45:27.384477276 +0000 UTC m=+104.317844898" Jan 26 12:45:27 crc kubenswrapper[4844]: I0126 12:45:27.464144 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xhbd2" Jan 26 12:45:27 crc kubenswrapper[4844]: I0126 12:45:27.496115 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=12.496093199 podStartE2EDuration="12.496093199s" podCreationTimestamp="2026-01-26 12:45:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:45:27.496047158 +0000 UTC m=+104.429414770" watchObservedRunningTime="2026-01-26 12:45:27.496093199 +0000 UTC m=+104.429460811" Jan 26 12:45:27 crc kubenswrapper[4844]: I0126 12:45:27.512638 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-zb9kx" podStartSLOduration=80.512617677 podStartE2EDuration="1m20.512617677s" podCreationTimestamp="2026-01-26 12:44:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:45:27.512435293 +0000 UTC m=+104.445802905" watchObservedRunningTime="2026-01-26 12:45:27.512617677 +0000 UTC m=+104.445985279" Jan 26 12:45:27 crc kubenswrapper[4844]: I0126 12:45:27.532887 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podStartSLOduration=80.532844715 podStartE2EDuration="1m20.532844715s" podCreationTimestamp="2026-01-26 12:44:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:45:27.530944909 +0000 UTC m=+104.464312521" watchObservedRunningTime="2026-01-26 12:45:27.532844715 +0000 UTC m=+104.466212317" Jan 26 12:45:27 crc kubenswrapper[4844]: I0126 12:45:27.549157 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-f6ttt" podStartSLOduration=80.549140109 podStartE2EDuration="1m20.549140109s" podCreationTimestamp="2026-01-26 12:44:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:45:27.548223882 +0000 UTC m=+104.481591494" watchObservedRunningTime="2026-01-26 12:45:27.549140109 +0000 UTC m=+104.482507721" Jan 26 12:45:27 crc kubenswrapper[4844]: I0126 12:45:27.579127 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-7wd9k" podStartSLOduration=80.579111068 podStartE2EDuration="1m20.579111068s" podCreationTimestamp="2026-01-26 12:44:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:45:27.578981106 +0000 UTC m=+104.512348718" watchObservedRunningTime="2026-01-26 12:45:27.579111068 +0000 UTC m=+104.512478680" Jan 26 12:45:27 crc kubenswrapper[4844]: I0126 12:45:27.855344 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xhbd2" event={"ID":"ecb63734-1b43-45c0-a744-1207fb4d86f9","Type":"ContainerStarted","Data":"6f057582d659e8c6cbe0aa5c54b14be3903b4d5b2d0ac838534611121cd43f8d"} Jan 26 12:45:28 crc kubenswrapper[4844]: I0126 12:45:28.312666 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-gxnj7" Jan 26 12:45:28 crc kubenswrapper[4844]: E0126 12:45:28.312897 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gxnj7" podUID="c69496f6-7f67-4cca-9c9f-420e5567b165" Jan 26 12:45:28 crc kubenswrapper[4844]: I0126 12:45:28.859685 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xhbd2" event={"ID":"ecb63734-1b43-45c0-a744-1207fb4d86f9","Type":"ContainerStarted","Data":"b4c3bbb7c3b89d596ce79fa96d9575aad3fc14ffc9de0b11cb4d34fe7a25d605"} Jan 26 12:45:29 crc kubenswrapper[4844]: I0126 12:45:29.312748 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:45:29 crc kubenswrapper[4844]: I0126 12:45:29.312848 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:45:29 crc kubenswrapper[4844]: E0126 12:45:29.312924 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 12:45:29 crc kubenswrapper[4844]: I0126 12:45:29.313007 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:45:29 crc kubenswrapper[4844]: E0126 12:45:29.313108 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 12:45:29 crc kubenswrapper[4844]: E0126 12:45:29.313155 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 12:45:30 crc kubenswrapper[4844]: I0126 12:45:30.312475 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gxnj7" Jan 26 12:45:30 crc kubenswrapper[4844]: E0126 12:45:30.312639 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gxnj7" podUID="c69496f6-7f67-4cca-9c9f-420e5567b165" Jan 26 12:45:31 crc kubenswrapper[4844]: I0126 12:45:31.312124 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:45:31 crc kubenswrapper[4844]: I0126 12:45:31.312230 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:45:31 crc kubenswrapper[4844]: E0126 12:45:31.312273 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 12:45:31 crc kubenswrapper[4844]: E0126 12:45:31.312465 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 12:45:31 crc kubenswrapper[4844]: I0126 12:45:31.312592 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:45:31 crc kubenswrapper[4844]: E0126 12:45:31.312814 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 12:45:32 crc kubenswrapper[4844]: I0126 12:45:32.312782 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gxnj7" Jan 26 12:45:32 crc kubenswrapper[4844]: E0126 12:45:32.312943 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gxnj7" podUID="c69496f6-7f67-4cca-9c9f-420e5567b165" Jan 26 12:45:33 crc kubenswrapper[4844]: I0126 12:45:33.312532 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:45:33 crc kubenswrapper[4844]: I0126 12:45:33.312669 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:45:33 crc kubenswrapper[4844]: E0126 12:45:33.314431 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 12:45:33 crc kubenswrapper[4844]: I0126 12:45:33.314620 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:45:33 crc kubenswrapper[4844]: E0126 12:45:33.314683 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 12:45:33 crc kubenswrapper[4844]: E0126 12:45:33.314989 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 12:45:34 crc kubenswrapper[4844]: I0126 12:45:34.312117 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gxnj7" Jan 26 12:45:34 crc kubenswrapper[4844]: E0126 12:45:34.312271 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gxnj7" podUID="c69496f6-7f67-4cca-9c9f-420e5567b165" Jan 26 12:45:35 crc kubenswrapper[4844]: I0126 12:45:35.312612 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:45:35 crc kubenswrapper[4844]: I0126 12:45:35.312771 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:45:35 crc kubenswrapper[4844]: E0126 12:45:35.312807 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 12:45:35 crc kubenswrapper[4844]: E0126 12:45:35.312967 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 12:45:35 crc kubenswrapper[4844]: I0126 12:45:35.313092 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:45:35 crc kubenswrapper[4844]: E0126 12:45:35.313190 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 12:45:36 crc kubenswrapper[4844]: I0126 12:45:36.313193 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gxnj7" Jan 26 12:45:36 crc kubenswrapper[4844]: E0126 12:45:36.313355 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gxnj7" podUID="c69496f6-7f67-4cca-9c9f-420e5567b165" Jan 26 12:45:37 crc kubenswrapper[4844]: I0126 12:45:37.312843 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:45:37 crc kubenswrapper[4844]: I0126 12:45:37.312915 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:45:37 crc kubenswrapper[4844]: E0126 12:45:37.313011 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 12:45:37 crc kubenswrapper[4844]: I0126 12:45:37.313063 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:45:37 crc kubenswrapper[4844]: E0126 12:45:37.313209 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 12:45:37 crc kubenswrapper[4844]: E0126 12:45:37.313422 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 12:45:38 crc kubenswrapper[4844]: I0126 12:45:38.313122 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gxnj7" Jan 26 12:45:38 crc kubenswrapper[4844]: E0126 12:45:38.313400 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gxnj7" podUID="c69496f6-7f67-4cca-9c9f-420e5567b165" Jan 26 12:45:39 crc kubenswrapper[4844]: I0126 12:45:39.312416 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:45:39 crc kubenswrapper[4844]: I0126 12:45:39.312571 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:45:39 crc kubenswrapper[4844]: E0126 12:45:39.312754 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 12:45:39 crc kubenswrapper[4844]: I0126 12:45:39.313053 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:45:39 crc kubenswrapper[4844]: E0126 12:45:39.313062 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 12:45:39 crc kubenswrapper[4844]: E0126 12:45:39.313628 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 12:45:40 crc kubenswrapper[4844]: I0126 12:45:40.312711 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-gxnj7" Jan 26 12:45:40 crc kubenswrapper[4844]: E0126 12:45:40.313794 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gxnj7" podUID="c69496f6-7f67-4cca-9c9f-420e5567b165" Jan 26 12:45:41 crc kubenswrapper[4844]: I0126 12:45:41.312682 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:45:41 crc kubenswrapper[4844]: I0126 12:45:41.312757 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:45:41 crc kubenswrapper[4844]: E0126 12:45:41.312892 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 12:45:41 crc kubenswrapper[4844]: E0126 12:45:41.313034 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 12:45:41 crc kubenswrapper[4844]: I0126 12:45:41.313762 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:45:41 crc kubenswrapper[4844]: E0126 12:45:41.314552 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 12:45:42 crc kubenswrapper[4844]: I0126 12:45:42.312815 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gxnj7" Jan 26 12:45:42 crc kubenswrapper[4844]: E0126 12:45:42.312995 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gxnj7" podUID="c69496f6-7f67-4cca-9c9f-420e5567b165" Jan 26 12:45:42 crc kubenswrapper[4844]: I0126 12:45:42.314167 4844 scope.go:117] "RemoveContainer" containerID="564c6d757ba4ae6a21621d0fcfa3e8b5b5ef8cafd31af1ddc27d1826da04e046" Jan 26 12:45:42 crc kubenswrapper[4844]: E0126 12:45:42.314470 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-rlvx4_openshift-ovn-kubernetes(348a2956-fe61-43b9-858f-ab9c97a2985b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" podUID="348a2956-fe61-43b9-858f-ab9c97a2985b" Jan 26 12:45:43 crc kubenswrapper[4844]: E0126 12:45:43.284680 4844 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 26 12:45:43 crc kubenswrapper[4844]: I0126 12:45:43.313019 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:45:43 crc kubenswrapper[4844]: I0126 12:45:43.313053 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:45:43 crc kubenswrapper[4844]: I0126 12:45:43.314270 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:45:43 crc kubenswrapper[4844]: E0126 12:45:43.314245 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 12:45:43 crc kubenswrapper[4844]: E0126 12:45:43.314464 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 12:45:43 crc kubenswrapper[4844]: E0126 12:45:43.314505 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 12:45:43 crc kubenswrapper[4844]: E0126 12:45:43.396739 4844 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 26 12:45:44 crc kubenswrapper[4844]: I0126 12:45:44.312535 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-gxnj7" Jan 26 12:45:44 crc kubenswrapper[4844]: E0126 12:45:44.312882 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gxnj7" podUID="c69496f6-7f67-4cca-9c9f-420e5567b165" Jan 26 12:45:45 crc kubenswrapper[4844]: I0126 12:45:45.312905 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:45:45 crc kubenswrapper[4844]: I0126 12:45:45.313046 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:45:45 crc kubenswrapper[4844]: E0126 12:45:45.313090 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 12:45:45 crc kubenswrapper[4844]: E0126 12:45:45.313230 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 12:45:45 crc kubenswrapper[4844]: I0126 12:45:45.312941 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:45:45 crc kubenswrapper[4844]: E0126 12:45:45.313410 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 12:45:46 crc kubenswrapper[4844]: I0126 12:45:46.313133 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gxnj7" Jan 26 12:45:46 crc kubenswrapper[4844]: E0126 12:45:46.313399 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gxnj7" podUID="c69496f6-7f67-4cca-9c9f-420e5567b165" Jan 26 12:45:46 crc kubenswrapper[4844]: I0126 12:45:46.930114 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-zb9kx_467433a4-64be-4a14-beb2-657370e9865f/kube-multus/1.log" Jan 26 12:45:46 crc kubenswrapper[4844]: I0126 12:45:46.930872 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-zb9kx_467433a4-64be-4a14-beb2-657370e9865f/kube-multus/0.log" Jan 26 12:45:46 crc kubenswrapper[4844]: I0126 12:45:46.930948 4844 generic.go:334] "Generic (PLEG): container finished" podID="467433a4-64be-4a14-beb2-657370e9865f" containerID="9be6a90cf1d7f75bb43391968d164c8726b7626d7dc649cd85f10c4d13424ab9" exitCode=1 Jan 26 12:45:46 crc kubenswrapper[4844]: I0126 12:45:46.930989 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-zb9kx" event={"ID":"467433a4-64be-4a14-beb2-657370e9865f","Type":"ContainerDied","Data":"9be6a90cf1d7f75bb43391968d164c8726b7626d7dc649cd85f10c4d13424ab9"} Jan 26 12:45:46 crc kubenswrapper[4844]: I0126 12:45:46.931036 4844 scope.go:117] "RemoveContainer" containerID="9a01598dd996bd69b470e2e4833ea4231bc77f0305a3a7fec3f70b8e2b8f01cb" Jan 26 12:45:46 crc kubenswrapper[4844]: I0126 12:45:46.931652 4844 scope.go:117] "RemoveContainer" containerID="9be6a90cf1d7f75bb43391968d164c8726b7626d7dc649cd85f10c4d13424ab9" Jan 26 12:45:46 crc kubenswrapper[4844]: E0126 12:45:46.932033 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-zb9kx_openshift-multus(467433a4-64be-4a14-beb2-657370e9865f)\"" pod="openshift-multus/multus-zb9kx" podUID="467433a4-64be-4a14-beb2-657370e9865f" Jan 26 12:45:46 crc kubenswrapper[4844]: I0126 12:45:46.958762 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xhbd2" podStartSLOduration=99.958723598 podStartE2EDuration="1m39.958723598s" podCreationTimestamp="2026-01-26 12:44:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:45:28.879258222 +0000 UTC m=+105.812625834" watchObservedRunningTime="2026-01-26 12:45:46.958723598 +0000 UTC m=+123.892091210" Jan 26 12:45:47 crc kubenswrapper[4844]: I0126 12:45:47.312583 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:45:47 crc kubenswrapper[4844]: I0126 12:45:47.312710 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:45:47 crc kubenswrapper[4844]: E0126 12:45:47.313446 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 12:45:47 crc kubenswrapper[4844]: I0126 12:45:47.312724 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:45:47 crc kubenswrapper[4844]: E0126 12:45:47.313737 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 12:45:47 crc kubenswrapper[4844]: E0126 12:45:47.314021 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 12:45:47 crc kubenswrapper[4844]: I0126 12:45:47.936091 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-zb9kx_467433a4-64be-4a14-beb2-657370e9865f/kube-multus/1.log" Jan 26 12:45:48 crc kubenswrapper[4844]: I0126 12:45:48.313137 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gxnj7" Jan 26 12:45:48 crc kubenswrapper[4844]: E0126 12:45:48.313301 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gxnj7" podUID="c69496f6-7f67-4cca-9c9f-420e5567b165" Jan 26 12:45:48 crc kubenswrapper[4844]: E0126 12:45:48.398184 4844 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 26 12:45:49 crc kubenswrapper[4844]: I0126 12:45:49.313053 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:45:49 crc kubenswrapper[4844]: E0126 12:45:49.313236 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 12:45:49 crc kubenswrapper[4844]: I0126 12:45:49.313295 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:45:49 crc kubenswrapper[4844]: I0126 12:45:49.313498 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:45:49 crc kubenswrapper[4844]: E0126 12:45:49.313495 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 12:45:49 crc kubenswrapper[4844]: E0126 12:45:49.313746 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 12:45:50 crc kubenswrapper[4844]: I0126 12:45:50.312864 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gxnj7" Jan 26 12:45:50 crc kubenswrapper[4844]: E0126 12:45:50.313069 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gxnj7" podUID="c69496f6-7f67-4cca-9c9f-420e5567b165" Jan 26 12:45:51 crc kubenswrapper[4844]: I0126 12:45:51.312354 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:45:51 crc kubenswrapper[4844]: E0126 12:45:51.312539 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 12:45:51 crc kubenswrapper[4844]: I0126 12:45:51.312729 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:45:51 crc kubenswrapper[4844]: I0126 12:45:51.312897 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:45:51 crc kubenswrapper[4844]: E0126 12:45:51.313009 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 12:45:51 crc kubenswrapper[4844]: E0126 12:45:51.313112 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 12:45:52 crc kubenswrapper[4844]: I0126 12:45:52.312743 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gxnj7" Jan 26 12:45:52 crc kubenswrapper[4844]: E0126 12:45:52.312989 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gxnj7" podUID="c69496f6-7f67-4cca-9c9f-420e5567b165" Jan 26 12:45:53 crc kubenswrapper[4844]: I0126 12:45:53.313185 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:45:53 crc kubenswrapper[4844]: I0126 12:45:53.313196 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:45:53 crc kubenswrapper[4844]: I0126 12:45:53.313332 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:45:53 crc kubenswrapper[4844]: E0126 12:45:53.314375 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 12:45:53 crc kubenswrapper[4844]: E0126 12:45:53.314954 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 12:45:53 crc kubenswrapper[4844]: E0126 12:45:53.314843 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 12:45:53 crc kubenswrapper[4844]: E0126 12:45:53.398756 4844 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 26 12:45:54 crc kubenswrapper[4844]: I0126 12:45:54.313154 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gxnj7" Jan 26 12:45:54 crc kubenswrapper[4844]: E0126 12:45:54.313362 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gxnj7" podUID="c69496f6-7f67-4cca-9c9f-420e5567b165" Jan 26 12:45:55 crc kubenswrapper[4844]: I0126 12:45:55.312717 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:45:55 crc kubenswrapper[4844]: I0126 12:45:55.312781 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:45:55 crc kubenswrapper[4844]: I0126 12:45:55.312723 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:45:55 crc kubenswrapper[4844]: E0126 12:45:55.312914 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 12:45:55 crc kubenswrapper[4844]: E0126 12:45:55.313034 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 12:45:55 crc kubenswrapper[4844]: E0126 12:45:55.313185 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 12:45:56 crc kubenswrapper[4844]: I0126 12:45:56.313791 4844 scope.go:117] "RemoveContainer" containerID="564c6d757ba4ae6a21621d0fcfa3e8b5b5ef8cafd31af1ddc27d1826da04e046" Jan 26 12:45:56 crc kubenswrapper[4844]: I0126 12:45:56.313793 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-gxnj7" Jan 26 12:45:56 crc kubenswrapper[4844]: E0126 12:45:56.314050 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gxnj7" podUID="c69496f6-7f67-4cca-9c9f-420e5567b165" Jan 26 12:45:56 crc kubenswrapper[4844]: I0126 12:45:56.967553 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rlvx4_348a2956-fe61-43b9-858f-ab9c97a2985b/ovnkube-controller/3.log" Jan 26 12:45:56 crc kubenswrapper[4844]: I0126 12:45:56.970425 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" event={"ID":"348a2956-fe61-43b9-858f-ab9c97a2985b","Type":"ContainerStarted","Data":"ac85315b0d74ee35d1c9a356648480e02f7d36838cdf0d10e276bb2e15c6205e"} Jan 26 12:45:56 crc kubenswrapper[4844]: I0126 12:45:56.970876 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" Jan 26 12:45:57 crc kubenswrapper[4844]: I0126 12:45:57.110425 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" podStartSLOduration=110.110403241 podStartE2EDuration="1m50.110403241s" podCreationTimestamp="2026-01-26 12:44:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:45:57.003701065 +0000 UTC m=+133.937068677" watchObservedRunningTime="2026-01-26 12:45:57.110403241 +0000 UTC m=+134.043770863" Jan 26 12:45:57 crc kubenswrapper[4844]: I0126 12:45:57.110668 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-gxnj7"] Jan 26 12:45:57 crc kubenswrapper[4844]: I0126 12:45:57.110744 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gxnj7" Jan 26 12:45:57 crc kubenswrapper[4844]: E0126 12:45:57.110842 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gxnj7" podUID="c69496f6-7f67-4cca-9c9f-420e5567b165" Jan 26 12:45:57 crc kubenswrapper[4844]: I0126 12:45:57.313448 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:45:57 crc kubenswrapper[4844]: E0126 12:45:57.313536 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 12:45:57 crc kubenswrapper[4844]: I0126 12:45:57.313682 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:45:57 crc kubenswrapper[4844]: E0126 12:45:57.313724 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 12:45:57 crc kubenswrapper[4844]: I0126 12:45:57.313806 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:45:57 crc kubenswrapper[4844]: E0126 12:45:57.313846 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 12:45:58 crc kubenswrapper[4844]: E0126 12:45:58.400223 4844 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 26 12:45:59 crc kubenswrapper[4844]: I0126 12:45:59.312419 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:45:59 crc kubenswrapper[4844]: I0126 12:45:59.312472 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:45:59 crc kubenswrapper[4844]: I0126 12:45:59.312515 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:45:59 crc kubenswrapper[4844]: E0126 12:45:59.312578 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 12:45:59 crc kubenswrapper[4844]: E0126 12:45:59.312707 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 12:45:59 crc kubenswrapper[4844]: I0126 12:45:59.312744 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-gxnj7" Jan 26 12:45:59 crc kubenswrapper[4844]: E0126 12:45:59.312984 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 12:45:59 crc kubenswrapper[4844]: E0126 12:45:59.313079 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gxnj7" podUID="c69496f6-7f67-4cca-9c9f-420e5567b165" Jan 26 12:46:01 crc kubenswrapper[4844]: I0126 12:46:01.312918 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:46:01 crc kubenswrapper[4844]: E0126 12:46:01.313071 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 12:46:01 crc kubenswrapper[4844]: I0126 12:46:01.313658 4844 scope.go:117] "RemoveContainer" containerID="9be6a90cf1d7f75bb43391968d164c8726b7626d7dc649cd85f10c4d13424ab9" Jan 26 12:46:01 crc kubenswrapper[4844]: I0126 12:46:01.313855 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:46:01 crc kubenswrapper[4844]: I0126 12:46:01.313920 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:46:01 crc kubenswrapper[4844]: I0126 12:46:01.313932 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gxnj7" Jan 26 12:46:01 crc kubenswrapper[4844]: E0126 12:46:01.314163 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gxnj7" podUID="c69496f6-7f67-4cca-9c9f-420e5567b165" Jan 26 12:46:01 crc kubenswrapper[4844]: E0126 12:46:01.313950 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 12:46:01 crc kubenswrapper[4844]: E0126 12:46:01.314160 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 12:46:02 crc kubenswrapper[4844]: I0126 12:46:02.993469 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-zb9kx_467433a4-64be-4a14-beb2-657370e9865f/kube-multus/1.log" Jan 26 12:46:02 crc kubenswrapper[4844]: I0126 12:46:02.993523 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-zb9kx" event={"ID":"467433a4-64be-4a14-beb2-657370e9865f","Type":"ContainerStarted","Data":"a9f5cfdf855b56723649119ff96f5158a782982b241f924bcc11eb87f705cc68"} Jan 26 12:46:03 crc kubenswrapper[4844]: I0126 12:46:03.312446 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:46:03 crc kubenswrapper[4844]: E0126 12:46:03.312586 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 12:46:03 crc kubenswrapper[4844]: I0126 12:46:03.312902 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:46:03 crc kubenswrapper[4844]: I0126 12:46:03.312951 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:46:03 crc kubenswrapper[4844]: E0126 12:46:03.313934 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 12:46:03 crc kubenswrapper[4844]: I0126 12:46:03.313979 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gxnj7" Jan 26 12:46:03 crc kubenswrapper[4844]: E0126 12:46:03.314127 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gxnj7" podUID="c69496f6-7f67-4cca-9c9f-420e5567b165" Jan 26 12:46:03 crc kubenswrapper[4844]: E0126 12:46:03.314460 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 12:46:03 crc kubenswrapper[4844]: E0126 12:46:03.401321 4844 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 26 12:46:05 crc kubenswrapper[4844]: I0126 12:46:05.312125 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:46:05 crc kubenswrapper[4844]: I0126 12:46:05.312235 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:46:05 crc kubenswrapper[4844]: I0126 12:46:05.312239 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:46:05 crc kubenswrapper[4844]: I0126 12:46:05.312379 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gxnj7" Jan 26 12:46:05 crc kubenswrapper[4844]: E0126 12:46:05.312376 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 12:46:05 crc kubenswrapper[4844]: E0126 12:46:05.312513 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 12:46:05 crc kubenswrapper[4844]: E0126 12:46:05.312727 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gxnj7" podUID="c69496f6-7f67-4cca-9c9f-420e5567b165" Jan 26 12:46:05 crc kubenswrapper[4844]: E0126 12:46:05.312855 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 12:46:07 crc kubenswrapper[4844]: I0126 12:46:07.312738 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:46:07 crc kubenswrapper[4844]: I0126 12:46:07.312772 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:46:07 crc kubenswrapper[4844]: E0126 12:46:07.312878 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 12:46:07 crc kubenswrapper[4844]: I0126 12:46:07.312793 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gxnj7" Jan 26 12:46:07 crc kubenswrapper[4844]: I0126 12:46:07.312784 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:46:07 crc kubenswrapper[4844]: E0126 12:46:07.312994 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 12:46:07 crc kubenswrapper[4844]: E0126 12:46:07.313134 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gxnj7" podUID="c69496f6-7f67-4cca-9c9f-420e5567b165" Jan 26 12:46:07 crc kubenswrapper[4844]: E0126 12:46:07.313206 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 12:46:09 crc kubenswrapper[4844]: I0126 12:46:09.312532 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:46:09 crc kubenswrapper[4844]: I0126 12:46:09.312685 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gxnj7" Jan 26 12:46:09 crc kubenswrapper[4844]: I0126 12:46:09.312694 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:46:09 crc kubenswrapper[4844]: I0126 12:46:09.312571 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:46:09 crc kubenswrapper[4844]: I0126 12:46:09.313536 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:09 crc kubenswrapper[4844]: I0126 12:46:09.313717 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:46:09 crc kubenswrapper[4844]: I0126 12:46:09.313744 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:46:09 crc kubenswrapper[4844]: E0126 12:46:09.313799 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:48:11.313772037 +0000 UTC m=+268.247139809 (durationBeforeRetry 2m2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:09 crc kubenswrapper[4844]: I0126 12:46:09.315640 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 26 12:46:09 crc kubenswrapper[4844]: I0126 12:46:09.315845 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 26 12:46:09 crc kubenswrapper[4844]: I0126 12:46:09.315857 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 26 12:46:09 crc kubenswrapper[4844]: I0126 12:46:09.316231 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 26 12:46:09 crc kubenswrapper[4844]: I0126 12:46:09.318669 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 26 12:46:09 crc kubenswrapper[4844]: I0126 12:46:09.320481 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 26 12:46:09 crc kubenswrapper[4844]: I0126 12:46:09.344126 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:46:09 crc kubenswrapper[4844]: I0126 12:46:09.416226 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:46:09 crc kubenswrapper[4844]: I0126 12:46:09.416305 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:46:09 crc kubenswrapper[4844]: I0126 12:46:09.420335 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:46:09 crc kubenswrapper[4844]: I0126 12:46:09.423461 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod 
\"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:46:09 crc kubenswrapper[4844]: I0126 12:46:09.456580 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:46:09 crc kubenswrapper[4844]: I0126 12:46:09.648789 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 12:46:09 crc kubenswrapper[4844]: I0126 12:46:09.667838 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 12:46:09 crc kubenswrapper[4844]: I0126 12:46:09.674519 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:46:09 crc kubenswrapper[4844]: W0126 12:46:09.899534 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-410f3a477dd295249f24709a646e7bd7181d57b4e510de6b1dcc298565474dc7 WatchSource:0}: Error finding container 410f3a477dd295249f24709a646e7bd7181d57b4e510de6b1dcc298565474dc7: Status 404 returned error can't find the container with id 410f3a477dd295249f24709a646e7bd7181d57b4e510de6b1dcc298565474dc7 Jan 26 12:46:09 crc kubenswrapper[4844]: W0126 12:46:09.936469 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-0e737af812cde38ad1289ddf1b2ed7a356b8dd906c7065a508de58870d6a2bff WatchSource:0}: Error finding container 0e737af812cde38ad1289ddf1b2ed7a356b8dd906c7065a508de58870d6a2bff: Status 404 returned error can't find the container with id 0e737af812cde38ad1289ddf1b2ed7a356b8dd906c7065a508de58870d6a2bff Jan 26 12:46:10 crc kubenswrapper[4844]: I0126 12:46:10.018174 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"170fe3117f686a25a3597b75c9cb3db56aff405e21bf5437cefb70e25cdca9e3"} Jan 26 12:46:10 crc kubenswrapper[4844]: I0126 12:46:10.019009 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"0e737af812cde38ad1289ddf1b2ed7a356b8dd906c7065a508de58870d6a2bff"} Jan 26 12:46:10 crc kubenswrapper[4844]: I0126 12:46:10.019687 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"410f3a477dd295249f24709a646e7bd7181d57b4e510de6b1dcc298565474dc7"} Jan 26 12:46:12 crc kubenswrapper[4844]: I0126 12:46:12.030415 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" 
event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"93a9ff0a0181efc901546d7a1aa3be6f3cb074f4aa2ef1a8c1abbf7c5319f894"} Jan 26 12:46:12 crc kubenswrapper[4844]: I0126 12:46:12.033115 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"69055903c794e45ee3feca5c9da0acb9a71ce865a257fa42db83c47e97c0aa5f"} Jan 26 12:46:12 crc kubenswrapper[4844]: I0126 12:46:12.033294 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:46:12 crc kubenswrapper[4844]: I0126 12:46:12.034239 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"604e1aa50a43f3a9eb625bdf4f461832d7fdc1d3790ccec8bfe7b30eeb05d598"} Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.494530 4844 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.537809 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-rtks2"] Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.538516 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-rtks2" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.545059 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-vhsn2"] Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.545306 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.545411 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.546078 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-vhsn2" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.546383 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.546411 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.547384 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jfwgn"] Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.548233 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jfwgn" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.560413 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.565214 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.569844 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.570063 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.572502 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.573264 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.573411 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.573430 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.573734 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.573879 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.574524 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.574688 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.574933 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.575304 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.575325 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.580690 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-rlnfh"] Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.582623 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fbwgg"] Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.583702 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fbwgg" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.584339 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-rlnfh" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.603296 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.603586 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.603815 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.606102 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-hpxdc"] Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.606666 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-vzrkt"] Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.607106 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-vzrkt" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.607260 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hpxdc" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.608139 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-zsn9c"] Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.608778 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-j9vvp"] Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.609307 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j9vvp" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.609792 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-zsn9c" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.610846 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-fmk5t"] Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.611551 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-fmk5t" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.611643 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.618837 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-fzvnx"] Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.619477 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-fzvnx" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.620955 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.622163 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/45322811-c744-4cce-a307-088c0bc3965a-etcd-client\") pod \"apiserver-76f77b778f-rtks2\" (UID: \"45322811-c744-4cce-a307-088c0bc3965a\") " pod="openshift-apiserver/apiserver-76f77b778f-rtks2" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.622250 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98vsv\" (UniqueName: \"kubernetes.io/projected/45322811-c744-4cce-a307-088c0bc3965a-kube-api-access-98vsv\") pod \"apiserver-76f77b778f-rtks2\" (UID: \"45322811-c744-4cce-a307-088c0bc3965a\") " pod="openshift-apiserver/apiserver-76f77b778f-rtks2" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.622293 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/45322811-c744-4cce-a307-088c0bc3965a-image-import-ca\") pod \"apiserver-76f77b778f-rtks2\" (UID: \"45322811-c744-4cce-a307-088c0bc3965a\") " pod="openshift-apiserver/apiserver-76f77b778f-rtks2" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.622330 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/45322811-c744-4cce-a307-088c0bc3965a-node-pullsecrets\") pod \"apiserver-76f77b778f-rtks2\" (UID: \"45322811-c744-4cce-a307-088c0bc3965a\") " pod="openshift-apiserver/apiserver-76f77b778f-rtks2" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.622369 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/45322811-c744-4cce-a307-088c0bc3965a-encryption-config\") pod \"apiserver-76f77b778f-rtks2\" (UID: \"45322811-c744-4cce-a307-088c0bc3965a\") " pod="openshift-apiserver/apiserver-76f77b778f-rtks2" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.622407 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45322811-c744-4cce-a307-088c0bc3965a-config\") pod \"apiserver-76f77b778f-rtks2\" (UID: \"45322811-c744-4cce-a307-088c0bc3965a\") " pod="openshift-apiserver/apiserver-76f77b778f-rtks2" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.622453 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/45322811-c744-4cce-a307-088c0bc3965a-etcd-serving-ca\") pod \"apiserver-76f77b778f-rtks2\" (UID: \"45322811-c744-4cce-a307-088c0bc3965a\") " pod="openshift-apiserver/apiserver-76f77b778f-rtks2" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.622487 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/45322811-c744-4cce-a307-088c0bc3965a-audit-dir\") pod \"apiserver-76f77b778f-rtks2\" (UID: \"45322811-c744-4cce-a307-088c0bc3965a\") " 
pod="openshift-apiserver/apiserver-76f77b778f-rtks2" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.622523 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/45322811-c744-4cce-a307-088c0bc3965a-audit\") pod \"apiserver-76f77b778f-rtks2\" (UID: \"45322811-c744-4cce-a307-088c0bc3965a\") " pod="openshift-apiserver/apiserver-76f77b778f-rtks2" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.622555 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/45322811-c744-4cce-a307-088c0bc3965a-trusted-ca-bundle\") pod \"apiserver-76f77b778f-rtks2\" (UID: \"45322811-c744-4cce-a307-088c0bc3965a\") " pod="openshift-apiserver/apiserver-76f77b778f-rtks2" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.622592 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/45322811-c744-4cce-a307-088c0bc3965a-serving-cert\") pod \"apiserver-76f77b778f-rtks2\" (UID: \"45322811-c744-4cce-a307-088c0bc3965a\") " pod="openshift-apiserver/apiserver-76f77b778f-rtks2" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.625314 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.625649 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.625793 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.626023 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.626092 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.626042 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.626318 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.626343 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-n8hpb"] Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.627101 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-c8rpj"] Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.627892 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-n8hpb" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.629978 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.630325 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.638735 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-c8rpj" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.642251 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.643685 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-f5gx4"] Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.644541 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-sbrtp"] Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.645028 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-g8j2r"] Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.645390 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-g8j2r" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.645870 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sbrtp" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.647677 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fs4g6"] Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.648436 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fs4g6" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.648593 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-f5gx4" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.659123 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-5rkhb"] Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.659610 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-89xb7"] Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.659846 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-scvs4"] Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.660273 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-scvs4" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.660874 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-5rkhb" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.661554 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-89xb7" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.661545 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.661667 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.661807 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.662069 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.662130 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.664446 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pmxvg"] Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.665334 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-6zcv5"] Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.665773 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-7fzwr"] Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.666239 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-7fzwr" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.666423 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pmxvg" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.666558 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6zcv5" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.671623 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.671906 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.674652 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-75rtp"] Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.675397 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ksxk5"] Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.675807 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ksxk5" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.676305 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-75rtp" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.676386 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-vvlfw"] Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.676908 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-vvlfw" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.679532 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-dwwm9"] Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.680010 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.680782 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-9pkgp"] Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.682186 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-9pkgp" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.697340 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.697671 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.698144 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.698305 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.711300 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.711668 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.712060 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.712126 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.712562 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.738210 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.739818 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.739959 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.740065 4844 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.740163 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.740278 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.740374 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.740473 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.740822 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.742012 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-9cmnk"] Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.742675 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0c1c2a13-ee4c-4ced-9799-a1332e4e134f-proxy-tls\") pod \"machine-config-operator-74547568cd-n8hpb\" (UID: \"0c1c2a13-ee4c-4ced-9799-a1332e4e134f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-n8hpb" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.742710 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/4fd9b862-74de-4579-9b30-b51e5cbd3b56-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-zsn9c\" (UID: \"4fd9b862-74de-4579-9b30-b51e5cbd3b56\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zsn9c" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.742755 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98vsv\" (UniqueName: \"kubernetes.io/projected/45322811-c744-4cce-a307-088c0bc3965a-kube-api-access-98vsv\") pod \"apiserver-76f77b778f-rtks2\" (UID: \"45322811-c744-4cce-a307-088c0bc3965a\") " pod="openshift-apiserver/apiserver-76f77b778f-rtks2" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.742783 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e6a96cc6-703f-4104-8ff8-53c3cafb2227-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-fmk5t\" (UID: \"e6a96cc6-703f-4104-8ff8-53c3cafb2227\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmk5t" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.742811 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e6a96cc6-703f-4104-8ff8-53c3cafb2227-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-fmk5t\" (UID: \"e6a96cc6-703f-4104-8ff8-53c3cafb2227\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmk5t" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.742835 4844 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zg78q\" (UniqueName: \"kubernetes.io/projected/94726f3c-782c-4f4c-89cc-60229b8f339a-kube-api-access-zg78q\") pod \"ingress-operator-5b745b69d9-hpxdc\" (UID: \"94726f3c-782c-4f4c-89cc-60229b8f339a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hpxdc" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.742859 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/45322811-c744-4cce-a307-088c0bc3965a-image-import-ca\") pod \"apiserver-76f77b778f-rtks2\" (UID: \"45322811-c744-4cce-a307-088c0bc3965a\") " pod="openshift-apiserver/apiserver-76f77b778f-rtks2" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.742883 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8269d7d3-678d-44d5-885e-c5716e8024d8-console-oauth-config\") pod \"console-f9d7485db-vhsn2\" (UID: \"8269d7d3-678d-44d5-885e-c5716e8024d8\") " pod="openshift-console/console-f9d7485db-vhsn2" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.742905 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wq4f\" (UniqueName: \"kubernetes.io/projected/0c1c2a13-ee4c-4ced-9799-a1332e4e134f-kube-api-access-6wq4f\") pod \"machine-config-operator-74547568cd-n8hpb\" (UID: \"0c1c2a13-ee4c-4ced-9799-a1332e4e134f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-n8hpb" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.742928 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/94726f3c-782c-4f4c-89cc-60229b8f339a-metrics-tls\") pod \"ingress-operator-5b745b69d9-hpxdc\" (UID: \"94726f3c-782c-4f4c-89cc-60229b8f339a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hpxdc" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.742954 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/45322811-c744-4cce-a307-088c0bc3965a-node-pullsecrets\") pod \"apiserver-76f77b778f-rtks2\" (UID: \"45322811-c744-4cce-a307-088c0bc3965a\") " pod="openshift-apiserver/apiserver-76f77b778f-rtks2" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.742977 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/038469c2-c803-45d5-aaa5-d81663f41345-serving-cert\") pod \"apiserver-7bbb656c7d-j9vvp\" (UID: \"038469c2-c803-45d5-aaa5-d81663f41345\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j9vvp" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.742998 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8269d7d3-678d-44d5-885e-c5716e8024d8-console-config\") pod \"console-f9d7485db-vhsn2\" (UID: \"8269d7d3-678d-44d5-885e-c5716e8024d8\") " pod="openshift-console/console-f9d7485db-vhsn2" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.743019 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e6a96cc6-703f-4104-8ff8-53c3cafb2227-audit-dir\") pod 
\"oauth-openshift-558db77b4-fmk5t\" (UID: \"e6a96cc6-703f-4104-8ff8-53c3cafb2227\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmk5t" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.743045 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdgmr\" (UniqueName: \"kubernetes.io/projected/7ec10c36-d3de-409c-a3d6-3cde63c0b206-kube-api-access-pdgmr\") pod \"openshift-config-operator-7777fb866f-c8rpj\" (UID: \"7ec10c36-d3de-409c-a3d6-3cde63c0b206\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-c8rpj" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.743069 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/94726f3c-782c-4f4c-89cc-60229b8f339a-trusted-ca\") pod \"ingress-operator-5b745b69d9-hpxdc\" (UID: \"94726f3c-782c-4f4c-89cc-60229b8f339a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hpxdc" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.743098 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c6b3ec3-b406-4c6f-bd8c-6f21caf1e94a-config\") pod \"console-operator-58897d9998-vzrkt\" (UID: \"2c6b3ec3-b406-4c6f-bd8c-6f21caf1e94a\") " pod="openshift-console-operator/console-operator-58897d9998-vzrkt" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.743124 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/45322811-c744-4cce-a307-088c0bc3965a-encryption-config\") pod \"apiserver-76f77b778f-rtks2\" (UID: \"45322811-c744-4cce-a307-088c0bc3965a\") " pod="openshift-apiserver/apiserver-76f77b778f-rtks2" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.743150 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/038469c2-c803-45d5-aaa5-d81663f41345-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-j9vvp\" (UID: \"038469c2-c803-45d5-aaa5-d81663f41345\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j9vvp" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.743173 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/038469c2-c803-45d5-aaa5-d81663f41345-audit-dir\") pod \"apiserver-7bbb656c7d-j9vvp\" (UID: \"038469c2-c803-45d5-aaa5-d81663f41345\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j9vvp" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.754762 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/03f3ecc3-1b4d-4016-bc08-2b29d1b03d63-serving-cert\") pod \"controller-manager-879f6c89f-rlnfh\" (UID: \"03f3ecc3-1b4d-4016-bc08-2b29d1b03d63\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rlnfh" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.754851 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4fd9b862-74de-4579-9b30-b51e5cbd3b56-config\") pod \"machine-api-operator-5694c8668f-zsn9c\" (UID: \"4fd9b862-74de-4579-9b30-b51e5cbd3b56\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zsn9c" Jan 
26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.754919 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45322811-c744-4cce-a307-088c0bc3965a-config\") pod \"apiserver-76f77b778f-rtks2\" (UID: \"45322811-c744-4cce-a307-088c0bc3965a\") " pod="openshift-apiserver/apiserver-76f77b778f-rtks2" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.754962 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e6a96cc6-703f-4104-8ff8-53c3cafb2227-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-fmk5t\" (UID: \"e6a96cc6-703f-4104-8ff8-53c3cafb2227\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmk5t" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.755006 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/45322811-c744-4cce-a307-088c0bc3965a-etcd-serving-ca\") pod \"apiserver-76f77b778f-rtks2\" (UID: \"45322811-c744-4cce-a307-088c0bc3965a\") " pod="openshift-apiserver/apiserver-76f77b778f-rtks2" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.755043 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/45322811-c744-4cce-a307-088c0bc3965a-audit-dir\") pod \"apiserver-76f77b778f-rtks2\" (UID: \"45322811-c744-4cce-a307-088c0bc3965a\") " pod="openshift-apiserver/apiserver-76f77b778f-rtks2" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.755084 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/038469c2-c803-45d5-aaa5-d81663f41345-encryption-config\") pod \"apiserver-7bbb656c7d-j9vvp\" (UID: \"038469c2-c803-45d5-aaa5-d81663f41345\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j9vvp" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.755117 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2dgs\" (UniqueName: \"kubernetes.io/projected/8269d7d3-678d-44d5-885e-c5716e8024d8-kube-api-access-p2dgs\") pod \"console-f9d7485db-vhsn2\" (UID: \"8269d7d3-678d-44d5-885e-c5716e8024d8\") " pod="openshift-console/console-f9d7485db-vhsn2" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.755150 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q44z9\" (UniqueName: \"kubernetes.io/projected/7129ebfc-8ee6-475d-81d7-dcc6a9d6a6e6-kube-api-access-q44z9\") pod \"cluster-samples-operator-665b6dd947-jfwgn\" (UID: \"7129ebfc-8ee6-475d-81d7-dcc6a9d6a6e6\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jfwgn" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.755186 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8h5r\" (UniqueName: \"kubernetes.io/projected/a537a695-5721-4eae-a5f7-6df14075f458-kube-api-access-q8h5r\") pod \"authentication-operator-69f744f599-fzvnx\" (UID: \"a537a695-5721-4eae-a5f7-6df14075f458\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fzvnx" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.755230 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-sgq6v\" (UniqueName: \"kubernetes.io/projected/03f3ecc3-1b4d-4016-bc08-2b29d1b03d63-kube-api-access-sgq6v\") pod \"controller-manager-879f6c89f-rlnfh\" (UID: \"03f3ecc3-1b4d-4016-bc08-2b29d1b03d63\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rlnfh" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.755264 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0c1c2a13-ee4c-4ced-9799-a1332e4e134f-auth-proxy-config\") pod \"machine-config-operator-74547568cd-n8hpb\" (UID: \"0c1c2a13-ee4c-4ced-9799-a1332e4e134f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-n8hpb" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.755309 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jf82h\" (UniqueName: \"kubernetes.io/projected/4c91bd8e-040a-4961-8a7f-2fbeacff5b50-kube-api-access-jf82h\") pod \"openshift-apiserver-operator-796bbdcf4f-fbwgg\" (UID: \"4c91bd8e-040a-4961-8a7f-2fbeacff5b50\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fbwgg" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.755357 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/45322811-c744-4cce-a307-088c0bc3965a-trusted-ca-bundle\") pod \"apiserver-76f77b778f-rtks2\" (UID: \"45322811-c744-4cce-a307-088c0bc3965a\") " pod="openshift-apiserver/apiserver-76f77b778f-rtks2" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.755401 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a537a695-5721-4eae-a5f7-6df14075f458-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-fzvnx\" (UID: \"a537a695-5721-4eae-a5f7-6df14075f458\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fzvnx" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.755440 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8269d7d3-678d-44d5-885e-c5716e8024d8-service-ca\") pod \"console-f9d7485db-vhsn2\" (UID: \"8269d7d3-678d-44d5-885e-c5716e8024d8\") " pod="openshift-console/console-f9d7485db-vhsn2" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.755477 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/1aeb70f5-e543-4f51-bcf7-605df435f80e-machine-approver-tls\") pod \"machine-approver-56656f9798-sbrtp\" (UID: \"1aeb70f5-e543-4f51-bcf7-605df435f80e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sbrtp" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.755503 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2c6b3ec3-b406-4c6f-bd8c-6f21caf1e94a-trusted-ca\") pod \"console-operator-58897d9998-vzrkt\" (UID: \"2c6b3ec3-b406-4c6f-bd8c-6f21caf1e94a\") " pod="openshift-console-operator/console-operator-58897d9998-vzrkt" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.755531 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e6a96cc6-703f-4104-8ff8-53c3cafb2227-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-fmk5t\" (UID: \"e6a96cc6-703f-4104-8ff8-53c3cafb2227\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmk5t" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.755568 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/45322811-c744-4cce-a307-088c0bc3965a-serving-cert\") pod \"apiserver-76f77b778f-rtks2\" (UID: \"45322811-c744-4cce-a307-088c0bc3965a\") " pod="openshift-apiserver/apiserver-76f77b778f-rtks2" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.755668 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03f3ecc3-1b4d-4016-bc08-2b29d1b03d63-config\") pod \"controller-manager-879f6c89f-rlnfh\" (UID: \"03f3ecc3-1b4d-4016-bc08-2b29d1b03d63\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rlnfh" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.755702 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e6a96cc6-703f-4104-8ff8-53c3cafb2227-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-fmk5t\" (UID: \"e6a96cc6-703f-4104-8ff8-53c3cafb2227\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmk5t" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.755727 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e6a96cc6-703f-4104-8ff8-53c3cafb2227-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-fmk5t\" (UID: \"e6a96cc6-703f-4104-8ff8-53c3cafb2227\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmk5t" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.755758 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1aeb70f5-e543-4f51-bcf7-605df435f80e-auth-proxy-config\") pod \"machine-approver-56656f9798-sbrtp\" (UID: \"1aeb70f5-e543-4f51-bcf7-605df435f80e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sbrtp" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.755781 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a537a695-5721-4eae-a5f7-6df14075f458-config\") pod \"authentication-operator-69f744f599-fzvnx\" (UID: \"a537a695-5721-4eae-a5f7-6df14075f458\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fzvnx" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.755807 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/038469c2-c803-45d5-aaa5-d81663f41345-audit-policies\") pod \"apiserver-7bbb656c7d-j9vvp\" (UID: \"038469c2-c803-45d5-aaa5-d81663f41345\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j9vvp" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.755830 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/b21e7f91-3226-493e-bbfb-89b33296e74e-config\") pod \"route-controller-manager-6576b87f9c-f5gx4\" (UID: \"b21e7f91-3226-493e-bbfb-89b33296e74e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-f5gx4" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.742681 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-9cmnk" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.755854 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wh9st\" (UniqueName: \"kubernetes.io/projected/1aeb70f5-e543-4f51-bcf7-605df435f80e-kube-api-access-wh9st\") pod \"machine-approver-56656f9798-sbrtp\" (UID: \"1aeb70f5-e543-4f51-bcf7-605df435f80e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sbrtp" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.755885 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqfz5\" (UniqueName: \"kubernetes.io/projected/038469c2-c803-45d5-aaa5-d81663f41345-kube-api-access-sqfz5\") pod \"apiserver-7bbb656c7d-j9vvp\" (UID: \"038469c2-c803-45d5-aaa5-d81663f41345\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j9vvp" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.755910 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7ec10c36-d3de-409c-a3d6-3cde63c0b206-serving-cert\") pod \"openshift-config-operator-7777fb866f-c8rpj\" (UID: \"7ec10c36-d3de-409c-a3d6-3cde63c0b206\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-c8rpj" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.755935 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8269d7d3-678d-44d5-885e-c5716e8024d8-console-serving-cert\") pod \"console-f9d7485db-vhsn2\" (UID: \"8269d7d3-678d-44d5-885e-c5716e8024d8\") " pod="openshift-console/console-f9d7485db-vhsn2" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.755984 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4fd9b862-74de-4579-9b30-b51e5cbd3b56-images\") pod \"machine-api-operator-5694c8668f-zsn9c\" (UID: \"4fd9b862-74de-4579-9b30-b51e5cbd3b56\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zsn9c" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.756009 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e6a96cc6-703f-4104-8ff8-53c3cafb2227-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-fmk5t\" (UID: \"e6a96cc6-703f-4104-8ff8-53c3cafb2227\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmk5t" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.756031 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c91bd8e-040a-4961-8a7f-2fbeacff5b50-config\") pod \"openshift-apiserver-operator-796bbdcf4f-fbwgg\" (UID: \"4c91bd8e-040a-4961-8a7f-2fbeacff5b50\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fbwgg" Jan 26 12:46:18 
crc kubenswrapper[4844]: I0126 12:46:18.756049 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rstlz\" (UniqueName: \"kubernetes.io/projected/4fd9b862-74de-4579-9b30-b51e5cbd3b56-kube-api-access-rstlz\") pod \"machine-api-operator-5694c8668f-zsn9c\" (UID: \"4fd9b862-74de-4579-9b30-b51e5cbd3b56\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zsn9c" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.756081 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bcgw\" (UniqueName: \"kubernetes.io/projected/e6a96cc6-703f-4104-8ff8-53c3cafb2227-kube-api-access-6bcgw\") pod \"oauth-openshift-558db77b4-fmk5t\" (UID: \"e6a96cc6-703f-4104-8ff8-53c3cafb2227\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmk5t" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.756097 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a537a695-5721-4eae-a5f7-6df14075f458-service-ca-bundle\") pod \"authentication-operator-69f744f599-fzvnx\" (UID: \"a537a695-5721-4eae-a5f7-6df14075f458\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fzvnx" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.756113 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhghl\" (UniqueName: \"kubernetes.io/projected/2c6b3ec3-b406-4c6f-bd8c-6f21caf1e94a-kube-api-access-xhghl\") pod \"console-operator-58897d9998-vzrkt\" (UID: \"2c6b3ec3-b406-4c6f-bd8c-6f21caf1e94a\") " pod="openshift-console-operator/console-operator-58897d9998-vzrkt" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.756131 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e6a96cc6-703f-4104-8ff8-53c3cafb2227-audit-policies\") pod \"oauth-openshift-558db77b4-fmk5t\" (UID: \"e6a96cc6-703f-4104-8ff8-53c3cafb2227\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmk5t" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.756149 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b21e7f91-3226-493e-bbfb-89b33296e74e-serving-cert\") pod \"route-controller-manager-6576b87f9c-f5gx4\" (UID: \"b21e7f91-3226-493e-bbfb-89b33296e74e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-f5gx4" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.756167 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2c6b3ec3-b406-4c6f-bd8c-6f21caf1e94a-serving-cert\") pod \"console-operator-58897d9998-vzrkt\" (UID: \"2c6b3ec3-b406-4c6f-bd8c-6f21caf1e94a\") " pod="openshift-console-operator/console-operator-58897d9998-vzrkt" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.744410 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/45322811-c744-4cce-a307-088c0bc3965a-node-pullsecrets\") pod \"apiserver-76f77b778f-rtks2\" (UID: \"45322811-c744-4cce-a307-088c0bc3965a\") " pod="openshift-apiserver/apiserver-76f77b778f-rtks2" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.756209 
4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mn87w\" (UniqueName: \"kubernetes.io/projected/b21e7f91-3226-493e-bbfb-89b33296e74e-kube-api-access-mn87w\") pod \"route-controller-manager-6576b87f9c-f5gx4\" (UID: \"b21e7f91-3226-493e-bbfb-89b33296e74e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-f5gx4" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.756232 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/94726f3c-782c-4f4c-89cc-60229b8f339a-bound-sa-token\") pod \"ingress-operator-5b745b69d9-hpxdc\" (UID: \"94726f3c-782c-4f4c-89cc-60229b8f339a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hpxdc" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.756254 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1aeb70f5-e543-4f51-bcf7-605df435f80e-config\") pod \"machine-approver-56656f9798-sbrtp\" (UID: \"1aeb70f5-e543-4f51-bcf7-605df435f80e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sbrtp" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.756281 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/03f3ecc3-1b4d-4016-bc08-2b29d1b03d63-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-rlnfh\" (UID: \"03f3ecc3-1b4d-4016-bc08-2b29d1b03d63\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rlnfh" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.756307 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/7129ebfc-8ee6-475d-81d7-dcc6a9d6a6e6-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-jfwgn\" (UID: \"7129ebfc-8ee6-475d-81d7-dcc6a9d6a6e6\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jfwgn" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.756334 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e6a96cc6-703f-4104-8ff8-53c3cafb2227-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-fmk5t\" (UID: \"e6a96cc6-703f-4104-8ff8-53c3cafb2227\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmk5t" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.751188 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-fl26p"] Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.766452 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/45322811-c744-4cce-a307-088c0bc3965a-audit-dir\") pod \"apiserver-76f77b778f-rtks2\" (UID: \"45322811-c744-4cce-a307-088c0bc3965a\") " pod="openshift-apiserver/apiserver-76f77b778f-rtks2" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.746140 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.746254 4844 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-machine-api"/"kube-rbac-proxy" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.746293 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.746293 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.746766 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.768110 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.756359 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c91bd8e-040a-4961-8a7f-2fbeacff5b50-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-fbwgg\" (UID: \"4c91bd8e-040a-4961-8a7f-2fbeacff5b50\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fbwgg" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.771187 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/038469c2-c803-45d5-aaa5-d81663f41345-etcd-client\") pod \"apiserver-7bbb656c7d-j9vvp\" (UID: \"038469c2-c803-45d5-aaa5-d81663f41345\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j9vvp" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.771209 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8269d7d3-678d-44d5-885e-c5716e8024d8-oauth-serving-cert\") pod \"console-f9d7485db-vhsn2\" (UID: \"8269d7d3-678d-44d5-885e-c5716e8024d8\") " pod="openshift-console/console-f9d7485db-vhsn2" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.771227 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/03f3ecc3-1b4d-4016-bc08-2b29d1b03d63-client-ca\") pod \"controller-manager-879f6c89f-rlnfh\" (UID: \"03f3ecc3-1b4d-4016-bc08-2b29d1b03d63\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rlnfh" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.771258 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/45322811-c744-4cce-a307-088c0bc3965a-audit\") pod \"apiserver-76f77b778f-rtks2\" (UID: \"45322811-c744-4cce-a307-088c0bc3965a\") " pod="openshift-apiserver/apiserver-76f77b778f-rtks2" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.771280 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e6a96cc6-703f-4104-8ff8-53c3cafb2227-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-fmk5t\" (UID: \"e6a96cc6-703f-4104-8ff8-53c3cafb2227\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmk5t" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.771305 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e6a96cc6-703f-4104-8ff8-53c3cafb2227-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-fmk5t\" (UID: \"e6a96cc6-703f-4104-8ff8-53c3cafb2227\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmk5t" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.771327 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/7ec10c36-d3de-409c-a3d6-3cde63c0b206-available-featuregates\") pod \"openshift-config-operator-7777fb866f-c8rpj\" (UID: \"7ec10c36-d3de-409c-a3d6-3cde63c0b206\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-c8rpj" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.771345 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8269d7d3-678d-44d5-885e-c5716e8024d8-trusted-ca-bundle\") pod \"console-f9d7485db-vhsn2\" (UID: \"8269d7d3-678d-44d5-885e-c5716e8024d8\") " pod="openshift-console/console-f9d7485db-vhsn2" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.771362 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e6a96cc6-703f-4104-8ff8-53c3cafb2227-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-fmk5t\" (UID: \"e6a96cc6-703f-4104-8ff8-53c3cafb2227\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmk5t" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.771380 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a537a695-5721-4eae-a5f7-6df14075f458-serving-cert\") pod \"authentication-operator-69f744f599-fzvnx\" (UID: \"a537a695-5721-4eae-a5f7-6df14075f458\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fzvnx" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.771400 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/038469c2-c803-45d5-aaa5-d81663f41345-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-j9vvp\" (UID: \"038469c2-c803-45d5-aaa5-d81663f41345\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j9vvp" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.771427 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/45322811-c744-4cce-a307-088c0bc3965a-etcd-client\") pod \"apiserver-76f77b778f-rtks2\" (UID: \"45322811-c744-4cce-a307-088c0bc3965a\") " pod="openshift-apiserver/apiserver-76f77b778f-rtks2" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.771446 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/0c1c2a13-ee4c-4ced-9799-a1332e4e134f-images\") pod \"machine-config-operator-74547568cd-n8hpb\" (UID: \"0c1c2a13-ee4c-4ced-9799-a1332e4e134f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-n8hpb" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.771466 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/b21e7f91-3226-493e-bbfb-89b33296e74e-client-ca\") pod \"route-controller-manager-6576b87f9c-f5gx4\" (UID: \"b21e7f91-3226-493e-bbfb-89b33296e74e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-f5gx4" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.771702 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/45322811-c744-4cce-a307-088c0bc3965a-image-import-ca\") pod \"apiserver-76f77b778f-rtks2\" (UID: \"45322811-c744-4cce-a307-088c0bc3965a\") " pod="openshift-apiserver/apiserver-76f77b778f-rtks2" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.771970 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.746844 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.747050 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.747126 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.747510 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.747563 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.772416 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.747622 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.747693 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.747729 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.747800 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.747844 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.747893 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.747946 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.748000 4844 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.748153 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.748215 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.748263 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.748321 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.748383 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.748441 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.748501 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.748562 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.748641 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.748697 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.748755 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.748804 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.749106 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.749230 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.751433 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.754258 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.754314 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.754310 4844 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-console"/"trusted-ca-bundle" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.754354 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.754501 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.772256 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.773772 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.774915 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qbpjx"] Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.775895 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qltc7"] Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.776331 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xcs68"] Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.776741 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jl5ts"] Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.776774 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/45322811-c744-4cce-a307-088c0bc3965a-encryption-config\") pod \"apiserver-76f77b778f-rtks2\" (UID: \"45322811-c744-4cce-a307-088c0bc3965a\") " pod="openshift-apiserver/apiserver-76f77b778f-rtks2" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.776944 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qbpjx" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.777020 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.777047 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rtr85"] Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.777220 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-fl26p" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.777390 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-sgslp"] Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.777678 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xcs68" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.777896 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490525-mqbpl"] Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.777938 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jl5ts" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.778055 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rtr85" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.778374 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490525-mqbpl" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.778445 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-sgslp" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.778468 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qltc7" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.778972 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.780611 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45322811-c744-4cce-a307-088c0bc3965a-config\") pod \"apiserver-76f77b778f-rtks2\" (UID: \"45322811-c744-4cce-a307-088c0bc3965a\") " pod="openshift-apiserver/apiserver-76f77b778f-rtks2" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.797078 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/45322811-c744-4cce-a307-088c0bc3965a-etcd-client\") pod \"apiserver-76f77b778f-rtks2\" (UID: \"45322811-c744-4cce-a307-088c0bc3965a\") " pod="openshift-apiserver/apiserver-76f77b778f-rtks2" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.799026 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/45322811-c744-4cce-a307-088c0bc3965a-etcd-serving-ca\") pod \"apiserver-76f77b778f-rtks2\" (UID: \"45322811-c744-4cce-a307-088c0bc3965a\") " pod="openshift-apiserver/apiserver-76f77b778f-rtks2" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.799218 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/45322811-c744-4cce-a307-088c0bc3965a-audit\") pod \"apiserver-76f77b778f-rtks2\" (UID: \"45322811-c744-4cce-a307-088c0bc3965a\") " pod="openshift-apiserver/apiserver-76f77b778f-rtks2" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.802957 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.804246 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/45322811-c744-4cce-a307-088c0bc3965a-serving-cert\") pod \"apiserver-76f77b778f-rtks2\" (UID: \"45322811-c744-4cce-a307-088c0bc3965a\") " pod="openshift-apiserver/apiserver-76f77b778f-rtks2" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.827483 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.830956 4844 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.832047 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.834509 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bj9c4"] Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.835369 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-vhsn2"] Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.835484 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bj9c4" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.835590 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jfwgn"] Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.835845 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.836895 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.837028 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/45322811-c744-4cce-a307-088c0bc3965a-trusted-ca-bundle\") pod \"apiserver-76f77b778f-rtks2\" (UID: \"45322811-c744-4cce-a307-088c0bc3965a\") " pod="openshift-apiserver/apiserver-76f77b778f-rtks2" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.837547 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-6zcv5"] Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.838737 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.839280 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.841720 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-rlnfh"] Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.841989 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.844266 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-f5gx4"] Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.844295 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fbwgg"] Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.845226 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-75rtp"] Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.849394 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-scvs4"] Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.850362 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-fzvnx"] Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.852854 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-j9vvp"] Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.852889 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-g8j2r"] Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.853976 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-fmk5t"] Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.855093 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-vzrkt"] Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.856643 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-fnd9b"] Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.857371 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-fnd9b" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.858707 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-n8hpb"] Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.859834 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-b6r5v"] Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.861269 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-r8j24"] Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.861385 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.861637 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-b6r5v" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.862190 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ksxk5"] Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.862292 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-r8j24" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.863452 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-9cmnk"] Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.864922 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-hpxdc"] Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.866960 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fs4g6"] Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.868844 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-c8rpj"] Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.870037 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-vvlfw"] Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.871120 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pmxvg"] Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.872111 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/94726f3c-782c-4f4c-89cc-60229b8f339a-metrics-tls\") pod \"ingress-operator-5b745b69d9-hpxdc\" (UID: \"94726f3c-782c-4f4c-89cc-60229b8f339a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hpxdc" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.872159 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/0b0b2321-3f0f-4889-acad-bb7b10f96043-signing-key\") pod \"service-ca-9c57cc56f-vvlfw\" (UID: \"0b0b2321-3f0f-4889-acad-bb7b10f96043\") " pod="openshift-service-ca/service-ca-9c57cc56f-vvlfw" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.872195 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/038469c2-c803-45d5-aaa5-d81663f41345-serving-cert\") pod \"apiserver-7bbb656c7d-j9vvp\" (UID: \"038469c2-c803-45d5-aaa5-d81663f41345\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j9vvp" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.872222 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8269d7d3-678d-44d5-885e-c5716e8024d8-console-config\") pod \"console-f9d7485db-vhsn2\" (UID: \"8269d7d3-678d-44d5-885e-c5716e8024d8\") " pod="openshift-console/console-f9d7485db-vhsn2" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.872247 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdgmr\" (UniqueName: \"kubernetes.io/projected/7ec10c36-d3de-409c-a3d6-3cde63c0b206-kube-api-access-pdgmr\") pod \"openshift-config-operator-7777fb866f-c8rpj\" (UID: \"7ec10c36-d3de-409c-a3d6-3cde63c0b206\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-c8rpj" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.872271 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e6a96cc6-703f-4104-8ff8-53c3cafb2227-audit-dir\") pod \"oauth-openshift-558db77b4-fmk5t\" (UID: 
\"e6a96cc6-703f-4104-8ff8-53c3cafb2227\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmk5t" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.872295 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/94726f3c-782c-4f4c-89cc-60229b8f339a-trusted-ca\") pod \"ingress-operator-5b745b69d9-hpxdc\" (UID: \"94726f3c-782c-4f4c-89cc-60229b8f339a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hpxdc" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.872367 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e6a96cc6-703f-4104-8ff8-53c3cafb2227-audit-dir\") pod \"oauth-openshift-558db77b4-fmk5t\" (UID: \"e6a96cc6-703f-4104-8ff8-53c3cafb2227\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmk5t" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.872478 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-5rkhb"] Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.873219 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8269d7d3-678d-44d5-885e-c5716e8024d8-console-config\") pod \"console-f9d7485db-vhsn2\" (UID: \"8269d7d3-678d-44d5-885e-c5716e8024d8\") " pod="openshift-console/console-f9d7485db-vhsn2" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.873281 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e278457d-db19-47bc-a2a5-6ff0e994aace-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-g8j2r\" (UID: \"e278457d-db19-47bc-a2a5-6ff0e994aace\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-g8j2r" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.873344 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0735aeec-55b6-4140-8c72-d11b656ddb07-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-fs4g6\" (UID: \"0735aeec-55b6-4140-8c72-d11b656ddb07\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fs4g6" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.873409 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/038469c2-c803-45d5-aaa5-d81663f41345-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-j9vvp\" (UID: \"038469c2-c803-45d5-aaa5-d81663f41345\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j9vvp" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.873455 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/038469c2-c803-45d5-aaa5-d81663f41345-audit-dir\") pod \"apiserver-7bbb656c7d-j9vvp\" (UID: \"038469c2-c803-45d5-aaa5-d81663f41345\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j9vvp" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.873521 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/038469c2-c803-45d5-aaa5-d81663f41345-audit-dir\") pod \"apiserver-7bbb656c7d-j9vvp\" (UID: \"038469c2-c803-45d5-aaa5-d81663f41345\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j9vvp" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.873558 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-7fzwr"] Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.873702 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/94726f3c-782c-4f4c-89cc-60229b8f339a-trusted-ca\") pod \"ingress-operator-5b745b69d9-hpxdc\" (UID: \"94726f3c-782c-4f4c-89cc-60229b8f339a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hpxdc" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.873589 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/03f3ecc3-1b4d-4016-bc08-2b29d1b03d63-serving-cert\") pod \"controller-manager-879f6c89f-rlnfh\" (UID: \"03f3ecc3-1b4d-4016-bc08-2b29d1b03d63\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rlnfh" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.873977 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c6b3ec3-b406-4c6f-bd8c-6f21caf1e94a-config\") pod \"console-operator-58897d9998-vzrkt\" (UID: \"2c6b3ec3-b406-4c6f-bd8c-6f21caf1e94a\") " pod="openshift-console-operator/console-operator-58897d9998-vzrkt" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.874014 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4fd9b862-74de-4579-9b30-b51e5cbd3b56-config\") pod \"machine-api-operator-5694c8668f-zsn9c\" (UID: \"4fd9b862-74de-4579-9b30-b51e5cbd3b56\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zsn9c" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.874042 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/3875ab05-c190-4557-a863-84b3c123fe26-srv-cert\") pod \"catalog-operator-68c6474976-scvs4\" (UID: \"3875ab05-c190-4557-a863-84b3c123fe26\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-scvs4" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.874108 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e6a96cc6-703f-4104-8ff8-53c3cafb2227-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-fmk5t\" (UID: \"e6a96cc6-703f-4104-8ff8-53c3cafb2227\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmk5t" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.874134 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tt8hp\" (UniqueName: \"kubernetes.io/projected/f27f4e56-71ef-43e6-be78-20759a8e9ed5-kube-api-access-tt8hp\") pod \"machine-config-controller-84d6567774-6zcv5\" (UID: \"f27f4e56-71ef-43e6-be78-20759a8e9ed5\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6zcv5" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.874155 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/2e87ef7d-a670-47ae-8a85-cfc07a848430-profile-collector-cert\") pod \"olm-operator-6b444d44fb-ksxk5\" (UID: 
\"2e87ef7d-a670-47ae-8a85-cfc07a848430\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ksxk5" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.874175 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q44z9\" (UniqueName: \"kubernetes.io/projected/7129ebfc-8ee6-475d-81d7-dcc6a9d6a6e6-kube-api-access-q44z9\") pod \"cluster-samples-operator-665b6dd947-jfwgn\" (UID: \"7129ebfc-8ee6-475d-81d7-dcc6a9d6a6e6\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jfwgn" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.874197 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8h5r\" (UniqueName: \"kubernetes.io/projected/a537a695-5721-4eae-a5f7-6df14075f458-kube-api-access-q8h5r\") pod \"authentication-operator-69f744f599-fzvnx\" (UID: \"a537a695-5721-4eae-a5f7-6df14075f458\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fzvnx" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.874215 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/038469c2-c803-45d5-aaa5-d81663f41345-encryption-config\") pod \"apiserver-7bbb656c7d-j9vvp\" (UID: \"038469c2-c803-45d5-aaa5-d81663f41345\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j9vvp" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.874234 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2dgs\" (UniqueName: \"kubernetes.io/projected/8269d7d3-678d-44d5-885e-c5716e8024d8-kube-api-access-p2dgs\") pod \"console-f9d7485db-vhsn2\" (UID: \"8269d7d3-678d-44d5-885e-c5716e8024d8\") " pod="openshift-console/console-f9d7485db-vhsn2" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.874254 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0c1c2a13-ee4c-4ced-9799-a1332e4e134f-auth-proxy-config\") pod \"machine-config-operator-74547568cd-n8hpb\" (UID: \"0c1c2a13-ee4c-4ced-9799-a1332e4e134f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-n8hpb" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.874272 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jf82h\" (UniqueName: \"kubernetes.io/projected/4c91bd8e-040a-4961-8a7f-2fbeacff5b50-kube-api-access-jf82h\") pod \"openshift-apiserver-operator-796bbdcf4f-fbwgg\" (UID: \"4c91bd8e-040a-4961-8a7f-2fbeacff5b50\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fbwgg" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.874289 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sgq6v\" (UniqueName: \"kubernetes.io/projected/03f3ecc3-1b4d-4016-bc08-2b29d1b03d63-kube-api-access-sgq6v\") pod \"controller-manager-879f6c89f-rlnfh\" (UID: \"03f3ecc3-1b4d-4016-bc08-2b29d1b03d63\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rlnfh" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.874305 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a537a695-5721-4eae-a5f7-6df14075f458-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-fzvnx\" (UID: \"a537a695-5721-4eae-a5f7-6df14075f458\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-fzvnx" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.874323 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rcd7\" (UniqueName: \"kubernetes.io/projected/49ce2590-a0c6-4e75-af35-73bb211e6829-kube-api-access-4rcd7\") pod \"dns-operator-744455d44c-75rtp\" (UID: \"49ce2590-a0c6-4e75-af35-73bb211e6829\") " pod="openshift-dns-operator/dns-operator-744455d44c-75rtp" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.874342 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-875vr\" (UniqueName: \"kubernetes.io/projected/43fa0cde-7ba5-4788-be26-1170bf6ee75d-kube-api-access-875vr\") pod \"multus-admission-controller-857f4d67dd-7fzwr\" (UID: \"43fa0cde-7ba5-4788-be26-1170bf6ee75d\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-7fzwr" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.874360 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8269d7d3-678d-44d5-885e-c5716e8024d8-service-ca\") pod \"console-f9d7485db-vhsn2\" (UID: \"8269d7d3-678d-44d5-885e-c5716e8024d8\") " pod="openshift-console/console-f9d7485db-vhsn2" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.874377 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/1aeb70f5-e543-4f51-bcf7-605df435f80e-machine-approver-tls\") pod \"machine-approver-56656f9798-sbrtp\" (UID: \"1aeb70f5-e543-4f51-bcf7-605df435f80e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sbrtp" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.874394 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2c6b3ec3-b406-4c6f-bd8c-6f21caf1e94a-trusted-ca\") pod \"console-operator-58897d9998-vzrkt\" (UID: \"2c6b3ec3-b406-4c6f-bd8c-6f21caf1e94a\") " pod="openshift-console-operator/console-operator-58897d9998-vzrkt" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.874409 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03f3ecc3-1b4d-4016-bc08-2b29d1b03d63-config\") pod \"controller-manager-879f6c89f-rlnfh\" (UID: \"03f3ecc3-1b4d-4016-bc08-2b29d1b03d63\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rlnfh" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.874427 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e6a96cc6-703f-4104-8ff8-53c3cafb2227-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-fmk5t\" (UID: \"e6a96cc6-703f-4104-8ff8-53c3cafb2227\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmk5t" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.874442 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e6a96cc6-703f-4104-8ff8-53c3cafb2227-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-fmk5t\" (UID: \"e6a96cc6-703f-4104-8ff8-53c3cafb2227\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmk5t" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 
12:46:18.874460 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e6a96cc6-703f-4104-8ff8-53c3cafb2227-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-fmk5t\" (UID: \"e6a96cc6-703f-4104-8ff8-53c3cafb2227\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmk5t" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.874477 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/49ce2590-a0c6-4e75-af35-73bb211e6829-metrics-tls\") pod \"dns-operator-744455d44c-75rtp\" (UID: \"49ce2590-a0c6-4e75-af35-73bb211e6829\") " pod="openshift-dns-operator/dns-operator-744455d44c-75rtp" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.874493 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a537a695-5721-4eae-a5f7-6df14075f458-config\") pod \"authentication-operator-69f744f599-fzvnx\" (UID: \"a537a695-5721-4eae-a5f7-6df14075f458\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fzvnx" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.874509 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2e87ef7d-a670-47ae-8a85-cfc07a848430-srv-cert\") pod \"olm-operator-6b444d44fb-ksxk5\" (UID: \"2e87ef7d-a670-47ae-8a85-cfc07a848430\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ksxk5" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.874525 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1aeb70f5-e543-4f51-bcf7-605df435f80e-auth-proxy-config\") pod \"machine-approver-56656f9798-sbrtp\" (UID: \"1aeb70f5-e543-4f51-bcf7-605df435f80e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sbrtp" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.874543 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wh9st\" (UniqueName: \"kubernetes.io/projected/1aeb70f5-e543-4f51-bcf7-605df435f80e-kube-api-access-wh9st\") pod \"machine-approver-56656f9798-sbrtp\" (UID: \"1aeb70f5-e543-4f51-bcf7-605df435f80e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sbrtp" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.874561 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99n86\" (UniqueName: \"kubernetes.io/projected/b428addf-b196-461c-aaaf-7b9b14848a6c-kube-api-access-99n86\") pod \"downloads-7954f5f757-5rkhb\" (UID: \"b428addf-b196-461c-aaaf-7b9b14848a6c\") " pod="openshift-console/downloads-7954f5f757-5rkhb" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.874580 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/038469c2-c803-45d5-aaa5-d81663f41345-audit-policies\") pod \"apiserver-7bbb656c7d-j9vvp\" (UID: \"038469c2-c803-45d5-aaa5-d81663f41345\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j9vvp" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.874622 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/b21e7f91-3226-493e-bbfb-89b33296e74e-config\") pod \"route-controller-manager-6576b87f9c-f5gx4\" (UID: \"b21e7f91-3226-493e-bbfb-89b33296e74e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-f5gx4" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.874643 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7ec10c36-d3de-409c-a3d6-3cde63c0b206-serving-cert\") pod \"openshift-config-operator-7777fb866f-c8rpj\" (UID: \"7ec10c36-d3de-409c-a3d6-3cde63c0b206\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-c8rpj" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.874661 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8269d7d3-678d-44d5-885e-c5716e8024d8-console-serving-cert\") pod \"console-f9d7485db-vhsn2\" (UID: \"8269d7d3-678d-44d5-885e-c5716e8024d8\") " pod="openshift-console/console-f9d7485db-vhsn2" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.874678 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqfz5\" (UniqueName: \"kubernetes.io/projected/038469c2-c803-45d5-aaa5-d81663f41345-kube-api-access-sqfz5\") pod \"apiserver-7bbb656c7d-j9vvp\" (UID: \"038469c2-c803-45d5-aaa5-d81663f41345\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j9vvp" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.874702 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c6b3ec3-b406-4c6f-bd8c-6f21caf1e94a-config\") pod \"console-operator-58897d9998-vzrkt\" (UID: \"2c6b3ec3-b406-4c6f-bd8c-6f21caf1e94a\") " pod="openshift-console-operator/console-operator-58897d9998-vzrkt" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.874709 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krlfl\" (UniqueName: \"kubernetes.io/projected/0735aeec-55b6-4140-8c72-d11b656ddb07-kube-api-access-krlfl\") pod \"cluster-image-registry-operator-dc59b4c8b-fs4g6\" (UID: \"0735aeec-55b6-4140-8c72-d11b656ddb07\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fs4g6" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.874764 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4fd9b862-74de-4579-9b30-b51e5cbd3b56-images\") pod \"machine-api-operator-5694c8668f-zsn9c\" (UID: \"4fd9b862-74de-4579-9b30-b51e5cbd3b56\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zsn9c" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.874784 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e6a96cc6-703f-4104-8ff8-53c3cafb2227-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-fmk5t\" (UID: \"e6a96cc6-703f-4104-8ff8-53c3cafb2227\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmk5t" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.874802 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c91bd8e-040a-4961-8a7f-2fbeacff5b50-config\") pod \"openshift-apiserver-operator-796bbdcf4f-fbwgg\" (UID: \"4c91bd8e-040a-4961-8a7f-2fbeacff5b50\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fbwgg" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.874820 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rstlz\" (UniqueName: \"kubernetes.io/projected/4fd9b862-74de-4579-9b30-b51e5cbd3b56-kube-api-access-rstlz\") pod \"machine-api-operator-5694c8668f-zsn9c\" (UID: \"4fd9b862-74de-4579-9b30-b51e5cbd3b56\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zsn9c" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.874843 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f27f4e56-71ef-43e6-be78-20759a8e9ed5-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-6zcv5\" (UID: \"f27f4e56-71ef-43e6-be78-20759a8e9ed5\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6zcv5" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.874863 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bcgw\" (UniqueName: \"kubernetes.io/projected/e6a96cc6-703f-4104-8ff8-53c3cafb2227-kube-api-access-6bcgw\") pod \"oauth-openshift-558db77b4-fmk5t\" (UID: \"e6a96cc6-703f-4104-8ff8-53c3cafb2227\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmk5t" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.874881 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a537a695-5721-4eae-a5f7-6df14075f458-service-ca-bundle\") pod \"authentication-operator-69f744f599-fzvnx\" (UID: \"a537a695-5721-4eae-a5f7-6df14075f458\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fzvnx" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.874896 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhghl\" (UniqueName: \"kubernetes.io/projected/2c6b3ec3-b406-4c6f-bd8c-6f21caf1e94a-kube-api-access-xhghl\") pod \"console-operator-58897d9998-vzrkt\" (UID: \"2c6b3ec3-b406-4c6f-bd8c-6f21caf1e94a\") " pod="openshift-console-operator/console-operator-58897d9998-vzrkt" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.874916 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e278457d-db19-47bc-a2a5-6ff0e994aace-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-g8j2r\" (UID: \"e278457d-db19-47bc-a2a5-6ff0e994aace\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-g8j2r" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.874938 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/3875ab05-c190-4557-a863-84b3c123fe26-profile-collector-cert\") pod \"catalog-operator-68c6474976-scvs4\" (UID: \"3875ab05-c190-4557-a863-84b3c123fe26\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-scvs4" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.874969 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e6a96cc6-703f-4104-8ff8-53c3cafb2227-audit-policies\") pod \"oauth-openshift-558db77b4-fmk5t\" (UID: 
\"e6a96cc6-703f-4104-8ff8-53c3cafb2227\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmk5t" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.874987 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b21e7f91-3226-493e-bbfb-89b33296e74e-serving-cert\") pod \"route-controller-manager-6576b87f9c-f5gx4\" (UID: \"b21e7f91-3226-493e-bbfb-89b33296e74e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-f5gx4" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.875005 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2c6b3ec3-b406-4c6f-bd8c-6f21caf1e94a-serving-cert\") pod \"console-operator-58897d9998-vzrkt\" (UID: \"2c6b3ec3-b406-4c6f-bd8c-6f21caf1e94a\") " pod="openshift-console-operator/console-operator-58897d9998-vzrkt" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.875022 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mn87w\" (UniqueName: \"kubernetes.io/projected/b21e7f91-3226-493e-bbfb-89b33296e74e-kube-api-access-mn87w\") pod \"route-controller-manager-6576b87f9c-f5gx4\" (UID: \"b21e7f91-3226-493e-bbfb-89b33296e74e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-f5gx4" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.875038 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/94726f3c-782c-4f4c-89cc-60229b8f339a-bound-sa-token\") pod \"ingress-operator-5b745b69d9-hpxdc\" (UID: \"94726f3c-782c-4f4c-89cc-60229b8f339a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hpxdc" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.875059 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1aeb70f5-e543-4f51-bcf7-605df435f80e-config\") pod \"machine-approver-56656f9798-sbrtp\" (UID: \"1aeb70f5-e543-4f51-bcf7-605df435f80e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sbrtp" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.875075 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/03f3ecc3-1b4d-4016-bc08-2b29d1b03d63-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-rlnfh\" (UID: \"03f3ecc3-1b4d-4016-bc08-2b29d1b03d63\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rlnfh" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.875091 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/7129ebfc-8ee6-475d-81d7-dcc6a9d6a6e6-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-jfwgn\" (UID: \"7129ebfc-8ee6-475d-81d7-dcc6a9d6a6e6\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jfwgn" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.875112 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/0b0b2321-3f0f-4889-acad-bb7b10f96043-signing-cabundle\") pod \"service-ca-9c57cc56f-vvlfw\" (UID: \"0b0b2321-3f0f-4889-acad-bb7b10f96043\") " pod="openshift-service-ca/service-ca-9c57cc56f-vvlfw" Jan 26 12:46:18 crc 
kubenswrapper[4844]: I0126 12:46:18.875132 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6d85\" (UniqueName: \"kubernetes.io/projected/71551b91-3a04-4dcd-9a94-e96b4663b040-kube-api-access-n6d85\") pod \"package-server-manager-789f6589d5-pmxvg\" (UID: \"71551b91-3a04-4dcd-9a94-e96b4663b040\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pmxvg" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.875158 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64jrz\" (UniqueName: \"kubernetes.io/projected/0b0b2321-3f0f-4889-acad-bb7b10f96043-kube-api-access-64jrz\") pod \"service-ca-9c57cc56f-vvlfw\" (UID: \"0b0b2321-3f0f-4889-acad-bb7b10f96043\") " pod="openshift-service-ca/service-ca-9c57cc56f-vvlfw" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.875192 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e6a96cc6-703f-4104-8ff8-53c3cafb2227-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-fmk5t\" (UID: \"e6a96cc6-703f-4104-8ff8-53c3cafb2227\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmk5t" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.875216 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c91bd8e-040a-4961-8a7f-2fbeacff5b50-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-fbwgg\" (UID: \"4c91bd8e-040a-4961-8a7f-2fbeacff5b50\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fbwgg" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.875234 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/03f3ecc3-1b4d-4016-bc08-2b29d1b03d63-client-ca\") pod \"controller-manager-879f6c89f-rlnfh\" (UID: \"03f3ecc3-1b4d-4016-bc08-2b29d1b03d63\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rlnfh" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.875258 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/038469c2-c803-45d5-aaa5-d81663f41345-etcd-client\") pod \"apiserver-7bbb656c7d-j9vvp\" (UID: \"038469c2-c803-45d5-aaa5-d81663f41345\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j9vvp" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.875278 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8269d7d3-678d-44d5-885e-c5716e8024d8-oauth-serving-cert\") pod \"console-f9d7485db-vhsn2\" (UID: \"8269d7d3-678d-44d5-885e-c5716e8024d8\") " pod="openshift-console/console-f9d7485db-vhsn2" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.875298 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e6a96cc6-703f-4104-8ff8-53c3cafb2227-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-fmk5t\" (UID: \"e6a96cc6-703f-4104-8ff8-53c3cafb2227\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmk5t" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.875318 4844 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cb5bn\" (UniqueName: \"kubernetes.io/projected/e278457d-db19-47bc-a2a5-6ff0e994aace-kube-api-access-cb5bn\") pod \"openshift-controller-manager-operator-756b6f6bc6-g8j2r\" (UID: \"e278457d-db19-47bc-a2a5-6ff0e994aace\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-g8j2r" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.875337 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/43fa0cde-7ba5-4788-be26-1170bf6ee75d-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-7fzwr\" (UID: \"43fa0cde-7ba5-4788-be26-1170bf6ee75d\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-7fzwr" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.875363 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e6a96cc6-703f-4104-8ff8-53c3cafb2227-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-fmk5t\" (UID: \"e6a96cc6-703f-4104-8ff8-53c3cafb2227\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmk5t" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.875382 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4fd9b862-74de-4579-9b30-b51e5cbd3b56-config\") pod \"machine-api-operator-5694c8668f-zsn9c\" (UID: \"4fd9b862-74de-4579-9b30-b51e5cbd3b56\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zsn9c" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.875382 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e6a96cc6-703f-4104-8ff8-53c3cafb2227-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-fmk5t\" (UID: \"e6a96cc6-703f-4104-8ff8-53c3cafb2227\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmk5t" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.875429 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a537a695-5721-4eae-a5f7-6df14075f458-serving-cert\") pod \"authentication-operator-69f744f599-fzvnx\" (UID: \"a537a695-5721-4eae-a5f7-6df14075f458\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fzvnx" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.875451 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/7ec10c36-d3de-409c-a3d6-3cde63c0b206-available-featuregates\") pod \"openshift-config-operator-7777fb866f-c8rpj\" (UID: \"7ec10c36-d3de-409c-a3d6-3cde63c0b206\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-c8rpj" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.875470 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8269d7d3-678d-44d5-885e-c5716e8024d8-trusted-ca-bundle\") pod \"console-f9d7485db-vhsn2\" (UID: \"8269d7d3-678d-44d5-885e-c5716e8024d8\") " pod="openshift-console/console-f9d7485db-vhsn2" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.875489 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" 
(UniqueName: \"kubernetes.io/configmap/038469c2-c803-45d5-aaa5-d81663f41345-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-j9vvp\" (UID: \"038469c2-c803-45d5-aaa5-d81663f41345\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j9vvp" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.875512 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/0c1c2a13-ee4c-4ced-9799-a1332e4e134f-images\") pod \"machine-config-operator-74547568cd-n8hpb\" (UID: \"0c1c2a13-ee4c-4ced-9799-a1332e4e134f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-n8hpb" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.875778 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0735aeec-55b6-4140-8c72-d11b656ddb07-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-fs4g6\" (UID: \"0735aeec-55b6-4140-8c72-d11b656ddb07\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fs4g6" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.875799 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/94726f3c-782c-4f4c-89cc-60229b8f339a-metrics-tls\") pod \"ingress-operator-5b745b69d9-hpxdc\" (UID: \"94726f3c-782c-4f4c-89cc-60229b8f339a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hpxdc" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.875808 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/71551b91-3a04-4dcd-9a94-e96b4663b040-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-pmxvg\" (UID: \"71551b91-3a04-4dcd-9a94-e96b4663b040\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pmxvg" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.875837 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b21e7f91-3226-493e-bbfb-89b33296e74e-client-ca\") pod \"route-controller-manager-6576b87f9c-f5gx4\" (UID: \"b21e7f91-3226-493e-bbfb-89b33296e74e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-f5gx4" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.874516 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/038469c2-c803-45d5-aaa5-d81663f41345-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-j9vvp\" (UID: \"038469c2-c803-45d5-aaa5-d81663f41345\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j9vvp" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.875923 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e6a96cc6-703f-4104-8ff8-53c3cafb2227-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-fmk5t\" (UID: \"e6a96cc6-703f-4104-8ff8-53c3cafb2227\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmk5t" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.875234 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bj9c4"] Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.876037 4844 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-dwwm9"] Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.876523 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a537a695-5721-4eae-a5f7-6df14075f458-service-ca-bundle\") pod \"authentication-operator-69f744f599-fzvnx\" (UID: \"a537a695-5721-4eae-a5f7-6df14075f458\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fzvnx" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.876711 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4fd9b862-74de-4579-9b30-b51e5cbd3b56-images\") pod \"machine-api-operator-5694c8668f-zsn9c\" (UID: \"4fd9b862-74de-4579-9b30-b51e5cbd3b56\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zsn9c" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.876755 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/03f3ecc3-1b4d-4016-bc08-2b29d1b03d63-serving-cert\") pod \"controller-manager-879f6c89f-rlnfh\" (UID: \"03f3ecc3-1b4d-4016-bc08-2b29d1b03d63\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rlnfh" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.876982 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f27f4e56-71ef-43e6-be78-20759a8e9ed5-proxy-tls\") pod \"machine-config-controller-84d6567774-6zcv5\" (UID: \"f27f4e56-71ef-43e6-be78-20759a8e9ed5\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6zcv5" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.877016 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0c1c2a13-ee4c-4ced-9799-a1332e4e134f-proxy-tls\") pod \"machine-config-operator-74547568cd-n8hpb\" (UID: \"0c1c2a13-ee4c-4ced-9799-a1332e4e134f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-n8hpb" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.877037 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/4fd9b862-74de-4579-9b30-b51e5cbd3b56-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-zsn9c\" (UID: \"4fd9b862-74de-4579-9b30-b51e5cbd3b56\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zsn9c" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.877055 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e6a96cc6-703f-4104-8ff8-53c3cafb2227-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-fmk5t\" (UID: \"e6a96cc6-703f-4104-8ff8-53c3cafb2227\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmk5t" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.877072 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e6a96cc6-703f-4104-8ff8-53c3cafb2227-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-fmk5t\" (UID: \"e6a96cc6-703f-4104-8ff8-53c3cafb2227\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-fmk5t" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.877093 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zg78q\" (UniqueName: \"kubernetes.io/projected/94726f3c-782c-4f4c-89cc-60229b8f339a-kube-api-access-zg78q\") pod \"ingress-operator-5b745b69d9-hpxdc\" (UID: \"94726f3c-782c-4f4c-89cc-60229b8f339a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hpxdc" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.877112 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/0735aeec-55b6-4140-8c72-d11b656ddb07-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-fs4g6\" (UID: \"0735aeec-55b6-4140-8c72-d11b656ddb07\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fs4g6" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.877139 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbtkg\" (UniqueName: \"kubernetes.io/projected/3875ab05-c190-4557-a863-84b3c123fe26-kube-api-access-wbtkg\") pod \"catalog-operator-68c6474976-scvs4\" (UID: \"3875ab05-c190-4557-a863-84b3c123fe26\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-scvs4" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.877157 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8269d7d3-678d-44d5-885e-c5716e8024d8-console-oauth-config\") pod \"console-f9d7485db-vhsn2\" (UID: \"8269d7d3-678d-44d5-885e-c5716e8024d8\") " pod="openshift-console/console-f9d7485db-vhsn2" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.877175 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6wq4f\" (UniqueName: \"kubernetes.io/projected/0c1c2a13-ee4c-4ced-9799-a1332e4e134f-kube-api-access-6wq4f\") pod \"machine-config-operator-74547568cd-n8hpb\" (UID: \"0c1c2a13-ee4c-4ced-9799-a1332e4e134f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-n8hpb" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.877191 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snbml\" (UniqueName: \"kubernetes.io/projected/2e87ef7d-a670-47ae-8a85-cfc07a848430-kube-api-access-snbml\") pod \"olm-operator-6b444d44fb-ksxk5\" (UID: \"2e87ef7d-a670-47ae-8a85-cfc07a848430\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ksxk5" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.877252 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c91bd8e-040a-4961-8a7f-2fbeacff5b50-config\") pod \"openshift-apiserver-operator-796bbdcf4f-fbwgg\" (UID: \"4c91bd8e-040a-4961-8a7f-2fbeacff5b50\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fbwgg" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.877886 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e6a96cc6-703f-4104-8ff8-53c3cafb2227-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-fmk5t\" (UID: \"e6a96cc6-703f-4104-8ff8-53c3cafb2227\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-fmk5t" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.878187 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e6a96cc6-703f-4104-8ff8-53c3cafb2227-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-fmk5t\" (UID: \"e6a96cc6-703f-4104-8ff8-53c3cafb2227\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmk5t" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.878637 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2c6b3ec3-b406-4c6f-bd8c-6f21caf1e94a-serving-cert\") pod \"console-operator-58897d9998-vzrkt\" (UID: \"2c6b3ec3-b406-4c6f-bd8c-6f21caf1e94a\") " pod="openshift-console-operator/console-operator-58897d9998-vzrkt" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.878671 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b21e7f91-3226-493e-bbfb-89b33296e74e-client-ca\") pod \"route-controller-manager-6576b87f9c-f5gx4\" (UID: \"b21e7f91-3226-493e-bbfb-89b33296e74e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-f5gx4" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.879893 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8269d7d3-678d-44d5-885e-c5716e8024d8-service-ca\") pod \"console-f9d7485db-vhsn2\" (UID: \"8269d7d3-678d-44d5-885e-c5716e8024d8\") " pod="openshift-console/console-f9d7485db-vhsn2" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.880749 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-89xb7"] Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.880801 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-zsn9c"] Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.880907 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/0c1c2a13-ee4c-4ced-9799-a1332e4e134f-images\") pod \"machine-config-operator-74547568cd-n8hpb\" (UID: \"0c1c2a13-ee4c-4ced-9799-a1332e4e134f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-n8hpb" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.881700 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/038469c2-c803-45d5-aaa5-d81663f41345-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-j9vvp\" (UID: \"038469c2-c803-45d5-aaa5-d81663f41345\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j9vvp" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.882028 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a537a695-5721-4eae-a5f7-6df14075f458-serving-cert\") pod \"authentication-operator-69f744f599-fzvnx\" (UID: \"a537a695-5721-4eae-a5f7-6df14075f458\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fzvnx" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.882155 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e6a96cc6-703f-4104-8ff8-53c3cafb2227-audit-policies\") pod 
\"oauth-openshift-558db77b4-fmk5t\" (UID: \"e6a96cc6-703f-4104-8ff8-53c3cafb2227\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmk5t" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.882489 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8269d7d3-678d-44d5-885e-c5716e8024d8-oauth-serving-cert\") pod \"console-f9d7485db-vhsn2\" (UID: \"8269d7d3-678d-44d5-885e-c5716e8024d8\") " pod="openshift-console/console-f9d7485db-vhsn2" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.883005 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.883485 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/03f3ecc3-1b4d-4016-bc08-2b29d1b03d63-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-rlnfh\" (UID: \"03f3ecc3-1b4d-4016-bc08-2b29d1b03d63\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rlnfh" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.884804 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b21e7f91-3226-493e-bbfb-89b33296e74e-serving-cert\") pod \"route-controller-manager-6576b87f9c-f5gx4\" (UID: \"b21e7f91-3226-493e-bbfb-89b33296e74e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-f5gx4" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.885274 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e6a96cc6-703f-4104-8ff8-53c3cafb2227-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-fmk5t\" (UID: \"e6a96cc6-703f-4104-8ff8-53c3cafb2227\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmk5t" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.885295 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e6a96cc6-703f-4104-8ff8-53c3cafb2227-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-fmk5t\" (UID: \"e6a96cc6-703f-4104-8ff8-53c3cafb2227\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmk5t" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.885332 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a537a695-5721-4eae-a5f7-6df14075f458-config\") pod \"authentication-operator-69f744f599-fzvnx\" (UID: \"a537a695-5721-4eae-a5f7-6df14075f458\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fzvnx" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.885453 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/7ec10c36-d3de-409c-a3d6-3cde63c0b206-available-featuregates\") pod \"openshift-config-operator-7777fb866f-c8rpj\" (UID: \"7ec10c36-d3de-409c-a3d6-3cde63c0b206\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-c8rpj" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.885918 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/4fd9b862-74de-4579-9b30-b51e5cbd3b56-machine-api-operator-tls\") 
pod \"machine-api-operator-5694c8668f-zsn9c\" (UID: \"4fd9b862-74de-4579-9b30-b51e5cbd3b56\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zsn9c" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.886150 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e6a96cc6-703f-4104-8ff8-53c3cafb2227-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-fmk5t\" (UID: \"e6a96cc6-703f-4104-8ff8-53c3cafb2227\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmk5t" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.886418 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/038469c2-c803-45d5-aaa5-d81663f41345-serving-cert\") pod \"apiserver-7bbb656c7d-j9vvp\" (UID: \"038469c2-c803-45d5-aaa5-d81663f41345\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j9vvp" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.888024 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0c1c2a13-ee4c-4ced-9799-a1332e4e134f-proxy-tls\") pod \"machine-config-operator-74547568cd-n8hpb\" (UID: \"0c1c2a13-ee4c-4ced-9799-a1332e4e134f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-n8hpb" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.888160 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8269d7d3-678d-44d5-885e-c5716e8024d8-trusted-ca-bundle\") pod \"console-f9d7485db-vhsn2\" (UID: \"8269d7d3-678d-44d5-885e-c5716e8024d8\") " pod="openshift-console/console-f9d7485db-vhsn2" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.888544 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8269d7d3-678d-44d5-885e-c5716e8024d8-console-oauth-config\") pod \"console-f9d7485db-vhsn2\" (UID: \"8269d7d3-678d-44d5-885e-c5716e8024d8\") " pod="openshift-console/console-f9d7485db-vhsn2" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.889276 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2c6b3ec3-b406-4c6f-bd8c-6f21caf1e94a-trusted-ca\") pod \"console-operator-58897d9998-vzrkt\" (UID: \"2c6b3ec3-b406-4c6f-bd8c-6f21caf1e94a\") " pod="openshift-console-operator/console-operator-58897d9998-vzrkt" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.889463 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1aeb70f5-e543-4f51-bcf7-605df435f80e-auth-proxy-config\") pod \"machine-approver-56656f9798-sbrtp\" (UID: \"1aeb70f5-e543-4f51-bcf7-605df435f80e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sbrtp" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.889520 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a537a695-5721-4eae-a5f7-6df14075f458-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-fzvnx\" (UID: \"a537a695-5721-4eae-a5f7-6df14075f458\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fzvnx" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.890040 4844 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0c1c2a13-ee4c-4ced-9799-a1332e4e134f-auth-proxy-config\") pod \"machine-config-operator-74547568cd-n8hpb\" (UID: \"0c1c2a13-ee4c-4ced-9799-a1332e4e134f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-n8hpb" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.890090 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/038469c2-c803-45d5-aaa5-d81663f41345-etcd-client\") pod \"apiserver-7bbb656c7d-j9vvp\" (UID: \"038469c2-c803-45d5-aaa5-d81663f41345\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j9vvp" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.890260 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/03f3ecc3-1b4d-4016-bc08-2b29d1b03d63-client-ca\") pod \"controller-manager-879f6c89f-rlnfh\" (UID: \"03f3ecc3-1b4d-4016-bc08-2b29d1b03d63\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rlnfh" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.890842 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/038469c2-c803-45d5-aaa5-d81663f41345-audit-policies\") pod \"apiserver-7bbb656c7d-j9vvp\" (UID: \"038469c2-c803-45d5-aaa5-d81663f41345\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j9vvp" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.891130 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/7129ebfc-8ee6-475d-81d7-dcc6a9d6a6e6-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-jfwgn\" (UID: \"7129ebfc-8ee6-475d-81d7-dcc6a9d6a6e6\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jfwgn" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.891225 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/038469c2-c803-45d5-aaa5-d81663f41345-encryption-config\") pod \"apiserver-7bbb656c7d-j9vvp\" (UID: \"038469c2-c803-45d5-aaa5-d81663f41345\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j9vvp" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.891905 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e6a96cc6-703f-4104-8ff8-53c3cafb2227-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-fmk5t\" (UID: \"e6a96cc6-703f-4104-8ff8-53c3cafb2227\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmk5t" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.891965 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7ec10c36-d3de-409c-a3d6-3cde63c0b206-serving-cert\") pod \"openshift-config-operator-7777fb866f-c8rpj\" (UID: \"7ec10c36-d3de-409c-a3d6-3cde63c0b206\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-c8rpj" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.892385 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/1aeb70f5-e543-4f51-bcf7-605df435f80e-machine-approver-tls\") pod \"machine-approver-56656f9798-sbrtp\" (UID: \"1aeb70f5-e543-4f51-bcf7-605df435f80e\") " 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sbrtp" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.893079 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-fl26p"] Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.893139 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b21e7f91-3226-493e-bbfb-89b33296e74e-config\") pod \"route-controller-manager-6576b87f9c-f5gx4\" (UID: \"b21e7f91-3226-493e-bbfb-89b33296e74e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-f5gx4" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.893836 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-sgslp"] Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.893972 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1aeb70f5-e543-4f51-bcf7-605df435f80e-config\") pod \"machine-approver-56656f9798-sbrtp\" (UID: \"1aeb70f5-e543-4f51-bcf7-605df435f80e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sbrtp" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.894034 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c91bd8e-040a-4961-8a7f-2fbeacff5b50-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-fbwgg\" (UID: \"4c91bd8e-040a-4961-8a7f-2fbeacff5b50\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fbwgg" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.895786 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03f3ecc3-1b4d-4016-bc08-2b29d1b03d63-config\") pod \"controller-manager-879f6c89f-rlnfh\" (UID: \"03f3ecc3-1b4d-4016-bc08-2b29d1b03d63\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rlnfh" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.897585 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jl5ts"] Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.900648 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8269d7d3-678d-44d5-885e-c5716e8024d8-console-serving-cert\") pod \"console-f9d7485db-vhsn2\" (UID: \"8269d7d3-678d-44d5-885e-c5716e8024d8\") " pod="openshift-console/console-f9d7485db-vhsn2" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.901049 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-b6r5v"] Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.901106 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.903136 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rtr85"] Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.905078 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xcs68"] Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.906991 4844 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qltc7"] Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.907947 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qbpjx"] Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.909182 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-rtks2"] Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.910488 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490525-mqbpl"] Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.911786 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-r8j24"] Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.913107 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-5mxl2"] Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.914023 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-5mxl2" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.914209 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-fnd9b"] Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.920526 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.940794 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.960517 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.978789 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/0b0b2321-3f0f-4889-acad-bb7b10f96043-signing-key\") pod \"service-ca-9c57cc56f-vvlfw\" (UID: \"0b0b2321-3f0f-4889-acad-bb7b10f96043\") " pod="openshift-service-ca/service-ca-9c57cc56f-vvlfw" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.978916 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e278457d-db19-47bc-a2a5-6ff0e994aace-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-g8j2r\" (UID: \"e278457d-db19-47bc-a2a5-6ff0e994aace\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-g8j2r" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.979001 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0735aeec-55b6-4140-8c72-d11b656ddb07-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-fs4g6\" (UID: \"0735aeec-55b6-4140-8c72-d11b656ddb07\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fs4g6" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.979079 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/3875ab05-c190-4557-a863-84b3c123fe26-srv-cert\") pod \"catalog-operator-68c6474976-scvs4\" (UID: 
\"3875ab05-c190-4557-a863-84b3c123fe26\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-scvs4" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.979160 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tt8hp\" (UniqueName: \"kubernetes.io/projected/f27f4e56-71ef-43e6-be78-20759a8e9ed5-kube-api-access-tt8hp\") pod \"machine-config-controller-84d6567774-6zcv5\" (UID: \"f27f4e56-71ef-43e6-be78-20759a8e9ed5\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6zcv5" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.979235 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/2e87ef7d-a670-47ae-8a85-cfc07a848430-profile-collector-cert\") pod \"olm-operator-6b444d44fb-ksxk5\" (UID: \"2e87ef7d-a670-47ae-8a85-cfc07a848430\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ksxk5" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.979345 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rcd7\" (UniqueName: \"kubernetes.io/projected/49ce2590-a0c6-4e75-af35-73bb211e6829-kube-api-access-4rcd7\") pod \"dns-operator-744455d44c-75rtp\" (UID: \"49ce2590-a0c6-4e75-af35-73bb211e6829\") " pod="openshift-dns-operator/dns-operator-744455d44c-75rtp" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.979416 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-875vr\" (UniqueName: \"kubernetes.io/projected/43fa0cde-7ba5-4788-be26-1170bf6ee75d-kube-api-access-875vr\") pod \"multus-admission-controller-857f4d67dd-7fzwr\" (UID: \"43fa0cde-7ba5-4788-be26-1170bf6ee75d\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-7fzwr" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.979499 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/49ce2590-a0c6-4e75-af35-73bb211e6829-metrics-tls\") pod \"dns-operator-744455d44c-75rtp\" (UID: \"49ce2590-a0c6-4e75-af35-73bb211e6829\") " pod="openshift-dns-operator/dns-operator-744455d44c-75rtp" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.979573 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2e87ef7d-a670-47ae-8a85-cfc07a848430-srv-cert\") pod \"olm-operator-6b444d44fb-ksxk5\" (UID: \"2e87ef7d-a670-47ae-8a85-cfc07a848430\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ksxk5" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.979688 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99n86\" (UniqueName: \"kubernetes.io/projected/b428addf-b196-461c-aaaf-7b9b14848a6c-kube-api-access-99n86\") pod \"downloads-7954f5f757-5rkhb\" (UID: \"b428addf-b196-461c-aaaf-7b9b14848a6c\") " pod="openshift-console/downloads-7954f5f757-5rkhb" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.979815 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krlfl\" (UniqueName: \"kubernetes.io/projected/0735aeec-55b6-4140-8c72-d11b656ddb07-kube-api-access-krlfl\") pod \"cluster-image-registry-operator-dc59b4c8b-fs4g6\" (UID: \"0735aeec-55b6-4140-8c72-d11b656ddb07\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fs4g6" Jan 26 12:46:18 crc 
kubenswrapper[4844]: I0126 12:46:18.979920 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f27f4e56-71ef-43e6-be78-20759a8e9ed5-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-6zcv5\" (UID: \"f27f4e56-71ef-43e6-be78-20759a8e9ed5\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6zcv5" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.980007 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e278457d-db19-47bc-a2a5-6ff0e994aace-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-g8j2r\" (UID: \"e278457d-db19-47bc-a2a5-6ff0e994aace\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-g8j2r" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.980379 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/3875ab05-c190-4557-a863-84b3c123fe26-profile-collector-cert\") pod \"catalog-operator-68c6474976-scvs4\" (UID: \"3875ab05-c190-4557-a863-84b3c123fe26\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-scvs4" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.980489 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/0b0b2321-3f0f-4889-acad-bb7b10f96043-signing-cabundle\") pod \"service-ca-9c57cc56f-vvlfw\" (UID: \"0b0b2321-3f0f-4889-acad-bb7b10f96043\") " pod="openshift-service-ca/service-ca-9c57cc56f-vvlfw" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.980570 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6d85\" (UniqueName: \"kubernetes.io/projected/71551b91-3a04-4dcd-9a94-e96b4663b040-kube-api-access-n6d85\") pod \"package-server-manager-789f6589d5-pmxvg\" (UID: \"71551b91-3a04-4dcd-9a94-e96b4663b040\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pmxvg" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.980677 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64jrz\" (UniqueName: \"kubernetes.io/projected/0b0b2321-3f0f-4889-acad-bb7b10f96043-kube-api-access-64jrz\") pod \"service-ca-9c57cc56f-vvlfw\" (UID: \"0b0b2321-3f0f-4889-acad-bb7b10f96043\") " pod="openshift-service-ca/service-ca-9c57cc56f-vvlfw" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.980767 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cb5bn\" (UniqueName: \"kubernetes.io/projected/e278457d-db19-47bc-a2a5-6ff0e994aace-kube-api-access-cb5bn\") pod \"openshift-controller-manager-operator-756b6f6bc6-g8j2r\" (UID: \"e278457d-db19-47bc-a2a5-6ff0e994aace\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-g8j2r" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.980839 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/43fa0cde-7ba5-4788-be26-1170bf6ee75d-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-7fzwr\" (UID: \"43fa0cde-7ba5-4788-be26-1170bf6ee75d\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-7fzwr" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 
12:46:18.981017 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0735aeec-55b6-4140-8c72-d11b656ddb07-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-fs4g6\" (UID: \"0735aeec-55b6-4140-8c72-d11b656ddb07\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fs4g6" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.981293 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/71551b91-3a04-4dcd-9a94-e96b4663b040-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-pmxvg\" (UID: \"71551b91-3a04-4dcd-9a94-e96b4663b040\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pmxvg" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.981379 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f27f4e56-71ef-43e6-be78-20759a8e9ed5-proxy-tls\") pod \"machine-config-controller-84d6567774-6zcv5\" (UID: \"f27f4e56-71ef-43e6-be78-20759a8e9ed5\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6zcv5" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.981472 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/0735aeec-55b6-4140-8c72-d11b656ddb07-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-fs4g6\" (UID: \"0735aeec-55b6-4140-8c72-d11b656ddb07\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fs4g6" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.980635 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.981631 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbtkg\" (UniqueName: \"kubernetes.io/projected/3875ab05-c190-4557-a863-84b3c123fe26-kube-api-access-wbtkg\") pod \"catalog-operator-68c6474976-scvs4\" (UID: \"3875ab05-c190-4557-a863-84b3c123fe26\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-scvs4" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.982703 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/2e87ef7d-a670-47ae-8a85-cfc07a848430-profile-collector-cert\") pod \"olm-operator-6b444d44fb-ksxk5\" (UID: \"2e87ef7d-a670-47ae-8a85-cfc07a848430\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ksxk5" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.982774 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snbml\" (UniqueName: \"kubernetes.io/projected/2e87ef7d-a670-47ae-8a85-cfc07a848430-kube-api-access-snbml\") pod \"olm-operator-6b444d44fb-ksxk5\" (UID: \"2e87ef7d-a670-47ae-8a85-cfc07a848430\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ksxk5" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.982824 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e278457d-db19-47bc-a2a5-6ff0e994aace-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-g8j2r\" (UID: 
\"e278457d-db19-47bc-a2a5-6ff0e994aace\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-g8j2r" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.983752 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e278457d-db19-47bc-a2a5-6ff0e994aace-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-g8j2r\" (UID: \"e278457d-db19-47bc-a2a5-6ff0e994aace\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-g8j2r" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.984374 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/3875ab05-c190-4557-a863-84b3c123fe26-srv-cert\") pod \"catalog-operator-68c6474976-scvs4\" (UID: \"3875ab05-c190-4557-a863-84b3c123fe26\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-scvs4" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.984584 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0735aeec-55b6-4140-8c72-d11b656ddb07-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-fs4g6\" (UID: \"0735aeec-55b6-4140-8c72-d11b656ddb07\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fs4g6" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.985414 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/3875ab05-c190-4557-a863-84b3c123fe26-profile-collector-cert\") pod \"catalog-operator-68c6474976-scvs4\" (UID: \"3875ab05-c190-4557-a863-84b3c123fe26\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-scvs4" Jan 26 12:46:18 crc kubenswrapper[4844]: I0126 12:46:18.985616 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/0735aeec-55b6-4140-8c72-d11b656ddb07-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-fs4g6\" (UID: \"0735aeec-55b6-4140-8c72-d11b656ddb07\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fs4g6" Jan 26 12:46:19 crc kubenswrapper[4844]: I0126 12:46:19.000820 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 26 12:46:19 crc kubenswrapper[4844]: I0126 12:46:19.021247 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 26 12:46:19 crc kubenswrapper[4844]: I0126 12:46:19.041144 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 26 12:46:19 crc kubenswrapper[4844]: I0126 12:46:19.061299 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 26 12:46:19 crc kubenswrapper[4844]: I0126 12:46:19.065860 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f27f4e56-71ef-43e6-be78-20759a8e9ed5-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-6zcv5\" (UID: \"f27f4e56-71ef-43e6-be78-20759a8e9ed5\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6zcv5" Jan 26 12:46:19 crc kubenswrapper[4844]: I0126 12:46:19.068261 
4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e6a96cc6-703f-4104-8ff8-53c3cafb2227-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-fmk5t\" (UID: \"e6a96cc6-703f-4104-8ff8-53c3cafb2227\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmk5t" Jan 26 12:46:19 crc kubenswrapper[4844]: I0126 12:46:19.070027 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e6a96cc6-703f-4104-8ff8-53c3cafb2227-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-fmk5t\" (UID: \"e6a96cc6-703f-4104-8ff8-53c3cafb2227\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmk5t" Jan 26 12:46:19 crc kubenswrapper[4844]: I0126 12:46:19.070194 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e6a96cc6-703f-4104-8ff8-53c3cafb2227-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-fmk5t\" (UID: \"e6a96cc6-703f-4104-8ff8-53c3cafb2227\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmk5t" Jan 26 12:46:19 crc kubenswrapper[4844]: I0126 12:46:19.070533 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e6a96cc6-703f-4104-8ff8-53c3cafb2227-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-fmk5t\" (UID: \"e6a96cc6-703f-4104-8ff8-53c3cafb2227\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmk5t" Jan 26 12:46:19 crc kubenswrapper[4844]: I0126 12:46:19.081268 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 26 12:46:19 crc kubenswrapper[4844]: I0126 12:46:19.101104 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 26 12:46:19 crc kubenswrapper[4844]: I0126 12:46:19.104878 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/43fa0cde-7ba5-4788-be26-1170bf6ee75d-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-7fzwr\" (UID: \"43fa0cde-7ba5-4788-be26-1170bf6ee75d\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-7fzwr" Jan 26 12:46:19 crc kubenswrapper[4844]: I0126 12:46:19.121252 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 26 12:46:19 crc kubenswrapper[4844]: I0126 12:46:19.124753 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/71551b91-3a04-4dcd-9a94-e96b4663b040-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-pmxvg\" (UID: \"71551b91-3a04-4dcd-9a94-e96b4663b040\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pmxvg" Jan 26 12:46:19 crc kubenswrapper[4844]: I0126 12:46:19.140277 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 26 12:46:19 crc kubenswrapper[4844]: I0126 12:46:19.147193 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/f27f4e56-71ef-43e6-be78-20759a8e9ed5-proxy-tls\") pod \"machine-config-controller-84d6567774-6zcv5\" (UID: \"f27f4e56-71ef-43e6-be78-20759a8e9ed5\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6zcv5" Jan 26 12:46:19 crc kubenswrapper[4844]: I0126 12:46:19.161093 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 26 12:46:19 crc kubenswrapper[4844]: I0126 12:46:19.181163 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 26 12:46:19 crc kubenswrapper[4844]: I0126 12:46:19.201114 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 26 12:46:19 crc kubenswrapper[4844]: I0126 12:46:19.215273 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2e87ef7d-a670-47ae-8a85-cfc07a848430-srv-cert\") pod \"olm-operator-6b444d44fb-ksxk5\" (UID: \"2e87ef7d-a670-47ae-8a85-cfc07a848430\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ksxk5" Jan 26 12:46:19 crc kubenswrapper[4844]: I0126 12:46:19.223132 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 26 12:46:19 crc kubenswrapper[4844]: I0126 12:46:19.241416 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 26 12:46:19 crc kubenswrapper[4844]: I0126 12:46:19.261127 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 26 12:46:19 crc kubenswrapper[4844]: I0126 12:46:19.274709 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/49ce2590-a0c6-4e75-af35-73bb211e6829-metrics-tls\") pod \"dns-operator-744455d44c-75rtp\" (UID: \"49ce2590-a0c6-4e75-af35-73bb211e6829\") " pod="openshift-dns-operator/dns-operator-744455d44c-75rtp" Jan 26 12:46:19 crc kubenswrapper[4844]: I0126 12:46:19.281255 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 26 12:46:19 crc kubenswrapper[4844]: I0126 12:46:19.301716 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 26 12:46:19 crc kubenswrapper[4844]: I0126 12:46:19.323055 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 26 12:46:19 crc kubenswrapper[4844]: I0126 12:46:19.341469 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 26 12:46:19 crc kubenswrapper[4844]: I0126 12:46:19.354989 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/0b0b2321-3f0f-4889-acad-bb7b10f96043-signing-key\") pod \"service-ca-9c57cc56f-vvlfw\" (UID: \"0b0b2321-3f0f-4889-acad-bb7b10f96043\") " pod="openshift-service-ca/service-ca-9c57cc56f-vvlfw" Jan 26 12:46:19 crc kubenswrapper[4844]: I0126 12:46:19.362057 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 26 12:46:19 crc kubenswrapper[4844]: I0126 12:46:19.382979 4844 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-service-ca"/"signing-cabundle" Jan 26 12:46:19 crc kubenswrapper[4844]: I0126 12:46:19.392724 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/0b0b2321-3f0f-4889-acad-bb7b10f96043-signing-cabundle\") pod \"service-ca-9c57cc56f-vvlfw\" (UID: \"0b0b2321-3f0f-4889-acad-bb7b10f96043\") " pod="openshift-service-ca/service-ca-9c57cc56f-vvlfw" Jan 26 12:46:19 crc kubenswrapper[4844]: I0126 12:46:19.401933 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 26 12:46:19 crc kubenswrapper[4844]: I0126 12:46:19.420969 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 26 12:46:19 crc kubenswrapper[4844]: I0126 12:46:19.440393 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 26 12:46:19 crc kubenswrapper[4844]: I0126 12:46:19.461572 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 26 12:46:19 crc kubenswrapper[4844]: I0126 12:46:19.480869 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 26 12:46:19 crc kubenswrapper[4844]: I0126 12:46:19.501084 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 26 12:46:19 crc kubenswrapper[4844]: I0126 12:46:19.522537 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 26 12:46:19 crc kubenswrapper[4844]: I0126 12:46:19.541313 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 26 12:46:19 crc kubenswrapper[4844]: I0126 12:46:19.562868 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 26 12:46:19 crc kubenswrapper[4844]: I0126 12:46:19.582479 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 26 12:46:19 crc kubenswrapper[4844]: I0126 12:46:19.619086 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98vsv\" (UniqueName: \"kubernetes.io/projected/45322811-c744-4cce-a307-088c0bc3965a-kube-api-access-98vsv\") pod \"apiserver-76f77b778f-rtks2\" (UID: \"45322811-c744-4cce-a307-088c0bc3965a\") " pod="openshift-apiserver/apiserver-76f77b778f-rtks2" Jan 26 12:46:19 crc kubenswrapper[4844]: I0126 12:46:19.661525 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 26 12:46:19 crc kubenswrapper[4844]: I0126 12:46:19.681872 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 26 12:46:19 crc kubenswrapper[4844]: I0126 12:46:19.701869 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 26 12:46:19 crc kubenswrapper[4844]: I0126 12:46:19.729293 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 26 12:46:19 crc kubenswrapper[4844]: I0126 12:46:19.741815 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 26 12:46:19 
crc kubenswrapper[4844]: I0126 12:46:19.762492 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 26 12:46:19 crc kubenswrapper[4844]: I0126 12:46:19.777064 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-rtks2" Jan 26 12:46:19 crc kubenswrapper[4844]: I0126 12:46:19.779319 4844 request.go:700] Waited for 1.001836937s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0 Jan 26 12:46:19 crc kubenswrapper[4844]: I0126 12:46:19.780713 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 26 12:46:19 crc kubenswrapper[4844]: I0126 12:46:19.802197 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 26 12:46:19 crc kubenswrapper[4844]: I0126 12:46:19.823220 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 26 12:46:19 crc kubenswrapper[4844]: I0126 12:46:19.841827 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 26 12:46:19 crc kubenswrapper[4844]: I0126 12:46:19.861645 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 26 12:46:19 crc kubenswrapper[4844]: I0126 12:46:19.881214 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 26 12:46:19 crc kubenswrapper[4844]: I0126 12:46:19.902342 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 26 12:46:19 crc kubenswrapper[4844]: I0126 12:46:19.921647 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 26 12:46:19 crc kubenswrapper[4844]: I0126 12:46:19.942324 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 26 12:46:19 crc kubenswrapper[4844]: I0126 12:46:19.962240 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 26 12:46:19 crc kubenswrapper[4844]: I0126 12:46:19.982004 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 26 12:46:20 crc kubenswrapper[4844]: I0126 12:46:20.001565 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 26 12:46:20 crc kubenswrapper[4844]: I0126 12:46:20.005275 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-rtks2"] Jan 26 12:46:20 crc kubenswrapper[4844]: I0126 12:46:20.021444 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 26 12:46:20 crc 
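
The request.go:700 "Waited for 1.001836937s due to client-side throttling, not priority and fairness" record above comes from client-go's own token-bucket rate limiter, not from the API server: once the client exhausts its QPS/Burst budget it deliberately sleeps, which is expected during a mass pod start like this one. A minimal sketch of where that knob lives in any client-go consumer (the values are illustrative, not kubelet's defaults):

    package main

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
        if err != nil {
            panic(err)
        }
        // QPS is the sustained request rate, Burst the bucket size. Exceed them
        // and client-go sleeps, logging "Waited for ... due to client-side
        // throttling" exactly like the journal record above.
        cfg.QPS = 50
        cfg.Burst = 100
        if _, err := kubernetes.NewForConfig(cfg); err != nil {
            panic(err)
        }
    }
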
kubenswrapper[4844]: I0126 12:46:20.042479 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 26 12:46:20 crc kubenswrapper[4844]: I0126 12:46:20.061242 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 26 12:46:20 crc kubenswrapper[4844]: I0126 12:46:20.064791 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-rtks2" event={"ID":"45322811-c744-4cce-a307-088c0bc3965a","Type":"ContainerStarted","Data":"82e623f0020e134a079554caad41d1855b25e349fee6273ed8031432eb8357e5"} Jan 26 12:46:20 crc kubenswrapper[4844]: I0126 12:46:20.080767 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 26 12:46:20 crc kubenswrapper[4844]: I0126 12:46:20.100875 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 26 12:46:20 crc kubenswrapper[4844]: I0126 12:46:20.120654 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 26 12:46:20 crc kubenswrapper[4844]: I0126 12:46:20.142057 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 26 12:46:20 crc kubenswrapper[4844]: I0126 12:46:20.162091 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 26 12:46:20 crc kubenswrapper[4844]: I0126 12:46:20.181980 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 26 12:46:20 crc kubenswrapper[4844]: I0126 12:46:20.201443 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 26 12:46:20 crc kubenswrapper[4844]: I0126 12:46:20.220673 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 26 12:46:20 crc kubenswrapper[4844]: I0126 12:46:20.243432 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 26 12:46:20 crc kubenswrapper[4844]: I0126 12:46:20.261429 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 26 12:46:20 crc kubenswrapper[4844]: I0126 12:46:20.281004 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 26 12:46:20 crc kubenswrapper[4844]: I0126 12:46:20.300745 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 26 12:46:20 crc kubenswrapper[4844]: I0126 12:46:20.321977 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 26 12:46:20 crc kubenswrapper[4844]: I0126 12:46:20.341991 4844 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 26 12:46:20 crc kubenswrapper[4844]: I0126 12:46:20.360952 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 26 12:46:20 crc kubenswrapper[4844]: I0126 12:46:20.381516 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 26 12:46:20 crc kubenswrapper[4844]: I0126 12:46:20.401242 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 26 12:46:20 crc kubenswrapper[4844]: I0126 12:46:20.422240 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 26 12:46:20 crc kubenswrapper[4844]: I0126 12:46:20.441711 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 26 12:46:20 crc kubenswrapper[4844]: I0126 12:46:20.461951 4844 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 26 12:46:20 crc kubenswrapper[4844]: I0126 12:46:20.482474 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 26 12:46:20 crc kubenswrapper[4844]: I0126 12:46:20.501214 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 26 12:46:20 crc kubenswrapper[4844]: I0126 12:46:20.520997 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 26 12:46:20 crc kubenswrapper[4844]: I0126 12:46:20.542267 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 26 12:46:20 crc kubenswrapper[4844]: I0126 12:46:20.582504 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdgmr\" (UniqueName: \"kubernetes.io/projected/7ec10c36-d3de-409c-a3d6-3cde63c0b206-kube-api-access-pdgmr\") pod \"openshift-config-operator-7777fb866f-c8rpj\" (UID: \"7ec10c36-d3de-409c-a3d6-3cde63c0b206\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-c8rpj" Jan 26 12:46:20 crc kubenswrapper[4844]: I0126 12:46:20.596963 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bcgw\" (UniqueName: \"kubernetes.io/projected/e6a96cc6-703f-4104-8ff8-53c3cafb2227-kube-api-access-6bcgw\") pod \"oauth-openshift-558db77b4-fmk5t\" (UID: \"e6a96cc6-703f-4104-8ff8-53c3cafb2227\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmk5t" Jan 26 12:46:20 crc kubenswrapper[4844]: I0126 12:46:20.615317 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhghl\" (UniqueName: \"kubernetes.io/projected/2c6b3ec3-b406-4c6f-bd8c-6f21caf1e94a-kube-api-access-xhghl\") pod \"console-operator-58897d9998-vzrkt\" (UID: \"2c6b3ec3-b406-4c6f-bd8c-6f21caf1e94a\") " pod="openshift-console-operator/console-operator-58897d9998-vzrkt" Jan 26 12:46:20 crc kubenswrapper[4844]: I0126 12:46:20.636040 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rstlz\" (UniqueName: \"kubernetes.io/projected/4fd9b862-74de-4579-9b30-b51e5cbd3b56-kube-api-access-rstlz\") pod \"machine-api-operator-5694c8668f-zsn9c\" (UID: \"4fd9b862-74de-4579-9b30-b51e5cbd3b56\") " 
pod="openshift-machine-api/machine-api-operator-5694c8668f-zsn9c" Jan 26 12:46:20 crc kubenswrapper[4844]: I0126 12:46:20.656215 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-c8rpj" Jan 26 12:46:20 crc kubenswrapper[4844]: I0126 12:46:20.657217 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2dgs\" (UniqueName: \"kubernetes.io/projected/8269d7d3-678d-44d5-885e-c5716e8024d8-kube-api-access-p2dgs\") pod \"console-f9d7485db-vhsn2\" (UID: \"8269d7d3-678d-44d5-885e-c5716e8024d8\") " pod="openshift-console/console-f9d7485db-vhsn2" Jan 26 12:46:20 crc kubenswrapper[4844]: I0126 12:46:20.675055 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8h5r\" (UniqueName: \"kubernetes.io/projected/a537a695-5721-4eae-a5f7-6df14075f458-kube-api-access-q8h5r\") pod \"authentication-operator-69f744f599-fzvnx\" (UID: \"a537a695-5721-4eae-a5f7-6df14075f458\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fzvnx" Jan 26 12:46:20 crc kubenswrapper[4844]: I0126 12:46:20.697318 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q44z9\" (UniqueName: \"kubernetes.io/projected/7129ebfc-8ee6-475d-81d7-dcc6a9d6a6e6-kube-api-access-q44z9\") pod \"cluster-samples-operator-665b6dd947-jfwgn\" (UID: \"7129ebfc-8ee6-475d-81d7-dcc6a9d6a6e6\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jfwgn" Jan 26 12:46:20 crc kubenswrapper[4844]: I0126 12:46:20.704764 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-vhsn2" Jan 26 12:46:20 crc kubenswrapper[4844]: I0126 12:46:20.723905 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zg78q\" (UniqueName: \"kubernetes.io/projected/94726f3c-782c-4f4c-89cc-60229b8f339a-kube-api-access-zg78q\") pod \"ingress-operator-5b745b69d9-hpxdc\" (UID: \"94726f3c-782c-4f4c-89cc-60229b8f339a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hpxdc" Jan 26 12:46:20 crc kubenswrapper[4844]: I0126 12:46:20.725481 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jfwgn" Jan 26 12:46:20 crc kubenswrapper[4844]: I0126 12:46:20.746619 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqfz5\" (UniqueName: \"kubernetes.io/projected/038469c2-c803-45d5-aaa5-d81663f41345-kube-api-access-sqfz5\") pod \"apiserver-7bbb656c7d-j9vvp\" (UID: \"038469c2-c803-45d5-aaa5-d81663f41345\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j9vvp" Jan 26 12:46:20 crc kubenswrapper[4844]: I0126 12:46:20.762155 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wq4f\" (UniqueName: \"kubernetes.io/projected/0c1c2a13-ee4c-4ced-9799-a1332e4e134f-kube-api-access-6wq4f\") pod \"machine-config-operator-74547568cd-n8hpb\" (UID: \"0c1c2a13-ee4c-4ced-9799-a1332e4e134f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-n8hpb" Jan 26 12:46:20 crc kubenswrapper[4844]: I0126 12:46:20.777058 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-vzrkt" Jan 26 12:46:20 crc kubenswrapper[4844]: I0126 12:46:20.783692 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jf82h\" (UniqueName: \"kubernetes.io/projected/4c91bd8e-040a-4961-8a7f-2fbeacff5b50-kube-api-access-jf82h\") pod \"openshift-apiserver-operator-796bbdcf4f-fbwgg\" (UID: \"4c91bd8e-040a-4961-8a7f-2fbeacff5b50\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fbwgg" Jan 26 12:46:20 crc kubenswrapper[4844]: I0126 12:46:20.800084 4844 request.go:700] Waited for 1.910329535s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-machine-approver/serviceaccounts/machine-approver-sa/token Jan 26 12:46:20 crc kubenswrapper[4844]: I0126 12:46:20.802397 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgq6v\" (UniqueName: \"kubernetes.io/projected/03f3ecc3-1b4d-4016-bc08-2b29d1b03d63-kube-api-access-sgq6v\") pod \"controller-manager-879f6c89f-rlnfh\" (UID: \"03f3ecc3-1b4d-4016-bc08-2b29d1b03d63\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rlnfh" Jan 26 12:46:20 crc kubenswrapper[4844]: I0126 12:46:20.817155 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j9vvp" Jan 26 12:46:20 crc kubenswrapper[4844]: I0126 12:46:20.820300 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wh9st\" (UniqueName: \"kubernetes.io/projected/1aeb70f5-e543-4f51-bcf7-605df435f80e-kube-api-access-wh9st\") pod \"machine-approver-56656f9798-sbrtp\" (UID: \"1aeb70f5-e543-4f51-bcf7-605df435f80e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sbrtp" Jan 26 12:46:20 crc kubenswrapper[4844]: I0126 12:46:20.835417 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mn87w\" (UniqueName: \"kubernetes.io/projected/b21e7f91-3226-493e-bbfb-89b33296e74e-kube-api-access-mn87w\") pod \"route-controller-manager-6576b87f9c-f5gx4\" (UID: \"b21e7f91-3226-493e-bbfb-89b33296e74e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-f5gx4" Jan 26 12:46:20 crc kubenswrapper[4844]: I0126 12:46:20.856634 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-c8rpj"] Jan 26 12:46:20 crc kubenswrapper[4844]: I0126 12:46:20.857936 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/94726f3c-782c-4f4c-89cc-60229b8f339a-bound-sa-token\") pod \"ingress-operator-5b745b69d9-hpxdc\" (UID: \"94726f3c-782c-4f4c-89cc-60229b8f339a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hpxdc" Jan 26 12:46:20 crc kubenswrapper[4844]: I0126 12:46:20.861434 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 26 12:46:20 crc kubenswrapper[4844]: I0126 12:46:20.872630 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-zsn9c" Jan 26 12:46:20 crc kubenswrapper[4844]: W0126 12:46:20.872804 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7ec10c36_d3de_409c_a3d6_3cde63c0b206.slice/crio-c0177449298a4c0a5474aa8da52f89bddd91a9dbe61f4a65f5c2a1146ea32d63 WatchSource:0}: Error finding container c0177449298a4c0a5474aa8da52f89bddd91a9dbe61f4a65f5c2a1146ea32d63: Status 404 returned error can't find the container with id c0177449298a4c0a5474aa8da52f89bddd91a9dbe61f4a65f5c2a1146ea32d63 Jan 26 12:46:20 crc kubenswrapper[4844]: I0126 12:46:20.879621 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-fmk5t" Jan 26 12:46:20 crc kubenswrapper[4844]: I0126 12:46:20.881978 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 26 12:46:20 crc kubenswrapper[4844]: I0126 12:46:20.901415 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 26 12:46:20 crc kubenswrapper[4844]: I0126 12:46:20.934328 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tt8hp\" (UniqueName: \"kubernetes.io/projected/f27f4e56-71ef-43e6-be78-20759a8e9ed5-kube-api-access-tt8hp\") pod \"machine-config-controller-84d6567774-6zcv5\" (UID: \"f27f4e56-71ef-43e6-be78-20759a8e9ed5\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6zcv5" Jan 26 12:46:20 crc kubenswrapper[4844]: I0126 12:46:20.940337 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-fzvnx" Jan 26 12:46:20 crc kubenswrapper[4844]: I0126 12:46:20.947023 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-n8hpb" Jan 26 12:46:20 crc kubenswrapper[4844]: I0126 12:46:20.957488 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rcd7\" (UniqueName: \"kubernetes.io/projected/49ce2590-a0c6-4e75-af35-73bb211e6829-kube-api-access-4rcd7\") pod \"dns-operator-744455d44c-75rtp\" (UID: \"49ce2590-a0c6-4e75-af35-73bb211e6829\") " pod="openshift-dns-operator/dns-operator-744455d44c-75rtp" Jan 26 12:46:20 crc kubenswrapper[4844]: I0126 12:46:20.969359 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sbrtp" Jan 26 12:46:20 crc kubenswrapper[4844]: I0126 12:46:20.974340 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-875vr\" (UniqueName: \"kubernetes.io/projected/43fa0cde-7ba5-4788-be26-1170bf6ee75d-kube-api-access-875vr\") pod \"multus-admission-controller-857f4d67dd-7fzwr\" (UID: \"43fa0cde-7ba5-4788-be26-1170bf6ee75d\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-7fzwr" Jan 26 12:46:20 crc kubenswrapper[4844]: I0126 12:46:20.993971 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jfwgn"] Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.003729 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-f5gx4" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.004540 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0735aeec-55b6-4140-8c72-d11b656ddb07-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-fs4g6\" (UID: \"0735aeec-55b6-4140-8c72-d11b656ddb07\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fs4g6" Jan 26 12:46:21 crc kubenswrapper[4844]: W0126 12:46:21.006072 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1aeb70f5_e543_4f51_bcf7_605df435f80e.slice/crio-b3f20d4eab2a7c8de18425ca6869c0e0ee2be932f1ae4df9eff1ee21e027e878 WatchSource:0}: Error finding container b3f20d4eab2a7c8de18425ca6869c0e0ee2be932f1ae4df9eff1ee21e027e878: Status 404 returned error can't find the container with id b3f20d4eab2a7c8de18425ca6869c0e0ee2be932f1ae4df9eff1ee21e027e878 Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.019486 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krlfl\" (UniqueName: \"kubernetes.io/projected/0735aeec-55b6-4140-8c72-d11b656ddb07-kube-api-access-krlfl\") pod \"cluster-image-registry-operator-dc59b4c8b-fs4g6\" (UID: \"0735aeec-55b6-4140-8c72-d11b656ddb07\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fs4g6" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.030347 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-7fzwr" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.043838 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fbwgg" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.044333 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6zcv5" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.046683 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99n86\" (UniqueName: \"kubernetes.io/projected/b428addf-b196-461c-aaaf-7b9b14848a6c-kube-api-access-99n86\") pod \"downloads-7954f5f757-5rkhb\" (UID: \"b428addf-b196-461c-aaaf-7b9b14848a6c\") " pod="openshift-console/downloads-7954f5f757-5rkhb" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.057543 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64jrz\" (UniqueName: \"kubernetes.io/projected/0b0b2321-3f0f-4889-acad-bb7b10f96043-kube-api-access-64jrz\") pod \"service-ca-9c57cc56f-vvlfw\" (UID: \"0b0b2321-3f0f-4889-acad-bb7b10f96043\") " pod="openshift-service-ca/service-ca-9c57cc56f-vvlfw" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.057846 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-75rtp" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.063397 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-vvlfw" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.064547 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-rlnfh" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.075773 4844 generic.go:334] "Generic (PLEG): container finished" podID="45322811-c744-4cce-a307-088c0bc3965a" containerID="addefcc9adbdbb4ac004e5aded586bbdbac3af814b179fc7963fe6543fcc2038" exitCode=0 Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.075856 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-rtks2" event={"ID":"45322811-c744-4cce-a307-088c0bc3965a","Type":"ContainerDied","Data":"addefcc9adbdbb4ac004e5aded586bbdbac3af814b179fc7963fe6543fcc2038"} Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.076407 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6d85\" (UniqueName: \"kubernetes.io/projected/71551b91-3a04-4dcd-9a94-e96b4663b040-kube-api-access-n6d85\") pod \"package-server-manager-789f6589d5-pmxvg\" (UID: \"71551b91-3a04-4dcd-9a94-e96b4663b040\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pmxvg" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.077731 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sbrtp" event={"ID":"1aeb70f5-e543-4f51-bcf7-605df435f80e","Type":"ContainerStarted","Data":"b3f20d4eab2a7c8de18425ca6869c0e0ee2be932f1ae4df9eff1ee21e027e878"} Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.083683 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-c8rpj" event={"ID":"7ec10c36-d3de-409c-a3d6-3cde63c0b206","Type":"ContainerStarted","Data":"2660012a3614c63d5e8ab134f8bf002ea031370adc32d60a6ad419c0b479bde1"} Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.083742 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-c8rpj" event={"ID":"7ec10c36-d3de-409c-a3d6-3cde63c0b206","Type":"ContainerStarted","Data":"c0177449298a4c0a5474aa8da52f89bddd91a9dbe61f4a65f5c2a1146ea32d63"} Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.109946 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hpxdc" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.111463 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cb5bn\" (UniqueName: \"kubernetes.io/projected/e278457d-db19-47bc-a2a5-6ff0e994aace-kube-api-access-cb5bn\") pod \"openshift-controller-manager-operator-756b6f6bc6-g8j2r\" (UID: \"e278457d-db19-47bc-a2a5-6ff0e994aace\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-g8j2r" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.133205 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-zsn9c"] Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.136196 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snbml\" (UniqueName: \"kubernetes.io/projected/2e87ef7d-a670-47ae-8a85-cfc07a848430-kube-api-access-snbml\") pod \"olm-operator-6b444d44fb-ksxk5\" (UID: \"2e87ef7d-a670-47ae-8a85-cfc07a848430\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ksxk5" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.139751 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbtkg\" (UniqueName: \"kubernetes.io/projected/3875ab05-c190-4557-a863-84b3c123fe26-kube-api-access-wbtkg\") pod \"catalog-operator-68c6474976-scvs4\" (UID: \"3875ab05-c190-4557-a863-84b3c123fe26\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-scvs4" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.175141 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-vhsn2"] Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.190103 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-fmk5t"] Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.224071 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9fnj\" (UniqueName: \"kubernetes.io/projected/46a01ba7-7357-471a-ae59-95361f2ce7ba-kube-api-access-q9fnj\") pod \"router-default-5444994796-9pkgp\" (UID: \"46a01ba7-7357-471a-ae59-95361f2ce7ba\") " pod="openshift-ingress/router-default-5444994796-9pkgp" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.224147 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmfkx\" (UniqueName: \"kubernetes.io/projected/85096fe3-8ab7-45f9-8ae7-c36ff77a7333-kube-api-access-gmfkx\") pod \"etcd-operator-b45778765-89xb7\" (UID: \"85096fe3-8ab7-45f9-8ae7-c36ff77a7333\") " pod="openshift-etcd-operator/etcd-operator-b45778765-89xb7" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.224204 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e17e004d-fb45-4c4f-896f-6f650a0f7379-bound-sa-token\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.224232 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/8f3783e9-776b-434b-8298-59283076969f-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-9cmnk\" (UID: \"8f3783e9-776b-434b-8298-59283076969f\") " pod="openshift-marketplace/marketplace-operator-79b997595-9cmnk" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.224290 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e17e004d-fb45-4c4f-896f-6f650a0f7379-ca-trust-extracted\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.224310 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/46a01ba7-7357-471a-ae59-95361f2ce7ba-default-certificate\") pod \"router-default-5444994796-9pkgp\" (UID: \"46a01ba7-7357-471a-ae59-95361f2ce7ba\") " pod="openshift-ingress/router-default-5444994796-9pkgp" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.224381 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.224426 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e17e004d-fb45-4c4f-896f-6f650a0f7379-installation-pull-secrets\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.224454 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e17e004d-fb45-4c4f-896f-6f650a0f7379-trusted-ca\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.224507 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85096fe3-8ab7-45f9-8ae7-c36ff77a7333-serving-cert\") pod \"etcd-operator-b45778765-89xb7\" (UID: \"85096fe3-8ab7-45f9-8ae7-c36ff77a7333\") " pod="openshift-etcd-operator/etcd-operator-b45778765-89xb7" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.224532 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwq8k\" (UniqueName: \"kubernetes.io/projected/e17e004d-fb45-4c4f-896f-6f650a0f7379-kube-api-access-wwq8k\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.224648 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/e17e004d-fb45-4c4f-896f-6f650a0f7379-registry-certificates\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.224686 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f3783e9-776b-434b-8298-59283076969f-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-9cmnk\" (UID: \"8f3783e9-776b-434b-8298-59283076969f\") " pod="openshift-marketplace/marketplace-operator-79b997595-9cmnk" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.224709 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/85096fe3-8ab7-45f9-8ae7-c36ff77a7333-etcd-ca\") pod \"etcd-operator-b45778765-89xb7\" (UID: \"85096fe3-8ab7-45f9-8ae7-c36ff77a7333\") " pod="openshift-etcd-operator/etcd-operator-b45778765-89xb7" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.224733 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/46a01ba7-7357-471a-ae59-95361f2ce7ba-service-ca-bundle\") pod \"router-default-5444994796-9pkgp\" (UID: \"46a01ba7-7357-471a-ae59-95361f2ce7ba\") " pod="openshift-ingress/router-default-5444994796-9pkgp" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.224754 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/46a01ba7-7357-471a-ae59-95361f2ce7ba-stats-auth\") pod \"router-default-5444994796-9pkgp\" (UID: \"46a01ba7-7357-471a-ae59-95361f2ce7ba\") " pod="openshift-ingress/router-default-5444994796-9pkgp" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.224798 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tl4s4\" (UniqueName: \"kubernetes.io/projected/8f3783e9-776b-434b-8298-59283076969f-kube-api-access-tl4s4\") pod \"marketplace-operator-79b997595-9cmnk\" (UID: \"8f3783e9-776b-434b-8298-59283076969f\") " pod="openshift-marketplace/marketplace-operator-79b997595-9cmnk" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.224845 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/85096fe3-8ab7-45f9-8ae7-c36ff77a7333-etcd-client\") pod \"etcd-operator-b45778765-89xb7\" (UID: \"85096fe3-8ab7-45f9-8ae7-c36ff77a7333\") " pod="openshift-etcd-operator/etcd-operator-b45778765-89xb7" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.224893 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/46a01ba7-7357-471a-ae59-95361f2ce7ba-metrics-certs\") pod \"router-default-5444994796-9pkgp\" (UID: \"46a01ba7-7357-471a-ae59-95361f2ce7ba\") " pod="openshift-ingress/router-default-5444994796-9pkgp" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.224926 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85096fe3-8ab7-45f9-8ae7-c36ff77a7333-config\") pod \"etcd-operator-b45778765-89xb7\" (UID: 
\"85096fe3-8ab7-45f9-8ae7-c36ff77a7333\") " pod="openshift-etcd-operator/etcd-operator-b45778765-89xb7" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.224952 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/85096fe3-8ab7-45f9-8ae7-c36ff77a7333-etcd-service-ca\") pod \"etcd-operator-b45778765-89xb7\" (UID: \"85096fe3-8ab7-45f9-8ae7-c36ff77a7333\") " pod="openshift-etcd-operator/etcd-operator-b45778765-89xb7" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.224991 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e17e004d-fb45-4c4f-896f-6f650a0f7379-registry-tls\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:21 crc kubenswrapper[4844]: E0126 12:46:21.226158 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:46:21.726143338 +0000 UTC m=+158.659511030 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dwwm9" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.255333 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-vzrkt"] Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.256482 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-n8hpb"] Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.262121 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-g8j2r" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.267796 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-j9vvp"] Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.278722 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fs4g6" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.317259 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-scvs4" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.324702 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-5rkhb" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.325923 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.326272 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7dsf\" (UniqueName: \"kubernetes.io/projected/516355b9-6e51-4a48-8583-0529c3f53013-kube-api-access-g7dsf\") pod \"packageserver-d55dfcdfc-qbpjx\" (UID: \"516355b9-6e51-4a48-8583-0529c3f53013\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qbpjx" Jan 26 12:46:21 crc kubenswrapper[4844]: E0126 12:46:21.326423 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:46:21.826403251 +0000 UTC m=+158.759770863 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.326628 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/516355b9-6e51-4a48-8583-0529c3f53013-webhook-cert\") pod \"packageserver-d55dfcdfc-qbpjx\" (UID: \"516355b9-6e51-4a48-8583-0529c3f53013\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qbpjx" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.326683 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.326724 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d864ad06-5a3e-4f38-a16a-22de2e50ce8c-metrics-tls\") pod \"dns-default-r8j24\" (UID: \"d864ad06-5a3e-4f38-a16a-22de2e50ce8c\") " pod="openshift-dns/dns-default-r8j24" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.326748 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjkpt\" (UniqueName: \"kubernetes.io/projected/1176f79a-2455-49f3-b11a-faf502559c52-kube-api-access-cjkpt\") pod \"service-ca-operator-777779d784-fl26p\" (UID: \"1176f79a-2455-49f3-b11a-faf502559c52\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-fl26p" Jan 26 12:46:21 crc kubenswrapper[4844]: E0126 12:46:21.327070 
4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:46:21.827054208 +0000 UTC m=+158.760421820 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dwwm9" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.327168 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e17e004d-fb45-4c4f-896f-6f650a0f7379-installation-pull-secrets\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.327227 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e17e004d-fb45-4c4f-896f-6f650a0f7379-trusted-ca\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.327251 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85096fe3-8ab7-45f9-8ae7-c36ff77a7333-serving-cert\") pod \"etcd-operator-b45778765-89xb7\" (UID: \"85096fe3-8ab7-45f9-8ae7-c36ff77a7333\") " pod="openshift-etcd-operator/etcd-operator-b45778765-89xb7" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.327278 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ctks\" (UniqueName: \"kubernetes.io/projected/10b7b789-0c46-4e84-875e-f74c68981bca-kube-api-access-9ctks\") pod \"control-plane-machine-set-operator-78cbb6b69f-qltc7\" (UID: \"10b7b789-0c46-4e84-875e-f74c68981bca\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qltc7" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.327319 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ml5jt\" (UniqueName: \"kubernetes.io/projected/d864ad06-5a3e-4f38-a16a-22de2e50ce8c-kube-api-access-ml5jt\") pod \"dns-default-r8j24\" (UID: \"d864ad06-5a3e-4f38-a16a-22de2e50ce8c\") " pod="openshift-dns/dns-default-r8j24" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.327410 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96c710b8-69dd-49d7-8606-85bc4a4899ca-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-rtr85\" (UID: \"96c710b8-69dd-49d7-8606-85bc4a4899ca\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rtr85" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.327431 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwq8k\" (UniqueName: 
\"kubernetes.io/projected/e17e004d-fb45-4c4f-896f-6f650a0f7379-kube-api-access-wwq8k\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.327475 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/8cccdbda-6833-4c8f-b709-ab1f617e2153-socket-dir\") pod \"csi-hostpathplugin-b6r5v\" (UID: \"8cccdbda-6833-4c8f-b709-ab1f617e2153\") " pod="hostpath-provisioner/csi-hostpathplugin-b6r5v" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.327504 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/7973e4fa-99bd-46f3-bf39-8c9e7209e788-certs\") pod \"machine-config-server-5mxl2\" (UID: \"7973e4fa-99bd-46f3-bf39-8c9e7209e788\") " pod="openshift-machine-config-operator/machine-config-server-5mxl2" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.328933 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/96c710b8-69dd-49d7-8606-85bc4a4899ca-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-rtr85\" (UID: \"96c710b8-69dd-49d7-8606-85bc4a4899ca\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rtr85" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.328963 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/11aed539-3a79-4f8a-bba3-e2839ccf0d41-cert\") pod \"ingress-canary-fnd9b\" (UID: \"11aed539-3a79-4f8a-bba3-e2839ccf0d41\") " pod="openshift-ingress-canary/ingress-canary-fnd9b" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.329072 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0b95a697-eeb9-444d-83ed-3484a41f5dd1-config-volume\") pod \"collect-profiles-29490525-mqbpl\" (UID: \"0b95a697-eeb9-444d-83ed-3484a41f5dd1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490525-mqbpl" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.329090 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5k7dt\" (UniqueName: \"kubernetes.io/projected/0b95a697-eeb9-444d-83ed-3484a41f5dd1-kube-api-access-5k7dt\") pod \"collect-profiles-29490525-mqbpl\" (UID: \"0b95a697-eeb9-444d-83ed-3484a41f5dd1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490525-mqbpl" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.329108 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e17e004d-fb45-4c4f-896f-6f650a0f7379-registry-certificates\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.329124 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhqck\" (UniqueName: \"kubernetes.io/projected/8cccdbda-6833-4c8f-b709-ab1f617e2153-kube-api-access-vhqck\") pod 
\"csi-hostpathplugin-b6r5v\" (UID: \"8cccdbda-6833-4c8f-b709-ab1f617e2153\") " pod="hostpath-provisioner/csi-hostpathplugin-b6r5v" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.329673 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e17e004d-fb45-4c4f-896f-6f650a0f7379-trusted-ca\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.332025 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e17e004d-fb45-4c4f-896f-6f650a0f7379-registry-certificates\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.334208 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f3783e9-776b-434b-8298-59283076969f-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-9cmnk\" (UID: \"8f3783e9-776b-434b-8298-59283076969f\") " pod="openshift-marketplace/marketplace-operator-79b997595-9cmnk" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.334277 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/8cccdbda-6833-4c8f-b709-ab1f617e2153-registration-dir\") pod \"csi-hostpathplugin-b6r5v\" (UID: \"8cccdbda-6833-4c8f-b709-ab1f617e2153\") " pod="hostpath-provisioner/csi-hostpathplugin-b6r5v" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.335387 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/85096fe3-8ab7-45f9-8ae7-c36ff77a7333-etcd-ca\") pod \"etcd-operator-b45778765-89xb7\" (UID: \"85096fe3-8ab7-45f9-8ae7-c36ff77a7333\") " pod="openshift-etcd-operator/etcd-operator-b45778765-89xb7" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.335430 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/46a01ba7-7357-471a-ae59-95361f2ce7ba-service-ca-bundle\") pod \"router-default-5444994796-9pkgp\" (UID: \"46a01ba7-7357-471a-ae59-95361f2ce7ba\") " pod="openshift-ingress/router-default-5444994796-9pkgp" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.335458 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/46a01ba7-7357-471a-ae59-95361f2ce7ba-stats-auth\") pod \"router-default-5444994796-9pkgp\" (UID: \"46a01ba7-7357-471a-ae59-95361f2ce7ba\") " pod="openshift-ingress/router-default-5444994796-9pkgp" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.335503 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-642s2\" (UniqueName: \"kubernetes.io/projected/11aed539-3a79-4f8a-bba3-e2839ccf0d41-kube-api-access-642s2\") pod \"ingress-canary-fnd9b\" (UID: \"11aed539-3a79-4f8a-bba3-e2839ccf0d41\") " pod="openshift-ingress-canary/ingress-canary-fnd9b" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.335543 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-mt5l2\" (UniqueName: \"kubernetes.io/projected/96c710b8-69dd-49d7-8606-85bc4a4899ca-kube-api-access-mt5l2\") pod \"kube-storage-version-migrator-operator-b67b599dd-rtr85\" (UID: \"96c710b8-69dd-49d7-8606-85bc4a4899ca\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rtr85" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.335568 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1176f79a-2455-49f3-b11a-faf502559c52-serving-cert\") pod \"service-ca-operator-777779d784-fl26p\" (UID: \"1176f79a-2455-49f3-b11a-faf502559c52\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-fl26p" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.335610 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/10b7b789-0c46-4e84-875e-f74c68981bca-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-qltc7\" (UID: \"10b7b789-0c46-4e84-875e-f74c68981bca\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qltc7" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.335664 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tl4s4\" (UniqueName: \"kubernetes.io/projected/8f3783e9-776b-434b-8298-59283076969f-kube-api-access-tl4s4\") pod \"marketplace-operator-79b997595-9cmnk\" (UID: \"8f3783e9-776b-434b-8298-59283076969f\") " pod="openshift-marketplace/marketplace-operator-79b997595-9cmnk" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.335687 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/8cccdbda-6833-4c8f-b709-ab1f617e2153-plugins-dir\") pod \"csi-hostpathplugin-b6r5v\" (UID: \"8cccdbda-6833-4c8f-b709-ab1f617e2153\") " pod="hostpath-provisioner/csi-hostpathplugin-b6r5v" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.335710 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ca63e8a3-b015-4b94-95bc-5c3cdda81f88-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-xcs68\" (UID: \"ca63e8a3-b015-4b94-95bc-5c3cdda81f88\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xcs68" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.335759 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/85096fe3-8ab7-45f9-8ae7-c36ff77a7333-etcd-client\") pod \"etcd-operator-b45778765-89xb7\" (UID: \"85096fe3-8ab7-45f9-8ae7-c36ff77a7333\") " pod="openshift-etcd-operator/etcd-operator-b45778765-89xb7" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.335782 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1176f79a-2455-49f3-b11a-faf502559c52-config\") pod \"service-ca-operator-777779d784-fl26p\" (UID: \"1176f79a-2455-49f3-b11a-faf502559c52\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-fl26p" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.335851 4844 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/516355b9-6e51-4a48-8583-0529c3f53013-apiservice-cert\") pod \"packageserver-d55dfcdfc-qbpjx\" (UID: \"516355b9-6e51-4a48-8583-0529c3f53013\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qbpjx" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.335880 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/46a01ba7-7357-471a-ae59-95361f2ce7ba-metrics-certs\") pod \"router-default-5444994796-9pkgp\" (UID: \"46a01ba7-7357-471a-ae59-95361f2ce7ba\") " pod="openshift-ingress/router-default-5444994796-9pkgp" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.335944 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85096fe3-8ab7-45f9-8ae7-c36ff77a7333-config\") pod \"etcd-operator-b45778765-89xb7\" (UID: \"85096fe3-8ab7-45f9-8ae7-c36ff77a7333\") " pod="openshift-etcd-operator/etcd-operator-b45778765-89xb7" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.335966 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/08695a3d-343d-4425-bae7-186f1dcb9a0d-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-jl5ts\" (UID: \"08695a3d-343d-4425-bae7-186f1dcb9a0d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jl5ts" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.336028 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ca63e8a3-b015-4b94-95bc-5c3cdda81f88-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-xcs68\" (UID: \"ca63e8a3-b015-4b94-95bc-5c3cdda81f88\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xcs68" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.336061 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0b95a697-eeb9-444d-83ed-3484a41f5dd1-secret-volume\") pod \"collect-profiles-29490525-mqbpl\" (UID: \"0b95a697-eeb9-444d-83ed-3484a41f5dd1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490525-mqbpl" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.336083 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8cn2\" (UniqueName: \"kubernetes.io/projected/3f2d657c-0a0d-4671-a720-ef689ccf2120-kube-api-access-w8cn2\") pod \"migrator-59844c95c7-sgslp\" (UID: \"3f2d657c-0a0d-4671-a720-ef689ccf2120\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-sgslp" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.336143 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/85096fe3-8ab7-45f9-8ae7-c36ff77a7333-etcd-service-ca\") pod \"etcd-operator-b45778765-89xb7\" (UID: \"85096fe3-8ab7-45f9-8ae7-c36ff77a7333\") " pod="openshift-etcd-operator/etcd-operator-b45778765-89xb7" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.336184 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: 
\"kubernetes.io/host-path/8cccdbda-6833-4c8f-b709-ab1f617e2153-mountpoint-dir\") pod \"csi-hostpathplugin-b6r5v\" (UID: \"8cccdbda-6833-4c8f-b709-ab1f617e2153\") " pod="hostpath-provisioner/csi-hostpathplugin-b6r5v" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.336268 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/7973e4fa-99bd-46f3-bf39-8c9e7209e788-node-bootstrap-token\") pod \"machine-config-server-5mxl2\" (UID: \"7973e4fa-99bd-46f3-bf39-8c9e7209e788\") " pod="openshift-machine-config-operator/machine-config-server-5mxl2" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.336294 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zc9l9\" (UniqueName: \"kubernetes.io/projected/7973e4fa-99bd-46f3-bf39-8c9e7209e788-kube-api-access-zc9l9\") pod \"machine-config-server-5mxl2\" (UID: \"7973e4fa-99bd-46f3-bf39-8c9e7209e788\") " pod="openshift-machine-config-operator/machine-config-server-5mxl2" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.336340 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e17e004d-fb45-4c4f-896f-6f650a0f7379-registry-tls\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.336387 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca63e8a3-b015-4b94-95bc-5c3cdda81f88-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-xcs68\" (UID: \"ca63e8a3-b015-4b94-95bc-5c3cdda81f88\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xcs68" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.336436 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9fnj\" (UniqueName: \"kubernetes.io/projected/46a01ba7-7357-471a-ae59-95361f2ce7ba-kube-api-access-q9fnj\") pod \"router-default-5444994796-9pkgp\" (UID: \"46a01ba7-7357-471a-ae59-95361f2ce7ba\") " pod="openshift-ingress/router-default-5444994796-9pkgp" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.336469 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmfkx\" (UniqueName: \"kubernetes.io/projected/85096fe3-8ab7-45f9-8ae7-c36ff77a7333-kube-api-access-gmfkx\") pod \"etcd-operator-b45778765-89xb7\" (UID: \"85096fe3-8ab7-45f9-8ae7-c36ff77a7333\") " pod="openshift-etcd-operator/etcd-operator-b45778765-89xb7" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.336535 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d864ad06-5a3e-4f38-a16a-22de2e50ce8c-config-volume\") pod \"dns-default-r8j24\" (UID: \"d864ad06-5a3e-4f38-a16a-22de2e50ce8c\") " pod="openshift-dns/dns-default-r8j24" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.336558 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08695a3d-343d-4425-bae7-186f1dcb9a0d-config\") pod \"kube-apiserver-operator-766d6c64bb-jl5ts\" (UID: \"08695a3d-343d-4425-bae7-186f1dcb9a0d\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jl5ts" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.336630 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/08695a3d-343d-4425-bae7-186f1dcb9a0d-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-jl5ts\" (UID: \"08695a3d-343d-4425-bae7-186f1dcb9a0d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jl5ts" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.336653 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/8cccdbda-6833-4c8f-b709-ab1f617e2153-csi-data-dir\") pod \"csi-hostpathplugin-b6r5v\" (UID: \"8cccdbda-6833-4c8f-b709-ab1f617e2153\") " pod="hostpath-provisioner/csi-hostpathplugin-b6r5v" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.336696 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8f3783e9-776b-434b-8298-59283076969f-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-9cmnk\" (UID: \"8f3783e9-776b-434b-8298-59283076969f\") " pod="openshift-marketplace/marketplace-operator-79b997595-9cmnk" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.336722 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e17e004d-fb45-4c4f-896f-6f650a0f7379-bound-sa-token\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.336747 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e17e004d-fb45-4c4f-896f-6f650a0f7379-ca-trust-extracted\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.336775 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/46a01ba7-7357-471a-ae59-95361f2ce7ba-default-certificate\") pod \"router-default-5444994796-9pkgp\" (UID: \"46a01ba7-7357-471a-ae59-95361f2ce7ba\") " pod="openshift-ingress/router-default-5444994796-9pkgp" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.336797 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cbca5bfe-41c8-403c-95e9-18e7854e6ed0-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-bj9c4\" (UID: \"cbca5bfe-41c8-403c-95e9-18e7854e6ed0\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bj9c4" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.336820 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cbca5bfe-41c8-403c-95e9-18e7854e6ed0-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-bj9c4\" (UID: \"cbca5bfe-41c8-403c-95e9-18e7854e6ed0\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bj9c4" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.336845 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/516355b9-6e51-4a48-8583-0529c3f53013-tmpfs\") pod \"packageserver-d55dfcdfc-qbpjx\" (UID: \"516355b9-6e51-4a48-8583-0529c3f53013\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qbpjx" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.336878 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cbca5bfe-41c8-403c-95e9-18e7854e6ed0-config\") pod \"kube-controller-manager-operator-78b949d7b-bj9c4\" (UID: \"cbca5bfe-41c8-403c-95e9-18e7854e6ed0\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bj9c4" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.337010 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/46a01ba7-7357-471a-ae59-95361f2ce7ba-service-ca-bundle\") pod \"router-default-5444994796-9pkgp\" (UID: \"46a01ba7-7357-471a-ae59-95361f2ce7ba\") " pod="openshift-ingress/router-default-5444994796-9pkgp" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.337843 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f3783e9-776b-434b-8298-59283076969f-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-9cmnk\" (UID: \"8f3783e9-776b-434b-8298-59283076969f\") " pod="openshift-marketplace/marketplace-operator-79b997595-9cmnk" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.338396 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85096fe3-8ab7-45f9-8ae7-c36ff77a7333-config\") pod \"etcd-operator-b45778765-89xb7\" (UID: \"85096fe3-8ab7-45f9-8ae7-c36ff77a7333\") " pod="openshift-etcd-operator/etcd-operator-b45778765-89xb7" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.336028 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/85096fe3-8ab7-45f9-8ae7-c36ff77a7333-etcd-ca\") pod \"etcd-operator-b45778765-89xb7\" (UID: \"85096fe3-8ab7-45f9-8ae7-c36ff77a7333\") " pod="openshift-etcd-operator/etcd-operator-b45778765-89xb7" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.338849 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pmxvg" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.339764 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e17e004d-fb45-4c4f-896f-6f650a0f7379-ca-trust-extracted\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.349099 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/85096fe3-8ab7-45f9-8ae7-c36ff77a7333-etcd-service-ca\") pod \"etcd-operator-b45778765-89xb7\" (UID: \"85096fe3-8ab7-45f9-8ae7-c36ff77a7333\") " pod="openshift-etcd-operator/etcd-operator-b45778765-89xb7" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.351110 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ksxk5" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.354218 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/46a01ba7-7357-471a-ae59-95361f2ce7ba-metrics-certs\") pod \"router-default-5444994796-9pkgp\" (UID: \"46a01ba7-7357-471a-ae59-95361f2ce7ba\") " pod="openshift-ingress/router-default-5444994796-9pkgp" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.357053 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/85096fe3-8ab7-45f9-8ae7-c36ff77a7333-etcd-client\") pod \"etcd-operator-b45778765-89xb7\" (UID: \"85096fe3-8ab7-45f9-8ae7-c36ff77a7333\") " pod="openshift-etcd-operator/etcd-operator-b45778765-89xb7" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.357238 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e17e004d-fb45-4c4f-896f-6f650a0f7379-registry-tls\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.357638 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85096fe3-8ab7-45f9-8ae7-c36ff77a7333-serving-cert\") pod \"etcd-operator-b45778765-89xb7\" (UID: \"85096fe3-8ab7-45f9-8ae7-c36ff77a7333\") " pod="openshift-etcd-operator/etcd-operator-b45778765-89xb7" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.358277 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/46a01ba7-7357-471a-ae59-95361f2ce7ba-stats-auth\") pod \"router-default-5444994796-9pkgp\" (UID: \"46a01ba7-7357-471a-ae59-95361f2ce7ba\") " pod="openshift-ingress/router-default-5444994796-9pkgp" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.358963 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e17e004d-fb45-4c4f-896f-6f650a0f7379-installation-pull-secrets\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 
12:46:21.359274 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8f3783e9-776b-434b-8298-59283076969f-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-9cmnk\" (UID: \"8f3783e9-776b-434b-8298-59283076969f\") " pod="openshift-marketplace/marketplace-operator-79b997595-9cmnk" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.359441 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/46a01ba7-7357-471a-ae59-95361f2ce7ba-default-certificate\") pod \"router-default-5444994796-9pkgp\" (UID: \"46a01ba7-7357-471a-ae59-95361f2ce7ba\") " pod="openshift-ingress/router-default-5444994796-9pkgp" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.368240 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwq8k\" (UniqueName: \"kubernetes.io/projected/e17e004d-fb45-4c4f-896f-6f650a0f7379-kube-api-access-wwq8k\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.417947 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9fnj\" (UniqueName: \"kubernetes.io/projected/46a01ba7-7357-471a-ae59-95361f2ce7ba-kube-api-access-q9fnj\") pod \"router-default-5444994796-9pkgp\" (UID: \"46a01ba7-7357-471a-ae59-95361f2ce7ba\") " pod="openshift-ingress/router-default-5444994796-9pkgp" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.433215 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmfkx\" (UniqueName: \"kubernetes.io/projected/85096fe3-8ab7-45f9-8ae7-c36ff77a7333-kube-api-access-gmfkx\") pod \"etcd-operator-b45778765-89xb7\" (UID: \"85096fe3-8ab7-45f9-8ae7-c36ff77a7333\") " pod="openshift-etcd-operator/etcd-operator-b45778765-89xb7" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.437837 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:21 crc kubenswrapper[4844]: E0126 12:46:21.438108 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:46:21.938080403 +0000 UTC m=+158.871448015 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.446308 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-642s2\" (UniqueName: \"kubernetes.io/projected/11aed539-3a79-4f8a-bba3-e2839ccf0d41-kube-api-access-642s2\") pod \"ingress-canary-fnd9b\" (UID: \"11aed539-3a79-4f8a-bba3-e2839ccf0d41\") " pod="openshift-ingress-canary/ingress-canary-fnd9b" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.446371 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mt5l2\" (UniqueName: \"kubernetes.io/projected/96c710b8-69dd-49d7-8606-85bc4a4899ca-kube-api-access-mt5l2\") pod \"kube-storage-version-migrator-operator-b67b599dd-rtr85\" (UID: \"96c710b8-69dd-49d7-8606-85bc4a4899ca\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rtr85" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.446413 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/10b7b789-0c46-4e84-875e-f74c68981bca-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-qltc7\" (UID: \"10b7b789-0c46-4e84-875e-f74c68981bca\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qltc7" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.446442 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1176f79a-2455-49f3-b11a-faf502559c52-serving-cert\") pod \"service-ca-operator-777779d784-fl26p\" (UID: \"1176f79a-2455-49f3-b11a-faf502559c52\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-fl26p" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.446475 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/8cccdbda-6833-4c8f-b709-ab1f617e2153-plugins-dir\") pod \"csi-hostpathplugin-b6r5v\" (UID: \"8cccdbda-6833-4c8f-b709-ab1f617e2153\") " pod="hostpath-provisioner/csi-hostpathplugin-b6r5v" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.446495 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ca63e8a3-b015-4b94-95bc-5c3cdda81f88-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-xcs68\" (UID: \"ca63e8a3-b015-4b94-95bc-5c3cdda81f88\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xcs68" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.446529 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1176f79a-2455-49f3-b11a-faf502559c52-config\") pod \"service-ca-operator-777779d784-fl26p\" (UID: \"1176f79a-2455-49f3-b11a-faf502559c52\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-fl26p" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 
12:46:21.446555 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/516355b9-6e51-4a48-8583-0529c3f53013-apiservice-cert\") pod \"packageserver-d55dfcdfc-qbpjx\" (UID: \"516355b9-6e51-4a48-8583-0529c3f53013\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qbpjx" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.446586 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/08695a3d-343d-4425-bae7-186f1dcb9a0d-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-jl5ts\" (UID: \"08695a3d-343d-4425-bae7-186f1dcb9a0d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jl5ts" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.446626 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ca63e8a3-b015-4b94-95bc-5c3cdda81f88-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-xcs68\" (UID: \"ca63e8a3-b015-4b94-95bc-5c3cdda81f88\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xcs68" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.446648 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0b95a697-eeb9-444d-83ed-3484a41f5dd1-secret-volume\") pod \"collect-profiles-29490525-mqbpl\" (UID: \"0b95a697-eeb9-444d-83ed-3484a41f5dd1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490525-mqbpl" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.446669 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8cn2\" (UniqueName: \"kubernetes.io/projected/3f2d657c-0a0d-4671-a720-ef689ccf2120-kube-api-access-w8cn2\") pod \"migrator-59844c95c7-sgslp\" (UID: \"3f2d657c-0a0d-4671-a720-ef689ccf2120\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-sgslp" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.446696 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/8cccdbda-6833-4c8f-b709-ab1f617e2153-mountpoint-dir\") pod \"csi-hostpathplugin-b6r5v\" (UID: \"8cccdbda-6833-4c8f-b709-ab1f617e2153\") " pod="hostpath-provisioner/csi-hostpathplugin-b6r5v" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.446725 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/7973e4fa-99bd-46f3-bf39-8c9e7209e788-node-bootstrap-token\") pod \"machine-config-server-5mxl2\" (UID: \"7973e4fa-99bd-46f3-bf39-8c9e7209e788\") " pod="openshift-machine-config-operator/machine-config-server-5mxl2" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.446746 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zc9l9\" (UniqueName: \"kubernetes.io/projected/7973e4fa-99bd-46f3-bf39-8c9e7209e788-kube-api-access-zc9l9\") pod \"machine-config-server-5mxl2\" (UID: \"7973e4fa-99bd-46f3-bf39-8c9e7209e788\") " pod="openshift-machine-config-operator/machine-config-server-5mxl2" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.446778 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/ca63e8a3-b015-4b94-95bc-5c3cdda81f88-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-xcs68\" (UID: \"ca63e8a3-b015-4b94-95bc-5c3cdda81f88\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xcs68" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.446826 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d864ad06-5a3e-4f38-a16a-22de2e50ce8c-config-volume\") pod \"dns-default-r8j24\" (UID: \"d864ad06-5a3e-4f38-a16a-22de2e50ce8c\") " pod="openshift-dns/dns-default-r8j24" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.446846 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08695a3d-343d-4425-bae7-186f1dcb9a0d-config\") pod \"kube-apiserver-operator-766d6c64bb-jl5ts\" (UID: \"08695a3d-343d-4425-bae7-186f1dcb9a0d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jl5ts" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.446867 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/8cccdbda-6833-4c8f-b709-ab1f617e2153-csi-data-dir\") pod \"csi-hostpathplugin-b6r5v\" (UID: \"8cccdbda-6833-4c8f-b709-ab1f617e2153\") " pod="hostpath-provisioner/csi-hostpathplugin-b6r5v" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.446888 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/08695a3d-343d-4425-bae7-186f1dcb9a0d-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-jl5ts\" (UID: \"08695a3d-343d-4425-bae7-186f1dcb9a0d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jl5ts" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.446917 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cbca5bfe-41c8-403c-95e9-18e7854e6ed0-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-bj9c4\" (UID: \"cbca5bfe-41c8-403c-95e9-18e7854e6ed0\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bj9c4" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.446944 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cbca5bfe-41c8-403c-95e9-18e7854e6ed0-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-bj9c4\" (UID: \"cbca5bfe-41c8-403c-95e9-18e7854e6ed0\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bj9c4" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.446968 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/516355b9-6e51-4a48-8583-0529c3f53013-tmpfs\") pod \"packageserver-d55dfcdfc-qbpjx\" (UID: \"516355b9-6e51-4a48-8583-0529c3f53013\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qbpjx" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.446989 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cbca5bfe-41c8-403c-95e9-18e7854e6ed0-config\") pod \"kube-controller-manager-operator-78b949d7b-bj9c4\" (UID: \"cbca5bfe-41c8-403c-95e9-18e7854e6ed0\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bj9c4" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.447020 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7dsf\" (UniqueName: \"kubernetes.io/projected/516355b9-6e51-4a48-8583-0529c3f53013-kube-api-access-g7dsf\") pod \"packageserver-d55dfcdfc-qbpjx\" (UID: \"516355b9-6e51-4a48-8583-0529c3f53013\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qbpjx" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.447046 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/516355b9-6e51-4a48-8583-0529c3f53013-webhook-cert\") pod \"packageserver-d55dfcdfc-qbpjx\" (UID: \"516355b9-6e51-4a48-8583-0529c3f53013\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qbpjx" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.447074 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.447104 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d864ad06-5a3e-4f38-a16a-22de2e50ce8c-metrics-tls\") pod \"dns-default-r8j24\" (UID: \"d864ad06-5a3e-4f38-a16a-22de2e50ce8c\") " pod="openshift-dns/dns-default-r8j24" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.447135 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjkpt\" (UniqueName: \"kubernetes.io/projected/1176f79a-2455-49f3-b11a-faf502559c52-kube-api-access-cjkpt\") pod \"service-ca-operator-777779d784-fl26p\" (UID: \"1176f79a-2455-49f3-b11a-faf502559c52\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-fl26p" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.444747 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e17e004d-fb45-4c4f-896f-6f650a0f7379-bound-sa-token\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.452756 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9ctks\" (UniqueName: \"kubernetes.io/projected/10b7b789-0c46-4e84-875e-f74c68981bca-kube-api-access-9ctks\") pod \"control-plane-machine-set-operator-78cbb6b69f-qltc7\" (UID: \"10b7b789-0c46-4e84-875e-f74c68981bca\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qltc7" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.453794 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ml5jt\" (UniqueName: \"kubernetes.io/projected/d864ad06-5a3e-4f38-a16a-22de2e50ce8c-kube-api-access-ml5jt\") pod \"dns-default-r8j24\" (UID: \"d864ad06-5a3e-4f38-a16a-22de2e50ce8c\") " pod="openshift-dns/dns-default-r8j24" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.453835 4844 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96c710b8-69dd-49d7-8606-85bc4a4899ca-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-rtr85\" (UID: \"96c710b8-69dd-49d7-8606-85bc4a4899ca\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rtr85" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.453862 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/8cccdbda-6833-4c8f-b709-ab1f617e2153-socket-dir\") pod \"csi-hostpathplugin-b6r5v\" (UID: \"8cccdbda-6833-4c8f-b709-ab1f617e2153\") " pod="hostpath-provisioner/csi-hostpathplugin-b6r5v" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.453891 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/7973e4fa-99bd-46f3-bf39-8c9e7209e788-certs\") pod \"machine-config-server-5mxl2\" (UID: \"7973e4fa-99bd-46f3-bf39-8c9e7209e788\") " pod="openshift-machine-config-operator/machine-config-server-5mxl2" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.453917 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/96c710b8-69dd-49d7-8606-85bc4a4899ca-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-rtr85\" (UID: \"96c710b8-69dd-49d7-8606-85bc4a4899ca\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rtr85" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.453954 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/11aed539-3a79-4f8a-bba3-e2839ccf0d41-cert\") pod \"ingress-canary-fnd9b\" (UID: \"11aed539-3a79-4f8a-bba3-e2839ccf0d41\") " pod="openshift-ingress-canary/ingress-canary-fnd9b" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.453976 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0b95a697-eeb9-444d-83ed-3484a41f5dd1-config-volume\") pod \"collect-profiles-29490525-mqbpl\" (UID: \"0b95a697-eeb9-444d-83ed-3484a41f5dd1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490525-mqbpl" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.453994 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5k7dt\" (UniqueName: \"kubernetes.io/projected/0b95a697-eeb9-444d-83ed-3484a41f5dd1-kube-api-access-5k7dt\") pod \"collect-profiles-29490525-mqbpl\" (UID: \"0b95a697-eeb9-444d-83ed-3484a41f5dd1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490525-mqbpl" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.454036 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vhqck\" (UniqueName: \"kubernetes.io/projected/8cccdbda-6833-4c8f-b709-ab1f617e2153-kube-api-access-vhqck\") pod \"csi-hostpathplugin-b6r5v\" (UID: \"8cccdbda-6833-4c8f-b709-ab1f617e2153\") " pod="hostpath-provisioner/csi-hostpathplugin-b6r5v" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.454061 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/8cccdbda-6833-4c8f-b709-ab1f617e2153-registration-dir\") pod \"csi-hostpathplugin-b6r5v\" (UID: 
\"8cccdbda-6833-4c8f-b709-ab1f617e2153\") " pod="hostpath-provisioner/csi-hostpathplugin-b6r5v" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.454315 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/8cccdbda-6833-4c8f-b709-ab1f617e2153-registration-dir\") pod \"csi-hostpathplugin-b6r5v\" (UID: \"8cccdbda-6833-4c8f-b709-ab1f617e2153\") " pod="hostpath-provisioner/csi-hostpathplugin-b6r5v" Jan 26 12:46:21 crc kubenswrapper[4844]: E0126 12:46:21.454669 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:46:21.95465242 +0000 UTC m=+158.888020032 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dwwm9" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.455272 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca63e8a3-b015-4b94-95bc-5c3cdda81f88-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-xcs68\" (UID: \"ca63e8a3-b015-4b94-95bc-5c3cdda81f88\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xcs68" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.460479 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0b95a697-eeb9-444d-83ed-3484a41f5dd1-config-volume\") pod \"collect-profiles-29490525-mqbpl\" (UID: \"0b95a697-eeb9-444d-83ed-3484a41f5dd1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490525-mqbpl" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.460570 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96c710b8-69dd-49d7-8606-85bc4a4899ca-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-rtr85\" (UID: \"96c710b8-69dd-49d7-8606-85bc4a4899ca\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rtr85" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.461176 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1176f79a-2455-49f3-b11a-faf502559c52-config\") pod \"service-ca-operator-777779d784-fl26p\" (UID: \"1176f79a-2455-49f3-b11a-faf502559c52\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-fl26p" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.462461 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/10b7b789-0c46-4e84-875e-f74c68981bca-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-qltc7\" (UID: \"10b7b789-0c46-4e84-875e-f74c68981bca\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qltc7" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.462520 4844 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/8cccdbda-6833-4c8f-b709-ab1f617e2153-socket-dir\") pod \"csi-hostpathplugin-b6r5v\" (UID: \"8cccdbda-6833-4c8f-b709-ab1f617e2153\") " pod="hostpath-provisioner/csi-hostpathplugin-b6r5v" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.462565 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/8cccdbda-6833-4c8f-b709-ab1f617e2153-plugins-dir\") pod \"csi-hostpathplugin-b6r5v\" (UID: \"8cccdbda-6833-4c8f-b709-ab1f617e2153\") " pod="hostpath-provisioner/csi-hostpathplugin-b6r5v" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.462861 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d864ad06-5a3e-4f38-a16a-22de2e50ce8c-config-volume\") pod \"dns-default-r8j24\" (UID: \"d864ad06-5a3e-4f38-a16a-22de2e50ce8c\") " pod="openshift-dns/dns-default-r8j24" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.462937 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/8cccdbda-6833-4c8f-b709-ab1f617e2153-mountpoint-dir\") pod \"csi-hostpathplugin-b6r5v\" (UID: \"8cccdbda-6833-4c8f-b709-ab1f617e2153\") " pod="hostpath-provisioner/csi-hostpathplugin-b6r5v" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.462952 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/516355b9-6e51-4a48-8583-0529c3f53013-tmpfs\") pod \"packageserver-d55dfcdfc-qbpjx\" (UID: \"516355b9-6e51-4a48-8583-0529c3f53013\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qbpjx" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.466145 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/8cccdbda-6833-4c8f-b709-ab1f617e2153-csi-data-dir\") pod \"csi-hostpathplugin-b6r5v\" (UID: \"8cccdbda-6833-4c8f-b709-ab1f617e2153\") " pod="hostpath-provisioner/csi-hostpathplugin-b6r5v" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.466460 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08695a3d-343d-4425-bae7-186f1dcb9a0d-config\") pod \"kube-apiserver-operator-766d6c64bb-jl5ts\" (UID: \"08695a3d-343d-4425-bae7-186f1dcb9a0d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jl5ts" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.467741 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d864ad06-5a3e-4f38-a16a-22de2e50ce8c-metrics-tls\") pod \"dns-default-r8j24\" (UID: \"d864ad06-5a3e-4f38-a16a-22de2e50ce8c\") " pod="openshift-dns/dns-default-r8j24" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.470847 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cbca5bfe-41c8-403c-95e9-18e7854e6ed0-config\") pod \"kube-controller-manager-operator-78b949d7b-bj9c4\" (UID: \"cbca5bfe-41c8-403c-95e9-18e7854e6ed0\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bj9c4" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.474148 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"secret-volume\" (UniqueName: \"kubernetes.io/secret/0b95a697-eeb9-444d-83ed-3484a41f5dd1-secret-volume\") pod \"collect-profiles-29490525-mqbpl\" (UID: \"0b95a697-eeb9-444d-83ed-3484a41f5dd1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490525-mqbpl" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.474486 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1176f79a-2455-49f3-b11a-faf502559c52-serving-cert\") pod \"service-ca-operator-777779d784-fl26p\" (UID: \"1176f79a-2455-49f3-b11a-faf502559c52\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-fl26p" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.474881 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/516355b9-6e51-4a48-8583-0529c3f53013-apiservice-cert\") pod \"packageserver-d55dfcdfc-qbpjx\" (UID: \"516355b9-6e51-4a48-8583-0529c3f53013\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qbpjx" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.475338 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/7973e4fa-99bd-46f3-bf39-8c9e7209e788-certs\") pod \"machine-config-server-5mxl2\" (UID: \"7973e4fa-99bd-46f3-bf39-8c9e7209e788\") " pod="openshift-machine-config-operator/machine-config-server-5mxl2" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.476924 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tl4s4\" (UniqueName: \"kubernetes.io/projected/8f3783e9-776b-434b-8298-59283076969f-kube-api-access-tl4s4\") pod \"marketplace-operator-79b997595-9cmnk\" (UID: \"8f3783e9-776b-434b-8298-59283076969f\") " pod="openshift-marketplace/marketplace-operator-79b997595-9cmnk" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.477896 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ca63e8a3-b015-4b94-95bc-5c3cdda81f88-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-xcs68\" (UID: \"ca63e8a3-b015-4b94-95bc-5c3cdda81f88\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xcs68" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.478173 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/11aed539-3a79-4f8a-bba3-e2839ccf0d41-cert\") pod \"ingress-canary-fnd9b\" (UID: \"11aed539-3a79-4f8a-bba3-e2839ccf0d41\") " pod="openshift-ingress-canary/ingress-canary-fnd9b" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.478776 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/96c710b8-69dd-49d7-8606-85bc4a4899ca-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-rtr85\" (UID: \"96c710b8-69dd-49d7-8606-85bc4a4899ca\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rtr85" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.478996 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/516355b9-6e51-4a48-8583-0529c3f53013-webhook-cert\") pod \"packageserver-d55dfcdfc-qbpjx\" (UID: \"516355b9-6e51-4a48-8583-0529c3f53013\") " 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qbpjx" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.483375 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/08695a3d-343d-4425-bae7-186f1dcb9a0d-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-jl5ts\" (UID: \"08695a3d-343d-4425-bae7-186f1dcb9a0d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jl5ts" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.484931 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cbca5bfe-41c8-403c-95e9-18e7854e6ed0-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-bj9c4\" (UID: \"cbca5bfe-41c8-403c-95e9-18e7854e6ed0\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bj9c4" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.491369 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/7973e4fa-99bd-46f3-bf39-8c9e7209e788-node-bootstrap-token\") pod \"machine-config-server-5mxl2\" (UID: \"7973e4fa-99bd-46f3-bf39-8c9e7209e788\") " pod="openshift-machine-config-operator/machine-config-server-5mxl2" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.522557 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ctks\" (UniqueName: \"kubernetes.io/projected/10b7b789-0c46-4e84-875e-f74c68981bca-kube-api-access-9ctks\") pod \"control-plane-machine-set-operator-78cbb6b69f-qltc7\" (UID: \"10b7b789-0c46-4e84-875e-f74c68981bca\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qltc7" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.531031 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zc9l9\" (UniqueName: \"kubernetes.io/projected/7973e4fa-99bd-46f3-bf39-8c9e7209e788-kube-api-access-zc9l9\") pod \"machine-config-server-5mxl2\" (UID: \"7973e4fa-99bd-46f3-bf39-8c9e7209e788\") " pod="openshift-machine-config-operator/machine-config-server-5mxl2" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.532009 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fbwgg"] Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.552039 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-642s2\" (UniqueName: \"kubernetes.io/projected/11aed539-3a79-4f8a-bba3-e2839ccf0d41-kube-api-access-642s2\") pod \"ingress-canary-fnd9b\" (UID: \"11aed539-3a79-4f8a-bba3-e2839ccf0d41\") " pod="openshift-ingress-canary/ingress-canary-fnd9b" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.554671 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:21 crc kubenswrapper[4844]: E0126 12:46:21.554983 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-26 12:46:22.054942624 +0000 UTC m=+158.988310236 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.555085 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.581087 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mt5l2\" (UniqueName: \"kubernetes.io/projected/96c710b8-69dd-49d7-8606-85bc4a4899ca-kube-api-access-mt5l2\") pod \"kube-storage-version-migrator-operator-b67b599dd-rtr85\" (UID: \"96c710b8-69dd-49d7-8606-85bc4a4899ca\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rtr85" Jan 26 12:46:21 crc kubenswrapper[4844]: E0126 12:46:21.588756 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:46:22.088725945 +0000 UTC m=+159.022093557 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dwwm9" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.602515 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-fzvnx"] Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.604273 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5k7dt\" (UniqueName: \"kubernetes.io/projected/0b95a697-eeb9-444d-83ed-3484a41f5dd1-kube-api-access-5k7dt\") pod \"collect-profiles-29490525-mqbpl\" (UID: \"0b95a697-eeb9-444d-83ed-3484a41f5dd1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490525-mqbpl" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.618708 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ml5jt\" (UniqueName: \"kubernetes.io/projected/d864ad06-5a3e-4f38-a16a-22de2e50ce8c-kube-api-access-ml5jt\") pod \"dns-default-r8j24\" (UID: \"d864ad06-5a3e-4f38-a16a-22de2e50ce8c\") " pod="openshift-dns/dns-default-r8j24" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.619386 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjkpt\" (UniqueName: \"kubernetes.io/projected/1176f79a-2455-49f3-b11a-faf502559c52-kube-api-access-cjkpt\") pod \"service-ca-operator-777779d784-fl26p\" (UID: \"1176f79a-2455-49f3-b11a-faf502559c52\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-fl26p" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.625983 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-89xb7" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.634532 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-7fzwr"] Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.647446 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/08695a3d-343d-4425-bae7-186f1dcb9a0d-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-jl5ts\" (UID: \"08695a3d-343d-4425-bae7-186f1dcb9a0d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jl5ts" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.655713 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-75rtp"] Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.657457 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ca63e8a3-b015-4b94-95bc-5c3cdda81f88-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-xcs68\" (UID: \"ca63e8a3-b015-4b94-95bc-5c3cdda81f88\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xcs68" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.677230 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhqck\" (UniqueName: \"kubernetes.io/projected/8cccdbda-6833-4c8f-b709-ab1f617e2153-kube-api-access-vhqck\") pod \"csi-hostpathplugin-b6r5v\" (UID: \"8cccdbda-6833-4c8f-b709-ab1f617e2153\") " pod="hostpath-provisioner/csi-hostpathplugin-b6r5v" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.677892 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-9pkgp" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.680712 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:21 crc kubenswrapper[4844]: E0126 12:46:21.680889 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:46:22.180861643 +0000 UTC m=+159.114229255 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.680988 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:21 crc kubenswrapper[4844]: E0126 12:46:21.681342 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:46:22.181328056 +0000 UTC m=+159.114695668 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dwwm9" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.692313 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-9cmnk" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.698811 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8cn2\" (UniqueName: \"kubernetes.io/projected/3f2d657c-0a0d-4671-a720-ef689ccf2120-kube-api-access-w8cn2\") pod \"migrator-59844c95c7-sgslp\" (UID: \"3f2d657c-0a0d-4671-a720-ef689ccf2120\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-sgslp" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.699028 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-fl26p" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.700188 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-hpxdc"] Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.707152 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xcs68" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.713815 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jl5ts" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.720688 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7dsf\" (UniqueName: \"kubernetes.io/projected/516355b9-6e51-4a48-8583-0529c3f53013-kube-api-access-g7dsf\") pod \"packageserver-d55dfcdfc-qbpjx\" (UID: \"516355b9-6e51-4a48-8583-0529c3f53013\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qbpjx" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.723419 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rtr85" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.733426 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490525-mqbpl" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.741672 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cbca5bfe-41c8-403c-95e9-18e7854e6ed0-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-bj9c4\" (UID: \"cbca5bfe-41c8-403c-95e9-18e7854e6ed0\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bj9c4" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.741964 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-sgslp" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.748730 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-6zcv5"] Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.750386 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qltc7" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.750828 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-rlnfh"] Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.753306 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-f5gx4"] Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.757524 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bj9c4" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.765947 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-fnd9b" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.783127 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.786474 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-b6r5v" Jan 26 12:46:21 crc kubenswrapper[4844]: E0126 12:46:21.786971 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:46:22.286953684 +0000 UTC m=+159.220321296 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.787304 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-g8j2r"] Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.791520 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-r8j24" Jan 26 12:46:21 crc kubenswrapper[4844]: W0126 12:46:21.792048 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4c91bd8e_040a_4961_8a7f_2fbeacff5b50.slice/crio-1c1c8efe7b37254a6d629022f4bc3601b3aac14ce0bea5b6e068cd0deb8700b1 WatchSource:0}: Error finding container 1c1c8efe7b37254a6d629022f4bc3601b3aac14ce0bea5b6e068cd0deb8700b1: Status 404 returned error can't find the container with id 1c1c8efe7b37254a6d629022f4bc3601b3aac14ce0bea5b6e068cd0deb8700b1 Jan 26 12:46:21 crc kubenswrapper[4844]: W0126 12:46:21.794224 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda537a695_5721_4eae_a5f7_6df14075f458.slice/crio-ec4cb5b7fbc4c3c2f4b6830ce427835c54ddba469c4fcb065980ec61210f3e9f WatchSource:0}: Error finding container ec4cb5b7fbc4c3c2f4b6830ce427835c54ddba469c4fcb065980ec61210f3e9f: Status 404 returned error can't find the container with id ec4cb5b7fbc4c3c2f4b6830ce427835c54ddba469c4fcb065980ec61210f3e9f Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.796443 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-5mxl2" Jan 26 12:46:21 crc kubenswrapper[4844]: W0126 12:46:21.804833 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod43fa0cde_7ba5_4788_be26_1170bf6ee75d.slice/crio-05c34b444a94171672f56e564a2d02f4277b71b97e9ce5c4f43b82c2466e0915 WatchSource:0}: Error finding container 05c34b444a94171672f56e564a2d02f4277b71b97e9ce5c4f43b82c2466e0915: Status 404 returned error can't find the container with id 05c34b444a94171672f56e564a2d02f4277b71b97e9ce5c4f43b82c2466e0915 Jan 26 12:46:21 crc kubenswrapper[4844]: W0126 12:46:21.819914 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb21e7f91_3226_493e_bbfb_89b33296e74e.slice/crio-b2bd760e1173b6b082e854155bf7ce95ab95e14d2be93f563790828532165ec6 WatchSource:0}: Error finding container b2bd760e1173b6b082e854155bf7ce95ab95e14d2be93f563790828532165ec6: Status 404 returned error can't find the container with id b2bd760e1173b6b082e854155bf7ce95ab95e14d2be93f563790828532165ec6 Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.849663 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-vvlfw"] Jan 26 12:46:21 crc kubenswrapper[4844]: W0126 12:46:21.857423 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode278457d_db19_47bc_a2a5_6ff0e994aace.slice/crio-5a68c8fe06dc2eeb6c1fd3b89f6379a62062903c442033be06efc85827be214d WatchSource:0}: Error finding container 5a68c8fe06dc2eeb6c1fd3b89f6379a62062903c442033be06efc85827be214d: Status 404 returned error can't find the container with id 5a68c8fe06dc2eeb6c1fd3b89f6379a62062903c442033be06efc85827be214d Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.888103 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:21 crc kubenswrapper[4844]: E0126 12:46:21.888419 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:46:22.388408078 +0000 UTC m=+159.321775690 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dwwm9" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.962666 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fs4g6"] Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.986882 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ksxk5"] Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.989471 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:21 crc kubenswrapper[4844]: E0126 12:46:21.989858 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:46:22.489842101 +0000 UTC m=+159.423209713 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.992048 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qbpjx" Jan 26 12:46:21 crc kubenswrapper[4844]: I0126 12:46:21.993049 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-5rkhb"] Jan 26 12:46:22 crc kubenswrapper[4844]: I0126 12:46:22.034025 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pmxvg"] Jan 26 12:46:22 crc kubenswrapper[4844]: I0126 12:46:22.089229 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-rlnfh" event={"ID":"03f3ecc3-1b4d-4016-bc08-2b29d1b03d63","Type":"ContainerStarted","Data":"d04ed1d6ffdc3a4919245dc5be84ea3c2b9f3627f238b4cb92e786056562adeb"} Jan 26 12:46:22 crc kubenswrapper[4844]: I0126 12:46:22.090148 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6zcv5" event={"ID":"f27f4e56-71ef-43e6-be78-20759a8e9ed5","Type":"ContainerStarted","Data":"c8930feafa60a6e02d28af8e00e8aa86ea71f59918e99cffa15bc1b13ace229a"} Jan 26 12:46:22 crc kubenswrapper[4844]: I0126 12:46:22.090457 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:22 crc kubenswrapper[4844]: E0126 12:46:22.090874 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:46:22.590854033 +0000 UTC m=+159.524221635 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dwwm9" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:22 crc kubenswrapper[4844]: I0126 12:46:22.091160 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hpxdc" event={"ID":"94726f3c-782c-4f4c-89cc-60229b8f339a","Type":"ContainerStarted","Data":"ac7effb7071906c2c2643ddf8bee4c2125637bb4a0d50fd71d3181d8afaee0f6"} Jan 26 12:46:22 crc kubenswrapper[4844]: I0126 12:46:22.092365 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-75rtp" event={"ID":"49ce2590-a0c6-4e75-af35-73bb211e6829","Type":"ContainerStarted","Data":"6915d9e0b918bfe8313392a3df5a357ac957db873dd1e8aba560631151befea1"} Jan 26 12:46:22 crc kubenswrapper[4844]: I0126 12:46:22.093212 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-n8hpb" event={"ID":"0c1c2a13-ee4c-4ced-9799-a1332e4e134f","Type":"ContainerStarted","Data":"8dbfdb3501e42e56a7b5bd7f5da1ffb48df18cdf49cf5c6ccd186b613ad1947b"} Jan 26 12:46:22 crc kubenswrapper[4844]: I0126 12:46:22.095429 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fbwgg" event={"ID":"4c91bd8e-040a-4961-8a7f-2fbeacff5b50","Type":"ContainerStarted","Data":"1c1c8efe7b37254a6d629022f4bc3601b3aac14ce0bea5b6e068cd0deb8700b1"} Jan 26 12:46:22 crc kubenswrapper[4844]: I0126 12:46:22.097875 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-vzrkt" event={"ID":"2c6b3ec3-b406-4c6f-bd8c-6f21caf1e94a","Type":"ContainerStarted","Data":"e194a2e6872dede2fb92a6e8f99bc7a1a7e3946c3c1b4c884061541555da7f8b"} Jan 26 12:46:22 crc kubenswrapper[4844]: I0126 12:46:22.099089 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jfwgn" event={"ID":"7129ebfc-8ee6-475d-81d7-dcc6a9d6a6e6","Type":"ContainerStarted","Data":"39cd56727937970512d19a1d0d38bdb5cab1db52ed49373b1176ffc00120b86f"} Jan 26 12:46:22 crc kubenswrapper[4844]: I0126 12:46:22.100853 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-f5gx4" event={"ID":"b21e7f91-3226-493e-bbfb-89b33296e74e","Type":"ContainerStarted","Data":"b2bd760e1173b6b082e854155bf7ce95ab95e14d2be93f563790828532165ec6"} Jan 26 12:46:22 crc kubenswrapper[4844]: I0126 12:46:22.104578 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-fmk5t" event={"ID":"e6a96cc6-703f-4104-8ff8-53c3cafb2227","Type":"ContainerStarted","Data":"98c3de53b099ad3e627ba372ff3ee134253fdc07605c69e3e2acc5ba4d5889c9"} Jan 26 12:46:22 crc kubenswrapper[4844]: I0126 12:46:22.105649 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-7fzwr" event={"ID":"43fa0cde-7ba5-4788-be26-1170bf6ee75d","Type":"ContainerStarted","Data":"05c34b444a94171672f56e564a2d02f4277b71b97e9ce5c4f43b82c2466e0915"} Jan 26 12:46:22 
crc kubenswrapper[4844]: I0126 12:46:22.107158 4844 generic.go:334] "Generic (PLEG): container finished" podID="7ec10c36-d3de-409c-a3d6-3cde63c0b206" containerID="2660012a3614c63d5e8ab134f8bf002ea031370adc32d60a6ad419c0b479bde1" exitCode=0 Jan 26 12:46:22 crc kubenswrapper[4844]: I0126 12:46:22.107217 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-c8rpj" event={"ID":"7ec10c36-d3de-409c-a3d6-3cde63c0b206","Type":"ContainerDied","Data":"2660012a3614c63d5e8ab134f8bf002ea031370adc32d60a6ad419c0b479bde1"} Jan 26 12:46:22 crc kubenswrapper[4844]: I0126 12:46:22.109467 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-zsn9c" event={"ID":"4fd9b862-74de-4579-9b30-b51e5cbd3b56","Type":"ContainerStarted","Data":"86e6e39becb52cb2f27f0008ca18b8845b439005e0872b547a4bfe9ee5b88fd3"} Jan 26 12:46:22 crc kubenswrapper[4844]: I0126 12:46:22.112468 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j9vvp" event={"ID":"038469c2-c803-45d5-aaa5-d81663f41345","Type":"ContainerStarted","Data":"93fa9e2486c01aefc53f5ac4433824d3da7c97d45abf01f1475dd98987cf869b"} Jan 26 12:46:22 crc kubenswrapper[4844]: W0126 12:46:22.113319 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b0b2321_3f0f_4889_acad_bb7b10f96043.slice/crio-cc69e3ab93df12d4cac0b4b1406446036e5e55bb960cde9f8218a047496f59f0 WatchSource:0}: Error finding container cc69e3ab93df12d4cac0b4b1406446036e5e55bb960cde9f8218a047496f59f0: Status 404 returned error can't find the container with id cc69e3ab93df12d4cac0b4b1406446036e5e55bb960cde9f8218a047496f59f0 Jan 26 12:46:22 crc kubenswrapper[4844]: I0126 12:46:22.113767 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-vhsn2" event={"ID":"8269d7d3-678d-44d5-885e-c5716e8024d8","Type":"ContainerStarted","Data":"518e032a28b7b5814efefee927465d7d479a8f18c62442e6d011f64c8a321648"} Jan 26 12:46:22 crc kubenswrapper[4844]: I0126 12:46:22.114580 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-fzvnx" event={"ID":"a537a695-5721-4eae-a5f7-6df14075f458","Type":"ContainerStarted","Data":"ec4cb5b7fbc4c3c2f4b6830ce427835c54ddba469c4fcb065980ec61210f3e9f"} Jan 26 12:46:22 crc kubenswrapper[4844]: I0126 12:46:22.115250 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-g8j2r" event={"ID":"e278457d-db19-47bc-a2a5-6ff0e994aace","Type":"ContainerStarted","Data":"5a68c8fe06dc2eeb6c1fd3b89f6379a62062903c442033be06efc85827be214d"} Jan 26 12:46:22 crc kubenswrapper[4844]: I0126 12:46:22.120483 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-scvs4"] Jan 26 12:46:22 crc kubenswrapper[4844]: W0126 12:46:22.136158 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71551b91_3a04_4dcd_9a94_e96b4663b040.slice/crio-2dd48d3ec078f06d542503dd70537d51f9c9a4ccbdcac75924d2f3dd4ff9e183 WatchSource:0}: Error finding container 2dd48d3ec078f06d542503dd70537d51f9c9a4ccbdcac75924d2f3dd4ff9e183: Status 404 returned error can't find the container with id 
2dd48d3ec078f06d542503dd70537d51f9c9a4ccbdcac75924d2f3dd4ff9e183 Jan 26 12:46:22 crc kubenswrapper[4844]: I0126 12:46:22.191566 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:22 crc kubenswrapper[4844]: E0126 12:46:22.192079 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:46:22.692054591 +0000 UTC m=+159.625422203 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:22 crc kubenswrapper[4844]: I0126 12:46:22.294773 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:22 crc kubenswrapper[4844]: E0126 12:46:22.296203 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:46:22.796173782 +0000 UTC m=+159.729541394 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dwwm9" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:22 crc kubenswrapper[4844]: I0126 12:46:22.378397 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qltc7"] Jan 26 12:46:22 crc kubenswrapper[4844]: I0126 12:46:22.396828 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:22 crc kubenswrapper[4844]: E0126 12:46:22.397221 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-26 12:46:22.897190445 +0000 UTC m=+159.830558057 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:22 crc kubenswrapper[4844]: I0126 12:46:22.397270 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:22 crc kubenswrapper[4844]: E0126 12:46:22.397881 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:46:22.897865711 +0000 UTC m=+159.831233323 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dwwm9" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:22 crc kubenswrapper[4844]: I0126 12:46:22.498180 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:22 crc kubenswrapper[4844]: E0126 12:46:22.498467 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:46:22.998453913 +0000 UTC m=+159.931821525 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:22 crc kubenswrapper[4844]: I0126 12:46:22.600121 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:22 crc kubenswrapper[4844]: E0126 12:46:22.600469 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:46:23.100454991 +0000 UTC m=+160.033822613 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dwwm9" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:22 crc kubenswrapper[4844]: I0126 12:46:22.668029 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rtr85"] Jan 26 12:46:22 crc kubenswrapper[4844]: I0126 12:46:22.669099 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-sgslp"] Jan 26 12:46:22 crc kubenswrapper[4844]: I0126 12:46:22.673632 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jl5ts"] Jan 26 12:46:22 crc kubenswrapper[4844]: I0126 12:46:22.676432 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-fl26p"] Jan 26 12:46:22 crc kubenswrapper[4844]: I0126 12:46:22.708382 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:22 crc kubenswrapper[4844]: E0126 12:46:22.708549 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:46:23.208523281 +0000 UTC m=+160.141890893 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:22 crc kubenswrapper[4844]: I0126 12:46:22.708699 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:22 crc kubenswrapper[4844]: E0126 12:46:22.709262 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:46:23.20925061 +0000 UTC m=+160.142618222 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dwwm9" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:22 crc kubenswrapper[4844]: I0126 12:46:22.812103 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:22 crc kubenswrapper[4844]: E0126 12:46:22.812630 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:46:23.312615172 +0000 UTC m=+160.245982784 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:22 crc kubenswrapper[4844]: I0126 12:46:22.815958 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-9cmnk"] Jan 26 12:46:22 crc kubenswrapper[4844]: I0126 12:46:22.830146 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-r8j24"] Jan 26 12:46:22 crc kubenswrapper[4844]: I0126 12:46:22.843433 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-89xb7"] Jan 26 12:46:22 crc kubenswrapper[4844]: I0126 12:46:22.914105 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:22 crc kubenswrapper[4844]: E0126 12:46:22.914518 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:46:23.414506586 +0000 UTC m=+160.347874198 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dwwm9" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:23 crc kubenswrapper[4844]: I0126 12:46:23.015942 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:23 crc kubenswrapper[4844]: E0126 12:46:23.023746 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:46:23.523707824 +0000 UTC m=+160.457075436 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:23 crc kubenswrapper[4844]: I0126 12:46:23.114801 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490525-mqbpl"] Jan 26 12:46:23 crc kubenswrapper[4844]: I0126 12:46:23.119026 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:23 crc kubenswrapper[4844]: E0126 12:46:23.119842 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:46:23.619824404 +0000 UTC m=+160.553192016 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dwwm9" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:23 crc kubenswrapper[4844]: I0126 12:46:23.133288 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-b6r5v"] Jan 26 12:46:23 crc kubenswrapper[4844]: I0126 12:46:23.153182 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-scvs4" event={"ID":"3875ab05-c190-4557-a863-84b3c123fe26","Type":"ContainerStarted","Data":"5873078db5e0295d464e3d0982f4e94a50e08324c1d1fafc9bdd0eb58daf0743"} Jan 26 12:46:23 crc kubenswrapper[4844]: I0126 12:46:23.184842 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sbrtp" event={"ID":"1aeb70f5-e543-4f51-bcf7-605df435f80e","Type":"ContainerStarted","Data":"11e239a8982c29dae4a46bc093158d7245da27cc30f8ab97831d17a53fefb335"} Jan 26 12:46:23 crc kubenswrapper[4844]: I0126 12:46:23.201284 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-9cmnk" event={"ID":"8f3783e9-776b-434b-8298-59283076969f","Type":"ContainerStarted","Data":"26db9da30c759a3f9966e36157826bcf2a1d507e38193de2aff8e91eb4ab4089"} Jan 26 12:46:23 crc kubenswrapper[4844]: I0126 12:46:23.205154 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-fnd9b"] Jan 26 12:46:23 crc kubenswrapper[4844]: I0126 12:46:23.206646 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-sgslp" 
event={"ID":"3f2d657c-0a0d-4671-a720-ef689ccf2120","Type":"ContainerStarted","Data":"bf16d29ab24e86745b5ca6216fdff12a112703242e92d9577dcfae142e1e64c6"} Jan 26 12:46:23 crc kubenswrapper[4844]: I0126 12:46:23.214683 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bj9c4"] Jan 26 12:46:23 crc kubenswrapper[4844]: I0126 12:46:23.221007 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:23 crc kubenswrapper[4844]: E0126 12:46:23.221494 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:46:23.721475613 +0000 UTC m=+160.654843225 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:23 crc kubenswrapper[4844]: I0126 12:46:23.225751 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xcs68"] Jan 26 12:46:23 crc kubenswrapper[4844]: W0126 12:46:23.237188 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b95a697_eeb9_444d_83ed_3484a41f5dd1.slice/crio-82235de4c874a39dd19ccd9cde8d593c7a4f516ec01cf5ad69779e2b1422f365 WatchSource:0}: Error finding container 82235de4c874a39dd19ccd9cde8d593c7a4f516ec01cf5ad69779e2b1422f365: Status 404 returned error can't find the container with id 82235de4c874a39dd19ccd9cde8d593c7a4f516ec01cf5ad69779e2b1422f365 Jan 26 12:46:23 crc kubenswrapper[4844]: I0126 12:46:23.271275 4844 generic.go:334] "Generic (PLEG): container finished" podID="038469c2-c803-45d5-aaa5-d81663f41345" containerID="ef7afeb06089cf421e1139e1350e0f4dcaf60f918b0f92c9c08ee224e6c144ca" exitCode=0 Jan 26 12:46:23 crc kubenswrapper[4844]: I0126 12:46:23.271464 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j9vvp" event={"ID":"038469c2-c803-45d5-aaa5-d81663f41345","Type":"ContainerDied","Data":"ef7afeb06089cf421e1139e1350e0f4dcaf60f918b0f92c9c08ee224e6c144ca"} Jan 26 12:46:23 crc kubenswrapper[4844]: I0126 12:46:23.288105 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-5rkhb" event={"ID":"b428addf-b196-461c-aaaf-7b9b14848a6c","Type":"ContainerStarted","Data":"7c27323005b5b4abe2029d43759bde3359e2ae3cfa5ae688fa490164ccf5e54e"} Jan 26 12:46:23 crc kubenswrapper[4844]: I0126 12:46:23.288163 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-5rkhb" 
event={"ID":"b428addf-b196-461c-aaaf-7b9b14848a6c","Type":"ContainerStarted","Data":"e42e56270dfc1a315501cf8ff7365272bb256e38367177cc470432419e16ae74"} Jan 26 12:46:23 crc kubenswrapper[4844]: I0126 12:46:23.289185 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-5rkhb" Jan 26 12:46:23 crc kubenswrapper[4844]: I0126 12:46:23.294938 4844 patch_prober.go:28] interesting pod/downloads-7954f5f757-5rkhb container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Jan 26 12:46:23 crc kubenswrapper[4844]: I0126 12:46:23.295041 4844 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-5rkhb" podUID="b428addf-b196-461c-aaaf-7b9b14848a6c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Jan 26 12:46:23 crc kubenswrapper[4844]: I0126 12:46:23.302796 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-vvlfw" event={"ID":"0b0b2321-3f0f-4889-acad-bb7b10f96043","Type":"ContainerStarted","Data":"cc69e3ab93df12d4cac0b4b1406446036e5e55bb960cde9f8218a047496f59f0"} Jan 26 12:46:23 crc kubenswrapper[4844]: I0126 12:46:23.323251 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:23 crc kubenswrapper[4844]: E0126 12:46:23.325802 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:46:23.825789278 +0000 UTC m=+160.759156890 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dwwm9" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:23 crc kubenswrapper[4844]: I0126 12:46:23.331808 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-5rkhb" podStartSLOduration=136.331783549 podStartE2EDuration="2m16.331783549s" podCreationTimestamp="2026-01-26 12:44:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:46:23.326953838 +0000 UTC m=+160.260321450" watchObservedRunningTime="2026-01-26 12:46:23.331783549 +0000 UTC m=+160.265151161" Jan 26 12:46:23 crc kubenswrapper[4844]: I0126 12:46:23.356584 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-vzrkt" Jan 26 12:46:23 crc kubenswrapper[4844]: I0126 12:46:23.356644 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qltc7" event={"ID":"10b7b789-0c46-4e84-875e-f74c68981bca","Type":"ContainerStarted","Data":"0ac1f232e92e7fe169d6f7af7c071454f6d43b598fe5919bb7c9cf8811b5702a"} Jan 26 12:46:23 crc kubenswrapper[4844]: I0126 12:46:23.356666 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-vzrkt" event={"ID":"2c6b3ec3-b406-4c6f-bd8c-6f21caf1e94a","Type":"ContainerStarted","Data":"b43109287bd1a447fc0d47dc5c791c9c87f608ba33f94cf7b7ba3ae48ae4b3c5"} Jan 26 12:46:23 crc kubenswrapper[4844]: I0126 12:46:23.356676 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-fl26p" event={"ID":"1176f79a-2455-49f3-b11a-faf502559c52","Type":"ContainerStarted","Data":"abcdd14fb1adb2bab161b4feb234790e5575924f91ba356efabf15bb6e956bd9"} Jan 26 12:46:23 crc kubenswrapper[4844]: I0126 12:46:23.356687 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-9pkgp" event={"ID":"46a01ba7-7357-471a-ae59-95361f2ce7ba","Type":"ContainerStarted","Data":"76c6124d63026f9b7c435485f4131ba78f4fd62a35273c7cb4a2730f95ec5dd7"} Jan 26 12:46:23 crc kubenswrapper[4844]: I0126 12:46:23.366893 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-zsn9c" event={"ID":"4fd9b862-74de-4579-9b30-b51e5cbd3b56","Type":"ContainerStarted","Data":"c370e9de59c6f6384f9128b7c10e6231284808c3e62294198b07fc41554a6eaa"} Jan 26 12:46:23 crc kubenswrapper[4844]: I0126 12:46:23.368995 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jl5ts" event={"ID":"08695a3d-343d-4425-bae7-186f1dcb9a0d","Type":"ContainerStarted","Data":"2c2071c4c2fecce6a2e4190ce4251baf40656ed878d2f3748db171f99c604246"} Jan 26 12:46:23 crc kubenswrapper[4844]: I0126 12:46:23.370936 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ksxk5" 
event={"ID":"2e87ef7d-a670-47ae-8a85-cfc07a848430","Type":"ContainerStarted","Data":"012842cc05e546a70f76859468141b6344cfdfca74d843dbf83c35c77df06b3d"} Jan 26 12:46:23 crc kubenswrapper[4844]: W0126 12:46:23.371036 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcbca5bfe_41c8_403c_95e9_18e7854e6ed0.slice/crio-5f7354069f07585adde01bb628a697264c19cc690466d286e74f10f3bf3761f2 WatchSource:0}: Error finding container 5f7354069f07585adde01bb628a697264c19cc690466d286e74f10f3bf3761f2: Status 404 returned error can't find the container with id 5f7354069f07585adde01bb628a697264c19cc690466d286e74f10f3bf3761f2 Jan 26 12:46:23 crc kubenswrapper[4844]: I0126 12:46:23.371949 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pmxvg" event={"ID":"71551b91-3a04-4dcd-9a94-e96b4663b040","Type":"ContainerStarted","Data":"2dd48d3ec078f06d542503dd70537d51f9c9a4ccbdcac75924d2f3dd4ff9e183"} Jan 26 12:46:23 crc kubenswrapper[4844]: I0126 12:46:23.373176 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-5mxl2" event={"ID":"7973e4fa-99bd-46f3-bf39-8c9e7209e788","Type":"ContainerStarted","Data":"c1925e511e36e4f5be552fdddc44e4b4dd8d51e8f0b17e81e28544e729218371"} Jan 26 12:46:23 crc kubenswrapper[4844]: I0126 12:46:23.377336 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-r8j24" event={"ID":"d864ad06-5a3e-4f38-a16a-22de2e50ce8c","Type":"ContainerStarted","Data":"30a4f79278104684a60811bbb82a19e5e6d96c931c22e4af85aceceacdf291f5"} Jan 26 12:46:23 crc kubenswrapper[4844]: I0126 12:46:23.396875 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fs4g6" event={"ID":"0735aeec-55b6-4140-8c72-d11b656ddb07","Type":"ContainerStarted","Data":"0e748caa19cf393f0048a388535dc7d92357a9c223b7354e9bb0b735afb96c93"} Jan 26 12:46:23 crc kubenswrapper[4844]: I0126 12:46:23.422465 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-fmk5t" event={"ID":"e6a96cc6-703f-4104-8ff8-53c3cafb2227","Type":"ContainerStarted","Data":"6ec5f0c11c305cb8ebe7ea97640489384b1218528df6e1ed3d79bb1aea4d78f0"} Jan 26 12:46:23 crc kubenswrapper[4844]: I0126 12:46:23.439221 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-fmk5t" Jan 26 12:46:23 crc kubenswrapper[4844]: I0126 12:46:23.443847 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:23 crc kubenswrapper[4844]: E0126 12:46:23.444054 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:46:23.944015965 +0000 UTC m=+160.877383577 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:23 crc kubenswrapper[4844]: I0126 12:46:23.445203 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:23 crc kubenswrapper[4844]: E0126 12:46:23.447620 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:46:23.947568144 +0000 UTC m=+160.880935756 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dwwm9" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:23 crc kubenswrapper[4844]: I0126 12:46:23.511063 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-n8hpb" event={"ID":"0c1c2a13-ee4c-4ced-9799-a1332e4e134f","Type":"ContainerStarted","Data":"96389f3c676f54efaedeffd953357e172c5a2475cc255dd20c36235fd754543c"} Jan 26 12:46:23 crc kubenswrapper[4844]: I0126 12:46:23.518760 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-rtks2" event={"ID":"45322811-c744-4cce-a307-088c0bc3965a","Type":"ContainerStarted","Data":"00383d5d63f6170d840d2169fff0ac81d4c312afe79a5f4c4790e2c9b7b9c1eb"} Jan 26 12:46:23 crc kubenswrapper[4844]: I0126 12:46:23.520174 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qbpjx"] Jan 26 12:46:23 crc kubenswrapper[4844]: I0126 12:46:23.546013 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:23 crc kubenswrapper[4844]: E0126 12:46:23.546347 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:46:24.04632228 +0000 UTC m=+160.979689882 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:23 crc kubenswrapper[4844]: I0126 12:46:23.554253 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jfwgn" event={"ID":"7129ebfc-8ee6-475d-81d7-dcc6a9d6a6e6","Type":"ContainerStarted","Data":"e7a6b312897279746988f7ac242b47014be03607bb534ece6ac6927e534877ba"} Jan 26 12:46:23 crc kubenswrapper[4844]: I0126 12:46:23.565574 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-89xb7" event={"ID":"85096fe3-8ab7-45f9-8ae7-c36ff77a7333","Type":"ContainerStarted","Data":"fd6b3821ed8fbe6d5d58c331892b72d4c19de06840023adce1afcb8e189b772f"} Jan 26 12:46:23 crc kubenswrapper[4844]: I0126 12:46:23.570923 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-vhsn2" event={"ID":"8269d7d3-678d-44d5-885e-c5716e8024d8","Type":"ContainerStarted","Data":"8f48e391126a27fc17f87108e0926de0cadaeafebd85ae862b34a557400870de"} Jan 26 12:46:23 crc kubenswrapper[4844]: I0126 12:46:23.572740 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rtr85" event={"ID":"96c710b8-69dd-49d7-8606-85bc4a4899ca","Type":"ContainerStarted","Data":"ace4856327eceda484405622e5c6bdac9959f87afce79486bb4ac08f6560bbfc"} Jan 26 12:46:23 crc kubenswrapper[4844]: W0126 12:46:23.592926 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod516355b9_6e51_4a48_8583_0529c3f53013.slice/crio-1aa46cbc54f8ac056ce3a14d35f3940a12d272ff06dd48d58fa58be29cddb2b0 WatchSource:0}: Error finding container 1aa46cbc54f8ac056ce3a14d35f3940a12d272ff06dd48d58fa58be29cddb2b0: Status 404 returned error can't find the container with id 1aa46cbc54f8ac056ce3a14d35f3940a12d272ff06dd48d58fa58be29cddb2b0 Jan 26 12:46:23 crc kubenswrapper[4844]: I0126 12:46:23.647209 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:23 crc kubenswrapper[4844]: E0126 12:46:23.647531 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:46:24.147518987 +0000 UTC m=+161.080886589 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dwwm9" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:23 crc kubenswrapper[4844]: I0126 12:46:23.671762 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-fmk5t" Jan 26 12:46:23 crc kubenswrapper[4844]: I0126 12:46:23.699515 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-vzrkt" Jan 26 12:46:23 crc kubenswrapper[4844]: I0126 12:46:23.749043 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:23 crc kubenswrapper[4844]: E0126 12:46:23.749686 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:46:24.249644818 +0000 UTC m=+161.183012430 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:23 crc kubenswrapper[4844]: I0126 12:46:23.750104 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:23 crc kubenswrapper[4844]: E0126 12:46:23.751474 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:46:24.251454083 +0000 UTC m=+161.184821695 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dwwm9" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:23 crc kubenswrapper[4844]: I0126 12:46:23.778351 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-vhsn2" podStartSLOduration=136.778330499 podStartE2EDuration="2m16.778330499s" podCreationTimestamp="2026-01-26 12:44:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:46:23.77281904 +0000 UTC m=+160.706186672" watchObservedRunningTime="2026-01-26 12:46:23.778330499 +0000 UTC m=+160.711698111" Jan 26 12:46:23 crc kubenswrapper[4844]: E0126 12:46:23.851931 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:46:24.351906052 +0000 UTC m=+161.285273654 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:23 crc kubenswrapper[4844]: I0126 12:46:23.857614 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:23 crc kubenswrapper[4844]: I0126 12:46:23.858097 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:23 crc kubenswrapper[4844]: E0126 12:46:23.858999 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:46:24.35897946 +0000 UTC m=+161.292347072 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dwwm9" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:23 crc kubenswrapper[4844]: I0126 12:46:23.860504 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-fmk5t" podStartSLOduration=136.860459517 podStartE2EDuration="2m16.860459517s" podCreationTimestamp="2026-01-26 12:44:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:46:23.855239765 +0000 UTC m=+160.788607397" watchObservedRunningTime="2026-01-26 12:46:23.860459517 +0000 UTC m=+160.793827119" Jan 26 12:46:23 crc kubenswrapper[4844]: I0126 12:46:23.865072 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-vzrkt" podStartSLOduration=136.865056442 podStartE2EDuration="2m16.865056442s" podCreationTimestamp="2026-01-26 12:44:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:46:23.814191822 +0000 UTC m=+160.747559454" watchObservedRunningTime="2026-01-26 12:46:23.865056442 +0000 UTC m=+160.798424054" Jan 26 12:46:23 crc kubenswrapper[4844]: I0126 12:46:23.959334 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:23 crc kubenswrapper[4844]: E0126 12:46:23.959534 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:46:24.45949363 +0000 UTC m=+161.392861242 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:23 crc kubenswrapper[4844]: I0126 12:46:23.959965 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:23 crc kubenswrapper[4844]: E0126 12:46:23.960521 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:46:24.460514096 +0000 UTC m=+161.393881698 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dwwm9" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:24 crc kubenswrapper[4844]: I0126 12:46:24.062006 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:24 crc kubenswrapper[4844]: E0126 12:46:24.062306 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:46:24.562275227 +0000 UTC m=+161.495642839 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:24 crc kubenswrapper[4844]: I0126 12:46:24.062370 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:24 crc kubenswrapper[4844]: E0126 12:46:24.062801 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:46:24.56279129 +0000 UTC m=+161.496159122 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dwwm9" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:24 crc kubenswrapper[4844]: I0126 12:46:24.163756 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:24 crc kubenswrapper[4844]: E0126 12:46:24.163935 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:46:24.663912505 +0000 UTC m=+161.597280117 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:24 crc kubenswrapper[4844]: I0126 12:46:24.164293 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:24 crc kubenswrapper[4844]: E0126 12:46:24.164626 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:46:24.664614073 +0000 UTC m=+161.597981685 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dwwm9" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:24 crc kubenswrapper[4844]: I0126 12:46:24.264980 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:24 crc kubenswrapper[4844]: E0126 12:46:24.265156 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:46:24.765135523 +0000 UTC m=+161.698503145 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:24 crc kubenswrapper[4844]: I0126 12:46:24.265568 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:24 crc kubenswrapper[4844]: E0126 12:46:24.265921 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:46:24.765908103 +0000 UTC m=+161.699275725 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dwwm9" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:24 crc kubenswrapper[4844]: I0126 12:46:24.366884 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:24 crc kubenswrapper[4844]: E0126 12:46:24.367301 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:46:24.867276174 +0000 UTC m=+161.800643806 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:24 crc kubenswrapper[4844]: I0126 12:46:24.469297 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:24 crc kubenswrapper[4844]: E0126 12:46:24.469648 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:46:24.96963348 +0000 UTC m=+161.903001102 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dwwm9" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:24 crc kubenswrapper[4844]: I0126 12:46:24.573396 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:24 crc kubenswrapper[4844]: E0126 12:46:24.574522 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:46:25.07450228 +0000 UTC m=+162.007869892 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:24 crc kubenswrapper[4844]: I0126 12:46:24.646812 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qbpjx" event={"ID":"516355b9-6e51-4a48-8583-0529c3f53013","Type":"ContainerStarted","Data":"1aa46cbc54f8ac056ce3a14d35f3940a12d272ff06dd48d58fa58be29cddb2b0"} Jan 26 12:46:24 crc kubenswrapper[4844]: I0126 12:46:24.682511 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:24 crc kubenswrapper[4844]: E0126 12:46:24.682824 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:46:25.182813146 +0000 UTC m=+162.116180748 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dwwm9" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:24 crc kubenswrapper[4844]: I0126 12:46:24.721725 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490525-mqbpl" event={"ID":"0b95a697-eeb9-444d-83ed-3484a41f5dd1","Type":"ContainerStarted","Data":"82235de4c874a39dd19ccd9cde8d593c7a4f516ec01cf5ad69779e2b1422f365"} Jan 26 12:46:24 crc kubenswrapper[4844]: I0126 12:46:24.728344 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-f5gx4" event={"ID":"b21e7f91-3226-493e-bbfb-89b33296e74e","Type":"ContainerStarted","Data":"3eaaa8d93d73a23ee10f80981fbfddf5bdeee6e89b8a5e1531d3379c4bd383a8"} Jan 26 12:46:24 crc kubenswrapper[4844]: I0126 12:46:24.729327 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-fzvnx" event={"ID":"a537a695-5721-4eae-a5f7-6df14075f458","Type":"ContainerStarted","Data":"124b67318e24a14381d5ee4b0b9109a3f9f65e3e1fd2c9444396c6ebbd92cc0f"} Jan 26 12:46:24 crc kubenswrapper[4844]: I0126 12:46:24.732651 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6zcv5" event={"ID":"f27f4e56-71ef-43e6-be78-20759a8e9ed5","Type":"ContainerStarted","Data":"49cab469cf8f986cb1ab6bcb9a6bdaa7689528c541186f13b49d134c3a32262a"} Jan 26 12:46:24 crc kubenswrapper[4844]: I0126 12:46:24.734176 4844 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-b6r5v" event={"ID":"8cccdbda-6833-4c8f-b709-ab1f617e2153","Type":"ContainerStarted","Data":"b75acde0f24b517f9bdbcf635d69d84292819a4ffe4ef44a53cf9c162267a634"} Jan 26 12:46:24 crc kubenswrapper[4844]: I0126 12:46:24.738757 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-fnd9b" event={"ID":"11aed539-3a79-4f8a-bba3-e2839ccf0d41","Type":"ContainerStarted","Data":"0bf3429fb60407115533f1031229e6af8f040aab9fa8e542c7c5acb9cc192dc7"} Jan 26 12:46:24 crc kubenswrapper[4844]: I0126 12:46:24.747778 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xcs68" event={"ID":"ca63e8a3-b015-4b94-95bc-5c3cdda81f88","Type":"ContainerStarted","Data":"1b93de4a99074a106901890671492c9368de19049e2a47f5ac4c60ae814d1855"} Jan 26 12:46:24 crc kubenswrapper[4844]: I0126 12:46:24.764967 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fbwgg" event={"ID":"4c91bd8e-040a-4961-8a7f-2fbeacff5b50","Type":"ContainerStarted","Data":"2ea1cb84982bcfff6fe50ab00243c4e7a3784d7f7f788d27657ec145a8ed2ea1"} Jan 26 12:46:24 crc kubenswrapper[4844]: I0126 12:46:24.765828 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-fzvnx" podStartSLOduration=137.765813636 podStartE2EDuration="2m17.765813636s" podCreationTimestamp="2026-01-26 12:44:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:46:24.764745499 +0000 UTC m=+161.698113111" watchObservedRunningTime="2026-01-26 12:46:24.765813636 +0000 UTC m=+161.699181248" Jan 26 12:46:24 crc kubenswrapper[4844]: I0126 12:46:24.783974 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bj9c4" event={"ID":"cbca5bfe-41c8-403c-95e9-18e7854e6ed0","Type":"ContainerStarted","Data":"5f7354069f07585adde01bb628a697264c19cc690466d286e74f10f3bf3761f2"} Jan 26 12:46:24 crc kubenswrapper[4844]: I0126 12:46:24.785081 4844 patch_prober.go:28] interesting pod/downloads-7954f5f757-5rkhb container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Jan 26 12:46:24 crc kubenswrapper[4844]: I0126 12:46:24.785141 4844 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-5rkhb" podUID="b428addf-b196-461c-aaaf-7b9b14848a6c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Jan 26 12:46:24 crc kubenswrapper[4844]: I0126 12:46:24.785652 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:24 crc kubenswrapper[4844]: E0126 12:46:24.785831 4844 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:46:25.285808149 +0000 UTC m=+162.219175761 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:24 crc kubenswrapper[4844]: I0126 12:46:24.786026 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:24 crc kubenswrapper[4844]: E0126 12:46:24.789177 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:46:25.289162393 +0000 UTC m=+162.222529995 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dwwm9" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:24 crc kubenswrapper[4844]: I0126 12:46:24.790913 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fbwgg" podStartSLOduration=137.790897088 podStartE2EDuration="2m17.790897088s" podCreationTimestamp="2026-01-26 12:44:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:46:24.788382014 +0000 UTC m=+161.721749626" watchObservedRunningTime="2026-01-26 12:46:24.790897088 +0000 UTC m=+161.724264710" Jan 26 12:46:24 crc kubenswrapper[4844]: I0126 12:46:24.887211 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:24 crc kubenswrapper[4844]: E0126 12:46:24.887313 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:46:25.387298173 +0000 UTC m=+162.320665785 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:24 crc kubenswrapper[4844]: I0126 12:46:24.889254 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:24 crc kubenswrapper[4844]: E0126 12:46:24.890138 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:46:25.390126465 +0000 UTC m=+162.323494087 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dwwm9" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:24 crc kubenswrapper[4844]: I0126 12:46:24.990626 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:24 crc kubenswrapper[4844]: E0126 12:46:24.990779 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:46:25.490744307 +0000 UTC m=+162.424111959 (durationBeforeRetry 500ms). 
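The "Observed pod startup duration" entries above are plain wall-clock arithmetic: podStartSLOduration is observedRunningTime minus podCreationTimestamp, and because firstStartedPulling/lastFinishedPulling are zero time values (no image pull was observed for these pods), podStartE2EDuration comes out identical. Checking the authentication-operator numbers in Go:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps taken from the log entry; parse errors ignored for brevity.
	created, _ := time.Parse(time.RFC3339, "2026-01-26T12:44:07Z")
	running, _ := time.Parse(time.RFC3339Nano, "2026-01-26T12:46:24.765813636Z")
	// Prints 2m17.765813636s — matching podStartSLOduration=137.765813636.
	fmt.Println(running.Sub(created))
}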
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:24 crc kubenswrapper[4844]: I0126 12:46:24.990928 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:24 crc kubenswrapper[4844]: E0126 12:46:24.991219 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:46:25.49120856 +0000 UTC m=+162.424576172 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dwwm9" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:25 crc kubenswrapper[4844]: I0126 12:46:25.091936 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:25 crc kubenswrapper[4844]: E0126 12:46:25.092257 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:46:25.592242253 +0000 UTC m=+162.525609865 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:25 crc kubenswrapper[4844]: I0126 12:46:25.193765 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:25 crc kubenswrapper[4844]: E0126 12:46:25.194054 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:46:25.694041745 +0000 UTC m=+162.627409357 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dwwm9" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:25 crc kubenswrapper[4844]: I0126 12:46:25.295189 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:25 crc kubenswrapper[4844]: E0126 12:46:25.295449 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:46:25.795414027 +0000 UTC m=+162.728781679 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:25 crc kubenswrapper[4844]: I0126 12:46:25.295785 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:25 crc kubenswrapper[4844]: E0126 12:46:25.296317 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:46:25.796299169 +0000 UTC m=+162.729666811 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dwwm9" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:25 crc kubenswrapper[4844]: I0126 12:46:25.398146 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:25 crc kubenswrapper[4844]: E0126 12:46:25.398468 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:46:25.898423649 +0000 UTC m=+162.831791301 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:25 crc kubenswrapper[4844]: I0126 12:46:25.398797 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:25 crc kubenswrapper[4844]: E0126 12:46:25.399213 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:46:25.899194789 +0000 UTC m=+162.832562401 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dwwm9" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:25 crc kubenswrapper[4844]: I0126 12:46:25.500379 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:25 crc kubenswrapper[4844]: E0126 12:46:25.500712 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:46:26.000673414 +0000 UTC m=+162.934041066 (durationBeforeRetry 500ms). 
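Note the cadence in the entries above: the reconciler re-queues the volume roughly every 100 ms, but nestedpendingoperations refuses to re-run an operation that failed less than durationBeforeRetry ago, so each pass only logs "No retries permitted until <last failure + 500ms>". A toy version of that gate (field names invented; the real kubelet backs this delay off further on repeated failures, while these entries are still at the 500 ms floor):

package main

import (
	"fmt"
	"time"
)

// op tracks when a failed volume operation may run again, in the spirit
// of the kubelet's nestedpendingoperations bookkeeping.
type op struct {
	lastError time.Time
	delay     time.Duration
}

func (o *op) allowed(now time.Time) error {
	next := o.lastError.Add(o.delay)
	if now.Before(next) {
		return fmt.Errorf("No retries permitted until %s (durationBeforeRetry %s)",
			next.Format(time.RFC3339Nano), o.delay)
	}
	return nil
}

func main() {
	failed := time.Date(2026, time.January, 26, 12, 46, 24, 785831000, time.UTC)
	o := &op{lastError: failed, delay: 500 * time.Millisecond}
	fmt.Println(o.allowed(failed.Add(100 * time.Millisecond))) // reconciler retries too soon: blocked
	fmt.Println(o.allowed(failed.Add(600 * time.Millisecond))) // past the 500ms window: <nil>
}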
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:25 crc kubenswrapper[4844]: I0126 12:46:25.500822 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:25 crc kubenswrapper[4844]: E0126 12:46:25.501314 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:46:26.001297839 +0000 UTC m=+162.934665481 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dwwm9" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:25 crc kubenswrapper[4844]: I0126 12:46:25.602323 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:25 crc kubenswrapper[4844]: E0126 12:46:25.602509 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:46:26.102482346 +0000 UTC m=+163.035849978 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:25 crc kubenswrapper[4844]: I0126 12:46:25.602571 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:25 crc kubenswrapper[4844]: E0126 12:46:25.602988 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:46:26.102977799 +0000 UTC m=+163.036345411 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dwwm9" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:25 crc kubenswrapper[4844]: I0126 12:46:25.704278 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:25 crc kubenswrapper[4844]: E0126 12:46:25.704419 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:46:26.204402401 +0000 UTC m=+163.137770013 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:25 crc kubenswrapper[4844]: I0126 12:46:25.704567 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:25 crc kubenswrapper[4844]: E0126 12:46:25.704840 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:46:26.204832752 +0000 UTC m=+163.138200354 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dwwm9" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:25 crc kubenswrapper[4844]: I0126 12:46:25.791247 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-rlnfh" event={"ID":"03f3ecc3-1b4d-4016-bc08-2b29d1b03d63","Type":"ContainerStarted","Data":"7ac67dd3568804ad7677521b855982b1b7a3496504dbac50e11b95737c4cac8a"} Jan 26 12:46:25 crc kubenswrapper[4844]: I0126 12:46:25.792638 4844 patch_prober.go:28] interesting pod/downloads-7954f5f757-5rkhb container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Jan 26 12:46:25 crc kubenswrapper[4844]: I0126 12:46:25.792724 4844 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-5rkhb" podUID="b428addf-b196-461c-aaaf-7b9b14848a6c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Jan 26 12:46:25 crc kubenswrapper[4844]: I0126 12:46:25.806240 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:25 crc kubenswrapper[4844]: E0126 12:46:25.806630 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:46:26.306599254 +0000 UTC m=+163.239966876 (durationBeforeRetry 500ms). 
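The recurring downloads-7954f5f757-5rkhb readiness failures above are a plain HTTP GET against the pod IP that cannot even connect: the container is up (PLEG reported it started), but nothing is listening on 10.217.0.21:8080 yet. Roughly what a single probe attempt does, as a self-contained sketch rather than the kubelet prober itself; point it at your own endpoint to experiment:

package main

import (
	"fmt"
	"net/http"
	"time"
)

// probeOnce performs one HTTP readiness check in the shape the log reports:
// a connect error or a status outside 200-399 counts as a failure, and the
// error text becomes the probe output.
func probeOnce(url string) (result string, output string) {
	client := &http.Client{Timeout: 1 * time.Second}
	resp, err := client.Get(url)
	if err != nil {
		return "failure", fmt.Sprintf("Get %q: %v", url, err)
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 200 && resp.StatusCode < 400 {
		return "success", ""
	}
	return "failure", fmt.Sprintf("HTTP probe failed with statuscode: %d", resp.StatusCode)
}

func main() {
	result, output := probeOnce("http://10.217.0.21:8080/")
	fmt.Printf("probeResult=%q output=%q\n", result, output)
}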
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:25 crc kubenswrapper[4844]: I0126 12:46:25.806847 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:25 crc kubenswrapper[4844]: E0126 12:46:25.807285 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:46:26.30727274 +0000 UTC m=+163.240640362 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dwwm9" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:25 crc kubenswrapper[4844]: I0126 12:46:25.908088 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:25 crc kubenswrapper[4844]: E0126 12:46:25.908328 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:46:26.408291274 +0000 UTC m=+163.341658926 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:25 crc kubenswrapper[4844]: I0126 12:46:25.909271 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:25 crc kubenswrapper[4844]: E0126 12:46:25.910008 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:46:26.409976126 +0000 UTC m=+163.343343788 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dwwm9" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:26 crc kubenswrapper[4844]: I0126 12:46:26.010458 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:26 crc kubenswrapper[4844]: E0126 12:46:26.011060 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:46:26.51103757 +0000 UTC m=+163.444405202 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:26 crc kubenswrapper[4844]: I0126 12:46:26.111944 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:26 crc kubenswrapper[4844]: E0126 12:46:26.112370 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:46:26.612349709 +0000 UTC m=+163.545717361 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dwwm9" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:26 crc kubenswrapper[4844]: I0126 12:46:26.212682 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:26 crc kubenswrapper[4844]: E0126 12:46:26.212886 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:46:26.71285608 +0000 UTC m=+163.646223692 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:26 crc kubenswrapper[4844]: I0126 12:46:26.212924 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:26 crc kubenswrapper[4844]: E0126 12:46:26.213298 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:46:26.71328783 +0000 UTC m=+163.646655492 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dwwm9" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:26 crc kubenswrapper[4844]: I0126 12:46:26.313789 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:26 crc kubenswrapper[4844]: E0126 12:46:26.313933 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:46:26.813908983 +0000 UTC m=+163.747276595 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:26 crc kubenswrapper[4844]: I0126 12:46:26.314017 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:26 crc kubenswrapper[4844]: E0126 12:46:26.314365 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:46:26.814354195 +0000 UTC m=+163.747721807 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dwwm9" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:26 crc kubenswrapper[4844]: I0126 12:46:26.415674 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:26 crc kubenswrapper[4844]: E0126 12:46:26.415866 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:46:26.915835369 +0000 UTC m=+163.849202981 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:26 crc kubenswrapper[4844]: I0126 12:46:26.416547 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:26 crc kubenswrapper[4844]: E0126 12:46:26.417026 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:46:26.917005699 +0000 UTC m=+163.850373321 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dwwm9" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:26 crc kubenswrapper[4844]: I0126 12:46:26.517791 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:26 crc kubenswrapper[4844]: E0126 12:46:26.518089 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:46:27.018047982 +0000 UTC m=+163.951415594 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:26 crc kubenswrapper[4844]: I0126 12:46:26.518529 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:26 crc kubenswrapper[4844]: E0126 12:46:26.519000 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:46:27.018977645 +0000 UTC m=+163.952345287 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dwwm9" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:26 crc kubenswrapper[4844]: I0126 12:46:26.619585 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:26 crc kubenswrapper[4844]: E0126 12:46:26.619931 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:46:27.119916046 +0000 UTC m=+164.053283658 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:26 crc kubenswrapper[4844]: I0126 12:46:26.721093 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:26 crc kubenswrapper[4844]: E0126 12:46:26.722190 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:46:27.22216814 +0000 UTC m=+164.155535832 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dwwm9" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:26 crc kubenswrapper[4844]: I0126 12:46:26.795746 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-zsn9c" event={"ID":"4fd9b862-74de-4579-9b30-b51e5cbd3b56","Type":"ContainerStarted","Data":"60413366018be3060fbd4dde913cabc3b02e8fd8038d3ce087ddb2aff1377f07"} Jan 26 12:46:26 crc kubenswrapper[4844]: I0126 12:46:26.797162 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-vvlfw" event={"ID":"0b0b2321-3f0f-4889-acad-bb7b10f96043","Type":"ContainerStarted","Data":"bc356f4ef88eea7bd2ec418ef79e2ef41eee5c278eaf2cb8916794498bd620d2"} Jan 26 12:46:26 crc kubenswrapper[4844]: I0126 12:46:26.799631 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-c8rpj" event={"ID":"7ec10c36-d3de-409c-a3d6-3cde63c0b206","Type":"ContainerStarted","Data":"87e24c05fb7e53415d04e4ae3c44af55aec14570ad419b2e04f6359722db48e6"} Jan 26 12:46:26 crc kubenswrapper[4844]: I0126 12:46:26.801243 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fs4g6" event={"ID":"0735aeec-55b6-4140-8c72-d11b656ddb07","Type":"ContainerStarted","Data":"f9600f155eab62a711795537039c1b3a5bf6706dbe6be9b9c73417be6f90845e"} Jan 26 12:46:26 crc kubenswrapper[4844]: I0126 12:46:26.802777 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-9pkgp" event={"ID":"46a01ba7-7357-471a-ae59-95361f2ce7ba","Type":"ContainerStarted","Data":"33a91e2a05b6476d549d321b621a962c33b8f66024fd0682a7d1b91aec2844e9"} Jan 26 12:46:26 crc kubenswrapper[4844]: I0126 12:46:26.804180 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/multus-admission-controller-857f4d67dd-7fzwr" event={"ID":"43fa0cde-7ba5-4788-be26-1170bf6ee75d","Type":"ContainerStarted","Data":"2f23b3a7b2466fa26f694d5fe42860e31a198c368d5d95388665ef47c66c642c"} Jan 26 12:46:26 crc kubenswrapper[4844]: I0126 12:46:26.810067 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-75rtp" event={"ID":"49ce2590-a0c6-4e75-af35-73bb211e6829","Type":"ContainerStarted","Data":"95df860fdc1035e7a6ad9be111e4abddc8f3d419090bc5985650b56bda40db11"} Jan 26 12:46:26 crc kubenswrapper[4844]: I0126 12:46:26.811818 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hpxdc" event={"ID":"94726f3c-782c-4f4c-89cc-60229b8f339a","Type":"ContainerStarted","Data":"e2978b90a16bcd772ae8425e94cda10957648ecc926e9ba3c4733b967c2ea6dc"} Jan 26 12:46:26 crc kubenswrapper[4844]: I0126 12:46:26.813177 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-scvs4" event={"ID":"3875ab05-c190-4557-a863-84b3c123fe26","Type":"ContainerStarted","Data":"5d5508e2cfca90c3d2efb28188259b38b3a3429432abdeaa04a04cc9a4d15336"} Jan 26 12:46:26 crc kubenswrapper[4844]: I0126 12:46:26.814514 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ksxk5" event={"ID":"2e87ef7d-a670-47ae-8a85-cfc07a848430","Type":"ContainerStarted","Data":"d86e3df2e558398ec7429e9bd69d2b4a779e942d8402f055580bc87b11383623"} Jan 26 12:46:26 crc kubenswrapper[4844]: I0126 12:46:26.815857 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pmxvg" event={"ID":"71551b91-3a04-4dcd-9a94-e96b4663b040","Type":"ContainerStarted","Data":"81157434a75446a7f1ea1b7ed1ec0798b2213207af0333b018e1f82d6e891deb"} Jan 26 12:46:26 crc kubenswrapper[4844]: I0126 12:46:26.823077 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:26 crc kubenswrapper[4844]: E0126 12:46:26.823640 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:46:27.323580162 +0000 UTC m=+164.256947774 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:26 crc kubenswrapper[4844]: I0126 12:46:26.829204 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-g8j2r" event={"ID":"e278457d-db19-47bc-a2a5-6ff0e994aace","Type":"ContainerStarted","Data":"7ded1feec1be97991b5c71e1741ba5f15341411fab5fb4e6e2b17981190f1294"} Jan 26 12:46:26 crc kubenswrapper[4844]: I0126 12:46:26.925396 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:26 crc kubenswrapper[4844]: E0126 12:46:26.925881 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:46:27.425861077 +0000 UTC m=+164.359228689 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dwwm9" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:27 crc kubenswrapper[4844]: I0126 12:46:27.026672 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:27 crc kubenswrapper[4844]: E0126 12:46:27.027040 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:46:27.527019254 +0000 UTC m=+164.460386866 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:27 crc kubenswrapper[4844]: I0126 12:46:27.127949 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:27 crc kubenswrapper[4844]: E0126 12:46:27.128438 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:46:27.628418426 +0000 UTC m=+164.561786058 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dwwm9" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:27 crc kubenswrapper[4844]: I0126 12:46:27.228584 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:27 crc kubenswrapper[4844]: E0126 12:46:27.228886 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:46:27.728863894 +0000 UTC m=+164.662231506 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:27 crc kubenswrapper[4844]: I0126 12:46:27.229084 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:27 crc kubenswrapper[4844]: E0126 12:46:27.229441 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:46:27.729427378 +0000 UTC m=+164.662794990 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dwwm9" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:27 crc kubenswrapper[4844]: I0126 12:46:27.330070 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:27 crc kubenswrapper[4844]: E0126 12:46:27.330313 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:46:27.830281067 +0000 UTC m=+164.763648719 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:27 crc kubenswrapper[4844]: I0126 12:46:27.330444 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:27 crc kubenswrapper[4844]: E0126 12:46:27.330922 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:46:27.830906753 +0000 UTC m=+164.764274395 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dwwm9" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:27 crc kubenswrapper[4844]: I0126 12:46:27.431265 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:27 crc kubenswrapper[4844]: E0126 12:46:27.431651 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:46:27.931634128 +0000 UTC m=+164.865001740 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:27 crc kubenswrapper[4844]: I0126 12:46:27.533257 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:27 crc kubenswrapper[4844]: E0126 12:46:27.533643 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:46:28.033628375 +0000 UTC m=+164.966995987 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dwwm9" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:27 crc kubenswrapper[4844]: I0126 12:46:27.634960 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:27 crc kubenswrapper[4844]: E0126 12:46:27.635104 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:46:28.135083739 +0000 UTC m=+165.068451361 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:27 crc kubenswrapper[4844]: I0126 12:46:27.635202 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:27 crc kubenswrapper[4844]: E0126 12:46:27.635585 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:46:28.135573571 +0000 UTC m=+165.068941183 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dwwm9" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:27 crc kubenswrapper[4844]: I0126 12:46:27.735985 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:27 crc kubenswrapper[4844]: E0126 12:46:27.736145 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:46:28.236119952 +0000 UTC m=+165.169487564 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:27 crc kubenswrapper[4844]: I0126 12:46:27.736758 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:27 crc kubenswrapper[4844]: E0126 12:46:27.737114 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:46:28.237104067 +0000 UTC m=+165.170471679 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dwwm9" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:27 crc kubenswrapper[4844]: I0126 12:46:27.835203 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jl5ts" event={"ID":"08695a3d-343d-4425-bae7-186f1dcb9a0d","Type":"ContainerStarted","Data":"470aabd96473d14ad73f0ec2921ea9ce9ec49ba1f059901ecae116bd6f44c9b3"} Jan 26 12:46:27 crc kubenswrapper[4844]: I0126 12:46:27.837050 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jfwgn" event={"ID":"7129ebfc-8ee6-475d-81d7-dcc6a9d6a6e6","Type":"ContainerStarted","Data":"4f9ab919c7c245dc4479d007222841e2d8446ee54014a9c7d6e794b151035258"} Jan 26 12:46:27 crc kubenswrapper[4844]: I0126 12:46:27.837318 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:27 crc kubenswrapper[4844]: E0126 12:46:27.837512 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:46:28.337490734 +0000 UTC m=+165.270858346 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:27 crc kubenswrapper[4844]: I0126 12:46:27.837549 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:27 crc kubenswrapper[4844]: E0126 12:46:27.837891 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:46:28.337875324 +0000 UTC m=+165.271242936 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dwwm9" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:27 crc kubenswrapper[4844]: I0126 12:46:27.838716 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-r8j24" event={"ID":"d864ad06-5a3e-4f38-a16a-22de2e50ce8c","Type":"ContainerStarted","Data":"0186e546b3d228b8009eabfa495516b1739474e2b492f3a26fba7e7d988c6cee"} Jan 26 12:46:27 crc kubenswrapper[4844]: I0126 12:46:27.840572 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-sgslp" event={"ID":"3f2d657c-0a0d-4671-a720-ef689ccf2120","Type":"ContainerStarted","Data":"67b3c3fc228a31fda0dbb648f816d6c3cb0532bfe22eebc580aa3692e4dcb60c"} Jan 26 12:46:27 crc kubenswrapper[4844]: I0126 12:46:27.842278 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-9cmnk" event={"ID":"8f3783e9-776b-434b-8298-59283076969f","Type":"ContainerStarted","Data":"ce3f5d3b958e81b6a86db456f732174111485edf0d6c46d6c5bd56abad10844d"} Jan 26 12:46:27 crc kubenswrapper[4844]: I0126 12:46:27.844120 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bj9c4" event={"ID":"cbca5bfe-41c8-403c-95e9-18e7854e6ed0","Type":"ContainerStarted","Data":"152e46e272965ba3d20a3b107fafa717d4f36fc1da3fb951daf2be2a8382ef3e"} Jan 26 12:46:27 crc kubenswrapper[4844]: I0126 12:46:27.845490 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rtr85" event={"ID":"96c710b8-69dd-49d7-8606-85bc4a4899ca","Type":"ContainerStarted","Data":"caf97a9aca6b4413f587ebd82d5efb73fb97a003236f3435f74624b0f1d8a2eb"} Jan 26 12:46:27 crc kubenswrapper[4844]: I0126 12:46:27.847050 4844 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-fl26p" event={"ID":"1176f79a-2455-49f3-b11a-faf502559c52","Type":"ContainerStarted","Data":"39e77ddbdca778fbd6d417bb53937640b18719d4ba875220aa821bb5fb7109c0"} Jan 26 12:46:27 crc kubenswrapper[4844]: I0126 12:46:27.848940 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sbrtp" event={"ID":"1aeb70f5-e543-4f51-bcf7-605df435f80e","Type":"ContainerStarted","Data":"f4be8d33186dc26431935b653848ea4d2c90caa0b651448d0b8ce7afc443d63c"} Jan 26 12:46:27 crc kubenswrapper[4844]: I0126 12:46:27.850815 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-n8hpb" event={"ID":"0c1c2a13-ee4c-4ced-9799-a1332e4e134f","Type":"ContainerStarted","Data":"0c544e3ad2da7f826af254f549c274bbe606195ba8ca3f7a58a9974b4c298b19"} Jan 26 12:46:27 crc kubenswrapper[4844]: I0126 12:46:27.852782 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qltc7" event={"ID":"10b7b789-0c46-4e84-875e-f74c68981bca","Type":"ContainerStarted","Data":"d239c8f9aaa87990560682d645c1a1420abbb88a3d7f7f6c3ea9ac7e98d6e987"} Jan 26 12:46:27 crc kubenswrapper[4844]: I0126 12:46:27.855542 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-rtks2" event={"ID":"45322811-c744-4cce-a307-088c0bc3965a","Type":"ContainerStarted","Data":"2c2fefdc5702ce11545767bafe8af8ed1c20b7aa294e6354e27cd85e56fcc552"} Jan 26 12:46:27 crc kubenswrapper[4844]: I0126 12:46:27.857021 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qbpjx" event={"ID":"516355b9-6e51-4a48-8583-0529c3f53013","Type":"ContainerStarted","Data":"e9708749f655be5a907ec53ee26e98031c5723d3587fa224b5f3aae6eecff4b5"} Jan 26 12:46:27 crc kubenswrapper[4844]: I0126 12:46:27.858390 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xcs68" event={"ID":"ca63e8a3-b015-4b94-95bc-5c3cdda81f88","Type":"ContainerStarted","Data":"6cdaa78cfe7b0a12d0c48d4a351d6ab49fa9bc09dd1b92374e028daaf4925146"} Jan 26 12:46:27 crc kubenswrapper[4844]: I0126 12:46:27.860459 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j9vvp" event={"ID":"038469c2-c803-45d5-aaa5-d81663f41345","Type":"ContainerStarted","Data":"606c053ca5c9331598c1208690e94ca78f3b3146ccaf15f49928fb4112bead5e"} Jan 26 12:46:27 crc kubenswrapper[4844]: I0126 12:46:27.861979 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-5mxl2" event={"ID":"7973e4fa-99bd-46f3-bf39-8c9e7209e788","Type":"ContainerStarted","Data":"f314437b84f83413e215aa471eb0bed0d8f62ef758b653c00373f85416672dd5"} Jan 26 12:46:27 crc kubenswrapper[4844]: I0126 12:46:27.863494 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490525-mqbpl" event={"ID":"0b95a697-eeb9-444d-83ed-3484a41f5dd1","Type":"ContainerStarted","Data":"174c56e0839b5e5dce7465d4fb7c8f05272878d2f83732f894eaf8713e0f80db"} Jan 26 12:46:27 crc kubenswrapper[4844]: I0126 12:46:27.865107 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-fnd9b" 
event={"ID":"11aed539-3a79-4f8a-bba3-e2839ccf0d41","Type":"ContainerStarted","Data":"0b56bc2789a3f4a8b1a96e84c9d61bcf103f6d94cb4e09f02b4850f20a11a54d"} Jan 26 12:46:27 crc kubenswrapper[4844]: I0126 12:46:27.865464 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-f5gx4" Jan 26 12:46:27 crc kubenswrapper[4844]: I0126 12:46:27.872938 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-f5gx4" Jan 26 12:46:27 crc kubenswrapper[4844]: I0126 12:46:27.889350 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-f5gx4" podStartSLOduration=139.889330319 podStartE2EDuration="2m19.889330319s" podCreationTimestamp="2026-01-26 12:44:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:46:27.888282183 +0000 UTC m=+164.821649815" watchObservedRunningTime="2026-01-26 12:46:27.889330319 +0000 UTC m=+164.822697931" Jan 26 12:46:27 crc kubenswrapper[4844]: I0126 12:46:27.938517 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:27 crc kubenswrapper[4844]: E0126 12:46:27.938738 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:46:28.438709312 +0000 UTC m=+165.372076924 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:27 crc kubenswrapper[4844]: I0126 12:46:27.938856 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:27 crc kubenswrapper[4844]: E0126 12:46:27.940959 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:46:28.439258075 +0000 UTC m=+165.372625687 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dwwm9" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:27 crc kubenswrapper[4844]: I0126 12:46:27.988665 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" Jan 26 12:46:28 crc kubenswrapper[4844]: I0126 12:46:28.039661 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:28 crc kubenswrapper[4844]: E0126 12:46:28.039859 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:46:28.539832267 +0000 UTC m=+165.473199879 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:28 crc kubenswrapper[4844]: I0126 12:46:28.040118 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:28 crc kubenswrapper[4844]: E0126 12:46:28.040635 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:46:28.540618037 +0000 UTC m=+165.473985649 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dwwm9" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:28 crc kubenswrapper[4844]: I0126 12:46:28.141643 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:28 crc kubenswrapper[4844]: E0126 12:46:28.141837 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:46:28.641812194 +0000 UTC m=+165.575179806 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:28 crc kubenswrapper[4844]: I0126 12:46:28.141985 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:28 crc kubenswrapper[4844]: E0126 12:46:28.142291 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:46:28.642283576 +0000 UTC m=+165.575651188 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dwwm9" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:28 crc kubenswrapper[4844]: I0126 12:46:28.243151 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:28 crc kubenswrapper[4844]: E0126 12:46:28.243370 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:46:28.743336159 +0000 UTC m=+165.676703771 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:28 crc kubenswrapper[4844]: I0126 12:46:28.243650 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:28 crc kubenswrapper[4844]: E0126 12:46:28.243957 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:46:28.743945655 +0000 UTC m=+165.677313257 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dwwm9" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:28 crc kubenswrapper[4844]: I0126 12:46:28.345119 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:28 crc kubenswrapper[4844]: E0126 12:46:28.345269 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:46:28.845245045 +0000 UTC m=+165.778612657 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:28 crc kubenswrapper[4844]: I0126 12:46:28.345318 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:28 crc kubenswrapper[4844]: E0126 12:46:28.345624 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:46:28.845612664 +0000 UTC m=+165.778980276 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dwwm9" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:28 crc kubenswrapper[4844]: I0126 12:46:28.446358 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:28 crc kubenswrapper[4844]: E0126 12:46:28.446537 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:46:28.946512343 +0000 UTC m=+165.879879955 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:28 crc kubenswrapper[4844]: I0126 12:46:28.446747 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:28 crc kubenswrapper[4844]: E0126 12:46:28.447010 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:46:28.946998406 +0000 UTC m=+165.880366018 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dwwm9" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:28 crc kubenswrapper[4844]: I0126 12:46:28.547512 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:28 crc kubenswrapper[4844]: E0126 12:46:28.547709 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:46:29.047684321 +0000 UTC m=+165.981051933 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:28 crc kubenswrapper[4844]: I0126 12:46:28.547928 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:28 crc kubenswrapper[4844]: E0126 12:46:28.548274 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:46:29.048264615 +0000 UTC m=+165.981632227 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dwwm9" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:28 crc kubenswrapper[4844]: I0126 12:46:28.651638 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:28 crc kubenswrapper[4844]: E0126 12:46:28.651849 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:46:29.151821711 +0000 UTC m=+166.085189323 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:28 crc kubenswrapper[4844]: I0126 12:46:28.651922 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:28 crc kubenswrapper[4844]: E0126 12:46:28.652281 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:46:29.152273923 +0000 UTC m=+166.085641535 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dwwm9" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:28 crc kubenswrapper[4844]: I0126 12:46:28.753188 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:28 crc kubenswrapper[4844]: E0126 12:46:28.753388 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:46:29.253361018 +0000 UTC m=+166.186728630 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:28 crc kubenswrapper[4844]: I0126 12:46:28.753555 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:28 crc kubenswrapper[4844]: E0126 12:46:28.753990 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:46:29.253972713 +0000 UTC m=+166.187340325 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dwwm9" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:28 crc kubenswrapper[4844]: I0126 12:46:28.854535 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:28 crc kubenswrapper[4844]: E0126 12:46:28.854696 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:46:29.354671158 +0000 UTC m=+166.288038770 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:28 crc kubenswrapper[4844]: I0126 12:46:28.854863 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:28 crc kubenswrapper[4844]: E0126 12:46:28.855149 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:46:29.355138679 +0000 UTC m=+166.288506291 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dwwm9" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:28 crc kubenswrapper[4844]: I0126 12:46:28.955655 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:28 crc kubenswrapper[4844]: E0126 12:46:28.956576 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:46:29.456561883 +0000 UTC m=+166.389929495 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:29 crc kubenswrapper[4844]: I0126 12:46:29.092569 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:29 crc kubenswrapper[4844]: E0126 12:46:29.093033 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:46:29.593021697 +0000 UTC m=+166.526389309 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dwwm9" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:29 crc kubenswrapper[4844]: I0126 12:46:29.194163 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:29 crc kubenswrapper[4844]: E0126 12:46:29.194317 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:46:29.694295627 +0000 UTC m=+166.627663249 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:29 crc kubenswrapper[4844]: I0126 12:46:29.194397 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:29 crc kubenswrapper[4844]: E0126 12:46:29.194706 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:46:29.694698517 +0000 UTC m=+166.628066129 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dwwm9" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:29 crc kubenswrapper[4844]: I0126 12:46:29.295690 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:29 crc kubenswrapper[4844]: E0126 12:46:29.295893 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:46:29.795866423 +0000 UTC m=+166.729234025 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:29 crc kubenswrapper[4844]: I0126 12:46:29.296065 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:29 crc kubenswrapper[4844]: E0126 12:46:29.296375 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:46:29.796363145 +0000 UTC m=+166.729730757 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dwwm9" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:29 crc kubenswrapper[4844]: I0126 12:46:29.397067 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:29 crc kubenswrapper[4844]: E0126 12:46:29.397259 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:46:29.897234024 +0000 UTC m=+166.830601636 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:29 crc kubenswrapper[4844]: I0126 12:46:29.397329 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:29 crc kubenswrapper[4844]: E0126 12:46:29.397633 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:46:29.897621645 +0000 UTC m=+166.830989257 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dwwm9" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:29 crc kubenswrapper[4844]: I0126 12:46:29.498797 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:29 crc kubenswrapper[4844]: E0126 12:46:29.499009 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:46:29.998976106 +0000 UTC m=+166.932343718 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:29 crc kubenswrapper[4844]: I0126 12:46:29.499623 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:29 crc kubenswrapper[4844]: E0126 12:46:29.499949 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:46:29.999941761 +0000 UTC m=+166.933309373 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dwwm9" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:29 crc kubenswrapper[4844]: I0126 12:46:29.600570 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:29 crc kubenswrapper[4844]: E0126 12:46:29.600948 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:46:30.100924162 +0000 UTC m=+167.034291774 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:29 crc kubenswrapper[4844]: I0126 12:46:29.601027 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:29 crc kubenswrapper[4844]: E0126 12:46:29.601393 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:46:30.101377684 +0000 UTC m=+167.034745296 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dwwm9" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:29 crc kubenswrapper[4844]: I0126 12:46:29.703380 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:29 crc kubenswrapper[4844]: E0126 12:46:29.703841 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:46:30.203817292 +0000 UTC m=+167.137184894 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:29 crc kubenswrapper[4844]: I0126 12:46:29.725747 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-982kx"] Jan 26 12:46:29 crc kubenswrapper[4844]: I0126 12:46:29.726980 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-982kx" Jan 26 12:46:29 crc kubenswrapper[4844]: I0126 12:46:29.728847 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 26 12:46:29 crc kubenswrapper[4844]: I0126 12:46:29.741296 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-982kx"] Jan 26 12:46:29 crc kubenswrapper[4844]: I0126 12:46:29.805309 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:29 crc kubenswrapper[4844]: I0126 12:46:29.805347 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b7b1cea-f94c-4750-8db8-18d9b7f9fb70-utilities\") pod \"certified-operators-982kx\" (UID: \"1b7b1cea-f94c-4750-8db8-18d9b7f9fb70\") " pod="openshift-marketplace/certified-operators-982kx" Jan 26 12:46:29 crc kubenswrapper[4844]: I0126 12:46:29.805369 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b7b1cea-f94c-4750-8db8-18d9b7f9fb70-catalog-content\") pod \"certified-operators-982kx\" (UID: \"1b7b1cea-f94c-4750-8db8-18d9b7f9fb70\") " pod="openshift-marketplace/certified-operators-982kx" Jan 26 12:46:29 crc kubenswrapper[4844]: I0126 12:46:29.805393 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqvbp\" (UniqueName: \"kubernetes.io/projected/1b7b1cea-f94c-4750-8db8-18d9b7f9fb70-kube-api-access-cqvbp\") pod \"certified-operators-982kx\" (UID: \"1b7b1cea-f94c-4750-8db8-18d9b7f9fb70\") " pod="openshift-marketplace/certified-operators-982kx" Jan 26 12:46:29 crc kubenswrapper[4844]: E0126 12:46:29.805717 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:46:30.305700867 +0000 UTC m=+167.239068479 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dwwm9" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:29 crc kubenswrapper[4844]: I0126 12:46:29.814637 4844 csr.go:261] certificate signing request csr-vbpbf is approved, waiting to be issued Jan 26 12:46:29 crc kubenswrapper[4844]: I0126 12:46:29.831604 4844 csr.go:257] certificate signing request csr-vbpbf is issued Jan 26 12:46:29 crc kubenswrapper[4844]: I0126 12:46:29.875310 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-b6r5v" event={"ID":"8cccdbda-6833-4c8f-b709-ab1f617e2153","Type":"ContainerStarted","Data":"f97d910db5aafd67b77d795c4a7aa825fc5f714c2a2339716a1d92c56dc3a1c9"} Jan 26 12:46:29 crc kubenswrapper[4844]: I0126 12:46:29.879030 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-89xb7" event={"ID":"85096fe3-8ab7-45f9-8ae7-c36ff77a7333","Type":"ContainerStarted","Data":"efa4d4d96b1a2fe23e69d2cf599874cff4bc2c2ce1f58ffcb7bdeb368968009a"} Jan 26 12:46:29 crc kubenswrapper[4844]: I0126 12:46:29.880047 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-rlnfh" Jan 26 12:46:29 crc kubenswrapper[4844]: I0126 12:46:29.880081 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-c8rpj" Jan 26 12:46:29 crc kubenswrapper[4844]: I0126 12:46:29.892005 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-c8rpj" Jan 26 12:46:29 crc kubenswrapper[4844]: I0126 12:46:29.896112 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-rlnfh" Jan 26 12:46:29 crc kubenswrapper[4844]: I0126 12:46:29.905957 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:29 crc kubenswrapper[4844]: E0126 12:46:29.906151 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:46:30.406124865 +0000 UTC m=+167.339492477 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:29 crc kubenswrapper[4844]: I0126 12:46:29.906241 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:29 crc kubenswrapper[4844]: I0126 12:46:29.906263 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b7b1cea-f94c-4750-8db8-18d9b7f9fb70-utilities\") pod \"certified-operators-982kx\" (UID: \"1b7b1cea-f94c-4750-8db8-18d9b7f9fb70\") " pod="openshift-marketplace/certified-operators-982kx" Jan 26 12:46:29 crc kubenswrapper[4844]: I0126 12:46:29.906282 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b7b1cea-f94c-4750-8db8-18d9b7f9fb70-catalog-content\") pod \"certified-operators-982kx\" (UID: \"1b7b1cea-f94c-4750-8db8-18d9b7f9fb70\") " pod="openshift-marketplace/certified-operators-982kx" Jan 26 12:46:29 crc kubenswrapper[4844]: I0126 12:46:29.906303 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqvbp\" (UniqueName: \"kubernetes.io/projected/1b7b1cea-f94c-4750-8db8-18d9b7f9fb70-kube-api-access-cqvbp\") pod \"certified-operators-982kx\" (UID: \"1b7b1cea-f94c-4750-8db8-18d9b7f9fb70\") " pod="openshift-marketplace/certified-operators-982kx" Jan 26 12:46:29 crc kubenswrapper[4844]: E0126 12:46:29.906815 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:46:30.406805072 +0000 UTC m=+167.340172674 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dwwm9" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:29 crc kubenswrapper[4844]: I0126 12:46:29.907024 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b7b1cea-f94c-4750-8db8-18d9b7f9fb70-catalog-content\") pod \"certified-operators-982kx\" (UID: \"1b7b1cea-f94c-4750-8db8-18d9b7f9fb70\") " pod="openshift-marketplace/certified-operators-982kx" Jan 26 12:46:29 crc kubenswrapper[4844]: I0126 12:46:29.907310 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b7b1cea-f94c-4750-8db8-18d9b7f9fb70-utilities\") pod \"certified-operators-982kx\" (UID: \"1b7b1cea-f94c-4750-8db8-18d9b7f9fb70\") " pod="openshift-marketplace/certified-operators-982kx" Jan 26 12:46:29 crc kubenswrapper[4844]: I0126 12:46:29.922608 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-g8j2r" podStartSLOduration=142.92257921 podStartE2EDuration="2m22.92257921s" podCreationTimestamp="2026-01-26 12:44:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:46:29.90234415 +0000 UTC m=+166.835711762" watchObservedRunningTime="2026-01-26 12:46:29.92257921 +0000 UTC m=+166.855946822" Jan 26 12:46:29 crc kubenswrapper[4844]: I0126 12:46:29.924243 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-c8rpj" podStartSLOduration=142.924236891 podStartE2EDuration="2m22.924236891s" podCreationTimestamp="2026-01-26 12:44:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:46:29.920911408 +0000 UTC m=+166.854279020" watchObservedRunningTime="2026-01-26 12:46:29.924236891 +0000 UTC m=+166.857604503" Jan 26 12:46:29 crc kubenswrapper[4844]: I0126 12:46:29.934492 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-lhjls"] Jan 26 12:46:29 crc kubenswrapper[4844]: I0126 12:46:29.935538 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lhjls" Jan 26 12:46:29 crc kubenswrapper[4844]: I0126 12:46:29.939669 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 26 12:46:29 crc kubenswrapper[4844]: I0126 12:46:29.968240 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqvbp\" (UniqueName: \"kubernetes.io/projected/1b7b1cea-f94c-4750-8db8-18d9b7f9fb70-kube-api-access-cqvbp\") pod \"certified-operators-982kx\" (UID: \"1b7b1cea-f94c-4750-8db8-18d9b7f9fb70\") " pod="openshift-marketplace/certified-operators-982kx" Jan 26 12:46:29 crc kubenswrapper[4844]: I0126 12:46:29.983105 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-9pkgp" podStartSLOduration=142.983083812 podStartE2EDuration="2m22.983083812s" podCreationTimestamp="2026-01-26 12:44:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:46:29.980163529 +0000 UTC m=+166.913531151" watchObservedRunningTime="2026-01-26 12:46:29.983083812 +0000 UTC m=+166.916451424" Jan 26 12:46:29 crc kubenswrapper[4844]: I0126 12:46:29.989581 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lhjls"] Jan 26 12:46:30 crc kubenswrapper[4844]: I0126 12:46:30.007694 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:30 crc kubenswrapper[4844]: E0126 12:46:30.011328 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:46:30.511302302 +0000 UTC m=+167.444669924 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:30 crc kubenswrapper[4844]: I0126 12:46:30.040225 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-982kx" Jan 26 12:46:30 crc kubenswrapper[4844]: I0126 12:46:30.071302 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-vvlfw" podStartSLOduration=142.071285692 podStartE2EDuration="2m22.071285692s" podCreationTimestamp="2026-01-26 12:44:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:46:30.069979039 +0000 UTC m=+167.003346651" watchObservedRunningTime="2026-01-26 12:46:30.071285692 +0000 UTC m=+167.004653304" Jan 26 12:46:30 crc kubenswrapper[4844]: I0126 12:46:30.071467 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fs4g6" podStartSLOduration=143.071462067 podStartE2EDuration="2m23.071462067s" podCreationTimestamp="2026-01-26 12:44:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:46:30.015417436 +0000 UTC m=+166.948785068" watchObservedRunningTime="2026-01-26 12:46:30.071462067 +0000 UTC m=+167.004829679" Jan 26 12:46:30 crc kubenswrapper[4844]: I0126 12:46:30.117797 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hljrl\" (UniqueName: \"kubernetes.io/projected/a37a9c59-7c20-4326-b280-9dbd2d633e0b-kube-api-access-hljrl\") pod \"community-operators-lhjls\" (UID: \"a37a9c59-7c20-4326-b280-9dbd2d633e0b\") " pod="openshift-marketplace/community-operators-lhjls" Jan 26 12:46:30 crc kubenswrapper[4844]: I0126 12:46:30.117889 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a37a9c59-7c20-4326-b280-9dbd2d633e0b-catalog-content\") pod \"community-operators-lhjls\" (UID: \"a37a9c59-7c20-4326-b280-9dbd2d633e0b\") " pod="openshift-marketplace/community-operators-lhjls" Jan 26 12:46:30 crc kubenswrapper[4844]: I0126 12:46:30.117917 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a37a9c59-7c20-4326-b280-9dbd2d633e0b-utilities\") pod \"community-operators-lhjls\" (UID: \"a37a9c59-7c20-4326-b280-9dbd2d633e0b\") " pod="openshift-marketplace/community-operators-lhjls" Jan 26 12:46:30 crc kubenswrapper[4844]: I0126 12:46:30.117947 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:30 crc kubenswrapper[4844]: E0126 12:46:30.118283 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:46:30.618270815 +0000 UTC m=+167.551638417 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dwwm9" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:30 crc kubenswrapper[4844]: I0126 12:46:30.130407 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-rlnfh" podStartSLOduration=143.13037992 podStartE2EDuration="2m23.13037992s" podCreationTimestamp="2026-01-26 12:44:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:46:30.116562192 +0000 UTC m=+167.049929804" watchObservedRunningTime="2026-01-26 12:46:30.13037992 +0000 UTC m=+167.063747532" Jan 26 12:46:30 crc kubenswrapper[4844]: I0126 12:46:30.143278 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8hnm5"] Jan 26 12:46:30 crc kubenswrapper[4844]: I0126 12:46:30.144585 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8hnm5" Jan 26 12:46:30 crc kubenswrapper[4844]: I0126 12:46:30.151915 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8hnm5"] Jan 26 12:46:30 crc kubenswrapper[4844]: I0126 12:46:30.219072 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:30 crc kubenswrapper[4844]: I0126 12:46:30.219290 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hljrl\" (UniqueName: \"kubernetes.io/projected/a37a9c59-7c20-4326-b280-9dbd2d633e0b-kube-api-access-hljrl\") pod \"community-operators-lhjls\" (UID: \"a37a9c59-7c20-4326-b280-9dbd2d633e0b\") " pod="openshift-marketplace/community-operators-lhjls" Jan 26 12:46:30 crc kubenswrapper[4844]: I0126 12:46:30.219358 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a37a9c59-7c20-4326-b280-9dbd2d633e0b-catalog-content\") pod \"community-operators-lhjls\" (UID: \"a37a9c59-7c20-4326-b280-9dbd2d633e0b\") " pod="openshift-marketplace/community-operators-lhjls" Jan 26 12:46:30 crc kubenswrapper[4844]: I0126 12:46:30.219383 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a37a9c59-7c20-4326-b280-9dbd2d633e0b-utilities\") pod \"community-operators-lhjls\" (UID: \"a37a9c59-7c20-4326-b280-9dbd2d633e0b\") " pod="openshift-marketplace/community-operators-lhjls" Jan 26 12:46:30 crc kubenswrapper[4844]: I0126 12:46:30.219824 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a37a9c59-7c20-4326-b280-9dbd2d633e0b-utilities\") pod \"community-operators-lhjls\" (UID: \"a37a9c59-7c20-4326-b280-9dbd2d633e0b\") " pod="openshift-marketplace/community-operators-lhjls" Jan 
26 12:46:30 crc kubenswrapper[4844]: I0126 12:46:30.220027 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a37a9c59-7c20-4326-b280-9dbd2d633e0b-catalog-content\") pod \"community-operators-lhjls\" (UID: \"a37a9c59-7c20-4326-b280-9dbd2d633e0b\") " pod="openshift-marketplace/community-operators-lhjls" Jan 26 12:46:30 crc kubenswrapper[4844]: E0126 12:46:30.220089 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:46:30.720073198 +0000 UTC m=+167.653440810 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:30 crc kubenswrapper[4844]: I0126 12:46:30.244962 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hljrl\" (UniqueName: \"kubernetes.io/projected/a37a9c59-7c20-4326-b280-9dbd2d633e0b-kube-api-access-hljrl\") pod \"community-operators-lhjls\" (UID: \"a37a9c59-7c20-4326-b280-9dbd2d633e0b\") " pod="openshift-marketplace/community-operators-lhjls" Jan 26 12:46:30 crc kubenswrapper[4844]: I0126 12:46:30.253605 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lhjls" Jan 26 12:46:30 crc kubenswrapper[4844]: I0126 12:46:30.315079 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bnlhz"] Jan 26 12:46:30 crc kubenswrapper[4844]: I0126 12:46:30.316231 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bnlhz" Jan 26 12:46:30 crc kubenswrapper[4844]: I0126 12:46:30.320049 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:30 crc kubenswrapper[4844]: I0126 12:46:30.320304 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f204088-0679-4c31-bd2b-848fc4f93b21-utilities\") pod \"certified-operators-8hnm5\" (UID: \"1f204088-0679-4c31-bd2b-848fc4f93b21\") " pod="openshift-marketplace/certified-operators-8hnm5" Jan 26 12:46:30 crc kubenswrapper[4844]: I0126 12:46:30.320327 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c69496f6-7f67-4cca-9c9f-420e5567b165-metrics-certs\") pod \"network-metrics-daemon-gxnj7\" (UID: \"c69496f6-7f67-4cca-9c9f-420e5567b165\") " pod="openshift-multus/network-metrics-daemon-gxnj7" Jan 26 12:46:30 crc kubenswrapper[4844]: I0126 12:46:30.320371 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5km9\" (UniqueName: \"kubernetes.io/projected/1f204088-0679-4c31-bd2b-848fc4f93b21-kube-api-access-n5km9\") pod \"certified-operators-8hnm5\" (UID: \"1f204088-0679-4c31-bd2b-848fc4f93b21\") " pod="openshift-marketplace/certified-operators-8hnm5" Jan 26 12:46:30 crc kubenswrapper[4844]: I0126 12:46:30.320389 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f204088-0679-4c31-bd2b-848fc4f93b21-catalog-content\") pod \"certified-operators-8hnm5\" (UID: \"1f204088-0679-4c31-bd2b-848fc4f93b21\") " pod="openshift-marketplace/certified-operators-8hnm5" Jan 26 12:46:30 crc kubenswrapper[4844]: E0126 12:46:30.320498 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:46:30.820480955 +0000 UTC m=+167.753848567 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dwwm9" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:30 crc kubenswrapper[4844]: I0126 12:46:30.331346 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c69496f6-7f67-4cca-9c9f-420e5567b165-metrics-certs\") pod \"network-metrics-daemon-gxnj7\" (UID: \"c69496f6-7f67-4cca-9c9f-420e5567b165\") " pod="openshift-multus/network-metrics-daemon-gxnj7" Jan 26 12:46:30 crc kubenswrapper[4844]: I0126 12:46:30.331716 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bnlhz"] Jan 26 12:46:30 crc kubenswrapper[4844]: I0126 12:46:30.347052 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-982kx"] Jan 26 12:46:30 crc kubenswrapper[4844]: W0126 12:46:30.355094 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1b7b1cea_f94c_4750_8db8_18d9b7f9fb70.slice/crio-a72111891bff6b030c2f006af8dbcdb3dc93eeb5366108178016d8d726c69735 WatchSource:0}: Error finding container a72111891bff6b030c2f006af8dbcdb3dc93eeb5366108178016d8d726c69735: Status 404 returned error can't find the container with id a72111891bff6b030c2f006af8dbcdb3dc93eeb5366108178016d8d726c69735 Jan 26 12:46:30 crc kubenswrapper[4844]: I0126 12:46:30.356086 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-gxnj7" Jan 26 12:46:30 crc kubenswrapper[4844]: I0126 12:46:30.421188 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:30 crc kubenswrapper[4844]: I0126 12:46:30.421522 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ddfeacb-de87-47d6-913e-6c2333a7df93-utilities\") pod \"community-operators-bnlhz\" (UID: \"8ddfeacb-de87-47d6-913e-6c2333a7df93\") " pod="openshift-marketplace/community-operators-bnlhz" Jan 26 12:46:30 crc kubenswrapper[4844]: I0126 12:46:30.421550 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f204088-0679-4c31-bd2b-848fc4f93b21-utilities\") pod \"certified-operators-8hnm5\" (UID: \"1f204088-0679-4c31-bd2b-848fc4f93b21\") " pod="openshift-marketplace/certified-operators-8hnm5" Jan 26 12:46:30 crc kubenswrapper[4844]: I0126 12:46:30.421578 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66f2t\" (UniqueName: \"kubernetes.io/projected/8ddfeacb-de87-47d6-913e-6c2333a7df93-kube-api-access-66f2t\") pod \"community-operators-bnlhz\" (UID: \"8ddfeacb-de87-47d6-913e-6c2333a7df93\") " pod="openshift-marketplace/community-operators-bnlhz" Jan 26 12:46:30 crc kubenswrapper[4844]: I0126 12:46:30.421641 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ddfeacb-de87-47d6-913e-6c2333a7df93-catalog-content\") pod \"community-operators-bnlhz\" (UID: \"8ddfeacb-de87-47d6-913e-6c2333a7df93\") " pod="openshift-marketplace/community-operators-bnlhz" Jan 26 12:46:30 crc kubenswrapper[4844]: I0126 12:46:30.421711 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5km9\" (UniqueName: \"kubernetes.io/projected/1f204088-0679-4c31-bd2b-848fc4f93b21-kube-api-access-n5km9\") pod \"certified-operators-8hnm5\" (UID: \"1f204088-0679-4c31-bd2b-848fc4f93b21\") " pod="openshift-marketplace/certified-operators-8hnm5" Jan 26 12:46:30 crc kubenswrapper[4844]: I0126 12:46:30.421730 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f204088-0679-4c31-bd2b-848fc4f93b21-catalog-content\") pod \"certified-operators-8hnm5\" (UID: \"1f204088-0679-4c31-bd2b-848fc4f93b21\") " pod="openshift-marketplace/certified-operators-8hnm5" Jan 26 12:46:30 crc kubenswrapper[4844]: I0126 12:46:30.422275 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f204088-0679-4c31-bd2b-848fc4f93b21-catalog-content\") pod \"certified-operators-8hnm5\" (UID: \"1f204088-0679-4c31-bd2b-848fc4f93b21\") " pod="openshift-marketplace/certified-operators-8hnm5" Jan 26 12:46:30 crc kubenswrapper[4844]: E0126 12:46:30.422365 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:46:30.922348759 +0000 UTC m=+167.855716371 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:30 crc kubenswrapper[4844]: I0126 12:46:30.422648 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f204088-0679-4c31-bd2b-848fc4f93b21-utilities\") pod \"certified-operators-8hnm5\" (UID: \"1f204088-0679-4c31-bd2b-848fc4f93b21\") " pod="openshift-marketplace/certified-operators-8hnm5" Jan 26 12:46:30 crc kubenswrapper[4844]: I0126 12:46:30.440716 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5km9\" (UniqueName: \"kubernetes.io/projected/1f204088-0679-4c31-bd2b-848fc4f93b21-kube-api-access-n5km9\") pod \"certified-operators-8hnm5\" (UID: \"1f204088-0679-4c31-bd2b-848fc4f93b21\") " pod="openshift-marketplace/certified-operators-8hnm5" Jan 26 12:46:30 crc kubenswrapper[4844]: I0126 12:46:30.451831 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lhjls"] Jan 26 12:46:30 crc kubenswrapper[4844]: I0126 12:46:30.477990 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8hnm5" Jan 26 12:46:30 crc kubenswrapper[4844]: I0126 12:46:30.524019 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ddfeacb-de87-47d6-913e-6c2333a7df93-catalog-content\") pod \"community-operators-bnlhz\" (UID: \"8ddfeacb-de87-47d6-913e-6c2333a7df93\") " pod="openshift-marketplace/community-operators-bnlhz" Jan 26 12:46:30 crc kubenswrapper[4844]: I0126 12:46:30.524116 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:30 crc kubenswrapper[4844]: I0126 12:46:30.524289 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ddfeacb-de87-47d6-913e-6c2333a7df93-utilities\") pod \"community-operators-bnlhz\" (UID: \"8ddfeacb-de87-47d6-913e-6c2333a7df93\") " pod="openshift-marketplace/community-operators-bnlhz" Jan 26 12:46:30 crc kubenswrapper[4844]: I0126 12:46:30.524323 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66f2t\" (UniqueName: \"kubernetes.io/projected/8ddfeacb-de87-47d6-913e-6c2333a7df93-kube-api-access-66f2t\") pod \"community-operators-bnlhz\" (UID: \"8ddfeacb-de87-47d6-913e-6c2333a7df93\") " pod="openshift-marketplace/community-operators-bnlhz" Jan 26 12:46:30 crc kubenswrapper[4844]: I0126 12:46:30.524516 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ddfeacb-de87-47d6-913e-6c2333a7df93-catalog-content\") pod \"community-operators-bnlhz\" (UID: \"8ddfeacb-de87-47d6-913e-6c2333a7df93\") " pod="openshift-marketplace/community-operators-bnlhz" Jan 26 12:46:30 crc kubenswrapper[4844]: E0126 12:46:30.524559 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:46:31.024542682 +0000 UTC m=+167.957910374 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dwwm9" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:30 crc kubenswrapper[4844]: I0126 12:46:30.524779 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ddfeacb-de87-47d6-913e-6c2333a7df93-utilities\") pod \"community-operators-bnlhz\" (UID: \"8ddfeacb-de87-47d6-913e-6c2333a7df93\") " pod="openshift-marketplace/community-operators-bnlhz" Jan 26 12:46:30 crc kubenswrapper[4844]: I0126 12:46:30.545420 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66f2t\" (UniqueName: \"kubernetes.io/projected/8ddfeacb-de87-47d6-913e-6c2333a7df93-kube-api-access-66f2t\") pod \"community-operators-bnlhz\" (UID: \"8ddfeacb-de87-47d6-913e-6c2333a7df93\") " pod="openshift-marketplace/community-operators-bnlhz" Jan 26 12:46:30 crc kubenswrapper[4844]: I0126 12:46:30.628158 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:30 crc kubenswrapper[4844]: E0126 12:46:30.628854 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:46:31.128835796 +0000 UTC m=+168.062203408 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:30 crc kubenswrapper[4844]: I0126 12:46:30.642938 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bnlhz" Jan 26 12:46:30 crc kubenswrapper[4844]: I0126 12:46:30.680261 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-9pkgp" Jan 26 12:46:30 crc kubenswrapper[4844]: I0126 12:46:30.690734 4844 patch_prober.go:28] interesting pod/router-default-5444994796-9pkgp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 12:46:30 crc kubenswrapper[4844]: [-]has-synced failed: reason withheld Jan 26 12:46:30 crc kubenswrapper[4844]: [+]process-running ok Jan 26 12:46:30 crc kubenswrapper[4844]: healthz check failed Jan 26 12:46:30 crc kubenswrapper[4844]: I0126 12:46:30.690779 4844 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9pkgp" podUID="46a01ba7-7357-471a-ae59-95361f2ce7ba" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 12:46:30 crc kubenswrapper[4844]: I0126 12:46:30.693990 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-gxnj7"] Jan 26 12:46:30 crc kubenswrapper[4844]: I0126 12:46:30.705715 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-vhsn2" Jan 26 12:46:30 crc kubenswrapper[4844]: I0126 12:46:30.705741 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-vhsn2" Jan 26 12:46:30 crc kubenswrapper[4844]: I0126 12:46:30.718417 4844 patch_prober.go:28] interesting pod/console-f9d7485db-vhsn2 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.6:8443/health\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Jan 26 12:46:30 crc kubenswrapper[4844]: I0126 12:46:30.718477 4844 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-vhsn2" podUID="8269d7d3-678d-44d5-885e-c5716e8024d8" containerName="console" probeResult="failure" output="Get \"https://10.217.0.6:8443/health\": dial tcp 10.217.0.6:8443: connect: connection refused" Jan 26 12:46:30 crc kubenswrapper[4844]: I0126 12:46:30.730426 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:30 crc kubenswrapper[4844]: E0126 12:46:30.730753 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:46:31.230740472 +0000 UTC m=+168.164108084 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dwwm9" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:30 crc kubenswrapper[4844]: I0126 12:46:30.805952 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8hnm5"] Jan 26 12:46:30 crc kubenswrapper[4844]: I0126 12:46:30.833045 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:30 crc kubenswrapper[4844]: E0126 12:46:30.834023 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:46:31.334005791 +0000 UTC m=+168.267373393 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:30 crc kubenswrapper[4844]: I0126 12:46:30.834547 4844 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-26 12:41:29 +0000 UTC, rotation deadline is 2026-10-26 23:16:10.737574721 +0000 UTC Jan 26 12:46:30 crc kubenswrapper[4844]: I0126 12:46:30.834559 4844 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6562h29m39.903017286s for next certificate rotation Jan 26 12:46:30 crc kubenswrapper[4844]: W0126 12:46:30.842878 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1f204088_0679_4c31_bd2b_848fc4f93b21.slice/crio-d395d3d35cc088d8e873ff86740bf1a3437c13a60d19822d0a54c7d6e63d35c8 WatchSource:0}: Error finding container d395d3d35cc088d8e873ff86740bf1a3437c13a60d19822d0a54c7d6e63d35c8: Status 404 returned error can't find the container with id d395d3d35cc088d8e873ff86740bf1a3437c13a60d19822d0a54c7d6e63d35c8 Jan 26 12:46:30 crc kubenswrapper[4844]: I0126 12:46:30.934394 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:30 crc kubenswrapper[4844]: E0126 12:46:30.935091 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-26 12:46:31.435077935 +0000 UTC m=+168.368445547 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dwwm9" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:30 crc kubenswrapper[4844]: I0126 12:46:30.984735 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6zcv5" event={"ID":"f27f4e56-71ef-43e6-be78-20759a8e9ed5","Type":"ContainerStarted","Data":"2e21a138f0c8f011ebc9fd5b1cf25bb5ef49fdb0922dca057c79a408190c2433"} Jan 26 12:46:31 crc kubenswrapper[4844]: I0126 12:46:31.012724 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-982kx" event={"ID":"1b7b1cea-f94c-4750-8db8-18d9b7f9fb70","Type":"ContainerStarted","Data":"a72111891bff6b030c2f006af8dbcdb3dc93eeb5366108178016d8d726c69735"} Jan 26 12:46:31 crc kubenswrapper[4844]: I0126 12:46:31.014352 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8hnm5" event={"ID":"1f204088-0679-4c31-bd2b-848fc4f93b21","Type":"ContainerStarted","Data":"d395d3d35cc088d8e873ff86740bf1a3437c13a60d19822d0a54c7d6e63d35c8"} Jan 26 12:46:31 crc kubenswrapper[4844]: I0126 12:46:31.015456 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-gxnj7" event={"ID":"c69496f6-7f67-4cca-9c9f-420e5567b165","Type":"ContainerStarted","Data":"dc09811725218cb38a060b6b387ee5ee04e1fe1075724a2208dac0ff4cd943cd"} Jan 26 12:46:31 crc kubenswrapper[4844]: I0126 12:46:31.041659 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:31 crc kubenswrapper[4844]: E0126 12:46:31.041963 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:46:31.541947115 +0000 UTC m=+168.475314717 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:31 crc kubenswrapper[4844]: I0126 12:46:31.063947 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bnlhz"] Jan 26 12:46:31 crc kubenswrapper[4844]: I0126 12:46:31.090930 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-7fzwr" event={"ID":"43fa0cde-7ba5-4788-be26-1170bf6ee75d","Type":"ContainerStarted","Data":"662f8ff34cab567fdd9eabba89759e04e8ff8fc634c91cdf12c8304f1f8de4b2"} Jan 26 12:46:31 crc kubenswrapper[4844]: I0126 12:46:31.102657 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lhjls" event={"ID":"a37a9c59-7c20-4326-b280-9dbd2d633e0b","Type":"ContainerStarted","Data":"ddf41f6ec919716ea44abebdaa9f7bbfb57f26246beef8b7f356a28992d79336"} Jan 26 12:46:31 crc kubenswrapper[4844]: I0126 12:46:31.109293 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ksxk5" Jan 26 12:46:31 crc kubenswrapper[4844]: I0126 12:46:31.109333 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-9cmnk" Jan 26 12:46:31 crc kubenswrapper[4844]: I0126 12:46:31.109788 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qbpjx" Jan 26 12:46:31 crc kubenswrapper[4844]: I0126 12:46:31.131098 4844 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-9cmnk container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.19:8080/healthz\": dial tcp 10.217.0.19:8080: connect: connection refused" start-of-body= Jan 26 12:46:31 crc kubenswrapper[4844]: I0126 12:46:31.131174 4844 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-9cmnk" podUID="8f3783e9-776b-434b-8298-59283076969f" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.19:8080/healthz\": dial tcp 10.217.0.19:8080: connect: connection refused" Jan 26 12:46:31 crc kubenswrapper[4844]: I0126 12:46:31.131862 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qbpjx" Jan 26 12:46:31 crc kubenswrapper[4844]: I0126 12:46:31.144153 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:31 crc kubenswrapper[4844]: E0126 12:46:31.144510 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-26 12:46:31.644477235 +0000 UTC m=+168.577844847 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dwwm9" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:31 crc kubenswrapper[4844]: I0126 12:46:31.146005 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ksxk5" Jan 26 12:46:31 crc kubenswrapper[4844]: I0126 12:46:31.148724 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29490525-mqbpl" podStartSLOduration=91.148692221 podStartE2EDuration="1m31.148692221s" podCreationTimestamp="2026-01-26 12:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:46:31.145017789 +0000 UTC m=+168.078385411" watchObservedRunningTime="2026-01-26 12:46:31.148692221 +0000 UTC m=+168.082059843" Jan 26 12:46:31 crc kubenswrapper[4844]: I0126 12:46:31.244510 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qbpjx" podStartSLOduration=144.244495003 podStartE2EDuration="2m24.244495003s" podCreationTimestamp="2026-01-26 12:44:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:46:31.243081258 +0000 UTC m=+168.176448870" watchObservedRunningTime="2026-01-26 12:46:31.244495003 +0000 UTC m=+168.177862615" Jan 26 12:46:31 crc kubenswrapper[4844]: I0126 12:46:31.270352 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:31 crc kubenswrapper[4844]: E0126 12:46:31.274290 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:46:31.774263032 +0000 UTC m=+168.707630644 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:31 crc kubenswrapper[4844]: I0126 12:46:31.332020 4844 patch_prober.go:28] interesting pod/downloads-7954f5f757-5rkhb container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Jan 26 12:46:31 crc kubenswrapper[4844]: I0126 12:46:31.332088 4844 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-5rkhb" podUID="b428addf-b196-461c-aaaf-7b9b14848a6c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Jan 26 12:46:31 crc kubenswrapper[4844]: I0126 12:46:31.334970 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sbrtp" podStartSLOduration=144.33495599 podStartE2EDuration="2m24.33495599s" podCreationTimestamp="2026-01-26 12:44:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:46:31.33335781 +0000 UTC m=+168.266725422" watchObservedRunningTime="2026-01-26 12:46:31.33495599 +0000 UTC m=+168.268323602" Jan 26 12:46:31 crc kubenswrapper[4844]: I0126 12:46:31.373365 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:31 crc kubenswrapper[4844]: E0126 12:46:31.373703 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:46:31.873693395 +0000 UTC m=+168.807061007 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dwwm9" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:31 crc kubenswrapper[4844]: I0126 12:46:31.381160 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-n8hpb" podStartSLOduration=144.381141372 podStartE2EDuration="2m24.381141372s" podCreationTimestamp="2026-01-26 12:44:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:46:31.308758421 +0000 UTC m=+168.242126033" watchObservedRunningTime="2026-01-26 12:46:31.381141372 +0000 UTC m=+168.314508984" Jan 26 12:46:31 crc kubenswrapper[4844]: I0126 12:46:31.385010 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-89xb7" podStartSLOduration=144.384995879 podStartE2EDuration="2m24.384995879s" podCreationTimestamp="2026-01-26 12:44:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:46:31.371961192 +0000 UTC m=+168.305328824" watchObservedRunningTime="2026-01-26 12:46:31.384995879 +0000 UTC m=+168.318363501" Jan 26 12:46:31 crc kubenswrapper[4844]: I0126 12:46:31.406113 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-scvs4" Jan 26 12:46:31 crc kubenswrapper[4844]: I0126 12:46:31.406175 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-scvs4" Jan 26 12:46:31 crc kubenswrapper[4844]: I0126 12:46:31.407308 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-fnd9b" podStartSLOduration=13.407298421 podStartE2EDuration="13.407298421s" podCreationTimestamp="2026-01-26 12:46:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:46:31.406502491 +0000 UTC m=+168.339870093" watchObservedRunningTime="2026-01-26 12:46:31.407298421 +0000 UTC m=+168.340666033" Jan 26 12:46:31 crc kubenswrapper[4844]: I0126 12:46:31.444342 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j9vvp" podStartSLOduration=144.444320813 podStartE2EDuration="2m24.444320813s" podCreationTimestamp="2026-01-26 12:44:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:46:31.442747403 +0000 UTC m=+168.376115025" watchObservedRunningTime="2026-01-26 12:46:31.444320813 +0000 UTC m=+168.377688425" Jan 26 12:46:31 crc kubenswrapper[4844]: I0126 12:46:31.485003 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:31 crc kubenswrapper[4844]: E0126 12:46:31.486142 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:46:31.986122445 +0000 UTC m=+168.919490057 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:31 crc kubenswrapper[4844]: I0126 12:46:31.514485 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-fl26p" podStartSLOduration=143.514467729 podStartE2EDuration="2m23.514467729s" podCreationTimestamp="2026-01-26 12:44:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:46:31.466629084 +0000 UTC m=+168.399996716" watchObservedRunningTime="2026-01-26 12:46:31.514467729 +0000 UTC m=+168.447835351" Jan 26 12:46:31 crc kubenswrapper[4844]: I0126 12:46:31.516495 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qltc7" podStartSLOduration=144.516481909 podStartE2EDuration="2m24.516481909s" podCreationTimestamp="2026-01-26 12:44:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:46:31.513854103 +0000 UTC m=+168.447221725" watchObservedRunningTime="2026-01-26 12:46:31.516481909 +0000 UTC m=+168.449849541" Jan 26 12:46:31 crc kubenswrapper[4844]: I0126 12:46:31.596110 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:31 crc kubenswrapper[4844]: E0126 12:46:31.596552 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:46:32.096536424 +0000 UTC m=+169.029904036 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dwwm9" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:32 crc kubenswrapper[4844]: I0126 12:46:32.101313 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-9pkgp" Jan 26 12:46:32 crc kubenswrapper[4844]: I0126 12:46:32.101918 4844 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-9cmnk container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.19:8080/healthz\": dial tcp 10.217.0.19:8080: connect: connection refused" start-of-body= Jan 26 12:46:32 crc kubenswrapper[4844]: I0126 12:46:32.101988 4844 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-9cmnk container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.19:8080/healthz\": dial tcp 10.217.0.19:8080: connect: connection refused" start-of-body= Jan 26 12:46:32 crc kubenswrapper[4844]: I0126 12:46:32.102012 4844 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-9cmnk" podUID="8f3783e9-776b-434b-8298-59283076969f" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.19:8080/healthz\": dial tcp 10.217.0.19:8080: connect: connection refused" Jan 26 12:46:32 crc kubenswrapper[4844]: I0126 12:46:32.102046 4844 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-9cmnk" podUID="8f3783e9-776b-434b-8298-59283076969f" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.19:8080/healthz\": dial tcp 10.217.0.19:8080: connect: connection refused" Jan 26 12:46:32 crc kubenswrapper[4844]: I0126 12:46:32.102316 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:32 crc kubenswrapper[4844]: E0126 12:46:32.102993 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:46:33.102935171 +0000 UTC m=+170.036302823 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:32 crc kubenswrapper[4844]: I0126 12:46:32.104087 4844 patch_prober.go:28] interesting pod/downloads-7954f5f757-5rkhb container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Jan 26 12:46:32 crc kubenswrapper[4844]: I0126 12:46:32.105285 4844 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-5rkhb" podUID="b428addf-b196-461c-aaaf-7b9b14848a6c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Jan 26 12:46:32 crc kubenswrapper[4844]: I0126 12:46:32.105703 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-9cmnk" podStartSLOduration=145.105688301 podStartE2EDuration="2m25.105688301s" podCreationTimestamp="2026-01-26 12:44:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:46:31.542548036 +0000 UTC m=+168.475915648" watchObservedRunningTime="2026-01-26 12:46:32.105688301 +0000 UTC m=+169.039055913" Jan 26 12:46:32 crc kubenswrapper[4844]: I0126 12:46:32.109362 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rtr85" podStartSLOduration=145.109348943 podStartE2EDuration="2m25.109348943s" podCreationTimestamp="2026-01-26 12:44:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:46:31.561228316 +0000 UTC m=+168.494595928" watchObservedRunningTime="2026-01-26 12:46:32.109348943 +0000 UTC m=+169.042716555" Jan 26 12:46:32 crc kubenswrapper[4844]: I0126 12:46:32.110695 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bnlhz" event={"ID":"8ddfeacb-de87-47d6-913e-6c2333a7df93","Type":"ContainerStarted","Data":"12509affa0fe7bc7b7696d3a27634ab4649132cec28a469a9190565664f61d54"} Jan 26 12:46:32 crc kubenswrapper[4844]: I0126 12:46:32.114466 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xcs68" podStartSLOduration=145.114450421 podStartE2EDuration="2m25.114450421s" podCreationTimestamp="2026-01-26 12:44:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:46:31.595091668 +0000 UTC m=+168.528459280" watchObservedRunningTime="2026-01-26 12:46:32.114450421 +0000 UTC m=+169.047818033" Jan 26 12:46:32 crc kubenswrapper[4844]: I0126 12:46:32.121400 4844 patch_prober.go:28] interesting pod/router-default-5444994796-9pkgp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe 
failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 12:46:32 crc kubenswrapper[4844]: [-]has-synced failed: reason withheld Jan 26 12:46:32 crc kubenswrapper[4844]: [+]process-running ok Jan 26 12:46:32 crc kubenswrapper[4844]: healthz check failed Jan 26 12:46:32 crc kubenswrapper[4844]: I0126 12:46:32.121545 4844 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9pkgp" podUID="46a01ba7-7357-471a-ae59-95361f2ce7ba" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 12:46:32 crc kubenswrapper[4844]: I0126 12:46:32.122554 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pmxvg" event={"ID":"71551b91-3a04-4dcd-9a94-e96b4663b040","Type":"ContainerStarted","Data":"84f59bc2baf21f3d6310c24f2fb06efdccffa924b2a2a1a35b1dfb3cbecc271e"} Jan 26 12:46:32 crc kubenswrapper[4844]: I0126 12:46:32.124325 4844 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-9cmnk container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.19:8080/healthz\": dial tcp 10.217.0.19:8080: connect: connection refused" start-of-body= Jan 26 12:46:32 crc kubenswrapper[4844]: I0126 12:46:32.124375 4844 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-9cmnk" podUID="8f3783e9-776b-434b-8298-59283076969f" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.19:8080/healthz\": dial tcp 10.217.0.19:8080: connect: connection refused" Jan 26 12:46:32 crc kubenswrapper[4844]: I0126 12:46:32.127674 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-djrt9"] Jan 26 12:46:32 crc kubenswrapper[4844]: I0126 12:46:32.128872 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-djrt9" Jan 26 12:46:32 crc kubenswrapper[4844]: I0126 12:46:32.132444 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 26 12:46:32 crc kubenswrapper[4844]: I0126 12:46:32.138141 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-djrt9"] Jan 26 12:46:32 crc kubenswrapper[4844]: I0126 12:46:32.139015 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ksxk5" podStartSLOduration=145.13899856 podStartE2EDuration="2m25.13899856s" podCreationTimestamp="2026-01-26 12:44:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:46:32.132798253 +0000 UTC m=+169.066165865" watchObservedRunningTime="2026-01-26 12:46:32.13899856 +0000 UTC m=+169.072366172" Jan 26 12:46:32 crc kubenswrapper[4844]: I0126 12:46:32.163379 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-5mxl2" podStartSLOduration=14.163360342 podStartE2EDuration="14.163360342s" podCreationTimestamp="2026-01-26 12:46:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:46:32.162648565 +0000 UTC m=+169.096016177" watchObservedRunningTime="2026-01-26 12:46:32.163360342 +0000 UTC m=+169.096727954" Jan 26 12:46:32 crc kubenswrapper[4844]: I0126 12:46:32.206395 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:32 crc kubenswrapper[4844]: E0126 12:46:32.206573 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:46:32.7065501 +0000 UTC m=+169.639917712 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:32 crc kubenswrapper[4844]: I0126 12:46:32.207072 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t96w4\" (UniqueName: \"kubernetes.io/projected/637c7ba4-2cae-4d56-860f-ab82722169a2-kube-api-access-t96w4\") pod \"redhat-marketplace-djrt9\" (UID: \"637c7ba4-2cae-4d56-860f-ab82722169a2\") " pod="openshift-marketplace/redhat-marketplace-djrt9" Jan 26 12:46:32 crc kubenswrapper[4844]: I0126 12:46:32.207230 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/637c7ba4-2cae-4d56-860f-ab82722169a2-catalog-content\") pod \"redhat-marketplace-djrt9\" (UID: \"637c7ba4-2cae-4d56-860f-ab82722169a2\") " pod="openshift-marketplace/redhat-marketplace-djrt9" Jan 26 12:46:32 crc kubenswrapper[4844]: I0126 12:46:32.207365 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/637c7ba4-2cae-4d56-860f-ab82722169a2-utilities\") pod \"redhat-marketplace-djrt9\" (UID: \"637c7ba4-2cae-4d56-860f-ab82722169a2\") " pod="openshift-marketplace/redhat-marketplace-djrt9" Jan 26 12:46:32 crc kubenswrapper[4844]: I0126 12:46:32.207588 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:32 crc kubenswrapper[4844]: E0126 12:46:32.210420 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:46:32.710412667 +0000 UTC m=+169.643780279 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dwwm9" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:32 crc kubenswrapper[4844]: I0126 12:46:32.216209 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-scvs4" podStartSLOduration=145.216189943 podStartE2EDuration="2m25.216189943s" podCreationTimestamp="2026-01-26 12:44:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:46:32.189983483 +0000 UTC m=+169.123351095" watchObservedRunningTime="2026-01-26 12:46:32.216189943 +0000 UTC m=+169.149557555" Jan 26 12:46:32 crc kubenswrapper[4844]: I0126 12:46:32.218482 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-rtks2" podStartSLOduration=145.218472939 podStartE2EDuration="2m25.218472939s" podCreationTimestamp="2026-01-26 12:44:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:46:32.215010013 +0000 UTC m=+169.148377625" watchObservedRunningTime="2026-01-26 12:46:32.218472939 +0000 UTC m=+169.151840551" Jan 26 12:46:32 crc kubenswrapper[4844]: I0126 12:46:32.231329 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-zsn9c" podStartSLOduration=145.231310083 podStartE2EDuration="2m25.231310083s" podCreationTimestamp="2026-01-26 12:44:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:46:32.229349133 +0000 UTC m=+169.162716745" watchObservedRunningTime="2026-01-26 12:46:32.231310083 +0000 UTC m=+169.164677705" Jan 26 12:46:32 crc kubenswrapper[4844]: I0126 12:46:32.247904 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jl5ts" podStartSLOduration=145.24788971 podStartE2EDuration="2m25.24788971s" podCreationTimestamp="2026-01-26 12:44:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:46:32.246757072 +0000 UTC m=+169.180124684" watchObservedRunningTime="2026-01-26 12:46:32.24788971 +0000 UTC m=+169.181257312" Jan 26 12:46:32 crc kubenswrapper[4844]: I0126 12:46:32.266702 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bj9c4" podStartSLOduration=145.266683994 podStartE2EDuration="2m25.266683994s" podCreationTimestamp="2026-01-26 12:44:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:46:32.265082983 +0000 UTC m=+169.198450595" watchObservedRunningTime="2026-01-26 12:46:32.266683994 +0000 UTC m=+169.200051606" Jan 26 12:46:32 crc kubenswrapper[4844]: I0126 12:46:32.308747 4844 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:32 crc kubenswrapper[4844]: I0126 12:46:32.309077 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t96w4\" (UniqueName: \"kubernetes.io/projected/637c7ba4-2cae-4d56-860f-ab82722169a2-kube-api-access-t96w4\") pod \"redhat-marketplace-djrt9\" (UID: \"637c7ba4-2cae-4d56-860f-ab82722169a2\") " pod="openshift-marketplace/redhat-marketplace-djrt9" Jan 26 12:46:32 crc kubenswrapper[4844]: I0126 12:46:32.309129 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/637c7ba4-2cae-4d56-860f-ab82722169a2-catalog-content\") pod \"redhat-marketplace-djrt9\" (UID: \"637c7ba4-2cae-4d56-860f-ab82722169a2\") " pod="openshift-marketplace/redhat-marketplace-djrt9" Jan 26 12:46:32 crc kubenswrapper[4844]: I0126 12:46:32.309163 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/637c7ba4-2cae-4d56-860f-ab82722169a2-utilities\") pod \"redhat-marketplace-djrt9\" (UID: \"637c7ba4-2cae-4d56-860f-ab82722169a2\") " pod="openshift-marketplace/redhat-marketplace-djrt9" Jan 26 12:46:32 crc kubenswrapper[4844]: I0126 12:46:32.309718 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/637c7ba4-2cae-4d56-860f-ab82722169a2-utilities\") pod \"redhat-marketplace-djrt9\" (UID: \"637c7ba4-2cae-4d56-860f-ab82722169a2\") " pod="openshift-marketplace/redhat-marketplace-djrt9" Jan 26 12:46:32 crc kubenswrapper[4844]: E0126 12:46:32.309814 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:46:32.809793019 +0000 UTC m=+169.743160631 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:32 crc kubenswrapper[4844]: I0126 12:46:32.310358 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/637c7ba4-2cae-4d56-860f-ab82722169a2-catalog-content\") pod \"redhat-marketplace-djrt9\" (UID: \"637c7ba4-2cae-4d56-860f-ab82722169a2\") " pod="openshift-marketplace/redhat-marketplace-djrt9" Jan 26 12:46:32 crc kubenswrapper[4844]: I0126 12:46:32.318476 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-8zmdx"] Jan 26 12:46:32 crc kubenswrapper[4844]: I0126 12:46:32.319612 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8zmdx" Jan 26 12:46:32 crc kubenswrapper[4844]: I0126 12:46:32.331065 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t96w4\" (UniqueName: \"kubernetes.io/projected/637c7ba4-2cae-4d56-860f-ab82722169a2-kube-api-access-t96w4\") pod \"redhat-marketplace-djrt9\" (UID: \"637c7ba4-2cae-4d56-860f-ab82722169a2\") " pod="openshift-marketplace/redhat-marketplace-djrt9" Jan 26 12:46:32 crc kubenswrapper[4844]: I0126 12:46:32.335879 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8zmdx"] Jan 26 12:46:32 crc kubenswrapper[4844]: I0126 12:46:32.347926 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jfwgn" podStartSLOduration=145.347908758 podStartE2EDuration="2m25.347908758s" podCreationTimestamp="2026-01-26 12:44:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:46:32.347029826 +0000 UTC m=+169.280397468" watchObservedRunningTime="2026-01-26 12:46:32.347908758 +0000 UTC m=+169.281276370" Jan 26 12:46:32 crc kubenswrapper[4844]: I0126 12:46:32.410869 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/354b9578-ac43-4a15-831f-d6ae0bc5c449-utilities\") pod \"redhat-marketplace-8zmdx\" (UID: \"354b9578-ac43-4a15-831f-d6ae0bc5c449\") " pod="openshift-marketplace/redhat-marketplace-8zmdx" Jan 26 12:46:32 crc kubenswrapper[4844]: I0126 12:46:32.411163 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lc4l4\" (UniqueName: \"kubernetes.io/projected/354b9578-ac43-4a15-831f-d6ae0bc5c449-kube-api-access-lc4l4\") pod \"redhat-marketplace-8zmdx\" (UID: \"354b9578-ac43-4a15-831f-d6ae0bc5c449\") " pod="openshift-marketplace/redhat-marketplace-8zmdx" Jan 26 12:46:32 crc kubenswrapper[4844]: I0126 12:46:32.411556 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/354b9578-ac43-4a15-831f-d6ae0bc5c449-catalog-content\") pod \"redhat-marketplace-8zmdx\" (UID: \"354b9578-ac43-4a15-831f-d6ae0bc5c449\") " pod="openshift-marketplace/redhat-marketplace-8zmdx" Jan 26 12:46:32 crc kubenswrapper[4844]: I0126 12:46:32.411837 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:32 crc kubenswrapper[4844]: E0126 12:46:32.412258 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:46:32.912223767 +0000 UTC m=+169.845591379 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dwwm9" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:32 crc kubenswrapper[4844]: I0126 12:46:32.450391 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-djrt9" Jan 26 12:46:32 crc kubenswrapper[4844]: I0126 12:46:32.514399 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:32 crc kubenswrapper[4844]: E0126 12:46:32.514581 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:46:33.014559463 +0000 UTC m=+169.947927075 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:32 crc kubenswrapper[4844]: I0126 12:46:32.515206 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/354b9578-ac43-4a15-831f-d6ae0bc5c449-utilities\") pod \"redhat-marketplace-8zmdx\" (UID: \"354b9578-ac43-4a15-831f-d6ae0bc5c449\") " pod="openshift-marketplace/redhat-marketplace-8zmdx" Jan 26 12:46:32 crc kubenswrapper[4844]: I0126 12:46:32.515273 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lc4l4\" (UniqueName: \"kubernetes.io/projected/354b9578-ac43-4a15-831f-d6ae0bc5c449-kube-api-access-lc4l4\") pod \"redhat-marketplace-8zmdx\" (UID: \"354b9578-ac43-4a15-831f-d6ae0bc5c449\") " pod="openshift-marketplace/redhat-marketplace-8zmdx" Jan 26 12:46:32 crc kubenswrapper[4844]: I0126 12:46:32.515367 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/354b9578-ac43-4a15-831f-d6ae0bc5c449-catalog-content\") pod \"redhat-marketplace-8zmdx\" (UID: \"354b9578-ac43-4a15-831f-d6ae0bc5c449\") " pod="openshift-marketplace/redhat-marketplace-8zmdx" Jan 26 12:46:32 crc kubenswrapper[4844]: I0126 12:46:32.515504 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:32 crc 
kubenswrapper[4844]: E0126 12:46:32.515790 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:46:33.015781224 +0000 UTC m=+169.949148826 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dwwm9" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:32 crc kubenswrapper[4844]: I0126 12:46:32.515901 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/354b9578-ac43-4a15-831f-d6ae0bc5c449-utilities\") pod \"redhat-marketplace-8zmdx\" (UID: \"354b9578-ac43-4a15-831f-d6ae0bc5c449\") " pod="openshift-marketplace/redhat-marketplace-8zmdx" Jan 26 12:46:32 crc kubenswrapper[4844]: I0126 12:46:32.516105 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/354b9578-ac43-4a15-831f-d6ae0bc5c449-catalog-content\") pod \"redhat-marketplace-8zmdx\" (UID: \"354b9578-ac43-4a15-831f-d6ae0bc5c449\") " pod="openshift-marketplace/redhat-marketplace-8zmdx" Jan 26 12:46:32 crc kubenswrapper[4844]: I0126 12:46:32.532839 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lc4l4\" (UniqueName: \"kubernetes.io/projected/354b9578-ac43-4a15-831f-d6ae0bc5c449-kube-api-access-lc4l4\") pod \"redhat-marketplace-8zmdx\" (UID: \"354b9578-ac43-4a15-831f-d6ae0bc5c449\") " pod="openshift-marketplace/redhat-marketplace-8zmdx" Jan 26 12:46:32 crc kubenswrapper[4844]: I0126 12:46:32.624518 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:32 crc kubenswrapper[4844]: E0126 12:46:32.624721 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:46:33.124696846 +0000 UTC m=+170.058064458 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:32 crc kubenswrapper[4844]: I0126 12:46:32.624826 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:32 crc kubenswrapper[4844]: E0126 12:46:32.625174 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:46:33.125166807 +0000 UTC m=+170.058534419 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dwwm9" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:32 crc kubenswrapper[4844]: I0126 12:46:32.659376 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8zmdx" Jan 26 12:46:32 crc kubenswrapper[4844]: I0126 12:46:32.682471 4844 patch_prober.go:28] interesting pod/router-default-5444994796-9pkgp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 12:46:32 crc kubenswrapper[4844]: [-]has-synced failed: reason withheld Jan 26 12:46:32 crc kubenswrapper[4844]: [+]process-running ok Jan 26 12:46:32 crc kubenswrapper[4844]: healthz check failed Jan 26 12:46:32 crc kubenswrapper[4844]: I0126 12:46:32.682526 4844 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9pkgp" podUID="46a01ba7-7357-471a-ae59-95361f2ce7ba" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 12:46:32 crc kubenswrapper[4844]: I0126 12:46:32.725561 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:32 crc kubenswrapper[4844]: E0126 12:46:32.725719 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:46:33.225698748 +0000 UTC m=+170.159066360 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:32 crc kubenswrapper[4844]: I0126 12:46:32.725808 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:32 crc kubenswrapper[4844]: E0126 12:46:32.726188 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:46:33.226160669 +0000 UTC m=+170.159528281 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dwwm9" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:32 crc kubenswrapper[4844]: I0126 12:46:32.726568 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-djrt9"] Jan 26 12:46:32 crc kubenswrapper[4844]: I0126 12:46:32.827291 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:32 crc kubenswrapper[4844]: E0126 12:46:32.827526 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:46:33.3274933 +0000 UTC m=+170.260860952 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:32 crc kubenswrapper[4844]: I0126 12:46:32.827804 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:32 crc kubenswrapper[4844]: E0126 12:46:32.828279 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:46:33.32825896 +0000 UTC m=+170.261626602 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dwwm9" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:32 crc kubenswrapper[4844]: I0126 12:46:32.924413 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8hdq2"] Jan 26 12:46:32 crc kubenswrapper[4844]: I0126 12:46:32.925639 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8hdq2" Jan 26 12:46:32 crc kubenswrapper[4844]: I0126 12:46:32.929184 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:32 crc kubenswrapper[4844]: E0126 12:46:32.930109 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:46:33.430038372 +0000 UTC m=+170.363406014 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:32 crc kubenswrapper[4844]: I0126 12:46:32.931666 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 26 12:46:32 crc kubenswrapper[4844]: I0126 12:46:32.939308 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8hdq2"] Jan 26 12:46:33 crc kubenswrapper[4844]: I0126 12:46:33.032324 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d60e5f01-76f1-47a0-8a7d-390457ce1b47-utilities\") pod \"redhat-operators-8hdq2\" (UID: \"d60e5f01-76f1-47a0-8a7d-390457ce1b47\") " pod="openshift-marketplace/redhat-operators-8hdq2" Jan 26 12:46:33 crc kubenswrapper[4844]: I0126 12:46:33.032773 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:33 crc kubenswrapper[4844]: I0126 12:46:33.032828 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d60e5f01-76f1-47a0-8a7d-390457ce1b47-catalog-content\") pod \"redhat-operators-8hdq2\" (UID: \"d60e5f01-76f1-47a0-8a7d-390457ce1b47\") " pod="openshift-marketplace/redhat-operators-8hdq2" Jan 26 12:46:33 crc kubenswrapper[4844]: I0126 12:46:33.032963 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7w7zk\" (UniqueName: \"kubernetes.io/projected/d60e5f01-76f1-47a0-8a7d-390457ce1b47-kube-api-access-7w7zk\") pod \"redhat-operators-8hdq2\" (UID: \"d60e5f01-76f1-47a0-8a7d-390457ce1b47\") " pod="openshift-marketplace/redhat-operators-8hdq2" Jan 26 12:46:33 crc kubenswrapper[4844]: E0126 12:46:33.033446 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:46:33.533414264 +0000 UTC m=+170.466781926 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dwwm9" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:33 crc kubenswrapper[4844]: I0126 12:46:33.129263 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hpxdc" event={"ID":"94726f3c-782c-4f4c-89cc-60229b8f339a","Type":"ContainerStarted","Data":"5c701aa97a3729207433655fdfdfb6fc50d5a447c4fc5da2307f1620e92af5ec"} Jan 26 12:46:33 crc kubenswrapper[4844]: I0126 12:46:33.139093 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:33 crc kubenswrapper[4844]: E0126 12:46:33.139474 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:46:33.639456922 +0000 UTC m=+170.572824534 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:33 crc kubenswrapper[4844]: I0126 12:46:33.139578 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d60e5f01-76f1-47a0-8a7d-390457ce1b47-catalog-content\") pod \"redhat-operators-8hdq2\" (UID: \"d60e5f01-76f1-47a0-8a7d-390457ce1b47\") " pod="openshift-marketplace/redhat-operators-8hdq2" Jan 26 12:46:33 crc kubenswrapper[4844]: I0126 12:46:33.139755 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7w7zk\" (UniqueName: \"kubernetes.io/projected/d60e5f01-76f1-47a0-8a7d-390457ce1b47-kube-api-access-7w7zk\") pod \"redhat-operators-8hdq2\" (UID: \"d60e5f01-76f1-47a0-8a7d-390457ce1b47\") " pod="openshift-marketplace/redhat-operators-8hdq2" Jan 26 12:46:33 crc kubenswrapper[4844]: I0126 12:46:33.139790 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d60e5f01-76f1-47a0-8a7d-390457ce1b47-utilities\") pod \"redhat-operators-8hdq2\" (UID: \"d60e5f01-76f1-47a0-8a7d-390457ce1b47\") " pod="openshift-marketplace/redhat-operators-8hdq2" Jan 26 12:46:33 crc kubenswrapper[4844]: I0126 12:46:33.139832 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: 
\"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:33 crc kubenswrapper[4844]: E0126 12:46:33.141077 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:46:33.641048072 +0000 UTC m=+170.574415694 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dwwm9" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:33 crc kubenswrapper[4844]: I0126 12:46:33.141253 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d60e5f01-76f1-47a0-8a7d-390457ce1b47-catalog-content\") pod \"redhat-operators-8hdq2\" (UID: \"d60e5f01-76f1-47a0-8a7d-390457ce1b47\") " pod="openshift-marketplace/redhat-operators-8hdq2" Jan 26 12:46:33 crc kubenswrapper[4844]: I0126 12:46:33.141398 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d60e5f01-76f1-47a0-8a7d-390457ce1b47-utilities\") pod \"redhat-operators-8hdq2\" (UID: \"d60e5f01-76f1-47a0-8a7d-390457ce1b47\") " pod="openshift-marketplace/redhat-operators-8hdq2" Jan 26 12:46:33 crc kubenswrapper[4844]: I0126 12:46:33.174134 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7w7zk\" (UniqueName: \"kubernetes.io/projected/d60e5f01-76f1-47a0-8a7d-390457ce1b47-kube-api-access-7w7zk\") pod \"redhat-operators-8hdq2\" (UID: \"d60e5f01-76f1-47a0-8a7d-390457ce1b47\") " pod="openshift-marketplace/redhat-operators-8hdq2" Jan 26 12:46:33 crc kubenswrapper[4844]: I0126 12:46:33.240869 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:33 crc kubenswrapper[4844]: E0126 12:46:33.241415 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:46:33.741384218 +0000 UTC m=+170.674751830 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:33 crc kubenswrapper[4844]: I0126 12:46:33.242320 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:33 crc kubenswrapper[4844]: E0126 12:46:33.243327 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:46:33.743310966 +0000 UTC m=+170.676678678 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dwwm9" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:33 crc kubenswrapper[4844]: I0126 12:46:33.259351 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8hdq2" Jan 26 12:46:33 crc kubenswrapper[4844]: I0126 12:46:33.336237 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dn4m8"] Jan 26 12:46:33 crc kubenswrapper[4844]: I0126 12:46:33.338701 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dn4m8" Jan 26 12:46:33 crc kubenswrapper[4844]: I0126 12:46:33.349177 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:33 crc kubenswrapper[4844]: E0126 12:46:33.350007 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:46:33.849970402 +0000 UTC m=+170.783338044 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:33 crc kubenswrapper[4844]: I0126 12:46:33.353745 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a0ca290-d48e-4c46-8c36-1e414126c42f-utilities\") pod \"redhat-operators-dn4m8\" (UID: \"2a0ca290-d48e-4c46-8c36-1e414126c42f\") " pod="openshift-marketplace/redhat-operators-dn4m8" Jan 26 12:46:33 crc kubenswrapper[4844]: I0126 12:46:33.353800 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8hw6\" (UniqueName: \"kubernetes.io/projected/2a0ca290-d48e-4c46-8c36-1e414126c42f-kube-api-access-h8hw6\") pod \"redhat-operators-dn4m8\" (UID: \"2a0ca290-d48e-4c46-8c36-1e414126c42f\") " pod="openshift-marketplace/redhat-operators-dn4m8" Jan 26 12:46:33 crc kubenswrapper[4844]: I0126 12:46:33.353932 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a0ca290-d48e-4c46-8c36-1e414126c42f-catalog-content\") pod \"redhat-operators-dn4m8\" (UID: \"2a0ca290-d48e-4c46-8c36-1e414126c42f\") " pod="openshift-marketplace/redhat-operators-dn4m8" Jan 26 12:46:33 crc kubenswrapper[4844]: I0126 12:46:33.353988 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:33 crc kubenswrapper[4844]: I0126 12:46:33.354058 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dn4m8"] Jan 26 12:46:33 crc kubenswrapper[4844]: E0126 12:46:33.357701 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:46:33.857686096 +0000 UTC m=+170.791053718 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dwwm9" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:33 crc kubenswrapper[4844]: I0126 12:46:33.458669 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:33 crc kubenswrapper[4844]: I0126 12:46:33.458930 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a0ca290-d48e-4c46-8c36-1e414126c42f-utilities\") pod \"redhat-operators-dn4m8\" (UID: \"2a0ca290-d48e-4c46-8c36-1e414126c42f\") " pod="openshift-marketplace/redhat-operators-dn4m8" Jan 26 12:46:33 crc kubenswrapper[4844]: I0126 12:46:33.458961 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8hw6\" (UniqueName: \"kubernetes.io/projected/2a0ca290-d48e-4c46-8c36-1e414126c42f-kube-api-access-h8hw6\") pod \"redhat-operators-dn4m8\" (UID: \"2a0ca290-d48e-4c46-8c36-1e414126c42f\") " pod="openshift-marketplace/redhat-operators-dn4m8" Jan 26 12:46:33 crc kubenswrapper[4844]: E0126 12:46:33.459028 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:46:33.958978035 +0000 UTC m=+170.892345687 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:33 crc kubenswrapper[4844]: I0126 12:46:33.459105 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a0ca290-d48e-4c46-8c36-1e414126c42f-catalog-content\") pod \"redhat-operators-dn4m8\" (UID: \"2a0ca290-d48e-4c46-8c36-1e414126c42f\") " pod="openshift-marketplace/redhat-operators-dn4m8" Jan 26 12:46:33 crc kubenswrapper[4844]: I0126 12:46:33.459268 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:33 crc kubenswrapper[4844]: I0126 12:46:33.459720 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a0ca290-d48e-4c46-8c36-1e414126c42f-utilities\") pod \"redhat-operators-dn4m8\" (UID: \"2a0ca290-d48e-4c46-8c36-1e414126c42f\") " pod="openshift-marketplace/redhat-operators-dn4m8" Jan 26 12:46:33 crc kubenswrapper[4844]: I0126 12:46:33.459739 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a0ca290-d48e-4c46-8c36-1e414126c42f-catalog-content\") pod \"redhat-operators-dn4m8\" (UID: \"2a0ca290-d48e-4c46-8c36-1e414126c42f\") " pod="openshift-marketplace/redhat-operators-dn4m8" Jan 26 12:46:33 crc kubenswrapper[4844]: E0126 12:46:33.459919 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:46:33.959901619 +0000 UTC m=+170.893269271 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dwwm9" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:33 crc kubenswrapper[4844]: I0126 12:46:33.482276 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8hw6\" (UniqueName: \"kubernetes.io/projected/2a0ca290-d48e-4c46-8c36-1e414126c42f-kube-api-access-h8hw6\") pod \"redhat-operators-dn4m8\" (UID: \"2a0ca290-d48e-4c46-8c36-1e414126c42f\") " pod="openshift-marketplace/redhat-operators-dn4m8" Jan 26 12:46:33 crc kubenswrapper[4844]: I0126 12:46:33.564007 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:33 crc kubenswrapper[4844]: E0126 12:46:33.564857 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:46:34.06483499 +0000 UTC m=+170.998202602 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:33 crc kubenswrapper[4844]: I0126 12:46:33.666064 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:33 crc kubenswrapper[4844]: E0126 12:46:33.666411 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:46:34.166395877 +0000 UTC m=+171.099763489 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dwwm9" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:33 crc kubenswrapper[4844]: I0126 12:46:33.669285 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dn4m8" Jan 26 12:46:33 crc kubenswrapper[4844]: I0126 12:46:33.682629 4844 patch_prober.go:28] interesting pod/router-default-5444994796-9pkgp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 12:46:33 crc kubenswrapper[4844]: [-]has-synced failed: reason withheld Jan 26 12:46:33 crc kubenswrapper[4844]: [+]process-running ok Jan 26 12:46:33 crc kubenswrapper[4844]: healthz check failed Jan 26 12:46:33 crc kubenswrapper[4844]: I0126 12:46:33.682682 4844 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9pkgp" podUID="46a01ba7-7357-471a-ae59-95361f2ce7ba" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 12:46:33 crc kubenswrapper[4844]: I0126 12:46:33.767231 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:33 crc kubenswrapper[4844]: E0126 12:46:33.767510 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:46:34.267494391 +0000 UTC m=+171.200862003 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:33 crc kubenswrapper[4844]: I0126 12:46:33.828801 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8zmdx"] Jan 26 12:46:33 crc kubenswrapper[4844]: W0126 12:46:33.835285 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod354b9578_ac43_4a15_831f_d6ae0bc5c449.slice/crio-41875b1ca388b4eea68433a1f3b4f41fd22f1f345e6270fd0a9053edaf170c42 WatchSource:0}: Error finding container 41875b1ca388b4eea68433a1f3b4f41fd22f1f345e6270fd0a9053edaf170c42: Status 404 returned error can't find the container with id 41875b1ca388b4eea68433a1f3b4f41fd22f1f345e6270fd0a9053edaf170c42 Jan 26 12:46:33 crc kubenswrapper[4844]: I0126 12:46:33.869732 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:33 crc kubenswrapper[4844]: E0126 12:46:33.869995 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" 
failed. No retries permitted until 2026-01-26 12:46:34.369980561 +0000 UTC m=+171.303348173 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dwwm9" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:33 crc kubenswrapper[4844]: I0126 12:46:33.871710 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8hdq2"] Jan 26 12:46:33 crc kubenswrapper[4844]: I0126 12:46:33.903863 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dn4m8"] Jan 26 12:46:33 crc kubenswrapper[4844]: W0126 12:46:33.916297 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2a0ca290_d48e_4c46_8c36_1e414126c42f.slice/crio-e95a1af869504bec027d5bd3be38c143eaf95185af0cf44b85ac7e3541cc025b WatchSource:0}: Error finding container e95a1af869504bec027d5bd3be38c143eaf95185af0cf44b85ac7e3541cc025b: Status 404 returned error can't find the container with id e95a1af869504bec027d5bd3be38c143eaf95185af0cf44b85ac7e3541cc025b Jan 26 12:46:33 crc kubenswrapper[4844]: I0126 12:46:33.970407 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:33 crc kubenswrapper[4844]: E0126 12:46:33.970821 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:46:34.470776618 +0000 UTC m=+171.404144630 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:34 crc kubenswrapper[4844]: I0126 12:46:34.073022 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:34 crc kubenswrapper[4844]: E0126 12:46:34.073722 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:46:34.573693388 +0000 UTC m=+171.507061030 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dwwm9" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:34 crc kubenswrapper[4844]: I0126 12:46:34.137219 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dn4m8" event={"ID":"2a0ca290-d48e-4c46-8c36-1e414126c42f","Type":"ContainerStarted","Data":"e95a1af869504bec027d5bd3be38c143eaf95185af0cf44b85ac7e3541cc025b"} Jan 26 12:46:34 crc kubenswrapper[4844]: I0126 12:46:34.139178 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8zmdx" event={"ID":"354b9578-ac43-4a15-831f-d6ae0bc5c449","Type":"ContainerStarted","Data":"41875b1ca388b4eea68433a1f3b4f41fd22f1f345e6270fd0a9053edaf170c42"} Jan 26 12:46:34 crc kubenswrapper[4844]: I0126 12:46:34.142634 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-djrt9" event={"ID":"637c7ba4-2cae-4d56-860f-ab82722169a2","Type":"ContainerStarted","Data":"2b8c0b752822432cdf6de68d71dc8c1bf82c8b4db91b6e57c38c243415ba2a9e"} Jan 26 12:46:34 crc kubenswrapper[4844]: I0126 12:46:34.144491 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8hdq2" event={"ID":"d60e5f01-76f1-47a0-8a7d-390457ce1b47","Type":"ContainerStarted","Data":"58e1012f91986119fa18986fc54d6c3054e57becf30854dc277b3bc2306a0315"} Jan 26 12:46:34 crc kubenswrapper[4844]: I0126 12:46:34.174827 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:34 crc kubenswrapper[4844]: E0126 12:46:34.175069 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:46:34.67501645 +0000 UTC m=+171.608384072 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:34 crc kubenswrapper[4844]: I0126 12:46:34.275820 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:34 crc kubenswrapper[4844]: E0126 12:46:34.276467 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:46:34.776436982 +0000 UTC m=+171.709804594 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dwwm9" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:34 crc kubenswrapper[4844]: I0126 12:46:34.376760 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:34 crc kubenswrapper[4844]: E0126 12:46:34.376949 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:46:34.876925622 +0000 UTC m=+171.810293234 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:34 crc kubenswrapper[4844]: I0126 12:46:34.377304 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:34 crc kubenswrapper[4844]: E0126 12:46:34.377657 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:46:34.877649431 +0000 UTC m=+171.811017043 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dwwm9" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:34 crc kubenswrapper[4844]: I0126 12:46:34.481621 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:34 crc kubenswrapper[4844]: E0126 12:46:34.481769 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:46:34.98174882 +0000 UTC m=+171.915116432 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:34 crc kubenswrapper[4844]: I0126 12:46:34.482134 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:34 crc kubenswrapper[4844]: E0126 12:46:34.482451 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:46:34.982439367 +0000 UTC m=+171.915806979 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dwwm9" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:34 crc kubenswrapper[4844]: I0126 12:46:34.584129 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:34 crc kubenswrapper[4844]: E0126 12:46:34.584343 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:46:35.084313612 +0000 UTC m=+172.017681224 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:34 crc kubenswrapper[4844]: I0126 12:46:34.584693 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:34 crc kubenswrapper[4844]: E0126 12:46:34.585020 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:46:35.08500996 +0000 UTC m=+172.018377572 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dwwm9" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:34 crc kubenswrapper[4844]: I0126 12:46:34.682316 4844 patch_prober.go:28] interesting pod/router-default-5444994796-9pkgp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 12:46:34 crc kubenswrapper[4844]: [-]has-synced failed: reason withheld Jan 26 12:46:34 crc kubenswrapper[4844]: [+]process-running ok Jan 26 12:46:34 crc kubenswrapper[4844]: healthz check failed Jan 26 12:46:34 crc kubenswrapper[4844]: I0126 12:46:34.682408 4844 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9pkgp" podUID="46a01ba7-7357-471a-ae59-95361f2ce7ba" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 12:46:34 crc kubenswrapper[4844]: I0126 12:46:34.685990 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:34 crc kubenswrapper[4844]: E0126 12:46:34.686247 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:46:35.186214537 +0000 UTC m=+172.119582199 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:34 crc kubenswrapper[4844]: I0126 12:46:34.686337 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:34 crc kubenswrapper[4844]: E0126 12:46:34.686786 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:46:35.186766921 +0000 UTC m=+172.120134533 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dwwm9" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:34 crc kubenswrapper[4844]: I0126 12:46:34.778170 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-rtks2" Jan 26 12:46:34 crc kubenswrapper[4844]: I0126 12:46:34.778254 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-rtks2" Jan 26 12:46:34 crc kubenswrapper[4844]: I0126 12:46:34.787156 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-rtks2" Jan 26 12:46:34 crc kubenswrapper[4844]: I0126 12:46:34.787551 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:34 crc kubenswrapper[4844]: E0126 12:46:34.787698 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:46:35.287676611 +0000 UTC m=+172.221044223 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:34 crc kubenswrapper[4844]: I0126 12:46:34.788003 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:34 crc kubenswrapper[4844]: E0126 12:46:34.788297 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:46:35.288287087 +0000 UTC m=+172.221654699 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dwwm9" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:34 crc kubenswrapper[4844]: I0126 12:46:34.840835 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 26 12:46:34 crc kubenswrapper[4844]: I0126 12:46:34.841802 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 12:46:34 crc kubenswrapper[4844]: I0126 12:46:34.843878 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 26 12:46:34 crc kubenswrapper[4844]: I0126 12:46:34.844381 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 26 12:46:34 crc kubenswrapper[4844]: I0126 12:46:34.850516 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 26 12:46:34 crc kubenswrapper[4844]: I0126 12:46:34.889734 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:34 crc kubenswrapper[4844]: I0126 12:46:34.889987 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d0b9d2bf-7a4b-49bb-8ed5-052e3c9c1f75-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"d0b9d2bf-7a4b-49bb-8ed5-052e3c9c1f75\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 12:46:34 crc kubenswrapper[4844]: I0126 12:46:34.890032 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d0b9d2bf-7a4b-49bb-8ed5-052e3c9c1f75-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"d0b9d2bf-7a4b-49bb-8ed5-052e3c9c1f75\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 12:46:34 crc kubenswrapper[4844]: E0126 12:46:34.890332 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:46:35.390290294 +0000 UTC m=+172.323657896 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:34 crc kubenswrapper[4844]: I0126 12:46:34.991812 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:34 crc kubenswrapper[4844]: I0126 12:46:34.992346 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d0b9d2bf-7a4b-49bb-8ed5-052e3c9c1f75-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"d0b9d2bf-7a4b-49bb-8ed5-052e3c9c1f75\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 12:46:34 crc kubenswrapper[4844]: E0126 12:46:34.992450 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:46:35.492422255 +0000 UTC m=+172.425790067 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dwwm9" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:34 crc kubenswrapper[4844]: I0126 12:46:34.992531 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d0b9d2bf-7a4b-49bb-8ed5-052e3c9c1f75-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"d0b9d2bf-7a4b-49bb-8ed5-052e3c9c1f75\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 12:46:34 crc kubenswrapper[4844]: I0126 12:46:34.992525 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d0b9d2bf-7a4b-49bb-8ed5-052e3c9c1f75-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"d0b9d2bf-7a4b-49bb-8ed5-052e3c9c1f75\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 12:46:35 crc kubenswrapper[4844]: I0126 12:46:35.011389 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d0b9d2bf-7a4b-49bb-8ed5-052e3c9c1f75-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"d0b9d2bf-7a4b-49bb-8ed5-052e3c9c1f75\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 12:46:35 crc kubenswrapper[4844]: I0126 12:46:35.093980 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:35 crc kubenswrapper[4844]: E0126 12:46:35.094361 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:46:35.59434199 +0000 UTC m=+172.527709622 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:35 crc kubenswrapper[4844]: I0126 12:46:35.156303 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-982kx" event={"ID":"1b7b1cea-f94c-4750-8db8-18d9b7f9fb70","Type":"ContainerStarted","Data":"c1e1ff99d7536b1b6d1127405a72c5e21ddbb3f138c1a788fce7003e2cde1af8"} Jan 26 12:46:35 crc kubenswrapper[4844]: I0126 12:46:35.161345 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-rtks2" Jan 26 12:46:35 crc kubenswrapper[4844]: I0126 12:46:35.164530 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 12:46:35 crc kubenswrapper[4844]: I0126 12:46:35.195925 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:35 crc kubenswrapper[4844]: E0126 12:46:35.197378 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:46:35.697364064 +0000 UTC m=+172.630731676 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dwwm9" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:35 crc kubenswrapper[4844]: I0126 12:46:35.297319 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:35 crc kubenswrapper[4844]: E0126 12:46:35.297455 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:46:35.797428392 +0000 UTC m=+172.730796004 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:35 crc kubenswrapper[4844]: I0126 12:46:35.297682 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:35 crc kubenswrapper[4844]: E0126 12:46:35.297952 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:46:35.797945066 +0000 UTC m=+172.731312678 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dwwm9" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:35 crc kubenswrapper[4844]: I0126 12:46:35.398833 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:35 crc kubenswrapper[4844]: E0126 12:46:35.398985 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:46:35.898963778 +0000 UTC m=+172.832331390 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:35 crc kubenswrapper[4844]: I0126 12:46:35.399728 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:35 crc kubenswrapper[4844]: E0126 12:46:35.400066 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:46:35.900052906 +0000 UTC m=+172.833420518 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dwwm9" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:35 crc kubenswrapper[4844]: I0126 12:46:35.414947 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 26 12:46:35 crc kubenswrapper[4844]: I0126 12:46:35.500147 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:35 crc kubenswrapper[4844]: E0126 12:46:35.500506 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:46:36.000474824 +0000 UTC m=+172.933842436 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:35 crc kubenswrapper[4844]: I0126 12:46:35.500623 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:35 crc kubenswrapper[4844]: E0126 12:46:35.500904 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:46:36.000891894 +0000 UTC m=+172.934259506 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dwwm9" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:35 crc kubenswrapper[4844]: I0126 12:46:35.601839 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:35 crc kubenswrapper[4844]: E0126 12:46:35.601950 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:46:36.101929237 +0000 UTC m=+173.035296849 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:35 crc kubenswrapper[4844]: I0126 12:46:35.602238 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:35 crc kubenswrapper[4844]: E0126 12:46:35.602682 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:46:36.102662155 +0000 UTC m=+173.036029767 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dwwm9" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:35 crc kubenswrapper[4844]: I0126 12:46:35.682371 4844 patch_prober.go:28] interesting pod/router-default-5444994796-9pkgp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 12:46:35 crc kubenswrapper[4844]: [-]has-synced failed: reason withheld Jan 26 12:46:35 crc kubenswrapper[4844]: [+]process-running ok Jan 26 12:46:35 crc kubenswrapper[4844]: healthz check failed Jan 26 12:46:35 crc kubenswrapper[4844]: I0126 12:46:35.682435 4844 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9pkgp" podUID="46a01ba7-7357-471a-ae59-95361f2ce7ba" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 12:46:35 crc kubenswrapper[4844]: I0126 12:46:35.703034 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:35 crc kubenswrapper[4844]: E0126 12:46:35.703299 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:46:36.203273288 +0000 UTC m=+173.136640940 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:35 crc kubenswrapper[4844]: I0126 12:46:35.703722 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:35 crc kubenswrapper[4844]: E0126 12:46:35.704274 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:46:36.204256053 +0000 UTC m=+173.137623705 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dwwm9" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:35 crc kubenswrapper[4844]: I0126 12:46:35.804511 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:35 crc kubenswrapper[4844]: E0126 12:46:35.804752 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:46:36.304713741 +0000 UTC m=+173.238081393 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:35 crc kubenswrapper[4844]: I0126 12:46:35.804865 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:35 crc kubenswrapper[4844]: E0126 12:46:35.805273 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:46:36.305256965 +0000 UTC m=+173.238624607 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dwwm9" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:35 crc kubenswrapper[4844]: I0126 12:46:35.817376 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j9vvp" Jan 26 12:46:35 crc kubenswrapper[4844]: I0126 12:46:35.817511 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j9vvp" Jan 26 12:46:35 crc kubenswrapper[4844]: I0126 12:46:35.829800 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j9vvp" Jan 26 12:46:35 crc kubenswrapper[4844]: I0126 12:46:35.905514 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:35 crc kubenswrapper[4844]: E0126 12:46:35.905732 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:46:36.405705543 +0000 UTC m=+173.339073165 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:35 crc kubenswrapper[4844]: I0126 12:46:35.906020 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:35 crc kubenswrapper[4844]: E0126 12:46:35.906385 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:46:36.40636905 +0000 UTC m=+173.339736682 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dwwm9" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:36 crc kubenswrapper[4844]: I0126 12:46:36.006950 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:36 crc kubenswrapper[4844]: E0126 12:46:36.007451 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:46:36.507386313 +0000 UTC m=+173.440753965 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:36 crc kubenswrapper[4844]: I0126 12:46:36.108847 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:36 crc kubenswrapper[4844]: E0126 12:46:36.109207 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:46:36.609191615 +0000 UTC m=+173.542559217 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dwwm9" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:36 crc kubenswrapper[4844]: I0126 12:46:36.159512 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"d0b9d2bf-7a4b-49bb-8ed5-052e3c9c1f75","Type":"ContainerStarted","Data":"29394b6af1d99198f4e2615d72ead3bc5f127650b8a110261aef78d68c474a8f"} Jan 26 12:46:36 crc kubenswrapper[4844]: I0126 12:46:36.168053 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j9vvp" Jan 26 12:46:36 crc kubenswrapper[4844]: I0126 12:46:36.210760 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:36 crc kubenswrapper[4844]: E0126 12:46:36.211035 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:46:36.710981018 +0000 UTC m=+173.644348640 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:36 crc kubenswrapper[4844]: I0126 12:46:36.211450 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:36 crc kubenswrapper[4844]: E0126 12:46:36.213409 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:46:36.713385019 +0000 UTC m=+173.646752661 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dwwm9" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:36 crc kubenswrapper[4844]: I0126 12:46:36.313076 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:36 crc kubenswrapper[4844]: E0126 12:46:36.313270 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:46:36.813242362 +0000 UTC m=+173.746609994 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:36 crc kubenswrapper[4844]: I0126 12:46:36.313366 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:36 crc kubenswrapper[4844]: E0126 12:46:36.313662 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:46:36.813651642 +0000 UTC m=+173.747019254 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dwwm9" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:36 crc kubenswrapper[4844]: I0126 12:46:36.365351 4844 patch_prober.go:28] interesting pod/machine-config-daemon-j7r9j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 12:46:36 crc kubenswrapper[4844]: I0126 12:46:36.365413 4844 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 12:46:36 crc kubenswrapper[4844]: I0126 12:46:36.413932 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:36 crc kubenswrapper[4844]: E0126 12:46:36.414292 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:46:36.914279145 +0000 UTC m=+173.847646757 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:36 crc kubenswrapper[4844]: I0126 12:46:36.515673 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:36 crc kubenswrapper[4844]: E0126 12:46:36.516050 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:46:37.016031046 +0000 UTC m=+173.949398678 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dwwm9" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:36 crc kubenswrapper[4844]: I0126 12:46:36.616874 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:36 crc kubenswrapper[4844]: E0126 12:46:36.617081 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:46:37.117050559 +0000 UTC m=+174.050418171 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:36 crc kubenswrapper[4844]: I0126 12:46:36.617212 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:36 crc kubenswrapper[4844]: E0126 12:46:36.617530 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:46:37.11751452 +0000 UTC m=+174.050882142 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dwwm9" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:36 crc kubenswrapper[4844]: I0126 12:46:36.682242 4844 patch_prober.go:28] interesting pod/router-default-5444994796-9pkgp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 12:46:36 crc kubenswrapper[4844]: [-]has-synced failed: reason withheld Jan 26 12:46:36 crc kubenswrapper[4844]: [+]process-running ok Jan 26 12:46:36 crc kubenswrapper[4844]: healthz check failed Jan 26 12:46:36 crc kubenswrapper[4844]: I0126 12:46:36.682297 4844 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9pkgp" podUID="46a01ba7-7357-471a-ae59-95361f2ce7ba" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 12:46:36 crc kubenswrapper[4844]: I0126 12:46:36.718363 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:36 crc kubenswrapper[4844]: E0126 12:46:36.718708 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:46:37.218681508 +0000 UTC m=+174.152049130 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:36 crc kubenswrapper[4844]: I0126 12:46:36.718780 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:36 crc kubenswrapper[4844]: E0126 12:46:36.719086 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:46:37.219070077 +0000 UTC m=+174.152437709 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dwwm9" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:36 crc kubenswrapper[4844]: I0126 12:46:36.820124 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:36 crc kubenswrapper[4844]: E0126 12:46:36.820341 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:46:37.320314185 +0000 UTC m=+174.253681817 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:36 crc kubenswrapper[4844]: I0126 12:46:36.820457 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:36 crc kubenswrapper[4844]: E0126 12:46:36.820833 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:46:37.320821199 +0000 UTC m=+174.254188821 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dwwm9" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:36 crc kubenswrapper[4844]: I0126 12:46:36.921361 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:36 crc kubenswrapper[4844]: E0126 12:46:36.921777 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:46:37.421759269 +0000 UTC m=+174.355126891 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:37 crc kubenswrapper[4844]: I0126 12:46:37.023718 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:37 crc kubenswrapper[4844]: E0126 12:46:37.024814 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:46:37.524143486 +0000 UTC m=+174.457511098 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dwwm9" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:37 crc kubenswrapper[4844]: I0126 12:46:37.125019 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:37 crc kubenswrapper[4844]: E0126 12:46:37.125508 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:46:37.625489877 +0000 UTC m=+174.558857499 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:37 crc kubenswrapper[4844]: I0126 12:46:37.165284 4844 generic.go:334] "Generic (PLEG): container finished" podID="1b7b1cea-f94c-4750-8db8-18d9b7f9fb70" containerID="c1e1ff99d7536b1b6d1127405a72c5e21ddbb3f138c1a788fce7003e2cde1af8" exitCode=0 Jan 26 12:46:37 crc kubenswrapper[4844]: I0126 12:46:37.166074 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-982kx" event={"ID":"1b7b1cea-f94c-4750-8db8-18d9b7f9fb70","Type":"ContainerDied","Data":"c1e1ff99d7536b1b6d1127405a72c5e21ddbb3f138c1a788fce7003e2cde1af8"} Jan 26 12:46:37 crc kubenswrapper[4844]: I0126 12:46:37.226276 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:37 crc kubenswrapper[4844]: E0126 12:46:37.226823 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:46:37.726807807 +0000 UTC m=+174.660175419 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dwwm9" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:37 crc kubenswrapper[4844]: I0126 12:46:37.327634 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:37 crc kubenswrapper[4844]: E0126 12:46:37.327904 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:46:37.827889702 +0000 UTC m=+174.761257314 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:37 crc kubenswrapper[4844]: I0126 12:46:37.429764 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:37 crc kubenswrapper[4844]: E0126 12:46:37.430183 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:46:37.930168017 +0000 UTC m=+174.863535619 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dwwm9" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:37 crc kubenswrapper[4844]: I0126 12:46:37.456882 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 26 12:46:37 crc kubenswrapper[4844]: I0126 12:46:37.458140 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 12:46:37 crc kubenswrapper[4844]: I0126 12:46:37.460087 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 26 12:46:37 crc kubenswrapper[4844]: I0126 12:46:37.460720 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 26 12:46:37 crc kubenswrapper[4844]: I0126 12:46:37.461047 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 26 12:46:37 crc kubenswrapper[4844]: I0126 12:46:37.531252 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:37 crc kubenswrapper[4844]: E0126 12:46:37.531428 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:46:38.031404325 +0000 UTC m=+174.964771937 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:37 crc kubenswrapper[4844]: I0126 12:46:37.531526 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f14c711e-6ba8-4e74-99e5-b106b5caca49-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"f14c711e-6ba8-4e74-99e5-b106b5caca49\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 12:46:37 crc kubenswrapper[4844]: I0126 12:46:37.531575 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:37 crc kubenswrapper[4844]: I0126 12:46:37.531650 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f14c711e-6ba8-4e74-99e5-b106b5caca49-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"f14c711e-6ba8-4e74-99e5-b106b5caca49\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 12:46:37 crc kubenswrapper[4844]: E0126 12:46:37.531926 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:46:38.031917077 +0000 UTC m=+174.965284689 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dwwm9" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:37 crc kubenswrapper[4844]: I0126 12:46:37.632911 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:37 crc kubenswrapper[4844]: I0126 12:46:37.633231 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f14c711e-6ba8-4e74-99e5-b106b5caca49-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"f14c711e-6ba8-4e74-99e5-b106b5caca49\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 12:46:37 crc kubenswrapper[4844]: I0126 12:46:37.633270 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f14c711e-6ba8-4e74-99e5-b106b5caca49-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"f14c711e-6ba8-4e74-99e5-b106b5caca49\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 12:46:37 crc kubenswrapper[4844]: E0126 12:46:37.633658 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:46:38.133644328 +0000 UTC m=+175.067011940 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:37 crc kubenswrapper[4844]: I0126 12:46:37.633693 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f14c711e-6ba8-4e74-99e5-b106b5caca49-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"f14c711e-6ba8-4e74-99e5-b106b5caca49\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 12:46:37 crc kubenswrapper[4844]: I0126 12:46:37.661291 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f14c711e-6ba8-4e74-99e5-b106b5caca49-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"f14c711e-6ba8-4e74-99e5-b106b5caca49\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 12:46:37 crc kubenswrapper[4844]: I0126 12:46:37.681564 4844 patch_prober.go:28] interesting pod/router-default-5444994796-9pkgp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 12:46:37 crc kubenswrapper[4844]: [-]has-synced failed: reason withheld Jan 26 12:46:37 crc kubenswrapper[4844]: [+]process-running ok Jan 26 12:46:37 crc kubenswrapper[4844]: healthz check failed Jan 26 12:46:37 crc kubenswrapper[4844]: I0126 12:46:37.681663 4844 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9pkgp" podUID="46a01ba7-7357-471a-ae59-95361f2ce7ba" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 12:46:37 crc kubenswrapper[4844]: I0126 12:46:37.734765 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:37 crc kubenswrapper[4844]: E0126 12:46:37.735273 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:46:38.235252256 +0000 UTC m=+175.168619888 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dwwm9" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:37 crc kubenswrapper[4844]: I0126 12:46:37.780426 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 12:46:37 crc kubenswrapper[4844]: I0126 12:46:37.836406 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:37 crc kubenswrapper[4844]: E0126 12:46:37.836652 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:46:38.336616607 +0000 UTC m=+175.269984259 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:37 crc kubenswrapper[4844]: I0126 12:46:37.836926 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:37 crc kubenswrapper[4844]: E0126 12:46:37.837492 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:46:38.337466339 +0000 UTC m=+175.270833991 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dwwm9" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:37 crc kubenswrapper[4844]: I0126 12:46:37.938455 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:37 crc kubenswrapper[4844]: E0126 12:46:37.938706 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:46:38.438686236 +0000 UTC m=+175.372053848 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:37 crc kubenswrapper[4844]: I0126 12:46:37.938981 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:37 crc kubenswrapper[4844]: E0126 12:46:37.939347 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:46:38.439338222 +0000 UTC m=+175.372705834 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dwwm9" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:38 crc kubenswrapper[4844]: I0126 12:46:38.026155 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 26 12:46:38 crc kubenswrapper[4844]: I0126 12:46:38.041300 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:38 crc kubenswrapper[4844]: E0126 12:46:38.041614 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:46:38.541581886 +0000 UTC m=+175.474949498 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:38 crc kubenswrapper[4844]: I0126 12:46:38.142446 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:38 crc kubenswrapper[4844]: E0126 12:46:38.142801 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:46:38.642788624 +0000 UTC m=+175.576156236 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dwwm9" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:38 crc kubenswrapper[4844]: I0126 12:46:38.174152 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8hnm5" event={"ID":"1f204088-0679-4c31-bd2b-848fc4f93b21","Type":"ContainerStarted","Data":"938d678e79dc74debbf11928bc5ab4b890aebb43e137ea8049db0561bd0b2da2"} Jan 26 12:46:38 crc kubenswrapper[4844]: I0126 12:46:38.175397 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"f14c711e-6ba8-4e74-99e5-b106b5caca49","Type":"ContainerStarted","Data":"53b4ac8de8d4f765b6b3bf68e0d9bf817f18cf68a89147a8f48d6b55ca255b49"} Jan 26 12:46:38 crc kubenswrapper[4844]: I0126 12:46:38.177876 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-sgslp" event={"ID":"3f2d657c-0a0d-4671-a720-ef689ccf2120","Type":"ContainerStarted","Data":"3bb265bfc22189a32ddb700c58d7f6c1ea8e5e134f69722a5a33f81c95422734"} Jan 26 12:46:38 crc kubenswrapper[4844]: I0126 12:46:38.179438 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lhjls" event={"ID":"a37a9c59-7c20-4326-b280-9dbd2d633e0b","Type":"ContainerStarted","Data":"cfe1f5826adb5e70eca6e69b3a2d46e940585099c1e8c130e79e5312de77dc33"} Jan 26 12:46:38 crc kubenswrapper[4844]: I0126 12:46:38.197077 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6zcv5" podStartSLOduration=151.197054859 podStartE2EDuration="2m31.197054859s" podCreationTimestamp="2026-01-26 12:44:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:46:38.194809454 +0000 UTC 
m=+175.128177106" watchObservedRunningTime="2026-01-26 12:46:38.197054859 +0000 UTC m=+175.130422481" Jan 26 12:46:38 crc kubenswrapper[4844]: I0126 12:46:38.243735 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:38 crc kubenswrapper[4844]: E0126 12:46:38.244095 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:46:38.744058843 +0000 UTC m=+175.677426475 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:38 crc kubenswrapper[4844]: I0126 12:46:38.245227 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:38 crc kubenswrapper[4844]: E0126 12:46:38.247252 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:46:38.747240803 +0000 UTC m=+175.680608425 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dwwm9" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:38 crc kubenswrapper[4844]: I0126 12:46:38.346328 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:38 crc kubenswrapper[4844]: E0126 12:46:38.346905 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:46:38.846884741 +0000 UTC m=+175.780252363 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:38 crc kubenswrapper[4844]: I0126 12:46:38.347098 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:38 crc kubenswrapper[4844]: E0126 12:46:38.347447 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:46:38.847436265 +0000 UTC m=+175.780803887 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dwwm9" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:38 crc kubenswrapper[4844]: I0126 12:46:38.449862 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:38 crc kubenswrapper[4844]: E0126 12:46:38.450693 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:46:38.950581952 +0000 UTC m=+175.883949574 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:38 crc kubenswrapper[4844]: I0126 12:46:38.551794 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:38 crc kubenswrapper[4844]: E0126 12:46:38.552330 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:46:39.052315702 +0000 UTC m=+175.985683314 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dwwm9" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:38 crc kubenswrapper[4844]: I0126 12:46:38.653053 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:38 crc kubenswrapper[4844]: E0126 12:46:38.653451 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:46:39.153428067 +0000 UTC m=+176.086795689 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:38 crc kubenswrapper[4844]: I0126 12:46:38.653503 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:38 crc kubenswrapper[4844]: E0126 12:46:38.654200 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:46:39.154180476 +0000 UTC m=+176.087548088 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dwwm9" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:38 crc kubenswrapper[4844]: I0126 12:46:38.692410 4844 patch_prober.go:28] interesting pod/router-default-5444994796-9pkgp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 12:46:38 crc kubenswrapper[4844]: [-]has-synced failed: reason withheld Jan 26 12:46:38 crc kubenswrapper[4844]: [+]process-running ok Jan 26 12:46:38 crc kubenswrapper[4844]: healthz check failed Jan 26 12:46:38 crc kubenswrapper[4844]: I0126 12:46:38.692463 4844 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9pkgp" podUID="46a01ba7-7357-471a-ae59-95361f2ce7ba" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 12:46:38 crc kubenswrapper[4844]: I0126 12:46:38.761020 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:38 crc kubenswrapper[4844]: E0126 12:46:38.761245 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:46:39.2612129 +0000 UTC m=+176.194580532 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:38 crc kubenswrapper[4844]: I0126 12:46:38.761675 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:38 crc kubenswrapper[4844]: E0126 12:46:38.761950 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:46:39.261938829 +0000 UTC m=+176.195306441 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dwwm9" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:38 crc kubenswrapper[4844]: I0126 12:46:38.818149 4844 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 26 12:46:38 crc kubenswrapper[4844]: I0126 12:46:38.863518 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:38 crc kubenswrapper[4844]: E0126 12:46:38.863717 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:46:39.363689431 +0000 UTC m=+176.297057043 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:38 crc kubenswrapper[4844]: I0126 12:46:38.863824 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:38 crc kubenswrapper[4844]: E0126 12:46:38.864124 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:46:39.364111781 +0000 UTC m=+176.297479393 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dwwm9" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:38 crc kubenswrapper[4844]: I0126 12:46:38.964693 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:38 crc kubenswrapper[4844]: E0126 12:46:38.964886 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:46:39.464856927 +0000 UTC m=+176.398224539 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:38 crc kubenswrapper[4844]: I0126 12:46:38.965314 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:38 crc kubenswrapper[4844]: E0126 12:46:38.965658 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:46:39.465647486 +0000 UTC m=+176.399015098 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dwwm9" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:39 crc kubenswrapper[4844]: I0126 12:46:39.067008 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:39 crc kubenswrapper[4844]: E0126 12:46:39.067175 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:46:39.567153532 +0000 UTC m=+176.500521154 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:39 crc kubenswrapper[4844]: I0126 12:46:39.067372 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:39 crc kubenswrapper[4844]: E0126 12:46:39.067835 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:46:39.567822738 +0000 UTC m=+176.501190350 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dwwm9" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:39 crc kubenswrapper[4844]: I0126 12:46:39.168513 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:39 crc kubenswrapper[4844]: E0126 12:46:39.168722 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:46:39.668695897 +0000 UTC m=+176.602063509 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:39 crc kubenswrapper[4844]: I0126 12:46:39.168832 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:39 crc kubenswrapper[4844]: E0126 12:46:39.169154 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:46:39.669146839 +0000 UTC m=+176.602514451 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dwwm9" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:39 crc kubenswrapper[4844]: I0126 12:46:39.186408 4844 generic.go:334] "Generic (PLEG): container finished" podID="a37a9c59-7c20-4326-b280-9dbd2d633e0b" containerID="cfe1f5826adb5e70eca6e69b3a2d46e940585099c1e8c130e79e5312de77dc33" exitCode=0 Jan 26 12:46:39 crc kubenswrapper[4844]: I0126 12:46:39.186495 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lhjls" event={"ID":"a37a9c59-7c20-4326-b280-9dbd2d633e0b","Type":"ContainerDied","Data":"cfe1f5826adb5e70eca6e69b3a2d46e940585099c1e8c130e79e5312de77dc33"} Jan 26 12:46:39 crc kubenswrapper[4844]: I0126 12:46:39.188908 4844 generic.go:334] "Generic (PLEG): container finished" podID="8ddfeacb-de87-47d6-913e-6c2333a7df93" containerID="06f2cea9bc9ade7d2c232187e6bbf792b20d8e3c442b073f0f685cdcdd43972d" exitCode=0 Jan 26 12:46:39 crc kubenswrapper[4844]: I0126 12:46:39.188960 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bnlhz" event={"ID":"8ddfeacb-de87-47d6-913e-6c2333a7df93","Type":"ContainerDied","Data":"06f2cea9bc9ade7d2c232187e6bbf792b20d8e3c442b073f0f685cdcdd43972d"} Jan 26 12:46:39 crc kubenswrapper[4844]: I0126 12:46:39.189527 4844 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 12:46:39 crc kubenswrapper[4844]: I0126 12:46:39.191690 4844 generic.go:334] "Generic (PLEG): container finished" podID="d60e5f01-76f1-47a0-8a7d-390457ce1b47" containerID="f8b54dd269f366df04fc16928a0bc3b77009ecace479a1dfc5409e8affd98604" exitCode=0 Jan 26 12:46:39 crc kubenswrapper[4844]: I0126 12:46:39.191760 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8hdq2" 
event={"ID":"d60e5f01-76f1-47a0-8a7d-390457ce1b47","Type":"ContainerDied","Data":"f8b54dd269f366df04fc16928a0bc3b77009ecace479a1dfc5409e8affd98604"} Jan 26 12:46:39 crc kubenswrapper[4844]: I0126 12:46:39.194074 4844 generic.go:334] "Generic (PLEG): container finished" podID="2a0ca290-d48e-4c46-8c36-1e414126c42f" containerID="bfe63e859e48fcf824a06504bfc9bcf6807a460ed1665a325ef8ddb893ad001f" exitCode=0 Jan 26 12:46:39 crc kubenswrapper[4844]: I0126 12:46:39.194142 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dn4m8" event={"ID":"2a0ca290-d48e-4c46-8c36-1e414126c42f","Type":"ContainerDied","Data":"bfe63e859e48fcf824a06504bfc9bcf6807a460ed1665a325ef8ddb893ad001f"} Jan 26 12:46:39 crc kubenswrapper[4844]: I0126 12:46:39.196726 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-gxnj7" event={"ID":"c69496f6-7f67-4cca-9c9f-420e5567b165","Type":"ContainerStarted","Data":"c7b5c4a82e39849aee7461dbcbee31e20e320caa2c0ce5900885b11f601a0a85"} Jan 26 12:46:39 crc kubenswrapper[4844]: I0126 12:46:39.201336 4844 generic.go:334] "Generic (PLEG): container finished" podID="0b95a697-eeb9-444d-83ed-3484a41f5dd1" containerID="174c56e0839b5e5dce7465d4fb7c8f05272878d2f83732f894eaf8713e0f80db" exitCode=0 Jan 26 12:46:39 crc kubenswrapper[4844]: I0126 12:46:39.201410 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490525-mqbpl" event={"ID":"0b95a697-eeb9-444d-83ed-3484a41f5dd1","Type":"ContainerDied","Data":"174c56e0839b5e5dce7465d4fb7c8f05272878d2f83732f894eaf8713e0f80db"} Jan 26 12:46:39 crc kubenswrapper[4844]: I0126 12:46:39.203761 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"d0b9d2bf-7a4b-49bb-8ed5-052e3c9c1f75","Type":"ContainerStarted","Data":"92ff6673546c617a00e027e3dbfa11ea41ab86dbf13deeef94cbfb006bde086e"} Jan 26 12:46:39 crc kubenswrapper[4844]: I0126 12:46:39.210336 4844 generic.go:334] "Generic (PLEG): container finished" podID="1f204088-0679-4c31-bd2b-848fc4f93b21" containerID="938d678e79dc74debbf11928bc5ab4b890aebb43e137ea8049db0561bd0b2da2" exitCode=0 Jan 26 12:46:39 crc kubenswrapper[4844]: I0126 12:46:39.210396 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8hnm5" event={"ID":"1f204088-0679-4c31-bd2b-848fc4f93b21","Type":"ContainerDied","Data":"938d678e79dc74debbf11928bc5ab4b890aebb43e137ea8049db0561bd0b2da2"} Jan 26 12:46:39 crc kubenswrapper[4844]: I0126 12:46:39.213541 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-75rtp" event={"ID":"49ce2590-a0c6-4e75-af35-73bb211e6829","Type":"ContainerStarted","Data":"9dfa281f8904ec494f5b226c37a69026e32d38e8db6d6c50ea88ef25c9b3d951"} Jan 26 12:46:39 crc kubenswrapper[4844]: I0126 12:46:39.218749 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-b6r5v" event={"ID":"8cccdbda-6833-4c8f-b709-ab1f617e2153","Type":"ContainerStarted","Data":"394d4fc0354311297616f12a4b60c20196033e29540652021dc255fa57ab510f"} Jan 26 12:46:39 crc kubenswrapper[4844]: I0126 12:46:39.227549 4844 generic.go:334] "Generic (PLEG): container finished" podID="637c7ba4-2cae-4d56-860f-ab82722169a2" containerID="a6de43053e99ae8a42f4c96cac94a588675aeae61cfd1b879315b5c949fdccd1" exitCode=0 Jan 26 12:46:39 crc kubenswrapper[4844]: I0126 12:46:39.227753 4844 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-djrt9" event={"ID":"637c7ba4-2cae-4d56-860f-ab82722169a2","Type":"ContainerDied","Data":"a6de43053e99ae8a42f4c96cac94a588675aeae61cfd1b879315b5c949fdccd1"} Jan 26 12:46:39 crc kubenswrapper[4844]: I0126 12:46:39.240615 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-r8j24" event={"ID":"d864ad06-5a3e-4f38-a16a-22de2e50ce8c","Type":"ContainerStarted","Data":"27d5512f5cb84a5d7ebfa21bcfffe12cb7cdef40777533dab058447460d0e39e"} Jan 26 12:46:39 crc kubenswrapper[4844]: I0126 12:46:39.241308 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-r8j24" Jan 26 12:46:39 crc kubenswrapper[4844]: I0126 12:46:39.257180 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-r8j24" Jan 26 12:46:39 crc kubenswrapper[4844]: I0126 12:46:39.264626 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"f14c711e-6ba8-4e74-99e5-b106b5caca49","Type":"ContainerStarted","Data":"b4175a87a100aa32283ebe237fe04f6880b57ccd013ac80c26062fe717d52e5d"} Jan 26 12:46:39 crc kubenswrapper[4844]: I0126 12:46:39.270285 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:39 crc kubenswrapper[4844]: I0126 12:46:39.270660 4844 generic.go:334] "Generic (PLEG): container finished" podID="354b9578-ac43-4a15-831f-d6ae0bc5c449" containerID="ebce76073ebe8e8e5c7894d5e2235ae3f5a6c42f07370df802e160ce4920cee0" exitCode=0 Jan 26 12:46:39 crc kubenswrapper[4844]: E0126 12:46:39.271614 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:46:39.771565557 +0000 UTC m=+176.704933179 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:39 crc kubenswrapper[4844]: I0126 12:46:39.271795 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8zmdx" event={"ID":"354b9578-ac43-4a15-831f-d6ae0bc5c449","Type":"ContainerDied","Data":"ebce76073ebe8e8e5c7894d5e2235ae3f5a6c42f07370df802e160ce4920cee0"} Jan 26 12:46:39 crc kubenswrapper[4844]: I0126 12:46:39.273435 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pmxvg" Jan 26 12:46:39 crc kubenswrapper[4844]: I0126 12:46:39.288364 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=5.288340949 podStartE2EDuration="5.288340949s" podCreationTimestamp="2026-01-26 12:46:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:46:39.286517654 +0000 UTC m=+176.219885266" watchObservedRunningTime="2026-01-26 12:46:39.288340949 +0000 UTC m=+176.221708561" Jan 26 12:46:39 crc kubenswrapper[4844]: I0126 12:46:39.371797 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:39 crc kubenswrapper[4844]: E0126 12:46:39.372207 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 12:46:39.87218571 +0000 UTC m=+176.805553382 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dwwm9" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:39 crc kubenswrapper[4844]: I0126 12:46:39.395965 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-75rtp" podStartSLOduration=152.395946668 podStartE2EDuration="2m32.395946668s" podCreationTimestamp="2026-01-26 12:44:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:46:39.393770103 +0000 UTC m=+176.327137715" watchObservedRunningTime="2026-01-26 12:46:39.395946668 +0000 UTC m=+176.329314280" Jan 26 12:46:39 crc kubenswrapper[4844]: I0126 12:46:39.414643 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hpxdc" podStartSLOduration=152.414628708 podStartE2EDuration="2m32.414628708s" podCreationTimestamp="2026-01-26 12:44:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:46:39.411981391 +0000 UTC m=+176.345349003" watchObservedRunningTime="2026-01-26 12:46:39.414628708 +0000 UTC m=+176.347996320" Jan 26 12:46:39 crc kubenswrapper[4844]: I0126 12:46:39.465416 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-r8j24" podStartSLOduration=21.465398715 podStartE2EDuration="21.465398715s" podCreationTimestamp="2026-01-26 12:46:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:46:39.461851407 +0000 UTC m=+176.395219039" watchObservedRunningTime="2026-01-26 12:46:39.465398715 +0000 UTC m=+176.398766327" Jan 26 12:46:39 crc kubenswrapper[4844]: I0126 12:46:39.473037 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:39 crc kubenswrapper[4844]: E0126 12:46:39.473405 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 12:46:39.973390657 +0000 UTC m=+176.906758269 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 12:46:39 crc kubenswrapper[4844]: I0126 12:46:39.492478 4844 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-26T12:46:38.818175804Z","Handler":null,"Name":""} Jan 26 12:46:39 crc kubenswrapper[4844]: I0126 12:46:39.507705 4844 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 26 12:46:39 crc kubenswrapper[4844]: I0126 12:46:39.507763 4844 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 26 12:46:39 crc kubenswrapper[4844]: I0126 12:46:39.561396 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=2.561378702 podStartE2EDuration="2.561378702s" podCreationTimestamp="2026-01-26 12:46:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:46:39.55693601 +0000 UTC m=+176.490303622" watchObservedRunningTime="2026-01-26 12:46:39.561378702 +0000 UTC m=+176.494746314" Jan 26 12:46:39 crc kubenswrapper[4844]: I0126 12:46:39.574511 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:39 crc kubenswrapper[4844]: I0126 12:46:39.608827 4844 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 26 12:46:39 crc kubenswrapper[4844]: I0126 12:46:39.608899 4844 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:39 crc kubenswrapper[4844]: I0126 12:46:39.621521 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-sgslp" podStartSLOduration=152.621502745 podStartE2EDuration="2m32.621502745s" podCreationTimestamp="2026-01-26 12:44:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:46:39.619384662 +0000 UTC m=+176.552752274" watchObservedRunningTime="2026-01-26 12:46:39.621502745 +0000 UTC m=+176.554870367" Jan 26 12:46:39 crc kubenswrapper[4844]: I0126 12:46:39.683982 4844 patch_prober.go:28] interesting pod/router-default-5444994796-9pkgp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 12:46:39 crc kubenswrapper[4844]: [-]has-synced failed: reason withheld Jan 26 12:46:39 crc kubenswrapper[4844]: [+]process-running ok Jan 26 12:46:39 crc kubenswrapper[4844]: healthz check failed Jan 26 12:46:39 crc kubenswrapper[4844]: I0126 12:46:39.684041 4844 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9pkgp" podUID="46a01ba7-7357-471a-ae59-95361f2ce7ba" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 12:46:39 crc kubenswrapper[4844]: I0126 12:46:39.761972 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dwwm9\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:39 crc kubenswrapper[4844]: I0126 12:46:39.769233 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pmxvg" podStartSLOduration=152.769213603 podStartE2EDuration="2m32.769213603s" podCreationTimestamp="2026-01-26 12:44:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:46:39.768545007 +0000 UTC m=+176.701912639" watchObservedRunningTime="2026-01-26 12:46:39.769213603 +0000 UTC m=+176.702581215" Jan 26 12:46:39 crc kubenswrapper[4844]: I0126 12:46:39.770323 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-7fzwr" podStartSLOduration=152.770312191 podStartE2EDuration="2m32.770312191s" podCreationTimestamp="2026-01-26 12:44:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-26 12:46:39.744107321 +0000 UTC m=+176.677474953" watchObservedRunningTime="2026-01-26 12:46:39.770312191 +0000 UTC m=+176.703679803" Jan 26 12:46:39 crc kubenswrapper[4844]: I0126 12:46:39.779628 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 12:46:39 crc kubenswrapper[4844]: I0126 12:46:39.866353 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 26 12:46:39 crc kubenswrapper[4844]: I0126 12:46:39.970964 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:40 crc kubenswrapper[4844]: I0126 12:46:40.197728 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-dwwm9"] Jan 26 12:46:40 crc kubenswrapper[4844]: I0126 12:46:40.277799 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-gxnj7" event={"ID":"c69496f6-7f67-4cca-9c9f-420e5567b165","Type":"ContainerStarted","Data":"f69c6fbaf36cd11260b7200c5022031cf73db7ed78d45140e4226272a7806c85"} Jan 26 12:46:40 crc kubenswrapper[4844]: I0126 12:46:40.280777 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-b6r5v" event={"ID":"8cccdbda-6833-4c8f-b709-ab1f617e2153","Type":"ContainerStarted","Data":"4deb2ad9ec054dfbf731865a910a54ba74c364a402a695b1d5f6d6ea693dcd05"} Jan 26 12:46:40 crc kubenswrapper[4844]: I0126 12:46:40.283161 4844 generic.go:334] "Generic (PLEG): container finished" podID="f14c711e-6ba8-4e74-99e5-b106b5caca49" containerID="b4175a87a100aa32283ebe237fe04f6880b57ccd013ac80c26062fe717d52e5d" exitCode=0 Jan 26 12:46:40 crc kubenswrapper[4844]: I0126 12:46:40.283288 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"f14c711e-6ba8-4e74-99e5-b106b5caca49","Type":"ContainerDied","Data":"b4175a87a100aa32283ebe237fe04f6880b57ccd013ac80c26062fe717d52e5d"} Jan 26 12:46:40 crc kubenswrapper[4844]: I0126 12:46:40.286060 4844 generic.go:334] "Generic (PLEG): container finished" podID="d0b9d2bf-7a4b-49bb-8ed5-052e3c9c1f75" containerID="92ff6673546c617a00e027e3dbfa11ea41ab86dbf13deeef94cbfb006bde086e" exitCode=0 Jan 26 12:46:40 crc kubenswrapper[4844]: I0126 12:46:40.286155 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"d0b9d2bf-7a4b-49bb-8ed5-052e3c9c1f75","Type":"ContainerDied","Data":"92ff6673546c617a00e027e3dbfa11ea41ab86dbf13deeef94cbfb006bde086e"} Jan 26 12:46:40 crc kubenswrapper[4844]: I0126 12:46:40.288703 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" 
event={"ID":"e17e004d-fb45-4c4f-896f-6f650a0f7379","Type":"ContainerStarted","Data":"d55b4cef7e498e926d6cda39a59add51c0022e3a128d03e6436baf21399b85e2"} Jan 26 12:46:40 crc kubenswrapper[4844]: I0126 12:46:40.302688 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-gxnj7" podStartSLOduration=153.302666211 podStartE2EDuration="2m33.302666211s" podCreationTimestamp="2026-01-26 12:44:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:46:40.295639644 +0000 UTC m=+177.229007266" watchObservedRunningTime="2026-01-26 12:46:40.302666211 +0000 UTC m=+177.236033813" Jan 26 12:46:40 crc kubenswrapper[4844]: I0126 12:46:40.478141 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490525-mqbpl" Jan 26 12:46:40 crc kubenswrapper[4844]: I0126 12:46:40.593719 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0b95a697-eeb9-444d-83ed-3484a41f5dd1-secret-volume\") pod \"0b95a697-eeb9-444d-83ed-3484a41f5dd1\" (UID: \"0b95a697-eeb9-444d-83ed-3484a41f5dd1\") " Jan 26 12:46:40 crc kubenswrapper[4844]: I0126 12:46:40.593817 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5k7dt\" (UniqueName: \"kubernetes.io/projected/0b95a697-eeb9-444d-83ed-3484a41f5dd1-kube-api-access-5k7dt\") pod \"0b95a697-eeb9-444d-83ed-3484a41f5dd1\" (UID: \"0b95a697-eeb9-444d-83ed-3484a41f5dd1\") " Jan 26 12:46:40 crc kubenswrapper[4844]: I0126 12:46:40.593863 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0b95a697-eeb9-444d-83ed-3484a41f5dd1-config-volume\") pod \"0b95a697-eeb9-444d-83ed-3484a41f5dd1\" (UID: \"0b95a697-eeb9-444d-83ed-3484a41f5dd1\") " Jan 26 12:46:40 crc kubenswrapper[4844]: I0126 12:46:40.594284 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b95a697-eeb9-444d-83ed-3484a41f5dd1-config-volume" (OuterVolumeSpecName: "config-volume") pod "0b95a697-eeb9-444d-83ed-3484a41f5dd1" (UID: "0b95a697-eeb9-444d-83ed-3484a41f5dd1"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:46:40 crc kubenswrapper[4844]: I0126 12:46:40.600690 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b95a697-eeb9-444d-83ed-3484a41f5dd1-kube-api-access-5k7dt" (OuterVolumeSpecName: "kube-api-access-5k7dt") pod "0b95a697-eeb9-444d-83ed-3484a41f5dd1" (UID: "0b95a697-eeb9-444d-83ed-3484a41f5dd1"). InnerVolumeSpecName "kube-api-access-5k7dt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:46:40 crc kubenswrapper[4844]: I0126 12:46:40.600696 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b95a697-eeb9-444d-83ed-3484a41f5dd1-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "0b95a697-eeb9-444d-83ed-3484a41f5dd1" (UID: "0b95a697-eeb9-444d-83ed-3484a41f5dd1"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:46:40 crc kubenswrapper[4844]: I0126 12:46:40.682718 4844 patch_prober.go:28] interesting pod/router-default-5444994796-9pkgp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 12:46:40 crc kubenswrapper[4844]: [-]has-synced failed: reason withheld Jan 26 12:46:40 crc kubenswrapper[4844]: [+]process-running ok Jan 26 12:46:40 crc kubenswrapper[4844]: healthz check failed Jan 26 12:46:40 crc kubenswrapper[4844]: I0126 12:46:40.682769 4844 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9pkgp" podUID="46a01ba7-7357-471a-ae59-95361f2ce7ba" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 12:46:40 crc kubenswrapper[4844]: I0126 12:46:40.696738 4844 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0b95a697-eeb9-444d-83ed-3484a41f5dd1-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 12:46:40 crc kubenswrapper[4844]: I0126 12:46:40.696826 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5k7dt\" (UniqueName: \"kubernetes.io/projected/0b95a697-eeb9-444d-83ed-3484a41f5dd1-kube-api-access-5k7dt\") on node \"crc\" DevicePath \"\"" Jan 26 12:46:40 crc kubenswrapper[4844]: I0126 12:46:40.696840 4844 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0b95a697-eeb9-444d-83ed-3484a41f5dd1-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 12:46:40 crc kubenswrapper[4844]: I0126 12:46:40.705946 4844 patch_prober.go:28] interesting pod/console-f9d7485db-vhsn2 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.6:8443/health\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Jan 26 12:46:40 crc kubenswrapper[4844]: I0126 12:46:40.705999 4844 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-vhsn2" podUID="8269d7d3-678d-44d5-885e-c5716e8024d8" containerName="console" probeResult="failure" output="Get \"https://10.217.0.6:8443/health\": dial tcp 10.217.0.6:8443: connect: connection refused" Jan 26 12:46:41 crc kubenswrapper[4844]: I0126 12:46:41.298958 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-b6r5v" event={"ID":"8cccdbda-6833-4c8f-b709-ab1f617e2153","Type":"ContainerStarted","Data":"f1eef5187a41a68361d8eccdbbc38c7167b36d612f11461462ca2b320d6bdbbe"} Jan 26 12:46:41 crc kubenswrapper[4844]: I0126 12:46:41.302766 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" event={"ID":"e17e004d-fb45-4c4f-896f-6f650a0f7379","Type":"ContainerStarted","Data":"fde35df5fc2ed9d745bb4b922f6db22da8295b8ae1cab805f9aaa3d69cba6f1a"} Jan 26 12:46:41 crc kubenswrapper[4844]: I0126 12:46:41.304402 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490525-mqbpl" event={"ID":"0b95a697-eeb9-444d-83ed-3484a41f5dd1","Type":"ContainerDied","Data":"82235de4c874a39dd19ccd9cde8d593c7a4f516ec01cf5ad69779e2b1422f365"} Jan 26 12:46:41 crc kubenswrapper[4844]: I0126 12:46:41.304438 4844 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="82235de4c874a39dd19ccd9cde8d593c7a4f516ec01cf5ad69779e2b1422f365" Jan 26 12:46:41 crc kubenswrapper[4844]: I0126 12:46:41.304442 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490525-mqbpl" Jan 26 12:46:41 crc kubenswrapper[4844]: I0126 12:46:41.327897 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 26 12:46:41 crc kubenswrapper[4844]: I0126 12:46:41.354909 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-5rkhb" Jan 26 12:46:41 crc kubenswrapper[4844]: I0126 12:46:41.384107 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-b6r5v" podStartSLOduration=23.384089182 podStartE2EDuration="23.384089182s" podCreationTimestamp="2026-01-26 12:46:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:46:41.321618669 +0000 UTC m=+178.254986311" watchObservedRunningTime="2026-01-26 12:46:41.384089182 +0000 UTC m=+178.317456794" Jan 26 12:46:41 crc kubenswrapper[4844]: I0126 12:46:41.662227 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 12:46:41 crc kubenswrapper[4844]: I0126 12:46:41.670267 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 12:46:41 crc kubenswrapper[4844]: I0126 12:46:41.683052 4844 patch_prober.go:28] interesting pod/router-default-5444994796-9pkgp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 12:46:41 crc kubenswrapper[4844]: [-]has-synced failed: reason withheld Jan 26 12:46:41 crc kubenswrapper[4844]: [+]process-running ok Jan 26 12:46:41 crc kubenswrapper[4844]: healthz check failed Jan 26 12:46:41 crc kubenswrapper[4844]: I0126 12:46:41.683124 4844 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9pkgp" podUID="46a01ba7-7357-471a-ae59-95361f2ce7ba" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 12:46:41 crc kubenswrapper[4844]: I0126 12:46:41.699330 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-9cmnk" Jan 26 12:46:41 crc kubenswrapper[4844]: I0126 12:46:41.711012 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d0b9d2bf-7a4b-49bb-8ed5-052e3c9c1f75-kubelet-dir\") pod \"d0b9d2bf-7a4b-49bb-8ed5-052e3c9c1f75\" (UID: \"d0b9d2bf-7a4b-49bb-8ed5-052e3c9c1f75\") " Jan 26 12:46:41 crc kubenswrapper[4844]: I0126 12:46:41.711096 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d0b9d2bf-7a4b-49bb-8ed5-052e3c9c1f75-kube-api-access\") pod \"d0b9d2bf-7a4b-49bb-8ed5-052e3c9c1f75\" (UID: \"d0b9d2bf-7a4b-49bb-8ed5-052e3c9c1f75\") " Jan 26 12:46:41 crc kubenswrapper[4844]: I0126 12:46:41.711154 4844 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d0b9d2bf-7a4b-49bb-8ed5-052e3c9c1f75-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "d0b9d2bf-7a4b-49bb-8ed5-052e3c9c1f75" (UID: "d0b9d2bf-7a4b-49bb-8ed5-052e3c9c1f75"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 12:46:41 crc kubenswrapper[4844]: I0126 12:46:41.711166 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f14c711e-6ba8-4e74-99e5-b106b5caca49-kubelet-dir\") pod \"f14c711e-6ba8-4e74-99e5-b106b5caca49\" (UID: \"f14c711e-6ba8-4e74-99e5-b106b5caca49\") " Jan 26 12:46:41 crc kubenswrapper[4844]: I0126 12:46:41.711205 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f14c711e-6ba8-4e74-99e5-b106b5caca49-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "f14c711e-6ba8-4e74-99e5-b106b5caca49" (UID: "f14c711e-6ba8-4e74-99e5-b106b5caca49"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 12:46:41 crc kubenswrapper[4844]: I0126 12:46:41.711303 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f14c711e-6ba8-4e74-99e5-b106b5caca49-kube-api-access\") pod \"f14c711e-6ba8-4e74-99e5-b106b5caca49\" (UID: \"f14c711e-6ba8-4e74-99e5-b106b5caca49\") " Jan 26 12:46:41 crc kubenswrapper[4844]: I0126 12:46:41.711583 4844 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d0b9d2bf-7a4b-49bb-8ed5-052e3c9c1f75-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 26 12:46:41 crc kubenswrapper[4844]: I0126 12:46:41.711639 4844 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f14c711e-6ba8-4e74-99e5-b106b5caca49-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 26 12:46:41 crc kubenswrapper[4844]: I0126 12:46:41.716195 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f14c711e-6ba8-4e74-99e5-b106b5caca49-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f14c711e-6ba8-4e74-99e5-b106b5caca49" (UID: "f14c711e-6ba8-4e74-99e5-b106b5caca49"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:46:41 crc kubenswrapper[4844]: I0126 12:46:41.728676 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0b9d2bf-7a4b-49bb-8ed5-052e3c9c1f75-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d0b9d2bf-7a4b-49bb-8ed5-052e3c9c1f75" (UID: "d0b9d2bf-7a4b-49bb-8ed5-052e3c9c1f75"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:46:41 crc kubenswrapper[4844]: I0126 12:46:41.812828 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d0b9d2bf-7a4b-49bb-8ed5-052e3c9c1f75-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 12:46:41 crc kubenswrapper[4844]: I0126 12:46:41.813269 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f14c711e-6ba8-4e74-99e5-b106b5caca49-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 12:46:42 crc kubenswrapper[4844]: I0126 12:46:42.311169 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"d0b9d2bf-7a4b-49bb-8ed5-052e3c9c1f75","Type":"ContainerDied","Data":"29394b6af1d99198f4e2615d72ead3bc5f127650b8a110261aef78d68c474a8f"} Jan 26 12:46:42 crc kubenswrapper[4844]: I0126 12:46:42.311209 4844 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="29394b6af1d99198f4e2615d72ead3bc5f127650b8a110261aef78d68c474a8f" Jan 26 12:46:42 crc kubenswrapper[4844]: I0126 12:46:42.311234 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 12:46:42 crc kubenswrapper[4844]: I0126 12:46:42.312531 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"f14c711e-6ba8-4e74-99e5-b106b5caca49","Type":"ContainerDied","Data":"53b4ac8de8d4f765b6b3bf68e0d9bf817f18cf68a89147a8f48d6b55ca255b49"} Jan 26 12:46:42 crc kubenswrapper[4844]: I0126 12:46:42.312556 4844 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="53b4ac8de8d4f765b6b3bf68e0d9bf817f18cf68a89147a8f48d6b55ca255b49" Jan 26 12:46:42 crc kubenswrapper[4844]: I0126 12:46:42.312623 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 12:46:42 crc kubenswrapper[4844]: I0126 12:46:42.312962 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:46:42 crc kubenswrapper[4844]: I0126 12:46:42.332080 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" podStartSLOduration=155.332063464 podStartE2EDuration="2m35.332063464s" podCreationTimestamp="2026-01-26 12:44:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:46:42.331093299 +0000 UTC m=+179.264460911" watchObservedRunningTime="2026-01-26 12:46:42.332063464 +0000 UTC m=+179.265431076" Jan 26 12:46:42 crc kubenswrapper[4844]: I0126 12:46:42.683664 4844 patch_prober.go:28] interesting pod/router-default-5444994796-9pkgp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 12:46:42 crc kubenswrapper[4844]: [-]has-synced failed: reason withheld Jan 26 12:46:42 crc kubenswrapper[4844]: [+]process-running ok Jan 26 12:46:42 crc kubenswrapper[4844]: healthz check failed Jan 26 12:46:42 crc kubenswrapper[4844]: I0126 12:46:42.683729 4844 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9pkgp" podUID="46a01ba7-7357-471a-ae59-95361f2ce7ba" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 12:46:43 crc kubenswrapper[4844]: I0126 12:46:43.681687 4844 patch_prober.go:28] interesting pod/router-default-5444994796-9pkgp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 12:46:43 crc kubenswrapper[4844]: [-]has-synced failed: reason withheld Jan 26 12:46:43 crc kubenswrapper[4844]: [+]process-running ok Jan 26 12:46:43 crc kubenswrapper[4844]: healthz check failed Jan 26 12:46:43 crc kubenswrapper[4844]: I0126 12:46:43.682127 4844 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9pkgp" podUID="46a01ba7-7357-471a-ae59-95361f2ce7ba" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 12:46:44 crc kubenswrapper[4844]: I0126 12:46:44.681907 4844 patch_prober.go:28] interesting pod/router-default-5444994796-9pkgp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 12:46:44 crc kubenswrapper[4844]: [-]has-synced failed: reason withheld Jan 26 12:46:44 crc kubenswrapper[4844]: [+]process-running ok Jan 26 12:46:44 crc kubenswrapper[4844]: healthz check failed Jan 26 12:46:44 crc kubenswrapper[4844]: I0126 12:46:44.682641 4844 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9pkgp" podUID="46a01ba7-7357-471a-ae59-95361f2ce7ba" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 12:46:45 crc kubenswrapper[4844]: I0126 12:46:45.682111 4844 patch_prober.go:28] interesting pod/router-default-5444994796-9pkgp container/router namespace/openshift-ingress: Startup probe 
status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 12:46:45 crc kubenswrapper[4844]: [-]has-synced failed: reason withheld Jan 26 12:46:45 crc kubenswrapper[4844]: [+]process-running ok Jan 26 12:46:45 crc kubenswrapper[4844]: healthz check failed Jan 26 12:46:45 crc kubenswrapper[4844]: I0126 12:46:45.682232 4844 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9pkgp" podUID="46a01ba7-7357-471a-ae59-95361f2ce7ba" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 12:46:46 crc kubenswrapper[4844]: I0126 12:46:46.681722 4844 patch_prober.go:28] interesting pod/router-default-5444994796-9pkgp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 12:46:46 crc kubenswrapper[4844]: [-]has-synced failed: reason withheld Jan 26 12:46:46 crc kubenswrapper[4844]: [+]process-running ok Jan 26 12:46:46 crc kubenswrapper[4844]: healthz check failed Jan 26 12:46:46 crc kubenswrapper[4844]: I0126 12:46:46.681788 4844 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9pkgp" podUID="46a01ba7-7357-471a-ae59-95361f2ce7ba" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 12:46:47 crc kubenswrapper[4844]: I0126 12:46:47.680612 4844 patch_prober.go:28] interesting pod/router-default-5444994796-9pkgp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 12:46:47 crc kubenswrapper[4844]: [-]has-synced failed: reason withheld Jan 26 12:46:47 crc kubenswrapper[4844]: [+]process-running ok Jan 26 12:46:47 crc kubenswrapper[4844]: healthz check failed Jan 26 12:46:47 crc kubenswrapper[4844]: I0126 12:46:47.680713 4844 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9pkgp" podUID="46a01ba7-7357-471a-ae59-95361f2ce7ba" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 12:46:48 crc kubenswrapper[4844]: I0126 12:46:48.683529 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-9pkgp" Jan 26 12:46:48 crc kubenswrapper[4844]: I0126 12:46:48.688254 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-9pkgp" Jan 26 12:46:49 crc kubenswrapper[4844]: I0126 12:46:49.705561 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 12:46:50 crc kubenswrapper[4844]: I0126 12:46:50.711543 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-vhsn2" Jan 26 12:46:50 crc kubenswrapper[4844]: I0126 12:46:50.716376 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-vhsn2" Jan 26 12:46:59 crc kubenswrapper[4844]: I0126 12:46:59.978779 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:47:01 crc kubenswrapper[4844]: I0126 12:47:01.345114 4844 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pmxvg" Jan 26 12:47:06 crc kubenswrapper[4844]: I0126 12:47:06.364816 4844 patch_prober.go:28] interesting pod/machine-config-daemon-j7r9j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 12:47:06 crc kubenswrapper[4844]: I0126 12:47:06.365325 4844 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 12:47:11 crc kubenswrapper[4844]: I0126 12:47:11.638479 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 26 12:47:11 crc kubenswrapper[4844]: E0126 12:47:11.639793 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b95a697-eeb9-444d-83ed-3484a41f5dd1" containerName="collect-profiles" Jan 26 12:47:11 crc kubenswrapper[4844]: I0126 12:47:11.639912 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b95a697-eeb9-444d-83ed-3484a41f5dd1" containerName="collect-profiles" Jan 26 12:47:11 crc kubenswrapper[4844]: E0126 12:47:11.640001 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0b9d2bf-7a4b-49bb-8ed5-052e3c9c1f75" containerName="pruner" Jan 26 12:47:11 crc kubenswrapper[4844]: I0126 12:47:11.640726 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0b9d2bf-7a4b-49bb-8ed5-052e3c9c1f75" containerName="pruner" Jan 26 12:47:11 crc kubenswrapper[4844]: E0126 12:47:11.640828 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f14c711e-6ba8-4e74-99e5-b106b5caca49" containerName="pruner" Jan 26 12:47:11 crc kubenswrapper[4844]: I0126 12:47:11.640912 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="f14c711e-6ba8-4e74-99e5-b106b5caca49" containerName="pruner" Jan 26 12:47:11 crc kubenswrapper[4844]: I0126 12:47:11.641119 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0b9d2bf-7a4b-49bb-8ed5-052e3c9c1f75" containerName="pruner" Jan 26 12:47:11 crc kubenswrapper[4844]: I0126 12:47:11.641313 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b95a697-eeb9-444d-83ed-3484a41f5dd1" containerName="collect-profiles" Jan 26 12:47:11 crc kubenswrapper[4844]: I0126 12:47:11.641432 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="f14c711e-6ba8-4e74-99e5-b106b5caca49" containerName="pruner" Jan 26 12:47:11 crc kubenswrapper[4844]: I0126 12:47:11.642060 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 12:47:11 crc kubenswrapper[4844]: I0126 12:47:11.644927 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 26 12:47:11 crc kubenswrapper[4844]: I0126 12:47:11.645216 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 26 12:47:11 crc kubenswrapper[4844]: I0126 12:47:11.658420 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 26 12:47:11 crc kubenswrapper[4844]: I0126 12:47:11.820110 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c47ef90d-e345-4eee-ba48-ed2e46f12668-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"c47ef90d-e345-4eee-ba48-ed2e46f12668\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 12:47:11 crc kubenswrapper[4844]: I0126 12:47:11.820200 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c47ef90d-e345-4eee-ba48-ed2e46f12668-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"c47ef90d-e345-4eee-ba48-ed2e46f12668\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 12:47:11 crc kubenswrapper[4844]: I0126 12:47:11.921439 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c47ef90d-e345-4eee-ba48-ed2e46f12668-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"c47ef90d-e345-4eee-ba48-ed2e46f12668\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 12:47:11 crc kubenswrapper[4844]: I0126 12:47:11.922056 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c47ef90d-e345-4eee-ba48-ed2e46f12668-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"c47ef90d-e345-4eee-ba48-ed2e46f12668\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 12:47:11 crc kubenswrapper[4844]: I0126 12:47:11.921627 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c47ef90d-e345-4eee-ba48-ed2e46f12668-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"c47ef90d-e345-4eee-ba48-ed2e46f12668\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 12:47:11 crc kubenswrapper[4844]: I0126 12:47:11.949485 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c47ef90d-e345-4eee-ba48-ed2e46f12668-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"c47ef90d-e345-4eee-ba48-ed2e46f12668\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 12:47:12 crc kubenswrapper[4844]: I0126 12:47:12.025675 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 12:47:17 crc kubenswrapper[4844]: I0126 12:47:17.254676 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 26 12:47:17 crc kubenswrapper[4844]: I0126 12:47:17.256096 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 26 12:47:17 crc kubenswrapper[4844]: I0126 12:47:17.261042 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 26 12:47:17 crc kubenswrapper[4844]: I0126 12:47:17.300113 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/940c9b8a-a28e-4fb7-be00-c2f6f4bba416-var-lock\") pod \"installer-9-crc\" (UID: \"940c9b8a-a28e-4fb7-be00-c2f6f4bba416\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 12:47:17 crc kubenswrapper[4844]: I0126 12:47:17.300246 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/940c9b8a-a28e-4fb7-be00-c2f6f4bba416-kube-api-access\") pod \"installer-9-crc\" (UID: \"940c9b8a-a28e-4fb7-be00-c2f6f4bba416\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 12:47:17 crc kubenswrapper[4844]: I0126 12:47:17.300328 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/940c9b8a-a28e-4fb7-be00-c2f6f4bba416-kubelet-dir\") pod \"installer-9-crc\" (UID: \"940c9b8a-a28e-4fb7-be00-c2f6f4bba416\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 12:47:17 crc kubenswrapper[4844]: I0126 12:47:17.403106 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/940c9b8a-a28e-4fb7-be00-c2f6f4bba416-var-lock\") pod \"installer-9-crc\" (UID: \"940c9b8a-a28e-4fb7-be00-c2f6f4bba416\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 12:47:17 crc kubenswrapper[4844]: I0126 12:47:17.403351 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/940c9b8a-a28e-4fb7-be00-c2f6f4bba416-var-lock\") pod \"installer-9-crc\" (UID: \"940c9b8a-a28e-4fb7-be00-c2f6f4bba416\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 12:47:17 crc kubenswrapper[4844]: I0126 12:47:17.403842 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/940c9b8a-a28e-4fb7-be00-c2f6f4bba416-kube-api-access\") pod \"installer-9-crc\" (UID: \"940c9b8a-a28e-4fb7-be00-c2f6f4bba416\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 12:47:17 crc kubenswrapper[4844]: I0126 12:47:17.404018 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/940c9b8a-a28e-4fb7-be00-c2f6f4bba416-kubelet-dir\") pod \"installer-9-crc\" (UID: \"940c9b8a-a28e-4fb7-be00-c2f6f4bba416\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 12:47:17 crc kubenswrapper[4844]: I0126 12:47:17.404210 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/940c9b8a-a28e-4fb7-be00-c2f6f4bba416-kubelet-dir\") pod \"installer-9-crc\" (UID: \"940c9b8a-a28e-4fb7-be00-c2f6f4bba416\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 12:47:17 crc kubenswrapper[4844]: I0126 12:47:17.436830 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/940c9b8a-a28e-4fb7-be00-c2f6f4bba416-kube-api-access\") pod \"installer-9-crc\" (UID: 
\"940c9b8a-a28e-4fb7-be00-c2f6f4bba416\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 12:47:17 crc kubenswrapper[4844]: I0126 12:47:17.608071 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 26 12:47:19 crc kubenswrapper[4844]: E0126 12:47:19.221013 4844 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 26 12:47:19 crc kubenswrapper[4844]: E0126 12:47:19.221623 4844 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n5km9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-8hnm5_openshift-marketplace(1f204088-0679-4c31-bd2b-848fc4f93b21): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 12:47:19 crc kubenswrapper[4844]: E0126 12:47:19.223714 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-8hnm5" podUID="1f204088-0679-4c31-bd2b-848fc4f93b21" Jan 26 12:47:22 crc kubenswrapper[4844]: E0126 12:47:22.597505 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-8hnm5" podUID="1f204088-0679-4c31-bd2b-848fc4f93b21" Jan 26 12:47:22 crc kubenswrapper[4844]: E0126 12:47:22.609571 4844 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: reading blob 
sha256:94d1bfc77428a945334e81bab025286e1fb0c1323b3aa1395b0c2f8e42153686: Get \"https://registry.redhat.io/v2/redhat/redhat-marketplace-index/blobs/sha256:94d1bfc77428a945334e81bab025286e1fb0c1323b3aa1395b0c2f8e42153686\": context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 26 12:47:22 crc kubenswrapper[4844]: E0126 12:47:22.609795 4844 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lc4l4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-8zmdx_openshift-marketplace(354b9578-ac43-4a15-831f-d6ae0bc5c449): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: reading blob sha256:94d1bfc77428a945334e81bab025286e1fb0c1323b3aa1395b0c2f8e42153686: Get \"https://registry.redhat.io/v2/redhat/redhat-marketplace-index/blobs/sha256:94d1bfc77428a945334e81bab025286e1fb0c1323b3aa1395b0c2f8e42153686\": context canceled" logger="UnhandledError" Jan 26 12:47:22 crc kubenswrapper[4844]: E0126 12:47:22.611826 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: reading blob sha256:94d1bfc77428a945334e81bab025286e1fb0c1323b3aa1395b0c2f8e42153686: Get \\\"https://registry.redhat.io/v2/redhat/redhat-marketplace-index/blobs/sha256:94d1bfc77428a945334e81bab025286e1fb0c1323b3aa1395b0c2f8e42153686\\\": context canceled\"" pod="openshift-marketplace/redhat-marketplace-8zmdx" podUID="354b9578-ac43-4a15-831f-d6ae0bc5c449" Jan 26 12:47:22 crc kubenswrapper[4844]: E0126 12:47:22.676831 4844 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 26 12:47:22 crc kubenswrapper[4844]: E0126 12:47:22.677392 4844 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-66f2t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-bnlhz_openshift-marketplace(8ddfeacb-de87-47d6-913e-6c2333a7df93): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 12:47:22 crc kubenswrapper[4844]: E0126 12:47:22.679662 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-bnlhz" podUID="8ddfeacb-de87-47d6-913e-6c2333a7df93" Jan 26 12:47:36 crc kubenswrapper[4844]: I0126 12:47:36.364493 4844 patch_prober.go:28] interesting pod/machine-config-daemon-j7r9j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 12:47:36 crc kubenswrapper[4844]: I0126 12:47:36.365094 4844 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 12:47:36 crc kubenswrapper[4844]: I0126 12:47:36.365142 4844 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" Jan 26 12:47:36 crc kubenswrapper[4844]: I0126 12:47:36.365691 4844 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8c964e0f9d13a855d738b028f8a2bed32fb23f4a05f0f0222a7d24e3222f44b2"} pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" containerMessage="Container machine-config-daemon failed 
liveness probe, will be restarted" Jan 26 12:47:36 crc kubenswrapper[4844]: I0126 12:47:36.365788 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" containerName="machine-config-daemon" containerID="cri-o://8c964e0f9d13a855d738b028f8a2bed32fb23f4a05f0f0222a7d24e3222f44b2" gracePeriod=600 Jan 26 12:47:37 crc kubenswrapper[4844]: E0126 12:47:37.031133 4844 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: writing blob: storing blob to file \"/var/tmp/container_images_storage426561839/2\": happened during read: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 26 12:47:37 crc kubenswrapper[4844]: E0126 12:47:37.032137 4844 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h8hw6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-dn4m8_openshift-marketplace(2a0ca290-d48e-4c46-8c36-1e414126c42f): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: writing blob: storing blob to file \"/var/tmp/container_images_storage426561839/2\": happened during read: context canceled" logger="UnhandledError" Jan 26 12:47:37 crc kubenswrapper[4844]: E0126 12:47:37.033338 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: writing blob: storing blob to file \\\"/var/tmp/container_images_storage426561839/2\\\": happened during read: context canceled\"" pod="openshift-marketplace/redhat-operators-dn4m8" podUID="2a0ca290-d48e-4c46-8c36-1e414126c42f" Jan 26 12:47:37 crc kubenswrapper[4844]: E0126 12:47:37.127054 4844 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: writing blob: storing blob to file 
\"/var/tmp/container_images_storage193036518/2\": happened during read: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 26 12:47:37 crc kubenswrapper[4844]: E0126 12:47:37.127452 4844 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cqvbp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-982kx_openshift-marketplace(1b7b1cea-f94c-4750-8db8-18d9b7f9fb70): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: writing blob: storing blob to file \"/var/tmp/container_images_storage193036518/2\": happened during read: context canceled" logger="UnhandledError" Jan 26 12:47:37 crc kubenswrapper[4844]: E0126 12:47:37.128762 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: writing blob: storing blob to file \\\"/var/tmp/container_images_storage193036518/2\\\": happened during read: context canceled\"" pod="openshift-marketplace/certified-operators-982kx" podUID="1b7b1cea-f94c-4750-8db8-18d9b7f9fb70" Jan 26 12:47:37 crc kubenswrapper[4844]: I0126 12:47:37.676851 4844 generic.go:334] "Generic (PLEG): container finished" podID="e3602fc7-397b-4d73-ab0c-45acc047397b" containerID="8c964e0f9d13a855d738b028f8a2bed32fb23f4a05f0f0222a7d24e3222f44b2" exitCode=0 Jan 26 12:47:37 crc kubenswrapper[4844]: I0126 12:47:37.676924 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" event={"ID":"e3602fc7-397b-4d73-ab0c-45acc047397b","Type":"ContainerDied","Data":"8c964e0f9d13a855d738b028f8a2bed32fb23f4a05f0f0222a7d24e3222f44b2"} Jan 26 12:47:42 crc kubenswrapper[4844]: E0126 12:47:42.109318 4844 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" 
image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 26 12:47:42 crc kubenswrapper[4844]: E0126 12:47:42.109880 4844 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hljrl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-lhjls_openshift-marketplace(a37a9c59-7c20-4326-b280-9dbd2d633e0b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 12:47:42 crc kubenswrapper[4844]: E0126 12:47:42.111149 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-lhjls" podUID="a37a9c59-7c20-4326-b280-9dbd2d633e0b" Jan 26 12:47:45 crc kubenswrapper[4844]: E0126 12:47:45.026199 4844 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 26 12:47:45 crc kubenswrapper[4844]: E0126 12:47:45.026592 4844 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7w7zk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-8hdq2_openshift-marketplace(d60e5f01-76f1-47a0-8a7d-390457ce1b47): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 12:47:45 crc kubenswrapper[4844]: E0126 12:47:45.027866 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-8hdq2" podUID="d60e5f01-76f1-47a0-8a7d-390457ce1b47" Jan 26 12:47:45 crc kubenswrapper[4844]: E0126 12:47:45.116062 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-dn4m8" podUID="2a0ca290-d48e-4c46-8c36-1e414126c42f" Jan 26 12:47:45 crc kubenswrapper[4844]: E0126 12:47:45.116064 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-lhjls" podUID="a37a9c59-7c20-4326-b280-9dbd2d633e0b" Jan 26 12:47:45 crc kubenswrapper[4844]: E0126 12:47:45.293506 4844 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 26 12:47:45 crc kubenswrapper[4844]: E0126 12:47:45.293965 4844 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t96w4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-djrt9_openshift-marketplace(637c7ba4-2cae-4d56-860f-ab82722169a2): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 12:47:45 crc kubenswrapper[4844]: E0126 12:47:45.295638 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-djrt9" podUID="637c7ba4-2cae-4d56-860f-ab82722169a2" Jan 26 12:47:45 crc kubenswrapper[4844]: I0126 12:47:45.361643 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 26 12:47:45 crc kubenswrapper[4844]: I0126 12:47:45.618455 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 26 12:47:46 crc kubenswrapper[4844]: E0126 12:47:46.034766 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-8hdq2" podUID="d60e5f01-76f1-47a0-8a7d-390457ce1b47" Jan 26 12:47:46 crc kubenswrapper[4844]: E0126 12:47:46.034933 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-djrt9" podUID="637c7ba4-2cae-4d56-860f-ab82722169a2" Jan 26 12:47:46 crc kubenswrapper[4844]: I0126 12:47:46.736159 4844 generic.go:334] "Generic (PLEG): container finished" podID="1f204088-0679-4c31-bd2b-848fc4f93b21" containerID="507efa4e6b84e32ba2b163ab197dc86f23492f40d8223ecca927c1f8294538f3" exitCode=0 Jan 26 12:47:46 crc kubenswrapper[4844]: I0126 12:47:46.736222 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8hnm5" 
event={"ID":"1f204088-0679-4c31-bd2b-848fc4f93b21","Type":"ContainerDied","Data":"507efa4e6b84e32ba2b163ab197dc86f23492f40d8223ecca927c1f8294538f3"} Jan 26 12:47:46 crc kubenswrapper[4844]: I0126 12:47:46.740735 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"940c9b8a-a28e-4fb7-be00-c2f6f4bba416","Type":"ContainerStarted","Data":"3b90114e3f9f4e26652be74878efc251bf062fb5c38d541a4f6389b5a2230361"} Jan 26 12:47:46 crc kubenswrapper[4844]: I0126 12:47:46.740796 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"940c9b8a-a28e-4fb7-be00-c2f6f4bba416","Type":"ContainerStarted","Data":"40d7c018f65f7784d0eb153bc1c12bd9346229adf28c39b2d7d64ee3342edc00"} Jan 26 12:47:46 crc kubenswrapper[4844]: I0126 12:47:46.742471 4844 generic.go:334] "Generic (PLEG): container finished" podID="354b9578-ac43-4a15-831f-d6ae0bc5c449" containerID="d08386c7b10c7eb1a2acd34f7b2ad164dfa02b328394246e7ebd4fcb6ffa8df7" exitCode=0 Jan 26 12:47:46 crc kubenswrapper[4844]: I0126 12:47:46.742529 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8zmdx" event={"ID":"354b9578-ac43-4a15-831f-d6ae0bc5c449","Type":"ContainerDied","Data":"d08386c7b10c7eb1a2acd34f7b2ad164dfa02b328394246e7ebd4fcb6ffa8df7"} Jan 26 12:47:46 crc kubenswrapper[4844]: I0126 12:47:46.747475 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" event={"ID":"e3602fc7-397b-4d73-ab0c-45acc047397b","Type":"ContainerStarted","Data":"259eaafa3e05165d5d7e0a880f0cf0745986b838a34c0b0ee82a10c9bd689fed"} Jan 26 12:47:46 crc kubenswrapper[4844]: I0126 12:47:46.762051 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"c47ef90d-e345-4eee-ba48-ed2e46f12668","Type":"ContainerStarted","Data":"9841ea5ae1c1d9875d8f529e8e9bd99ad28a48ea99c11674f483b411da1ed6be"} Jan 26 12:47:46 crc kubenswrapper[4844]: I0126 12:47:46.762130 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"c47ef90d-e345-4eee-ba48-ed2e46f12668","Type":"ContainerStarted","Data":"4f20d1efc3faee12077d9f1936a618cf78a158775b0955bec43a224804e4b4fe"} Jan 26 12:47:46 crc kubenswrapper[4844]: I0126 12:47:46.769845 4844 generic.go:334] "Generic (PLEG): container finished" podID="8ddfeacb-de87-47d6-913e-6c2333a7df93" containerID="e1b9f9fe590059a479719c9d04f0c25c441d4432edcfd4db0115e2bf17679d94" exitCode=0 Jan 26 12:47:46 crc kubenswrapper[4844]: I0126 12:47:46.769930 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bnlhz" event={"ID":"8ddfeacb-de87-47d6-913e-6c2333a7df93","Type":"ContainerDied","Data":"e1b9f9fe590059a479719c9d04f0c25c441d4432edcfd4db0115e2bf17679d94"} Jan 26 12:47:46 crc kubenswrapper[4844]: I0126 12:47:46.784592 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=29.784557096 podStartE2EDuration="29.784557096s" podCreationTimestamp="2026-01-26 12:47:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:47:46.78435765 +0000 UTC m=+243.717725262" watchObservedRunningTime="2026-01-26 12:47:46.784557096 +0000 UTC m=+243.717924738" Jan 26 12:47:46 crc kubenswrapper[4844]: I0126 12:47:46.849843 
4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=35.849798548 podStartE2EDuration="35.849798548s" podCreationTimestamp="2026-01-26 12:47:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:47:46.848936016 +0000 UTC m=+243.782303648" watchObservedRunningTime="2026-01-26 12:47:46.849798548 +0000 UTC m=+243.783166160" Jan 26 12:47:47 crc kubenswrapper[4844]: I0126 12:47:47.777847 4844 generic.go:334] "Generic (PLEG): container finished" podID="c47ef90d-e345-4eee-ba48-ed2e46f12668" containerID="9841ea5ae1c1d9875d8f529e8e9bd99ad28a48ea99c11674f483b411da1ed6be" exitCode=0 Jan 26 12:47:47 crc kubenswrapper[4844]: I0126 12:47:47.777945 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"c47ef90d-e345-4eee-ba48-ed2e46f12668","Type":"ContainerDied","Data":"9841ea5ae1c1d9875d8f529e8e9bd99ad28a48ea99c11674f483b411da1ed6be"} Jan 26 12:47:47 crc kubenswrapper[4844]: I0126 12:47:47.781513 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bnlhz" event={"ID":"8ddfeacb-de87-47d6-913e-6c2333a7df93","Type":"ContainerStarted","Data":"b9ab3a1cccafea6d60290bb7a2962bd94bb13e3c6bfb87ba6b432556d3d15082"} Jan 26 12:47:47 crc kubenswrapper[4844]: I0126 12:47:47.785176 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8hnm5" event={"ID":"1f204088-0679-4c31-bd2b-848fc4f93b21","Type":"ContainerStarted","Data":"52b24656eb293c56278e9835f8abc0ba0024bfe3e7c2b17e9337708f0558813f"} Jan 26 12:47:47 crc kubenswrapper[4844]: I0126 12:47:47.787718 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8zmdx" event={"ID":"354b9578-ac43-4a15-831f-d6ae0bc5c449","Type":"ContainerStarted","Data":"06fa0f84caaa1cb9cc5c722c6ad06276ccda3c466f8c91c5bf6706fa489eee20"} Jan 26 12:47:47 crc kubenswrapper[4844]: I0126 12:47:47.832514 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bnlhz" podStartSLOduration=9.78830911 podStartE2EDuration="1m17.832496089s" podCreationTimestamp="2026-01-26 12:46:30 +0000 UTC" firstStartedPulling="2026-01-26 12:46:39.190852735 +0000 UTC m=+176.124220347" lastFinishedPulling="2026-01-26 12:47:47.235039714 +0000 UTC m=+244.168407326" observedRunningTime="2026-01-26 12:47:47.828073526 +0000 UTC m=+244.761441148" watchObservedRunningTime="2026-01-26 12:47:47.832496089 +0000 UTC m=+244.765863701" Jan 26 12:47:47 crc kubenswrapper[4844]: I0126 12:47:47.851542 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-8zmdx" podStartSLOduration=7.971607605 podStartE2EDuration="1m15.851524374s" podCreationTimestamp="2026-01-26 12:46:32 +0000 UTC" firstStartedPulling="2026-01-26 12:46:39.274656195 +0000 UTC m=+176.208023807" lastFinishedPulling="2026-01-26 12:47:47.154572964 +0000 UTC m=+244.087940576" observedRunningTime="2026-01-26 12:47:47.850227291 +0000 UTC m=+244.783594903" watchObservedRunningTime="2026-01-26 12:47:47.851524374 +0000 UTC m=+244.784891986" Jan 26 12:47:47 crc kubenswrapper[4844]: I0126 12:47:47.872708 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8hnm5" podStartSLOduration=9.890181398 
podStartE2EDuration="1m17.872692313s" podCreationTimestamp="2026-01-26 12:46:30 +0000 UTC" firstStartedPulling="2026-01-26 12:46:39.213233108 +0000 UTC m=+176.146600720" lastFinishedPulling="2026-01-26 12:47:47.195744023 +0000 UTC m=+244.129111635" observedRunningTime="2026-01-26 12:47:47.869726868 +0000 UTC m=+244.803094480" watchObservedRunningTime="2026-01-26 12:47:47.872692313 +0000 UTC m=+244.806059915" Jan 26 12:47:49 crc kubenswrapper[4844]: I0126 12:47:49.055887 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 12:47:49 crc kubenswrapper[4844]: I0126 12:47:49.158734 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c47ef90d-e345-4eee-ba48-ed2e46f12668-kube-api-access\") pod \"c47ef90d-e345-4eee-ba48-ed2e46f12668\" (UID: \"c47ef90d-e345-4eee-ba48-ed2e46f12668\") " Jan 26 12:47:49 crc kubenswrapper[4844]: I0126 12:47:49.158892 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c47ef90d-e345-4eee-ba48-ed2e46f12668-kubelet-dir\") pod \"c47ef90d-e345-4eee-ba48-ed2e46f12668\" (UID: \"c47ef90d-e345-4eee-ba48-ed2e46f12668\") " Jan 26 12:47:49 crc kubenswrapper[4844]: I0126 12:47:49.159212 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c47ef90d-e345-4eee-ba48-ed2e46f12668-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "c47ef90d-e345-4eee-ba48-ed2e46f12668" (UID: "c47ef90d-e345-4eee-ba48-ed2e46f12668"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 12:47:49 crc kubenswrapper[4844]: I0126 12:47:49.168830 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c47ef90d-e345-4eee-ba48-ed2e46f12668-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c47ef90d-e345-4eee-ba48-ed2e46f12668" (UID: "c47ef90d-e345-4eee-ba48-ed2e46f12668"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:47:49 crc kubenswrapper[4844]: I0126 12:47:49.260010 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c47ef90d-e345-4eee-ba48-ed2e46f12668-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 12:47:49 crc kubenswrapper[4844]: I0126 12:47:49.260055 4844 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c47ef90d-e345-4eee-ba48-ed2e46f12668-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 26 12:47:49 crc kubenswrapper[4844]: I0126 12:47:49.802085 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"c47ef90d-e345-4eee-ba48-ed2e46f12668","Type":"ContainerDied","Data":"4f20d1efc3faee12077d9f1936a618cf78a158775b0955bec43a224804e4b4fe"} Jan 26 12:47:49 crc kubenswrapper[4844]: I0126 12:47:49.802143 4844 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4f20d1efc3faee12077d9f1936a618cf78a158775b0955bec43a224804e4b4fe" Jan 26 12:47:49 crc kubenswrapper[4844]: I0126 12:47:49.802145 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 12:47:50 crc kubenswrapper[4844]: I0126 12:47:50.478445 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8hnm5" Jan 26 12:47:50 crc kubenswrapper[4844]: I0126 12:47:50.479018 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8hnm5" Jan 26 12:47:50 crc kubenswrapper[4844]: I0126 12:47:50.548782 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8hnm5" Jan 26 12:47:50 crc kubenswrapper[4844]: I0126 12:47:50.643754 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bnlhz" Jan 26 12:47:50 crc kubenswrapper[4844]: I0126 12:47:50.643829 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bnlhz" Jan 26 12:47:50 crc kubenswrapper[4844]: I0126 12:47:50.686017 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bnlhz" Jan 26 12:47:50 crc kubenswrapper[4844]: I0126 12:47:50.809709 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-982kx" event={"ID":"1b7b1cea-f94c-4750-8db8-18d9b7f9fb70","Type":"ContainerStarted","Data":"e6a7c8d051fb7d049c17bd3c8350d85fbfe8095a716f971036092479e889a943"} Jan 26 12:47:51 crc kubenswrapper[4844]: I0126 12:47:51.825839 4844 generic.go:334] "Generic (PLEG): container finished" podID="1b7b1cea-f94c-4750-8db8-18d9b7f9fb70" containerID="e6a7c8d051fb7d049c17bd3c8350d85fbfe8095a716f971036092479e889a943" exitCode=0 Jan 26 12:47:51 crc kubenswrapper[4844]: I0126 12:47:51.825949 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-982kx" event={"ID":"1b7b1cea-f94c-4750-8db8-18d9b7f9fb70","Type":"ContainerDied","Data":"e6a7c8d051fb7d049c17bd3c8350d85fbfe8095a716f971036092479e889a943"} Jan 26 12:47:52 crc kubenswrapper[4844]: I0126 12:47:52.660165 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-8zmdx" Jan 26 12:47:52 crc kubenswrapper[4844]: I0126 12:47:52.660210 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8zmdx" Jan 26 12:47:52 crc kubenswrapper[4844]: I0126 12:47:52.716108 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-8zmdx" Jan 26 12:47:52 crc kubenswrapper[4844]: I0126 12:47:52.880980 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8zmdx" Jan 26 12:47:54 crc kubenswrapper[4844]: I0126 12:47:54.851896 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-982kx" event={"ID":"1b7b1cea-f94c-4750-8db8-18d9b7f9fb70","Type":"ContainerStarted","Data":"75837018c1ec0a5f226f3ed48de9b1c248d7aecb4fdbaec9bf992ef3130dcd21"} Jan 26 12:47:54 crc kubenswrapper[4844]: I0126 12:47:54.887822 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-982kx" podStartSLOduration=10.956460288 podStartE2EDuration="1m25.887805121s" podCreationTimestamp="2026-01-26 12:46:29 +0000 UTC" firstStartedPulling="2026-01-26 
12:46:39.272811748 +0000 UTC m=+176.206179360" lastFinishedPulling="2026-01-26 12:47:54.204156581 +0000 UTC m=+251.137524193" observedRunningTime="2026-01-26 12:47:54.886503008 +0000 UTC m=+251.819870650" watchObservedRunningTime="2026-01-26 12:47:54.887805121 +0000 UTC m=+251.821172733" Jan 26 12:47:55 crc kubenswrapper[4844]: I0126 12:47:55.174539 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8zmdx"] Jan 26 12:47:55 crc kubenswrapper[4844]: I0126 12:47:55.175074 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-8zmdx" podUID="354b9578-ac43-4a15-831f-d6ae0bc5c449" containerName="registry-server" containerID="cri-o://06fa0f84caaa1cb9cc5c722c6ad06276ccda3c466f8c91c5bf6706fa489eee20" gracePeriod=2 Jan 26 12:47:55 crc kubenswrapper[4844]: I0126 12:47:55.655826 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8zmdx" Jan 26 12:47:55 crc kubenswrapper[4844]: I0126 12:47:55.858628 4844 generic.go:334] "Generic (PLEG): container finished" podID="354b9578-ac43-4a15-831f-d6ae0bc5c449" containerID="06fa0f84caaa1cb9cc5c722c6ad06276ccda3c466f8c91c5bf6706fa489eee20" exitCode=0 Jan 26 12:47:55 crc kubenswrapper[4844]: I0126 12:47:55.858675 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8zmdx" event={"ID":"354b9578-ac43-4a15-831f-d6ae0bc5c449","Type":"ContainerDied","Data":"06fa0f84caaa1cb9cc5c722c6ad06276ccda3c466f8c91c5bf6706fa489eee20"} Jan 26 12:47:55 crc kubenswrapper[4844]: I0126 12:47:55.858711 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8zmdx" event={"ID":"354b9578-ac43-4a15-831f-d6ae0bc5c449","Type":"ContainerDied","Data":"41875b1ca388b4eea68433a1f3b4f41fd22f1f345e6270fd0a9053edaf170c42"} Jan 26 12:47:55 crc kubenswrapper[4844]: I0126 12:47:55.858735 4844 scope.go:117] "RemoveContainer" containerID="06fa0f84caaa1cb9cc5c722c6ad06276ccda3c466f8c91c5bf6706fa489eee20" Jan 26 12:47:55 crc kubenswrapper[4844]: I0126 12:47:55.858872 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8zmdx" Jan 26 12:47:55 crc kubenswrapper[4844]: I0126 12:47:55.860886 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/354b9578-ac43-4a15-831f-d6ae0bc5c449-utilities\") pod \"354b9578-ac43-4a15-831f-d6ae0bc5c449\" (UID: \"354b9578-ac43-4a15-831f-d6ae0bc5c449\") " Jan 26 12:47:55 crc kubenswrapper[4844]: I0126 12:47:55.860920 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lc4l4\" (UniqueName: \"kubernetes.io/projected/354b9578-ac43-4a15-831f-d6ae0bc5c449-kube-api-access-lc4l4\") pod \"354b9578-ac43-4a15-831f-d6ae0bc5c449\" (UID: \"354b9578-ac43-4a15-831f-d6ae0bc5c449\") " Jan 26 12:47:55 crc kubenswrapper[4844]: I0126 12:47:55.860985 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/354b9578-ac43-4a15-831f-d6ae0bc5c449-catalog-content\") pod \"354b9578-ac43-4a15-831f-d6ae0bc5c449\" (UID: \"354b9578-ac43-4a15-831f-d6ae0bc5c449\") " Jan 26 12:47:55 crc kubenswrapper[4844]: I0126 12:47:55.863156 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/354b9578-ac43-4a15-831f-d6ae0bc5c449-utilities" (OuterVolumeSpecName: "utilities") pod "354b9578-ac43-4a15-831f-d6ae0bc5c449" (UID: "354b9578-ac43-4a15-831f-d6ae0bc5c449"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 12:47:55 crc kubenswrapper[4844]: I0126 12:47:55.867953 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/354b9578-ac43-4a15-831f-d6ae0bc5c449-kube-api-access-lc4l4" (OuterVolumeSpecName: "kube-api-access-lc4l4") pod "354b9578-ac43-4a15-831f-d6ae0bc5c449" (UID: "354b9578-ac43-4a15-831f-d6ae0bc5c449"). InnerVolumeSpecName "kube-api-access-lc4l4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:47:55 crc kubenswrapper[4844]: I0126 12:47:55.877560 4844 scope.go:117] "RemoveContainer" containerID="d08386c7b10c7eb1a2acd34f7b2ad164dfa02b328394246e7ebd4fcb6ffa8df7" Jan 26 12:47:55 crc kubenswrapper[4844]: I0126 12:47:55.893570 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/354b9578-ac43-4a15-831f-d6ae0bc5c449-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "354b9578-ac43-4a15-831f-d6ae0bc5c449" (UID: "354b9578-ac43-4a15-831f-d6ae0bc5c449"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 12:47:55 crc kubenswrapper[4844]: I0126 12:47:55.894124 4844 scope.go:117] "RemoveContainer" containerID="ebce76073ebe8e8e5c7894d5e2235ae3f5a6c42f07370df802e160ce4920cee0" Jan 26 12:47:55 crc kubenswrapper[4844]: I0126 12:47:55.915130 4844 scope.go:117] "RemoveContainer" containerID="06fa0f84caaa1cb9cc5c722c6ad06276ccda3c466f8c91c5bf6706fa489eee20" Jan 26 12:47:55 crc kubenswrapper[4844]: E0126 12:47:55.915739 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06fa0f84caaa1cb9cc5c722c6ad06276ccda3c466f8c91c5bf6706fa489eee20\": container with ID starting with 06fa0f84caaa1cb9cc5c722c6ad06276ccda3c466f8c91c5bf6706fa489eee20 not found: ID does not exist" containerID="06fa0f84caaa1cb9cc5c722c6ad06276ccda3c466f8c91c5bf6706fa489eee20" Jan 26 12:47:55 crc kubenswrapper[4844]: I0126 12:47:55.915781 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06fa0f84caaa1cb9cc5c722c6ad06276ccda3c466f8c91c5bf6706fa489eee20"} err="failed to get container status \"06fa0f84caaa1cb9cc5c722c6ad06276ccda3c466f8c91c5bf6706fa489eee20\": rpc error: code = NotFound desc = could not find container \"06fa0f84caaa1cb9cc5c722c6ad06276ccda3c466f8c91c5bf6706fa489eee20\": container with ID starting with 06fa0f84caaa1cb9cc5c722c6ad06276ccda3c466f8c91c5bf6706fa489eee20 not found: ID does not exist" Jan 26 12:47:55 crc kubenswrapper[4844]: I0126 12:47:55.915808 4844 scope.go:117] "RemoveContainer" containerID="d08386c7b10c7eb1a2acd34f7b2ad164dfa02b328394246e7ebd4fcb6ffa8df7" Jan 26 12:47:55 crc kubenswrapper[4844]: E0126 12:47:55.916326 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d08386c7b10c7eb1a2acd34f7b2ad164dfa02b328394246e7ebd4fcb6ffa8df7\": container with ID starting with d08386c7b10c7eb1a2acd34f7b2ad164dfa02b328394246e7ebd4fcb6ffa8df7 not found: ID does not exist" containerID="d08386c7b10c7eb1a2acd34f7b2ad164dfa02b328394246e7ebd4fcb6ffa8df7" Jan 26 12:47:55 crc kubenswrapper[4844]: I0126 12:47:55.916356 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d08386c7b10c7eb1a2acd34f7b2ad164dfa02b328394246e7ebd4fcb6ffa8df7"} err="failed to get container status \"d08386c7b10c7eb1a2acd34f7b2ad164dfa02b328394246e7ebd4fcb6ffa8df7\": rpc error: code = NotFound desc = could not find container \"d08386c7b10c7eb1a2acd34f7b2ad164dfa02b328394246e7ebd4fcb6ffa8df7\": container with ID starting with d08386c7b10c7eb1a2acd34f7b2ad164dfa02b328394246e7ebd4fcb6ffa8df7 not found: ID does not exist" Jan 26 12:47:55 crc kubenswrapper[4844]: I0126 12:47:55.916377 4844 scope.go:117] "RemoveContainer" containerID="ebce76073ebe8e8e5c7894d5e2235ae3f5a6c42f07370df802e160ce4920cee0" Jan 26 12:47:55 crc kubenswrapper[4844]: E0126 12:47:55.916856 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ebce76073ebe8e8e5c7894d5e2235ae3f5a6c42f07370df802e160ce4920cee0\": container with ID starting with ebce76073ebe8e8e5c7894d5e2235ae3f5a6c42f07370df802e160ce4920cee0 not found: ID does not exist" containerID="ebce76073ebe8e8e5c7894d5e2235ae3f5a6c42f07370df802e160ce4920cee0" Jan 26 12:47:55 crc kubenswrapper[4844]: I0126 12:47:55.916884 4844 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"ebce76073ebe8e8e5c7894d5e2235ae3f5a6c42f07370df802e160ce4920cee0"} err="failed to get container status \"ebce76073ebe8e8e5c7894d5e2235ae3f5a6c42f07370df802e160ce4920cee0\": rpc error: code = NotFound desc = could not find container \"ebce76073ebe8e8e5c7894d5e2235ae3f5a6c42f07370df802e160ce4920cee0\": container with ID starting with ebce76073ebe8e8e5c7894d5e2235ae3f5a6c42f07370df802e160ce4920cee0 not found: ID does not exist" Jan 26 12:47:55 crc kubenswrapper[4844]: I0126 12:47:55.962462 4844 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/354b9578-ac43-4a15-831f-d6ae0bc5c449-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 12:47:55 crc kubenswrapper[4844]: I0126 12:47:55.962536 4844 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/354b9578-ac43-4a15-831f-d6ae0bc5c449-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 12:47:55 crc kubenswrapper[4844]: I0126 12:47:55.962550 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lc4l4\" (UniqueName: \"kubernetes.io/projected/354b9578-ac43-4a15-831f-d6ae0bc5c449-kube-api-access-lc4l4\") on node \"crc\" DevicePath \"\"" Jan 26 12:47:56 crc kubenswrapper[4844]: I0126 12:47:56.187483 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8zmdx"] Jan 26 12:47:56 crc kubenswrapper[4844]: I0126 12:47:56.190561 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-8zmdx"] Jan 26 12:47:57 crc kubenswrapper[4844]: I0126 12:47:57.323551 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="354b9578-ac43-4a15-831f-d6ae0bc5c449" path="/var/lib/kubelet/pods/354b9578-ac43-4a15-831f-d6ae0bc5c449/volumes" Jan 26 12:48:00 crc kubenswrapper[4844]: I0126 12:48:00.041883 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-982kx" Jan 26 12:48:00 crc kubenswrapper[4844]: I0126 12:48:00.042727 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-982kx" Jan 26 12:48:00 crc kubenswrapper[4844]: I0126 12:48:00.115342 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-982kx" Jan 26 12:48:00 crc kubenswrapper[4844]: I0126 12:48:00.533076 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8hnm5" Jan 26 12:48:00 crc kubenswrapper[4844]: I0126 12:48:00.687247 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bnlhz" Jan 26 12:48:00 crc kubenswrapper[4844]: I0126 12:48:00.939359 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-982kx" Jan 26 12:48:03 crc kubenswrapper[4844]: I0126 12:48:03.555426 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8hnm5"] Jan 26 12:48:03 crc kubenswrapper[4844]: I0126 12:48:03.556676 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8hnm5" podUID="1f204088-0679-4c31-bd2b-848fc4f93b21" containerName="registry-server" containerID="cri-o://52b24656eb293c56278e9835f8abc0ba0024bfe3e7c2b17e9337708f0558813f" gracePeriod=2 
Jan 26 12:48:03 crc kubenswrapper[4844]: I0126 12:48:03.924834 4844 generic.go:334] "Generic (PLEG): container finished" podID="1f204088-0679-4c31-bd2b-848fc4f93b21" containerID="52b24656eb293c56278e9835f8abc0ba0024bfe3e7c2b17e9337708f0558813f" exitCode=0 Jan 26 12:48:03 crc kubenswrapper[4844]: I0126 12:48:03.924891 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8hnm5" event={"ID":"1f204088-0679-4c31-bd2b-848fc4f93b21","Type":"ContainerDied","Data":"52b24656eb293c56278e9835f8abc0ba0024bfe3e7c2b17e9337708f0558813f"} Jan 26 12:48:03 crc kubenswrapper[4844]: I0126 12:48:03.925760 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8hnm5" event={"ID":"1f204088-0679-4c31-bd2b-848fc4f93b21","Type":"ContainerDied","Data":"d395d3d35cc088d8e873ff86740bf1a3437c13a60d19822d0a54c7d6e63d35c8"} Jan 26 12:48:03 crc kubenswrapper[4844]: I0126 12:48:03.925783 4844 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d395d3d35cc088d8e873ff86740bf1a3437c13a60d19822d0a54c7d6e63d35c8" Jan 26 12:48:03 crc kubenswrapper[4844]: I0126 12:48:03.955828 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8hnm5" Jan 26 12:48:04 crc kubenswrapper[4844]: I0126 12:48:04.069296 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f204088-0679-4c31-bd2b-848fc4f93b21-utilities\") pod \"1f204088-0679-4c31-bd2b-848fc4f93b21\" (UID: \"1f204088-0679-4c31-bd2b-848fc4f93b21\") " Jan 26 12:48:04 crc kubenswrapper[4844]: I0126 12:48:04.069365 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n5km9\" (UniqueName: \"kubernetes.io/projected/1f204088-0679-4c31-bd2b-848fc4f93b21-kube-api-access-n5km9\") pod \"1f204088-0679-4c31-bd2b-848fc4f93b21\" (UID: \"1f204088-0679-4c31-bd2b-848fc4f93b21\") " Jan 26 12:48:04 crc kubenswrapper[4844]: I0126 12:48:04.069402 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f204088-0679-4c31-bd2b-848fc4f93b21-catalog-content\") pod \"1f204088-0679-4c31-bd2b-848fc4f93b21\" (UID: \"1f204088-0679-4c31-bd2b-848fc4f93b21\") " Jan 26 12:48:04 crc kubenswrapper[4844]: I0126 12:48:04.070350 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f204088-0679-4c31-bd2b-848fc4f93b21-utilities" (OuterVolumeSpecName: "utilities") pod "1f204088-0679-4c31-bd2b-848fc4f93b21" (UID: "1f204088-0679-4c31-bd2b-848fc4f93b21"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 12:48:04 crc kubenswrapper[4844]: I0126 12:48:04.078976 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f204088-0679-4c31-bd2b-848fc4f93b21-kube-api-access-n5km9" (OuterVolumeSpecName: "kube-api-access-n5km9") pod "1f204088-0679-4c31-bd2b-848fc4f93b21" (UID: "1f204088-0679-4c31-bd2b-848fc4f93b21"). InnerVolumeSpecName "kube-api-access-n5km9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:48:04 crc kubenswrapper[4844]: I0126 12:48:04.135586 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f204088-0679-4c31-bd2b-848fc4f93b21-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1f204088-0679-4c31-bd2b-848fc4f93b21" (UID: "1f204088-0679-4c31-bd2b-848fc4f93b21"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 12:48:04 crc kubenswrapper[4844]: I0126 12:48:04.171568 4844 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f204088-0679-4c31-bd2b-848fc4f93b21-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 12:48:04 crc kubenswrapper[4844]: I0126 12:48:04.171667 4844 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f204088-0679-4c31-bd2b-848fc4f93b21-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 12:48:04 crc kubenswrapper[4844]: I0126 12:48:04.171701 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n5km9\" (UniqueName: \"kubernetes.io/projected/1f204088-0679-4c31-bd2b-848fc4f93b21-kube-api-access-n5km9\") on node \"crc\" DevicePath \"\"" Jan 26 12:48:04 crc kubenswrapper[4844]: I0126 12:48:04.937491 4844 generic.go:334] "Generic (PLEG): container finished" podID="2a0ca290-d48e-4c46-8c36-1e414126c42f" containerID="69817e3c002e1abb40acd7a079830cf2cdc351e0e4fa0de59899cf432f03bd45" exitCode=0 Jan 26 12:48:04 crc kubenswrapper[4844]: I0126 12:48:04.937663 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dn4m8" event={"ID":"2a0ca290-d48e-4c46-8c36-1e414126c42f","Type":"ContainerDied","Data":"69817e3c002e1abb40acd7a079830cf2cdc351e0e4fa0de59899cf432f03bd45"} Jan 26 12:48:04 crc kubenswrapper[4844]: I0126 12:48:04.940819 4844 generic.go:334] "Generic (PLEG): container finished" podID="a37a9c59-7c20-4326-b280-9dbd2d633e0b" containerID="7e1fa8f2e1f7283fd46bc1920be2a595f9dcec895b40b91e507a174b1439e365" exitCode=0 Jan 26 12:48:04 crc kubenswrapper[4844]: I0126 12:48:04.940880 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lhjls" event={"ID":"a37a9c59-7c20-4326-b280-9dbd2d633e0b","Type":"ContainerDied","Data":"7e1fa8f2e1f7283fd46bc1920be2a595f9dcec895b40b91e507a174b1439e365"} Jan 26 12:48:04 crc kubenswrapper[4844]: I0126 12:48:04.943674 4844 generic.go:334] "Generic (PLEG): container finished" podID="637c7ba4-2cae-4d56-860f-ab82722169a2" containerID="f0c16bd2a3660b20ac550315485247a49fdd58ecbdc0fd3acc52987525740e1e" exitCode=0 Jan 26 12:48:04 crc kubenswrapper[4844]: I0126 12:48:04.943785 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-djrt9" event={"ID":"637c7ba4-2cae-4d56-860f-ab82722169a2","Type":"ContainerDied","Data":"f0c16bd2a3660b20ac550315485247a49fdd58ecbdc0fd3acc52987525740e1e"} Jan 26 12:48:04 crc kubenswrapper[4844]: I0126 12:48:04.947454 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8hdq2" event={"ID":"d60e5f01-76f1-47a0-8a7d-390457ce1b47","Type":"ContainerStarted","Data":"eac84807bdc05230adf2521f712ba6368e54b87d69fc89a4b300dc23cdc751a6"} Jan 26 12:48:04 crc kubenswrapper[4844]: I0126 12:48:04.947493 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8hnm5" Jan 26 12:48:04 crc kubenswrapper[4844]: I0126 12:48:04.953910 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bnlhz"] Jan 26 12:48:04 crc kubenswrapper[4844]: I0126 12:48:04.954187 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bnlhz" podUID="8ddfeacb-de87-47d6-913e-6c2333a7df93" containerName="registry-server" containerID="cri-o://b9ab3a1cccafea6d60290bb7a2962bd94bb13e3c6bfb87ba6b432556d3d15082" gracePeriod=2 Jan 26 12:48:05 crc kubenswrapper[4844]: I0126 12:48:05.042307 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8hnm5"] Jan 26 12:48:05 crc kubenswrapper[4844]: I0126 12:48:05.062392 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8hnm5"] Jan 26 12:48:05 crc kubenswrapper[4844]: I0126 12:48:05.308103 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bnlhz" Jan 26 12:48:05 crc kubenswrapper[4844]: I0126 12:48:05.320301 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f204088-0679-4c31-bd2b-848fc4f93b21" path="/var/lib/kubelet/pods/1f204088-0679-4c31-bd2b-848fc4f93b21/volumes" Jan 26 12:48:05 crc kubenswrapper[4844]: I0126 12:48:05.390389 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-66f2t\" (UniqueName: \"kubernetes.io/projected/8ddfeacb-de87-47d6-913e-6c2333a7df93-kube-api-access-66f2t\") pod \"8ddfeacb-de87-47d6-913e-6c2333a7df93\" (UID: \"8ddfeacb-de87-47d6-913e-6c2333a7df93\") " Jan 26 12:48:05 crc kubenswrapper[4844]: I0126 12:48:05.390421 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ddfeacb-de87-47d6-913e-6c2333a7df93-catalog-content\") pod \"8ddfeacb-de87-47d6-913e-6c2333a7df93\" (UID: \"8ddfeacb-de87-47d6-913e-6c2333a7df93\") " Jan 26 12:48:05 crc kubenswrapper[4844]: I0126 12:48:05.390442 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ddfeacb-de87-47d6-913e-6c2333a7df93-utilities\") pod \"8ddfeacb-de87-47d6-913e-6c2333a7df93\" (UID: \"8ddfeacb-de87-47d6-913e-6c2333a7df93\") " Jan 26 12:48:05 crc kubenswrapper[4844]: I0126 12:48:05.391724 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ddfeacb-de87-47d6-913e-6c2333a7df93-utilities" (OuterVolumeSpecName: "utilities") pod "8ddfeacb-de87-47d6-913e-6c2333a7df93" (UID: "8ddfeacb-de87-47d6-913e-6c2333a7df93"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 12:48:05 crc kubenswrapper[4844]: I0126 12:48:05.396874 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ddfeacb-de87-47d6-913e-6c2333a7df93-kube-api-access-66f2t" (OuterVolumeSpecName: "kube-api-access-66f2t") pod "8ddfeacb-de87-47d6-913e-6c2333a7df93" (UID: "8ddfeacb-de87-47d6-913e-6c2333a7df93"). InnerVolumeSpecName "kube-api-access-66f2t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:48:05 crc kubenswrapper[4844]: I0126 12:48:05.454511 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ddfeacb-de87-47d6-913e-6c2333a7df93-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8ddfeacb-de87-47d6-913e-6c2333a7df93" (UID: "8ddfeacb-de87-47d6-913e-6c2333a7df93"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 12:48:05 crc kubenswrapper[4844]: I0126 12:48:05.491656 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-66f2t\" (UniqueName: \"kubernetes.io/projected/8ddfeacb-de87-47d6-913e-6c2333a7df93-kube-api-access-66f2t\") on node \"crc\" DevicePath \"\"" Jan 26 12:48:05 crc kubenswrapper[4844]: I0126 12:48:05.491708 4844 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ddfeacb-de87-47d6-913e-6c2333a7df93-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 12:48:05 crc kubenswrapper[4844]: I0126 12:48:05.491727 4844 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ddfeacb-de87-47d6-913e-6c2333a7df93-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 12:48:05 crc kubenswrapper[4844]: I0126 12:48:05.954119 4844 generic.go:334] "Generic (PLEG): container finished" podID="d60e5f01-76f1-47a0-8a7d-390457ce1b47" containerID="eac84807bdc05230adf2521f712ba6368e54b87d69fc89a4b300dc23cdc751a6" exitCode=0 Jan 26 12:48:05 crc kubenswrapper[4844]: I0126 12:48:05.954189 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8hdq2" event={"ID":"d60e5f01-76f1-47a0-8a7d-390457ce1b47","Type":"ContainerDied","Data":"eac84807bdc05230adf2521f712ba6368e54b87d69fc89a4b300dc23cdc751a6"} Jan 26 12:48:05 crc kubenswrapper[4844]: I0126 12:48:05.956968 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dn4m8" event={"ID":"2a0ca290-d48e-4c46-8c36-1e414126c42f","Type":"ContainerStarted","Data":"e047dab01636f159be3820152efe95dd0eb0388b17cdab1c934078e59efc60a0"} Jan 26 12:48:05 crc kubenswrapper[4844]: I0126 12:48:05.961528 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lhjls" event={"ID":"a37a9c59-7c20-4326-b280-9dbd2d633e0b","Type":"ContainerStarted","Data":"42a78e03542d65f23fc8a5831e890c81922e19014aacd781c69a43ce23f71f5f"} Jan 26 12:48:05 crc kubenswrapper[4844]: I0126 12:48:05.967347 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-djrt9" event={"ID":"637c7ba4-2cae-4d56-860f-ab82722169a2","Type":"ContainerStarted","Data":"85bb5de5b055d83bd1e007d1bac7699f58e8bb5785ec40e961cd2624a3a35964"} Jan 26 12:48:05 crc kubenswrapper[4844]: I0126 12:48:05.972765 4844 generic.go:334] "Generic (PLEG): container finished" podID="8ddfeacb-de87-47d6-913e-6c2333a7df93" containerID="b9ab3a1cccafea6d60290bb7a2962bd94bb13e3c6bfb87ba6b432556d3d15082" exitCode=0 Jan 26 12:48:05 crc kubenswrapper[4844]: I0126 12:48:05.972814 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bnlhz" event={"ID":"8ddfeacb-de87-47d6-913e-6c2333a7df93","Type":"ContainerDied","Data":"b9ab3a1cccafea6d60290bb7a2962bd94bb13e3c6bfb87ba6b432556d3d15082"} Jan 26 12:48:05 crc kubenswrapper[4844]: I0126 12:48:05.972843 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-bnlhz" event={"ID":"8ddfeacb-de87-47d6-913e-6c2333a7df93","Type":"ContainerDied","Data":"12509affa0fe7bc7b7696d3a27634ab4649132cec28a469a9190565664f61d54"} Jan 26 12:48:05 crc kubenswrapper[4844]: I0126 12:48:05.972850 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bnlhz" Jan 26 12:48:05 crc kubenswrapper[4844]: I0126 12:48:05.972865 4844 scope.go:117] "RemoveContainer" containerID="b9ab3a1cccafea6d60290bb7a2962bd94bb13e3c6bfb87ba6b432556d3d15082" Jan 26 12:48:05 crc kubenswrapper[4844]: I0126 12:48:05.996519 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-djrt9" podStartSLOduration=8.842147664 podStartE2EDuration="1m34.996494888s" podCreationTimestamp="2026-01-26 12:46:31 +0000 UTC" firstStartedPulling="2026-01-26 12:46:39.233623852 +0000 UTC m=+176.166991464" lastFinishedPulling="2026-01-26 12:48:05.387971076 +0000 UTC m=+262.321338688" observedRunningTime="2026-01-26 12:48:05.994276863 +0000 UTC m=+262.927644475" watchObservedRunningTime="2026-01-26 12:48:05.996494888 +0000 UTC m=+262.929862520" Jan 26 12:48:06 crc kubenswrapper[4844]: I0126 12:48:06.035802 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dn4m8" podStartSLOduration=6.90097387 podStartE2EDuration="1m33.035782289s" podCreationTimestamp="2026-01-26 12:46:33 +0000 UTC" firstStartedPulling="2026-01-26 12:46:39.196482637 +0000 UTC m=+176.129850249" lastFinishedPulling="2026-01-26 12:48:05.331291056 +0000 UTC m=+262.264658668" observedRunningTime="2026-01-26 12:48:06.014454862 +0000 UTC m=+262.947822474" watchObservedRunningTime="2026-01-26 12:48:06.035782289 +0000 UTC m=+262.969149891" Jan 26 12:48:06 crc kubenswrapper[4844]: I0126 12:48:06.046169 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-lhjls" podStartSLOduration=10.794315011 podStartE2EDuration="1m37.046148935s" podCreationTimestamp="2026-01-26 12:46:29 +0000 UTC" firstStartedPulling="2026-01-26 12:46:39.189218794 +0000 UTC m=+176.122586406" lastFinishedPulling="2026-01-26 12:48:05.441052718 +0000 UTC m=+262.374420330" observedRunningTime="2026-01-26 12:48:06.036434234 +0000 UTC m=+262.969801866" watchObservedRunningTime="2026-01-26 12:48:06.046148935 +0000 UTC m=+262.979516547" Jan 26 12:48:06 crc kubenswrapper[4844]: I0126 12:48:06.048975 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bnlhz"] Jan 26 12:48:06 crc kubenswrapper[4844]: I0126 12:48:06.053410 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-bnlhz"] Jan 26 12:48:06 crc kubenswrapper[4844]: I0126 12:48:06.585776 4844 scope.go:117] "RemoveContainer" containerID="e1b9f9fe590059a479719c9d04f0c25c441d4432edcfd4db0115e2bf17679d94" Jan 26 12:48:06 crc kubenswrapper[4844]: I0126 12:48:06.605787 4844 scope.go:117] "RemoveContainer" containerID="06f2cea9bc9ade7d2c232187e6bbf792b20d8e3c442b073f0f685cdcdd43972d" Jan 26 12:48:06 crc kubenswrapper[4844]: I0126 12:48:06.621469 4844 scope.go:117] "RemoveContainer" containerID="b9ab3a1cccafea6d60290bb7a2962bd94bb13e3c6bfb87ba6b432556d3d15082" Jan 26 12:48:06 crc kubenswrapper[4844]: E0126 12:48:06.621984 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"b9ab3a1cccafea6d60290bb7a2962bd94bb13e3c6bfb87ba6b432556d3d15082\": container with ID starting with b9ab3a1cccafea6d60290bb7a2962bd94bb13e3c6bfb87ba6b432556d3d15082 not found: ID does not exist" containerID="b9ab3a1cccafea6d60290bb7a2962bd94bb13e3c6bfb87ba6b432556d3d15082" Jan 26 12:48:06 crc kubenswrapper[4844]: I0126 12:48:06.622020 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9ab3a1cccafea6d60290bb7a2962bd94bb13e3c6bfb87ba6b432556d3d15082"} err="failed to get container status \"b9ab3a1cccafea6d60290bb7a2962bd94bb13e3c6bfb87ba6b432556d3d15082\": rpc error: code = NotFound desc = could not find container \"b9ab3a1cccafea6d60290bb7a2962bd94bb13e3c6bfb87ba6b432556d3d15082\": container with ID starting with b9ab3a1cccafea6d60290bb7a2962bd94bb13e3c6bfb87ba6b432556d3d15082 not found: ID does not exist" Jan 26 12:48:06 crc kubenswrapper[4844]: I0126 12:48:06.622044 4844 scope.go:117] "RemoveContainer" containerID="e1b9f9fe590059a479719c9d04f0c25c441d4432edcfd4db0115e2bf17679d94" Jan 26 12:48:06 crc kubenswrapper[4844]: E0126 12:48:06.622357 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1b9f9fe590059a479719c9d04f0c25c441d4432edcfd4db0115e2bf17679d94\": container with ID starting with e1b9f9fe590059a479719c9d04f0c25c441d4432edcfd4db0115e2bf17679d94 not found: ID does not exist" containerID="e1b9f9fe590059a479719c9d04f0c25c441d4432edcfd4db0115e2bf17679d94" Jan 26 12:48:06 crc kubenswrapper[4844]: I0126 12:48:06.622383 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1b9f9fe590059a479719c9d04f0c25c441d4432edcfd4db0115e2bf17679d94"} err="failed to get container status \"e1b9f9fe590059a479719c9d04f0c25c441d4432edcfd4db0115e2bf17679d94\": rpc error: code = NotFound desc = could not find container \"e1b9f9fe590059a479719c9d04f0c25c441d4432edcfd4db0115e2bf17679d94\": container with ID starting with e1b9f9fe590059a479719c9d04f0c25c441d4432edcfd4db0115e2bf17679d94 not found: ID does not exist" Jan 26 12:48:06 crc kubenswrapper[4844]: I0126 12:48:06.622396 4844 scope.go:117] "RemoveContainer" containerID="06f2cea9bc9ade7d2c232187e6bbf792b20d8e3c442b073f0f685cdcdd43972d" Jan 26 12:48:06 crc kubenswrapper[4844]: E0126 12:48:06.622808 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06f2cea9bc9ade7d2c232187e6bbf792b20d8e3c442b073f0f685cdcdd43972d\": container with ID starting with 06f2cea9bc9ade7d2c232187e6bbf792b20d8e3c442b073f0f685cdcdd43972d not found: ID does not exist" containerID="06f2cea9bc9ade7d2c232187e6bbf792b20d8e3c442b073f0f685cdcdd43972d" Jan 26 12:48:06 crc kubenswrapper[4844]: I0126 12:48:06.622841 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06f2cea9bc9ade7d2c232187e6bbf792b20d8e3c442b073f0f685cdcdd43972d"} err="failed to get container status \"06f2cea9bc9ade7d2c232187e6bbf792b20d8e3c442b073f0f685cdcdd43972d\": rpc error: code = NotFound desc = could not find container \"06f2cea9bc9ade7d2c232187e6bbf792b20d8e3c442b073f0f685cdcdd43972d\": container with ID starting with 06f2cea9bc9ade7d2c232187e6bbf792b20d8e3c442b073f0f685cdcdd43972d not found: ID does not exist" Jan 26 12:48:06 crc kubenswrapper[4844]: I0126 12:48:06.980959 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8hdq2" 
event={"ID":"d60e5f01-76f1-47a0-8a7d-390457ce1b47","Type":"ContainerStarted","Data":"1065967f7b2abda19bab9f01f363f18504bd76dc4ee78f25e51a2db69e0423b7"} Jan 26 12:48:06 crc kubenswrapper[4844]: I0126 12:48:06.998449 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8hdq2" podStartSLOduration=7.467666259 podStartE2EDuration="1m34.998428874s" podCreationTimestamp="2026-01-26 12:46:32 +0000 UTC" firstStartedPulling="2026-01-26 12:46:39.193977423 +0000 UTC m=+176.127345025" lastFinishedPulling="2026-01-26 12:48:06.724740038 +0000 UTC m=+263.658107640" observedRunningTime="2026-01-26 12:48:06.996229979 +0000 UTC m=+263.929597601" watchObservedRunningTime="2026-01-26 12:48:06.998428874 +0000 UTC m=+263.931796496" Jan 26 12:48:07 crc kubenswrapper[4844]: I0126 12:48:07.323910 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ddfeacb-de87-47d6-913e-6c2333a7df93" path="/var/lib/kubelet/pods/8ddfeacb-de87-47d6-913e-6c2333a7df93/volumes" Jan 26 12:48:10 crc kubenswrapper[4844]: I0126 12:48:10.254383 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-lhjls" Jan 26 12:48:10 crc kubenswrapper[4844]: I0126 12:48:10.255807 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-lhjls" Jan 26 12:48:10 crc kubenswrapper[4844]: I0126 12:48:10.305075 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-lhjls" Jan 26 12:48:11 crc kubenswrapper[4844]: I0126 12:48:11.062986 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-lhjls" Jan 26 12:48:12 crc kubenswrapper[4844]: I0126 12:48:12.458380 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-djrt9" Jan 26 12:48:12 crc kubenswrapper[4844]: I0126 12:48:12.458835 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-djrt9" Jan 26 12:48:12 crc kubenswrapper[4844]: I0126 12:48:12.540623 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-djrt9" Jan 26 12:48:13 crc kubenswrapper[4844]: I0126 12:48:13.090849 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-djrt9" Jan 26 12:48:13 crc kubenswrapper[4844]: I0126 12:48:13.260526 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8hdq2" Jan 26 12:48:13 crc kubenswrapper[4844]: I0126 12:48:13.260939 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8hdq2" Jan 26 12:48:13 crc kubenswrapper[4844]: I0126 12:48:13.328239 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8hdq2" Jan 26 12:48:13 crc kubenswrapper[4844]: I0126 12:48:13.670907 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-dn4m8" Jan 26 12:48:13 crc kubenswrapper[4844]: I0126 12:48:13.670973 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dn4m8" Jan 26 12:48:13 crc kubenswrapper[4844]: I0126 12:48:13.723456 4844 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dn4m8" Jan 26 12:48:14 crc kubenswrapper[4844]: I0126 12:48:14.077503 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dn4m8" Jan 26 12:48:14 crc kubenswrapper[4844]: I0126 12:48:14.091550 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8hdq2" Jan 26 12:48:15 crc kubenswrapper[4844]: I0126 12:48:15.554695 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dn4m8"] Jan 26 12:48:16 crc kubenswrapper[4844]: I0126 12:48:16.036560 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-dn4m8" podUID="2a0ca290-d48e-4c46-8c36-1e414126c42f" containerName="registry-server" containerID="cri-o://e047dab01636f159be3820152efe95dd0eb0388b17cdab1c934078e59efc60a0" gracePeriod=2 Jan 26 12:48:27 crc kubenswrapper[4844]: I0126 12:48:23.187151 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-dn4m8_2a0ca290-d48e-4c46-8c36-1e414126c42f/registry-server/0.log" Jan 26 12:48:27 crc kubenswrapper[4844]: I0126 12:48:23.188576 4844 generic.go:334] "Generic (PLEG): container finished" podID="2a0ca290-d48e-4c46-8c36-1e414126c42f" containerID="e047dab01636f159be3820152efe95dd0eb0388b17cdab1c934078e59efc60a0" exitCode=137 Jan 26 12:48:27 crc kubenswrapper[4844]: I0126 12:48:23.188636 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dn4m8" event={"ID":"2a0ca290-d48e-4c46-8c36-1e414126c42f","Type":"ContainerDied","Data":"e047dab01636f159be3820152efe95dd0eb0388b17cdab1c934078e59efc60a0"} Jan 26 12:48:27 crc kubenswrapper[4844]: E0126 12:48:23.671365 4844 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e047dab01636f159be3820152efe95dd0eb0388b17cdab1c934078e59efc60a0 is running failed: container process not found" containerID="e047dab01636f159be3820152efe95dd0eb0388b17cdab1c934078e59efc60a0" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 12:48:27 crc kubenswrapper[4844]: E0126 12:48:23.672685 4844 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e047dab01636f159be3820152efe95dd0eb0388b17cdab1c934078e59efc60a0 is running failed: container process not found" containerID="e047dab01636f159be3820152efe95dd0eb0388b17cdab1c934078e59efc60a0" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 12:48:27 crc kubenswrapper[4844]: E0126 12:48:23.673349 4844 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e047dab01636f159be3820152efe95dd0eb0388b17cdab1c934078e59efc60a0 is running failed: container process not found" containerID="e047dab01636f159be3820152efe95dd0eb0388b17cdab1c934078e59efc60a0" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 12:48:27 crc kubenswrapper[4844]: E0126 12:48:23.673385 4844 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e047dab01636f159be3820152efe95dd0eb0388b17cdab1c934078e59efc60a0 is running failed: container process not found" probeType="Readiness" 
pod="openshift-marketplace/redhat-operators-dn4m8" podUID="2a0ca290-d48e-4c46-8c36-1e414126c42f" containerName="registry-server" Jan 26 12:48:27 crc kubenswrapper[4844]: I0126 12:48:24.172672 4844 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 26 12:48:27 crc kubenswrapper[4844]: E0126 12:48:24.173070 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ddfeacb-de87-47d6-913e-6c2333a7df93" containerName="extract-utilities" Jan 26 12:48:27 crc kubenswrapper[4844]: I0126 12:48:24.173087 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ddfeacb-de87-47d6-913e-6c2333a7df93" containerName="extract-utilities" Jan 26 12:48:27 crc kubenswrapper[4844]: E0126 12:48:24.173099 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ddfeacb-de87-47d6-913e-6c2333a7df93" containerName="extract-content" Jan 26 12:48:27 crc kubenswrapper[4844]: I0126 12:48:24.173106 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ddfeacb-de87-47d6-913e-6c2333a7df93" containerName="extract-content" Jan 26 12:48:27 crc kubenswrapper[4844]: E0126 12:48:24.173120 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="354b9578-ac43-4a15-831f-d6ae0bc5c449" containerName="registry-server" Jan 26 12:48:27 crc kubenswrapper[4844]: I0126 12:48:24.173127 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="354b9578-ac43-4a15-831f-d6ae0bc5c449" containerName="registry-server" Jan 26 12:48:27 crc kubenswrapper[4844]: E0126 12:48:24.173138 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f204088-0679-4c31-bd2b-848fc4f93b21" containerName="extract-utilities" Jan 26 12:48:27 crc kubenswrapper[4844]: I0126 12:48:24.173144 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f204088-0679-4c31-bd2b-848fc4f93b21" containerName="extract-utilities" Jan 26 12:48:27 crc kubenswrapper[4844]: E0126 12:48:24.173154 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f204088-0679-4c31-bd2b-848fc4f93b21" containerName="extract-content" Jan 26 12:48:27 crc kubenswrapper[4844]: I0126 12:48:24.173161 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f204088-0679-4c31-bd2b-848fc4f93b21" containerName="extract-content" Jan 26 12:48:27 crc kubenswrapper[4844]: E0126 12:48:24.173167 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ddfeacb-de87-47d6-913e-6c2333a7df93" containerName="registry-server" Jan 26 12:48:27 crc kubenswrapper[4844]: I0126 12:48:24.173174 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ddfeacb-de87-47d6-913e-6c2333a7df93" containerName="registry-server" Jan 26 12:48:27 crc kubenswrapper[4844]: E0126 12:48:24.173183 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c47ef90d-e345-4eee-ba48-ed2e46f12668" containerName="pruner" Jan 26 12:48:27 crc kubenswrapper[4844]: I0126 12:48:24.173189 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="c47ef90d-e345-4eee-ba48-ed2e46f12668" containerName="pruner" Jan 26 12:48:27 crc kubenswrapper[4844]: E0126 12:48:24.173205 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f204088-0679-4c31-bd2b-848fc4f93b21" containerName="registry-server" Jan 26 12:48:27 crc kubenswrapper[4844]: I0126 12:48:24.173212 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f204088-0679-4c31-bd2b-848fc4f93b21" containerName="registry-server" Jan 26 12:48:27 crc kubenswrapper[4844]: E0126 12:48:24.173221 4844 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="354b9578-ac43-4a15-831f-d6ae0bc5c449" containerName="extract-content" Jan 26 12:48:27 crc kubenswrapper[4844]: I0126 12:48:24.173228 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="354b9578-ac43-4a15-831f-d6ae0bc5c449" containerName="extract-content" Jan 26 12:48:27 crc kubenswrapper[4844]: E0126 12:48:24.173240 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="354b9578-ac43-4a15-831f-d6ae0bc5c449" containerName="extract-utilities" Jan 26 12:48:27 crc kubenswrapper[4844]: I0126 12:48:24.173248 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="354b9578-ac43-4a15-831f-d6ae0bc5c449" containerName="extract-utilities" Jan 26 12:48:27 crc kubenswrapper[4844]: I0126 12:48:24.173359 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="c47ef90d-e345-4eee-ba48-ed2e46f12668" containerName="pruner" Jan 26 12:48:27 crc kubenswrapper[4844]: I0126 12:48:24.173372 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f204088-0679-4c31-bd2b-848fc4f93b21" containerName="registry-server" Jan 26 12:48:27 crc kubenswrapper[4844]: I0126 12:48:24.173387 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ddfeacb-de87-47d6-913e-6c2333a7df93" containerName="registry-server" Jan 26 12:48:27 crc kubenswrapper[4844]: I0126 12:48:24.173397 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="354b9578-ac43-4a15-831f-d6ae0bc5c449" containerName="registry-server" Jan 26 12:48:27 crc kubenswrapper[4844]: I0126 12:48:24.174210 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 12:48:27 crc kubenswrapper[4844]: I0126 12:48:24.174521 4844 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 26 12:48:27 crc kubenswrapper[4844]: I0126 12:48:24.175188 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://9207454bb697abf7b64a37e402b50734ae419605cd0938b514ac4f2a74561ce2" gracePeriod=15 Jan 26 12:48:27 crc kubenswrapper[4844]: I0126 12:48:24.175280 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://e33100e90b0d5a23baf7c27076a19dbca18d3e067be6b5a867e122fac37c1136" gracePeriod=15 Jan 26 12:48:27 crc kubenswrapper[4844]: I0126 12:48:24.175447 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://b40515c0e8056b01e4d48e259ece9389fbb5db6c0d403f2846f28948ce78b0b7" gracePeriod=15 Jan 26 12:48:27 crc kubenswrapper[4844]: I0126 12:48:24.175452 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://64ae899d3a38090964f3d951fd185537ce4ef75f7512342da01f67767d00417b" gracePeriod=15 Jan 26 12:48:27 crc kubenswrapper[4844]: I0126 12:48:24.175589 4844 kuberuntime_container.go:808] "Killing container with 
a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://64c5d1e4b1fda825b451c8acde0411694fbf7345723d6c3927905a882a79d62e" gracePeriod=15 Jan 26 12:48:27 crc kubenswrapper[4844]: I0126 12:48:24.178552 4844 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 26 12:48:27 crc kubenswrapper[4844]: E0126 12:48:24.178750 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 26 12:48:27 crc kubenswrapper[4844]: I0126 12:48:24.178764 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 26 12:48:27 crc kubenswrapper[4844]: E0126 12:48:24.178778 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 26 12:48:27 crc kubenswrapper[4844]: I0126 12:48:24.178788 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 26 12:48:27 crc kubenswrapper[4844]: E0126 12:48:24.178800 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 26 12:48:27 crc kubenswrapper[4844]: I0126 12:48:24.178807 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 26 12:48:27 crc kubenswrapper[4844]: E0126 12:48:24.178817 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 26 12:48:27 crc kubenswrapper[4844]: I0126 12:48:24.178823 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 26 12:48:27 crc kubenswrapper[4844]: E0126 12:48:24.178831 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 26 12:48:27 crc kubenswrapper[4844]: I0126 12:48:24.178838 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 26 12:48:27 crc kubenswrapper[4844]: E0126 12:48:24.178851 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 26 12:48:27 crc kubenswrapper[4844]: I0126 12:48:24.178859 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 26 12:48:27 crc kubenswrapper[4844]: I0126 12:48:24.179006 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 26 12:48:27 crc kubenswrapper[4844]: I0126 12:48:24.179023 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 26 12:48:27 crc kubenswrapper[4844]: I0126 12:48:24.179033 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 26 12:48:27 crc kubenswrapper[4844]: I0126 12:48:24.179043 4844 
memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 26 12:48:27 crc kubenswrapper[4844]: I0126 12:48:24.179053 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 26 12:48:27 crc kubenswrapper[4844]: I0126 12:48:24.235448 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 26 12:48:27 crc kubenswrapper[4844]: I0126 12:48:24.253493 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 12:48:27 crc kubenswrapper[4844]: I0126 12:48:24.253561 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 12:48:27 crc kubenswrapper[4844]: I0126 12:48:24.253666 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 12:48:27 crc kubenswrapper[4844]: I0126 12:48:24.253693 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 12:48:27 crc kubenswrapper[4844]: I0126 12:48:24.253729 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 12:48:27 crc kubenswrapper[4844]: I0126 12:48:24.253929 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 12:48:27 crc kubenswrapper[4844]: I0126 12:48:24.254004 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 12:48:27 crc kubenswrapper[4844]: I0126 12:48:24.254053 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 12:48:27 crc kubenswrapper[4844]: I0126 12:48:24.355193 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 12:48:27 crc kubenswrapper[4844]: I0126 12:48:24.355253 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 12:48:27 crc kubenswrapper[4844]: I0126 12:48:24.355293 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 12:48:27 crc kubenswrapper[4844]: I0126 12:48:24.355311 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 12:48:27 crc kubenswrapper[4844]: I0126 12:48:24.355336 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 12:48:27 crc kubenswrapper[4844]: I0126 12:48:24.355371 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 12:48:27 crc kubenswrapper[4844]: I0126 12:48:24.355363 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 12:48:27 crc kubenswrapper[4844]: I0126 12:48:24.355425 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 12:48:27 crc kubenswrapper[4844]: I0126 12:48:24.355387 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 12:48:27 crc kubenswrapper[4844]: I0126 12:48:24.355388 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 12:48:27 crc kubenswrapper[4844]: I0126 12:48:24.355388 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 12:48:27 crc kubenswrapper[4844]: I0126 12:48:24.355426 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 12:48:27 crc kubenswrapper[4844]: I0126 12:48:24.355426 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 12:48:27 crc kubenswrapper[4844]: I0126 12:48:24.355504 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 12:48:27 crc kubenswrapper[4844]: I0126 12:48:24.355620 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 12:48:27 crc kubenswrapper[4844]: I0126 12:48:24.355573 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 12:48:27 crc kubenswrapper[4844]: I0126 12:48:24.481922 4844 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:6443/readyz\": dial tcp 192.168.126.11:6443: connect: connection refused" start-of-body= Jan 26 12:48:27 crc kubenswrapper[4844]: I0126 12:48:24.482393 4844 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="Get 
\"https://192.168.126.11:6443/readyz\": dial tcp 192.168.126.11:6443: connect: connection refused" Jan 26 12:48:27 crc kubenswrapper[4844]: I0126 12:48:24.529431 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 12:48:27 crc kubenswrapper[4844]: W0126 12:48:24.553267 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-219041e378362b4a6e1cd052c545b343ddad8a8c8c9c45e8363fbdff2ca177a9 WatchSource:0}: Error finding container 219041e378362b4a6e1cd052c545b343ddad8a8c8c9c45e8363fbdff2ca177a9: Status 404 returned error can't find the container with id 219041e378362b4a6e1cd052c545b343ddad8a8c8c9c45e8363fbdff2ca177a9 Jan 26 12:48:27 crc kubenswrapper[4844]: E0126 12:48:24.556394 4844 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.142:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188e48c2ac5cb359 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 12:48:24.555795289 +0000 UTC m=+281.489162901,LastTimestamp:2026-01-26 12:48:24.555795289 +0000 UTC m=+281.489162901,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 12:48:27 crc kubenswrapper[4844]: I0126 12:48:24.825234 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-dn4m8_2a0ca290-d48e-4c46-8c36-1e414126c42f/registry-server/0.log" Jan 26 12:48:27 crc kubenswrapper[4844]: I0126 12:48:24.826531 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dn4m8" Jan 26 12:48:27 crc kubenswrapper[4844]: I0126 12:48:24.827269 4844 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.142:6443: connect: connection refused" Jan 26 12:48:27 crc kubenswrapper[4844]: I0126 12:48:24.827953 4844 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.142:6443: connect: connection refused" Jan 26 12:48:27 crc kubenswrapper[4844]: I0126 12:48:24.828471 4844 status_manager.go:851] "Failed to get status for pod" podUID="2a0ca290-d48e-4c46-8c36-1e414126c42f" pod="openshift-marketplace/redhat-operators-dn4m8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-dn4m8\": dial tcp 38.102.83.142:6443: connect: connection refused" Jan 26 12:48:27 crc kubenswrapper[4844]: I0126 12:48:24.962810 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a0ca290-d48e-4c46-8c36-1e414126c42f-catalog-content\") pod \"2a0ca290-d48e-4c46-8c36-1e414126c42f\" (UID: \"2a0ca290-d48e-4c46-8c36-1e414126c42f\") " Jan 26 12:48:27 crc kubenswrapper[4844]: I0126 12:48:24.962918 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a0ca290-d48e-4c46-8c36-1e414126c42f-utilities\") pod \"2a0ca290-d48e-4c46-8c36-1e414126c42f\" (UID: \"2a0ca290-d48e-4c46-8c36-1e414126c42f\") " Jan 26 12:48:27 crc kubenswrapper[4844]: I0126 12:48:24.963008 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h8hw6\" (UniqueName: \"kubernetes.io/projected/2a0ca290-d48e-4c46-8c36-1e414126c42f-kube-api-access-h8hw6\") pod \"2a0ca290-d48e-4c46-8c36-1e414126c42f\" (UID: \"2a0ca290-d48e-4c46-8c36-1e414126c42f\") " Jan 26 12:48:27 crc kubenswrapper[4844]: I0126 12:48:24.964222 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a0ca290-d48e-4c46-8c36-1e414126c42f-utilities" (OuterVolumeSpecName: "utilities") pod "2a0ca290-d48e-4c46-8c36-1e414126c42f" (UID: "2a0ca290-d48e-4c46-8c36-1e414126c42f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 12:48:27 crc kubenswrapper[4844]: I0126 12:48:24.970870 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a0ca290-d48e-4c46-8c36-1e414126c42f-kube-api-access-h8hw6" (OuterVolumeSpecName: "kube-api-access-h8hw6") pod "2a0ca290-d48e-4c46-8c36-1e414126c42f" (UID: "2a0ca290-d48e-4c46-8c36-1e414126c42f"). InnerVolumeSpecName "kube-api-access-h8hw6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:48:27 crc kubenswrapper[4844]: I0126 12:48:25.064731 4844 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a0ca290-d48e-4c46-8c36-1e414126c42f-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 12:48:27 crc kubenswrapper[4844]: I0126 12:48:25.064767 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h8hw6\" (UniqueName: \"kubernetes.io/projected/2a0ca290-d48e-4c46-8c36-1e414126c42f-kube-api-access-h8hw6\") on node \"crc\" DevicePath \"\"" Jan 26 12:48:27 crc kubenswrapper[4844]: I0126 12:48:25.215431 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 26 12:48:27 crc kubenswrapper[4844]: I0126 12:48:25.216431 4844 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="64ae899d3a38090964f3d951fd185537ce4ef75f7512342da01f67767d00417b" exitCode=2 Jan 26 12:48:27 crc kubenswrapper[4844]: I0126 12:48:25.217938 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"219041e378362b4a6e1cd052c545b343ddad8a8c8c9c45e8363fbdff2ca177a9"} Jan 26 12:48:27 crc kubenswrapper[4844]: I0126 12:48:25.219924 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-dn4m8_2a0ca290-d48e-4c46-8c36-1e414126c42f/registry-server/0.log" Jan 26 12:48:27 crc kubenswrapper[4844]: I0126 12:48:25.221398 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dn4m8" event={"ID":"2a0ca290-d48e-4c46-8c36-1e414126c42f","Type":"ContainerDied","Data":"e95a1af869504bec027d5bd3be38c143eaf95185af0cf44b85ac7e3541cc025b"} Jan 26 12:48:27 crc kubenswrapper[4844]: I0126 12:48:25.221443 4844 scope.go:117] "RemoveContainer" containerID="e047dab01636f159be3820152efe95dd0eb0388b17cdab1c934078e59efc60a0" Jan 26 12:48:27 crc kubenswrapper[4844]: I0126 12:48:25.221531 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dn4m8" Jan 26 12:48:27 crc kubenswrapper[4844]: I0126 12:48:25.222406 4844 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.142:6443: connect: connection refused" Jan 26 12:48:27 crc kubenswrapper[4844]: I0126 12:48:25.222960 4844 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.142:6443: connect: connection refused" Jan 26 12:48:27 crc kubenswrapper[4844]: I0126 12:48:25.224244 4844 status_manager.go:851] "Failed to get status for pod" podUID="2a0ca290-d48e-4c46-8c36-1e414126c42f" pod="openshift-marketplace/redhat-operators-dn4m8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-dn4m8\": dial tcp 38.102.83.142:6443: connect: connection refused" Jan 26 12:48:27 crc kubenswrapper[4844]: I0126 12:48:25.241080 4844 scope.go:117] "RemoveContainer" containerID="69817e3c002e1abb40acd7a079830cf2cdc351e0e4fa0de59899cf432f03bd45" Jan 26 12:48:27 crc kubenswrapper[4844]: I0126 12:48:25.260149 4844 scope.go:117] "RemoveContainer" containerID="bfe63e859e48fcf824a06504bfc9bcf6807a460ed1665a325ef8ddb893ad001f" Jan 26 12:48:27 crc kubenswrapper[4844]: I0126 12:48:25.292026 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a0ca290-d48e-4c46-8c36-1e414126c42f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2a0ca290-d48e-4c46-8c36-1e414126c42f" (UID: "2a0ca290-d48e-4c46-8c36-1e414126c42f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 12:48:27 crc kubenswrapper[4844]: I0126 12:48:25.368031 4844 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a0ca290-d48e-4c46-8c36-1e414126c42f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 12:48:27 crc kubenswrapper[4844]: I0126 12:48:25.525899 4844 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.142:6443: connect: connection refused" Jan 26 12:48:27 crc kubenswrapper[4844]: I0126 12:48:25.526402 4844 status_manager.go:851] "Failed to get status for pod" podUID="2a0ca290-d48e-4c46-8c36-1e414126c42f" pod="openshift-marketplace/redhat-operators-dn4m8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-dn4m8\": dial tcp 38.102.83.142:6443: connect: connection refused" Jan 26 12:48:27 crc kubenswrapper[4844]: I0126 12:48:27.257101 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 26 12:48:27 crc kubenswrapper[4844]: I0126 12:48:27.258537 4844 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="9207454bb697abf7b64a37e402b50734ae419605cd0938b514ac4f2a74561ce2" exitCode=0 Jan 26 12:48:27 crc kubenswrapper[4844]: I0126 12:48:27.258563 4844 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b40515c0e8056b01e4d48e259ece9389fbb5db6c0d403f2846f28948ce78b0b7" exitCode=0 Jan 26 12:48:28 crc kubenswrapper[4844]: I0126 12:48:28.265251 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"5aa26b8e17d5c95e2b540b7cf1fafffdc854885737d228d00892f4c8f14a13fb"} Jan 26 12:48:28 crc kubenswrapper[4844]: I0126 12:48:28.265895 4844 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.142:6443: connect: connection refused" Jan 26 12:48:28 crc kubenswrapper[4844]: I0126 12:48:28.266287 4844 status_manager.go:851] "Failed to get status for pod" podUID="2a0ca290-d48e-4c46-8c36-1e414126c42f" pod="openshift-marketplace/redhat-operators-dn4m8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-dn4m8\": dial tcp 38.102.83.142:6443: connect: connection refused" Jan 26 12:48:28 crc kubenswrapper[4844]: I0126 12:48:28.269656 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 26 12:48:28 crc kubenswrapper[4844]: I0126 12:48:28.270373 4844 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="e33100e90b0d5a23baf7c27076a19dbca18d3e067be6b5a867e122fac37c1136" exitCode=0 Jan 26 12:48:28 crc kubenswrapper[4844]: 
I0126 12:48:28.270407 4844 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="64c5d1e4b1fda825b451c8acde0411694fbf7345723d6c3927905a882a79d62e" exitCode=0 Jan 26 12:48:28 crc kubenswrapper[4844]: I0126 12:48:28.272258 4844 generic.go:334] "Generic (PLEG): container finished" podID="940c9b8a-a28e-4fb7-be00-c2f6f4bba416" containerID="3b90114e3f9f4e26652be74878efc251bf062fb5c38d541a4f6389b5a2230361" exitCode=0 Jan 26 12:48:28 crc kubenswrapper[4844]: I0126 12:48:28.272337 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"940c9b8a-a28e-4fb7-be00-c2f6f4bba416","Type":"ContainerDied","Data":"3b90114e3f9f4e26652be74878efc251bf062fb5c38d541a4f6389b5a2230361"} Jan 26 12:48:28 crc kubenswrapper[4844]: I0126 12:48:28.273250 4844 status_manager.go:851] "Failed to get status for pod" podUID="940c9b8a-a28e-4fb7-be00-c2f6f4bba416" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.142:6443: connect: connection refused" Jan 26 12:48:28 crc kubenswrapper[4844]: I0126 12:48:28.273759 4844 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.142:6443: connect: connection refused" Jan 26 12:48:28 crc kubenswrapper[4844]: I0126 12:48:28.274265 4844 status_manager.go:851] "Failed to get status for pod" podUID="2a0ca290-d48e-4c46-8c36-1e414126c42f" pod="openshift-marketplace/redhat-operators-dn4m8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-dn4m8\": dial tcp 38.102.83.142:6443: connect: connection refused" Jan 26 12:48:28 crc kubenswrapper[4844]: I0126 12:48:28.680695 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 26 12:48:28 crc kubenswrapper[4844]: I0126 12:48:28.682142 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 12:48:28 crc kubenswrapper[4844]: I0126 12:48:28.682762 4844 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.142:6443: connect: connection refused" Jan 26 12:48:28 crc kubenswrapper[4844]: I0126 12:48:28.684847 4844 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.142:6443: connect: connection refused" Jan 26 12:48:28 crc kubenswrapper[4844]: I0126 12:48:28.685355 4844 status_manager.go:851] "Failed to get status for pod" podUID="2a0ca290-d48e-4c46-8c36-1e414126c42f" pod="openshift-marketplace/redhat-operators-dn4m8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-dn4m8\": dial tcp 38.102.83.142:6443: connect: connection refused" Jan 26 12:48:28 crc kubenswrapper[4844]: I0126 12:48:28.685701 4844 status_manager.go:851] "Failed to get status for pod" podUID="940c9b8a-a28e-4fb7-be00-c2f6f4bba416" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.142:6443: connect: connection refused" Jan 26 12:48:28 crc kubenswrapper[4844]: I0126 12:48:28.825433 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 26 12:48:28 crc kubenswrapper[4844]: I0126 12:48:28.825561 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 26 12:48:28 crc kubenswrapper[4844]: I0126 12:48:28.825633 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 26 12:48:28 crc kubenswrapper[4844]: I0126 12:48:28.826156 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 12:48:28 crc kubenswrapper[4844]: I0126 12:48:28.826195 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 12:48:28 crc kubenswrapper[4844]: I0126 12:48:28.826215 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 12:48:28 crc kubenswrapper[4844]: I0126 12:48:28.927567 4844 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 26 12:48:28 crc kubenswrapper[4844]: I0126 12:48:28.927640 4844 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 26 12:48:28 crc kubenswrapper[4844]: I0126 12:48:28.927654 4844 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 26 12:48:29 crc kubenswrapper[4844]: I0126 12:48:29.281430 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 26 12:48:29 crc kubenswrapper[4844]: I0126 12:48:29.283323 4844 scope.go:117] "RemoveContainer" containerID="9207454bb697abf7b64a37e402b50734ae419605cd0938b514ac4f2a74561ce2" Jan 26 12:48:29 crc kubenswrapper[4844]: I0126 12:48:29.283370 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 12:48:29 crc kubenswrapper[4844]: I0126 12:48:29.303357 4844 status_manager.go:851] "Failed to get status for pod" podUID="940c9b8a-a28e-4fb7-be00-c2f6f4bba416" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.142:6443: connect: connection refused" Jan 26 12:48:29 crc kubenswrapper[4844]: I0126 12:48:29.304130 4844 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.142:6443: connect: connection refused" Jan 26 12:48:29 crc kubenswrapper[4844]: I0126 12:48:29.304350 4844 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.142:6443: connect: connection refused" Jan 26 12:48:29 crc kubenswrapper[4844]: I0126 12:48:29.304572 4844 status_manager.go:851] "Failed to get status for pod" podUID="2a0ca290-d48e-4c46-8c36-1e414126c42f" pod="openshift-marketplace/redhat-operators-dn4m8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-dn4m8\": dial tcp 38.102.83.142:6443: connect: connection refused" Jan 26 12:48:29 crc kubenswrapper[4844]: I0126 12:48:29.324420 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 26 12:48:29 crc kubenswrapper[4844]: I0126 12:48:29.669197 4844 scope.go:117] "RemoveContainer" containerID="b40515c0e8056b01e4d48e259ece9389fbb5db6c0d403f2846f28948ce78b0b7" Jan 26 12:48:29 crc kubenswrapper[4844]: I0126 12:48:29.731098 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 26 12:48:29 crc kubenswrapper[4844]: I0126 12:48:29.732018 4844 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.142:6443: connect: connection refused" Jan 26 12:48:29 crc kubenswrapper[4844]: I0126 12:48:29.732271 4844 status_manager.go:851] "Failed to get status for pod" podUID="2a0ca290-d48e-4c46-8c36-1e414126c42f" pod="openshift-marketplace/redhat-operators-dn4m8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-dn4m8\": dial tcp 38.102.83.142:6443: connect: connection refused" Jan 26 12:48:29 crc kubenswrapper[4844]: I0126 12:48:29.732408 4844 scope.go:117] "RemoveContainer" containerID="e33100e90b0d5a23baf7c27076a19dbca18d3e067be6b5a867e122fac37c1136" Jan 26 12:48:29 crc kubenswrapper[4844]: I0126 12:48:29.732490 4844 status_manager.go:851] "Failed to get status for pod" podUID="940c9b8a-a28e-4fb7-be00-c2f6f4bba416" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.142:6443: connect: connection refused" Jan 26 12:48:29 crc kubenswrapper[4844]: I0126 12:48:29.743290 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/940c9b8a-a28e-4fb7-be00-c2f6f4bba416-var-lock\") pod \"940c9b8a-a28e-4fb7-be00-c2f6f4bba416\" (UID: \"940c9b8a-a28e-4fb7-be00-c2f6f4bba416\") " Jan 26 12:48:29 crc kubenswrapper[4844]: I0126 12:48:29.743436 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/940c9b8a-a28e-4fb7-be00-c2f6f4bba416-kube-api-access\") pod \"940c9b8a-a28e-4fb7-be00-c2f6f4bba416\" (UID: \"940c9b8a-a28e-4fb7-be00-c2f6f4bba416\") " Jan 26 12:48:29 crc kubenswrapper[4844]: I0126 12:48:29.743470 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/940c9b8a-a28e-4fb7-be00-c2f6f4bba416-kubelet-dir\") pod \"940c9b8a-a28e-4fb7-be00-c2f6f4bba416\" (UID: \"940c9b8a-a28e-4fb7-be00-c2f6f4bba416\") " Jan 26 12:48:29 crc kubenswrapper[4844]: I0126 12:48:29.743872 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/940c9b8a-a28e-4fb7-be00-c2f6f4bba416-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "940c9b8a-a28e-4fb7-be00-c2f6f4bba416" (UID: "940c9b8a-a28e-4fb7-be00-c2f6f4bba416"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 12:48:29 crc kubenswrapper[4844]: I0126 12:48:29.743916 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/940c9b8a-a28e-4fb7-be00-c2f6f4bba416-var-lock" (OuterVolumeSpecName: "var-lock") pod "940c9b8a-a28e-4fb7-be00-c2f6f4bba416" (UID: "940c9b8a-a28e-4fb7-be00-c2f6f4bba416"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 12:48:29 crc kubenswrapper[4844]: I0126 12:48:29.750309 4844 scope.go:117] "RemoveContainer" containerID="64ae899d3a38090964f3d951fd185537ce4ef75f7512342da01f67767d00417b" Jan 26 12:48:29 crc kubenswrapper[4844]: I0126 12:48:29.750996 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/940c9b8a-a28e-4fb7-be00-c2f6f4bba416-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "940c9b8a-a28e-4fb7-be00-c2f6f4bba416" (UID: "940c9b8a-a28e-4fb7-be00-c2f6f4bba416"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:48:29 crc kubenswrapper[4844]: I0126 12:48:29.773341 4844 scope.go:117] "RemoveContainer" containerID="64c5d1e4b1fda825b451c8acde0411694fbf7345723d6c3927905a882a79d62e" Jan 26 12:48:29 crc kubenswrapper[4844]: I0126 12:48:29.797783 4844 scope.go:117] "RemoveContainer" containerID="64a2d4054650382dca2f56688356aec5fa475b49958e86dd4597a64f77a82f81" Jan 26 12:48:29 crc kubenswrapper[4844]: I0126 12:48:29.845124 4844 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/940c9b8a-a28e-4fb7-be00-c2f6f4bba416-var-lock\") on node \"crc\" DevicePath \"\"" Jan 26 12:48:29 crc kubenswrapper[4844]: I0126 12:48:29.845159 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/940c9b8a-a28e-4fb7-be00-c2f6f4bba416-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 12:48:29 crc kubenswrapper[4844]: I0126 12:48:29.845174 4844 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/940c9b8a-a28e-4fb7-be00-c2f6f4bba416-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 26 12:48:30 crc kubenswrapper[4844]: I0126 12:48:30.294319 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 26 12:48:30 crc kubenswrapper[4844]: I0126 12:48:30.294464 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"940c9b8a-a28e-4fb7-be00-c2f6f4bba416","Type":"ContainerDied","Data":"40d7c018f65f7784d0eb153bc1c12bd9346229adf28c39b2d7d64ee3342edc00"} Jan 26 12:48:30 crc kubenswrapper[4844]: I0126 12:48:30.294496 4844 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="40d7c018f65f7784d0eb153bc1c12bd9346229adf28c39b2d7d64ee3342edc00" Jan 26 12:48:30 crc kubenswrapper[4844]: I0126 12:48:30.310460 4844 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.142:6443: connect: connection refused" Jan 26 12:48:30 crc kubenswrapper[4844]: I0126 12:48:30.311009 4844 status_manager.go:851] "Failed to get status for pod" podUID="2a0ca290-d48e-4c46-8c36-1e414126c42f" pod="openshift-marketplace/redhat-operators-dn4m8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-dn4m8\": dial tcp 38.102.83.142:6443: connect: connection refused" Jan 26 12:48:30 crc kubenswrapper[4844]: I0126 12:48:30.311912 4844 status_manager.go:851] "Failed to get status for pod" podUID="940c9b8a-a28e-4fb7-be00-c2f6f4bba416" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.142:6443: connect: connection refused" Jan 26 12:48:30 crc kubenswrapper[4844]: E0126 12:48:30.585748 4844 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:48:30Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:48:30Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:48:30Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T12:48:30Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.142:6443: connect: connection refused" Jan 26 12:48:30 crc kubenswrapper[4844]: E0126 12:48:30.586330 4844 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.142:6443: connect: connection refused" Jan 26 12:48:30 crc kubenswrapper[4844]: E0126 12:48:30.586578 4844 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get 
\"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.142:6443: connect: connection refused" Jan 26 12:48:30 crc kubenswrapper[4844]: E0126 12:48:30.586946 4844 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.142:6443: connect: connection refused" Jan 26 12:48:30 crc kubenswrapper[4844]: E0126 12:48:30.587210 4844 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.142:6443: connect: connection refused" Jan 26 12:48:30 crc kubenswrapper[4844]: E0126 12:48:30.587237 4844 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 12:48:30 crc kubenswrapper[4844]: E0126 12:48:30.836589 4844 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.142:6443: connect: connection refused" Jan 26 12:48:30 crc kubenswrapper[4844]: E0126 12:48:30.837015 4844 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.142:6443: connect: connection refused" Jan 26 12:48:30 crc kubenswrapper[4844]: E0126 12:48:30.837744 4844 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.142:6443: connect: connection refused" Jan 26 12:48:30 crc kubenswrapper[4844]: E0126 12:48:30.838033 4844 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.142:6443: connect: connection refused" Jan 26 12:48:30 crc kubenswrapper[4844]: E0126 12:48:30.838295 4844 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.142:6443: connect: connection refused" Jan 26 12:48:30 crc kubenswrapper[4844]: I0126 12:48:30.838330 4844 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 26 12:48:30 crc kubenswrapper[4844]: E0126 12:48:30.838573 4844 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.142:6443: connect: connection refused" interval="200ms" Jan 26 12:48:30 crc kubenswrapper[4844]: E0126 12:48:30.841175 4844 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.142:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188e48c2ac5cb359 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 12:48:24.555795289 +0000 UTC m=+281.489162901,LastTimestamp:2026-01-26 12:48:24.555795289 +0000 UTC m=+281.489162901,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 12:48:31 crc kubenswrapper[4844]: E0126 12:48:31.039379 4844 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.142:6443: connect: connection refused" interval="400ms" Jan 26 12:48:31 crc kubenswrapper[4844]: E0126 12:48:31.440904 4844 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.142:6443: connect: connection refused" interval="800ms" Jan 26 12:48:32 crc kubenswrapper[4844]: E0126 12:48:32.241931 4844 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.142:6443: connect: connection refused" interval="1.6s" Jan 26 12:48:33 crc kubenswrapper[4844]: I0126 12:48:33.314829 4844 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.142:6443: connect: connection refused" Jan 26 12:48:33 crc kubenswrapper[4844]: I0126 12:48:33.315455 4844 status_manager.go:851] "Failed to get status for pod" podUID="2a0ca290-d48e-4c46-8c36-1e414126c42f" pod="openshift-marketplace/redhat-operators-dn4m8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-dn4m8\": dial tcp 38.102.83.142:6443: connect: connection refused" Jan 26 12:48:33 crc kubenswrapper[4844]: I0126 12:48:33.315825 4844 status_manager.go:851] "Failed to get status for pod" podUID="940c9b8a-a28e-4fb7-be00-c2f6f4bba416" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.142:6443: connect: connection refused" Jan 26 12:48:33 crc kubenswrapper[4844]: E0126 12:48:33.843811 4844 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.142:6443: connect: connection refused" interval="3.2s" Jan 26 12:48:37 crc kubenswrapper[4844]: E0126 12:48:37.044564 4844 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 
38.102.83.142:6443: connect: connection refused" interval="6.4s" Jan 26 12:48:37 crc kubenswrapper[4844]: I0126 12:48:37.312496 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 12:48:37 crc kubenswrapper[4844]: I0126 12:48:37.313381 4844 status_manager.go:851] "Failed to get status for pod" podUID="940c9b8a-a28e-4fb7-be00-c2f6f4bba416" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.142:6443: connect: connection refused" Jan 26 12:48:37 crc kubenswrapper[4844]: I0126 12:48:37.314026 4844 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.142:6443: connect: connection refused" Jan 26 12:48:37 crc kubenswrapper[4844]: I0126 12:48:37.314264 4844 status_manager.go:851] "Failed to get status for pod" podUID="2a0ca290-d48e-4c46-8c36-1e414126c42f" pod="openshift-marketplace/redhat-operators-dn4m8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-dn4m8\": dial tcp 38.102.83.142:6443: connect: connection refused" Jan 26 12:48:37 crc kubenswrapper[4844]: I0126 12:48:37.331302 4844 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="aecfc1fc-7b8c-42f4-9a7b-058a0acc9534" Jan 26 12:48:37 crc kubenswrapper[4844]: I0126 12:48:37.331346 4844 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="aecfc1fc-7b8c-42f4-9a7b-058a0acc9534" Jan 26 12:48:37 crc kubenswrapper[4844]: E0126 12:48:37.332038 4844 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.142:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 12:48:37 crc kubenswrapper[4844]: I0126 12:48:37.332548 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 12:48:37 crc kubenswrapper[4844]: I0126 12:48:37.777469 4844 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 26 12:48:37 crc kubenswrapper[4844]: I0126 12:48:37.777877 4844 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 26 12:48:38 crc kubenswrapper[4844]: I0126 12:48:38.339350 4844 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="c903335fdbe9fb9496693903deede1072a7b566d337b2c2b8b47cc8b1d4d0f92" exitCode=0 Jan 26 12:48:38 crc kubenswrapper[4844]: I0126 12:48:38.339436 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"c903335fdbe9fb9496693903deede1072a7b566d337b2c2b8b47cc8b1d4d0f92"} Jan 26 12:48:38 crc kubenswrapper[4844]: I0126 12:48:38.339481 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"d60911062027e360cc017011292a9b237ea50913013165802a47e42662a2b3a5"} Jan 26 12:48:38 crc kubenswrapper[4844]: I0126 12:48:38.339790 4844 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="aecfc1fc-7b8c-42f4-9a7b-058a0acc9534" Jan 26 12:48:38 crc kubenswrapper[4844]: I0126 12:48:38.339803 4844 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="aecfc1fc-7b8c-42f4-9a7b-058a0acc9534" Jan 26 12:48:38 crc kubenswrapper[4844]: E0126 12:48:38.340211 4844 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.142:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 12:48:38 crc kubenswrapper[4844]: I0126 12:48:38.340247 4844 status_manager.go:851] "Failed to get status for pod" podUID="940c9b8a-a28e-4fb7-be00-c2f6f4bba416" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.142:6443: connect: connection refused" Jan 26 12:48:38 crc kubenswrapper[4844]: I0126 12:48:38.340673 4844 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.142:6443: connect: connection refused" Jan 26 12:48:38 crc kubenswrapper[4844]: I0126 12:48:38.340858 4844 status_manager.go:851] "Failed to get status for pod" podUID="2a0ca290-d48e-4c46-8c36-1e414126c42f" pod="openshift-marketplace/redhat-operators-dn4m8" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-dn4m8\": dial tcp 38.102.83.142:6443: connect: connection refused" Jan 26 12:48:38 crc kubenswrapper[4844]: I0126 12:48:38.342971 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 26 12:48:38 crc kubenswrapper[4844]: I0126 12:48:38.343035 4844 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="6c79791a1a5ceea7564b6466722f4fb48a6729184724efc0ca2498896def357a" exitCode=1 Jan 26 12:48:38 crc kubenswrapper[4844]: I0126 12:48:38.343107 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"6c79791a1a5ceea7564b6466722f4fb48a6729184724efc0ca2498896def357a"} Jan 26 12:48:38 crc kubenswrapper[4844]: I0126 12:48:38.343714 4844 scope.go:117] "RemoveContainer" containerID="6c79791a1a5ceea7564b6466722f4fb48a6729184724efc0ca2498896def357a" Jan 26 12:48:38 crc kubenswrapper[4844]: I0126 12:48:38.344347 4844 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.142:6443: connect: connection refused" Jan 26 12:48:38 crc kubenswrapper[4844]: I0126 12:48:38.344792 4844 status_manager.go:851] "Failed to get status for pod" podUID="2a0ca290-d48e-4c46-8c36-1e414126c42f" pod="openshift-marketplace/redhat-operators-dn4m8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-dn4m8\": dial tcp 38.102.83.142:6443: connect: connection refused" Jan 26 12:48:38 crc kubenswrapper[4844]: I0126 12:48:38.345293 4844 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.142:6443: connect: connection refused" Jan 26 12:48:38 crc kubenswrapper[4844]: I0126 12:48:38.345587 4844 status_manager.go:851] "Failed to get status for pod" podUID="940c9b8a-a28e-4fb7-be00-c2f6f4bba416" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.142:6443: connect: connection refused" Jan 26 12:48:39 crc kubenswrapper[4844]: I0126 12:48:39.351062 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"123378988b1070686e86f97d7aa5adfd321a97fe451ea786f250ab5a1d394b2e"} Jan 26 12:48:39 crc kubenswrapper[4844]: I0126 12:48:39.354536 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 26 12:48:39 crc kubenswrapper[4844]: I0126 12:48:39.354628 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"c0418f4eb7e9920aa9676c3d372c84020df56d9d22ecbb8405e8355b9f9e98d5"} Jan 26 12:48:40 crc kubenswrapper[4844]: I0126 12:48:40.362322 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"cf7dc8ddc7ff05d558490fdf919ae2c3df6cf577c807708b9bf3dac1c62cb9df"} Jan 26 12:48:40 crc kubenswrapper[4844]: I0126 12:48:40.600436 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 12:48:40 crc kubenswrapper[4844]: I0126 12:48:40.616991 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 12:48:41 crc kubenswrapper[4844]: I0126 12:48:41.370931 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"8c26dd28ba447a18c4c152bede52d8b5868407e037a822530a5ab378e2c72d7d"} Jan 26 12:48:41 crc kubenswrapper[4844]: I0126 12:48:41.370993 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"bcd9e24f8773c9fee254e88ac20110ebceafe664cb38e9761c394200a0fe5195"} Jan 26 12:48:41 crc kubenswrapper[4844]: I0126 12:48:41.371011 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"1e2eccd9780b6c6626b697ae85c6afd96f0652a57880036c4e7080b55472d2ac"} Jan 26 12:48:41 crc kubenswrapper[4844]: I0126 12:48:41.371290 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 12:48:41 crc kubenswrapper[4844]: I0126 12:48:41.371341 4844 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="aecfc1fc-7b8c-42f4-9a7b-058a0acc9534" Jan 26 12:48:41 crc kubenswrapper[4844]: I0126 12:48:41.371360 4844 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="aecfc1fc-7b8c-42f4-9a7b-058a0acc9534" Jan 26 12:48:41 crc kubenswrapper[4844]: I0126 12:48:41.378320 4844 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 12:48:42 crc kubenswrapper[4844]: I0126 12:48:42.332782 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 12:48:42 crc kubenswrapper[4844]: I0126 12:48:42.333040 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 12:48:42 crc kubenswrapper[4844]: I0126 12:48:42.339851 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 12:48:42 crc kubenswrapper[4844]: I0126 12:48:42.379398 4844 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="aecfc1fc-7b8c-42f4-9a7b-058a0acc9534" Jan 26 12:48:42 crc kubenswrapper[4844]: I0126 12:48:42.379440 4844 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="aecfc1fc-7b8c-42f4-9a7b-058a0acc9534" Jan 26 12:48:42 crc kubenswrapper[4844]: I0126 12:48:42.379497 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 12:48:42 crc kubenswrapper[4844]: I0126 12:48:42.384212 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 12:48:43 crc kubenswrapper[4844]: I0126 12:48:43.201892 4844 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 26 12:48:43 crc kubenswrapper[4844]: I0126 12:48:43.392055 4844 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="aecfc1fc-7b8c-42f4-9a7b-058a0acc9534" Jan 26 12:48:43 crc kubenswrapper[4844]: I0126 12:48:43.392088 4844 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="aecfc1fc-7b8c-42f4-9a7b-058a0acc9534" Jan 26 12:48:45 crc kubenswrapper[4844]: I0126 12:48:45.022920 4844 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="913de8dc-5d43-40ab-8218-d930d32c0b06" Jan 26 12:48:54 crc kubenswrapper[4844]: I0126 12:48:54.764898 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 26 12:48:54 crc kubenswrapper[4844]: I0126 12:48:54.935715 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 26 12:48:55 crc kubenswrapper[4844]: I0126 12:48:55.895976 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 26 12:48:56 crc kubenswrapper[4844]: I0126 12:48:56.278582 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 26 12:48:56 crc kubenswrapper[4844]: I0126 12:48:56.280432 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 26 12:48:56 crc kubenswrapper[4844]: I0126 12:48:56.464103 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 26 12:48:56 crc kubenswrapper[4844]: I0126 12:48:56.512625 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 26 12:48:56 crc kubenswrapper[4844]: I0126 12:48:56.592404 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 26 12:48:56 crc kubenswrapper[4844]: I0126 12:48:56.633725 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 26 12:48:56 crc kubenswrapper[4844]: I0126 12:48:56.654678 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 26 12:48:56 crc kubenswrapper[4844]: I0126 12:48:56.689214 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 26 12:48:56 crc kubenswrapper[4844]: I0126 12:48:56.825034 4844 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-marketplace"/"kube-root-ca.crt" Jan 26 12:48:56 crc kubenswrapper[4844]: I0126 12:48:56.989382 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 26 12:48:57 crc kubenswrapper[4844]: I0126 12:48:57.042230 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 26 12:48:57 crc kubenswrapper[4844]: I0126 12:48:57.080676 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 26 12:48:57 crc kubenswrapper[4844]: I0126 12:48:57.356381 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 26 12:48:57 crc kubenswrapper[4844]: I0126 12:48:57.363106 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 26 12:48:57 crc kubenswrapper[4844]: I0126 12:48:57.410954 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 26 12:48:57 crc kubenswrapper[4844]: I0126 12:48:57.784294 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 12:48:57 crc kubenswrapper[4844]: I0126 12:48:57.926809 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 26 12:48:57 crc kubenswrapper[4844]: I0126 12:48:57.976034 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 26 12:48:58 crc kubenswrapper[4844]: I0126 12:48:58.211185 4844 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 26 12:48:58 crc kubenswrapper[4844]: I0126 12:48:58.260066 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 26 12:48:58 crc kubenswrapper[4844]: I0126 12:48:58.414017 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 26 12:48:58 crc kubenswrapper[4844]: I0126 12:48:58.420865 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 26 12:48:58 crc kubenswrapper[4844]: I0126 12:48:58.444836 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 26 12:48:58 crc kubenswrapper[4844]: I0126 12:48:58.606114 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 26 12:48:58 crc kubenswrapper[4844]: I0126 12:48:58.609879 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 26 12:48:58 crc kubenswrapper[4844]: I0126 12:48:58.626224 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 26 12:48:58 crc kubenswrapper[4844]: I0126 12:48:58.709707 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 26 12:48:58 crc kubenswrapper[4844]: I0126 12:48:58.799080 4844 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 26 12:48:58 crc kubenswrapper[4844]: I0126 12:48:58.803153 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 26 12:48:58 crc kubenswrapper[4844]: I0126 12:48:58.846995 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 26 12:48:58 crc kubenswrapper[4844]: I0126 12:48:58.921193 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 26 12:48:58 crc kubenswrapper[4844]: I0126 12:48:58.946684 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 26 12:48:59 crc kubenswrapper[4844]: I0126 12:48:59.104078 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 26 12:48:59 crc kubenswrapper[4844]: I0126 12:48:59.125215 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 26 12:48:59 crc kubenswrapper[4844]: I0126 12:48:59.186414 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 26 12:48:59 crc kubenswrapper[4844]: I0126 12:48:59.214866 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 26 12:48:59 crc kubenswrapper[4844]: I0126 12:48:59.284293 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 26 12:48:59 crc kubenswrapper[4844]: I0126 12:48:59.288743 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 26 12:48:59 crc kubenswrapper[4844]: I0126 12:48:59.345366 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 26 12:48:59 crc kubenswrapper[4844]: I0126 12:48:59.346671 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 26 12:48:59 crc kubenswrapper[4844]: I0126 12:48:59.412482 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 26 12:48:59 crc kubenswrapper[4844]: I0126 12:48:59.439296 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 26 12:48:59 crc kubenswrapper[4844]: I0126 12:48:59.473540 4844 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 26 12:48:59 crc kubenswrapper[4844]: I0126 12:48:59.476432 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=35.476410285 podStartE2EDuration="35.476410285s" podCreationTimestamp="2026-01-26 12:48:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:48:44.937952608 +0000 UTC m=+301.871320220" watchObservedRunningTime="2026-01-26 12:48:59.476410285 +0000 UTC m=+316.409777917" Jan 26 12:48:59 crc kubenswrapper[4844]: I0126 12:48:59.479637 4844 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-marketplace/redhat-operators-dn4m8"] Jan 26 12:48:59 crc kubenswrapper[4844]: I0126 12:48:59.479724 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 26 12:48:59 crc kubenswrapper[4844]: I0126 12:48:59.490317 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 12:48:59 crc kubenswrapper[4844]: I0126 12:48:59.504195 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=18.504164141 podStartE2EDuration="18.504164141s" podCreationTimestamp="2026-01-26 12:48:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:48:59.502778507 +0000 UTC m=+316.436146209" watchObservedRunningTime="2026-01-26 12:48:59.504164141 +0000 UTC m=+316.437531793" Jan 26 12:48:59 crc kubenswrapper[4844]: I0126 12:48:59.567996 4844 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 26 12:48:59 crc kubenswrapper[4844]: I0126 12:48:59.590441 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 26 12:48:59 crc kubenswrapper[4844]: I0126 12:48:59.594343 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 26 12:48:59 crc kubenswrapper[4844]: I0126 12:48:59.729255 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 26 12:48:59 crc kubenswrapper[4844]: I0126 12:48:59.770762 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 26 12:48:59 crc kubenswrapper[4844]: I0126 12:48:59.790369 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 26 12:48:59 crc kubenswrapper[4844]: I0126 12:48:59.807349 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 26 12:48:59 crc kubenswrapper[4844]: I0126 12:48:59.838465 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 26 12:48:59 crc kubenswrapper[4844]: I0126 12:48:59.934430 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 26 12:49:00 crc kubenswrapper[4844]: I0126 12:49:00.070654 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 26 12:49:00 crc kubenswrapper[4844]: I0126 12:49:00.210917 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 26 12:49:00 crc kubenswrapper[4844]: I0126 12:49:00.285420 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 26 12:49:00 crc kubenswrapper[4844]: I0126 12:49:00.315991 4844 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 26 12:49:00 crc kubenswrapper[4844]: I0126 12:49:00.328390 4844 reflector.go:368] Caches populated 
for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 26 12:49:00 crc kubenswrapper[4844]: I0126 12:49:00.377406 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 26 12:49:00 crc kubenswrapper[4844]: I0126 12:49:00.412449 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 26 12:49:00 crc kubenswrapper[4844]: I0126 12:49:00.477799 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 26 12:49:00 crc kubenswrapper[4844]: I0126 12:49:00.724502 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 26 12:49:00 crc kubenswrapper[4844]: I0126 12:49:00.775632 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 26 12:49:00 crc kubenswrapper[4844]: I0126 12:49:00.781277 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 26 12:49:00 crc kubenswrapper[4844]: I0126 12:49:00.789677 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 26 12:49:00 crc kubenswrapper[4844]: I0126 12:49:00.896791 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 26 12:49:00 crc kubenswrapper[4844]: I0126 12:49:00.931153 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 26 12:49:00 crc kubenswrapper[4844]: I0126 12:49:00.955214 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 26 12:49:00 crc kubenswrapper[4844]: I0126 12:49:00.974566 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 26 12:49:01 crc kubenswrapper[4844]: I0126 12:49:01.035593 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 26 12:49:01 crc kubenswrapper[4844]: I0126 12:49:01.064903 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 26 12:49:01 crc kubenswrapper[4844]: I0126 12:49:01.111458 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 26 12:49:01 crc kubenswrapper[4844]: I0126 12:49:01.166138 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 26 12:49:01 crc kubenswrapper[4844]: I0126 12:49:01.201082 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 26 12:49:01 crc kubenswrapper[4844]: I0126 12:49:01.230266 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 26 12:49:01 crc kubenswrapper[4844]: I0126 12:49:01.329904 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 26 12:49:01 crc 
kubenswrapper[4844]: I0126 12:49:01.336016 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a0ca290-d48e-4c46-8c36-1e414126c42f" path="/var/lib/kubelet/pods/2a0ca290-d48e-4c46-8c36-1e414126c42f/volumes" Jan 26 12:49:01 crc kubenswrapper[4844]: I0126 12:49:01.343988 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 26 12:49:01 crc kubenswrapper[4844]: I0126 12:49:01.361484 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 26 12:49:01 crc kubenswrapper[4844]: I0126 12:49:01.366786 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 26 12:49:01 crc kubenswrapper[4844]: I0126 12:49:01.416318 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 26 12:49:01 crc kubenswrapper[4844]: I0126 12:49:01.433731 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 26 12:49:01 crc kubenswrapper[4844]: I0126 12:49:01.433779 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 26 12:49:01 crc kubenswrapper[4844]: I0126 12:49:01.601066 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 26 12:49:01 crc kubenswrapper[4844]: I0126 12:49:01.649559 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 26 12:49:01 crc kubenswrapper[4844]: I0126 12:49:01.816303 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 26 12:49:01 crc kubenswrapper[4844]: I0126 12:49:01.845166 4844 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 26 12:49:02 crc kubenswrapper[4844]: I0126 12:49:02.075265 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 26 12:49:02 crc kubenswrapper[4844]: I0126 12:49:02.090467 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 26 12:49:02 crc kubenswrapper[4844]: I0126 12:49:02.228467 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 26 12:49:02 crc kubenswrapper[4844]: I0126 12:49:02.232507 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 26 12:49:02 crc kubenswrapper[4844]: I0126 12:49:02.255289 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 26 12:49:02 crc kubenswrapper[4844]: I0126 12:49:02.262259 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 26 12:49:02 crc kubenswrapper[4844]: I0126 12:49:02.283225 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 26 12:49:02 crc kubenswrapper[4844]: I0126 12:49:02.283343 4844 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-image-registry"/"image-registry-certificates" Jan 26 12:49:02 crc kubenswrapper[4844]: I0126 12:49:02.509718 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 26 12:49:02 crc kubenswrapper[4844]: I0126 12:49:02.517667 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 26 12:49:02 crc kubenswrapper[4844]: I0126 12:49:02.563471 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 26 12:49:02 crc kubenswrapper[4844]: I0126 12:49:02.662491 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 26 12:49:02 crc kubenswrapper[4844]: I0126 12:49:02.672750 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 26 12:49:02 crc kubenswrapper[4844]: I0126 12:49:02.719783 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 26 12:49:02 crc kubenswrapper[4844]: I0126 12:49:02.753908 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 26 12:49:02 crc kubenswrapper[4844]: I0126 12:49:02.790866 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 26 12:49:02 crc kubenswrapper[4844]: I0126 12:49:02.796246 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 26 12:49:02 crc kubenswrapper[4844]: I0126 12:49:02.816845 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 26 12:49:02 crc kubenswrapper[4844]: I0126 12:49:02.831621 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 26 12:49:02 crc kubenswrapper[4844]: I0126 12:49:02.876407 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 26 12:49:02 crc kubenswrapper[4844]: I0126 12:49:02.952823 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 26 12:49:02 crc kubenswrapper[4844]: I0126 12:49:02.966353 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 26 12:49:02 crc kubenswrapper[4844]: I0126 12:49:02.974272 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 26 12:49:03 crc kubenswrapper[4844]: I0126 12:49:03.120561 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 26 12:49:03 crc kubenswrapper[4844]: I0126 12:49:03.156389 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 26 12:49:03 crc kubenswrapper[4844]: I0126 12:49:03.205022 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 26 12:49:03 crc kubenswrapper[4844]: I0126 12:49:03.229143 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 
26 12:49:03 crc kubenswrapper[4844]: I0126 12:49:03.274718 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 26 12:49:03 crc kubenswrapper[4844]: I0126 12:49:03.314975 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 26 12:49:03 crc kubenswrapper[4844]: I0126 12:49:03.338613 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 26 12:49:03 crc kubenswrapper[4844]: I0126 12:49:03.391921 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 26 12:49:03 crc kubenswrapper[4844]: I0126 12:49:03.418294 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 26 12:49:03 crc kubenswrapper[4844]: I0126 12:49:03.441857 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 26 12:49:03 crc kubenswrapper[4844]: I0126 12:49:03.473662 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 26 12:49:03 crc kubenswrapper[4844]: I0126 12:49:03.567088 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 26 12:49:03 crc kubenswrapper[4844]: I0126 12:49:03.727346 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 26 12:49:03 crc kubenswrapper[4844]: I0126 12:49:03.786618 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 26 12:49:03 crc kubenswrapper[4844]: I0126 12:49:03.801357 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 26 12:49:03 crc kubenswrapper[4844]: I0126 12:49:03.822751 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 26 12:49:03 crc kubenswrapper[4844]: I0126 12:49:03.844182 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 26 12:49:03 crc kubenswrapper[4844]: I0126 12:49:03.863154 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 26 12:49:03 crc kubenswrapper[4844]: I0126 12:49:03.931442 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 26 12:49:04 crc kubenswrapper[4844]: I0126 12:49:04.023692 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 26 12:49:04 crc kubenswrapper[4844]: I0126 12:49:04.154344 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 26 12:49:04 crc kubenswrapper[4844]: I0126 12:49:04.386387 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 26 12:49:04 crc kubenswrapper[4844]: I0126 12:49:04.434133 4844 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-image-registry"/"trusted-ca" Jan 26 12:49:04 crc kubenswrapper[4844]: I0126 12:49:04.463338 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 26 12:49:04 crc kubenswrapper[4844]: I0126 12:49:04.471174 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 26 12:49:04 crc kubenswrapper[4844]: I0126 12:49:04.571425 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 26 12:49:04 crc kubenswrapper[4844]: I0126 12:49:04.575265 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 26 12:49:04 crc kubenswrapper[4844]: I0126 12:49:04.575722 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 26 12:49:04 crc kubenswrapper[4844]: I0126 12:49:04.685881 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 26 12:49:04 crc kubenswrapper[4844]: I0126 12:49:04.726068 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 26 12:49:04 crc kubenswrapper[4844]: I0126 12:49:04.728894 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 26 12:49:04 crc kubenswrapper[4844]: I0126 12:49:04.821206 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 26 12:49:04 crc kubenswrapper[4844]: I0126 12:49:04.840291 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 26 12:49:04 crc kubenswrapper[4844]: I0126 12:49:04.883448 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 26 12:49:04 crc kubenswrapper[4844]: I0126 12:49:04.934640 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 26 12:49:04 crc kubenswrapper[4844]: I0126 12:49:04.937524 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 26 12:49:04 crc kubenswrapper[4844]: I0126 12:49:04.985264 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 26 12:49:05 crc kubenswrapper[4844]: I0126 12:49:04.999988 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 26 12:49:05 crc kubenswrapper[4844]: I0126 12:49:05.103829 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 26 12:49:05 crc kubenswrapper[4844]: I0126 12:49:05.137400 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 26 12:49:05 crc kubenswrapper[4844]: I0126 12:49:05.141563 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 26 12:49:05 crc kubenswrapper[4844]: I0126 12:49:05.185641 4844 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 26 12:49:05 crc kubenswrapper[4844]: I0126 12:49:05.375133 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 26 12:49:05 crc kubenswrapper[4844]: I0126 12:49:05.416550 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 26 12:49:05 crc kubenswrapper[4844]: I0126 12:49:05.468320 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 26 12:49:05 crc kubenswrapper[4844]: I0126 12:49:05.568022 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 26 12:49:05 crc kubenswrapper[4844]: I0126 12:49:05.582420 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 26 12:49:05 crc kubenswrapper[4844]: I0126 12:49:05.585363 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 26 12:49:05 crc kubenswrapper[4844]: I0126 12:49:05.809340 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 26 12:49:05 crc kubenswrapper[4844]: I0126 12:49:05.900738 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 26 12:49:05 crc kubenswrapper[4844]: I0126 12:49:05.904459 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 26 12:49:05 crc kubenswrapper[4844]: I0126 12:49:05.975671 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 26 12:49:06 crc kubenswrapper[4844]: I0126 12:49:06.269631 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 26 12:49:06 crc kubenswrapper[4844]: I0126 12:49:06.294875 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 26 12:49:06 crc kubenswrapper[4844]: I0126 12:49:06.341419 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 26 12:49:06 crc kubenswrapper[4844]: I0126 12:49:06.369317 4844 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 26 12:49:06 crc kubenswrapper[4844]: I0126 12:49:06.369587 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://5aa26b8e17d5c95e2b540b7cf1fafffdc854885737d228d00892f4c8f14a13fb" gracePeriod=5 Jan 26 12:49:06 crc kubenswrapper[4844]: I0126 12:49:06.370177 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 26 12:49:06 crc kubenswrapper[4844]: I0126 12:49:06.568963 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 26 12:49:06 crc kubenswrapper[4844]: I0126 12:49:06.606418 4844 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 26 12:49:06 crc kubenswrapper[4844]: I0126 12:49:06.649369 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 26 12:49:06 crc kubenswrapper[4844]: I0126 12:49:06.659746 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 26 12:49:06 crc kubenswrapper[4844]: I0126 12:49:06.779304 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 26 12:49:06 crc kubenswrapper[4844]: I0126 12:49:06.856172 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 26 12:49:06 crc kubenswrapper[4844]: I0126 12:49:06.895750 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 26 12:49:06 crc kubenswrapper[4844]: I0126 12:49:06.907612 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 26 12:49:07 crc kubenswrapper[4844]: I0126 12:49:07.111473 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 26 12:49:07 crc kubenswrapper[4844]: I0126 12:49:07.153175 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 26 12:49:07 crc kubenswrapper[4844]: I0126 12:49:07.337298 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 26 12:49:07 crc kubenswrapper[4844]: I0126 12:49:07.344389 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 26 12:49:07 crc kubenswrapper[4844]: I0126 12:49:07.396543 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 26 12:49:07 crc kubenswrapper[4844]: I0126 12:49:07.430522 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 26 12:49:07 crc kubenswrapper[4844]: I0126 12:49:07.547013 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 26 12:49:07 crc kubenswrapper[4844]: I0126 12:49:07.555367 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 26 12:49:07 crc kubenswrapper[4844]: I0126 12:49:07.634807 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 26 12:49:07 crc kubenswrapper[4844]: I0126 12:49:07.783933 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 26 12:49:07 crc kubenswrapper[4844]: I0126 12:49:07.834360 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 26 12:49:07 crc kubenswrapper[4844]: I0126 12:49:07.923419 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 26 12:49:07 crc kubenswrapper[4844]: I0126 12:49:07.952789 4844 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 26 12:49:07 crc kubenswrapper[4844]: I0126 12:49:07.977064 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 26 12:49:08 crc kubenswrapper[4844]: I0126 12:49:08.028512 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 26 12:49:08 crc kubenswrapper[4844]: I0126 12:49:08.068419 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 26 12:49:08 crc kubenswrapper[4844]: I0126 12:49:08.187993 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 26 12:49:08 crc kubenswrapper[4844]: I0126 12:49:08.215132 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 26 12:49:08 crc kubenswrapper[4844]: I0126 12:49:08.283040 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 26 12:49:08 crc kubenswrapper[4844]: I0126 12:49:08.283287 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 26 12:49:08 crc kubenswrapper[4844]: I0126 12:49:08.527071 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 26 12:49:08 crc kubenswrapper[4844]: I0126 12:49:08.539516 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 26 12:49:08 crc kubenswrapper[4844]: I0126 12:49:08.616641 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 26 12:49:08 crc kubenswrapper[4844]: I0126 12:49:08.698437 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 26 12:49:08 crc kubenswrapper[4844]: I0126 12:49:08.758700 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 26 12:49:08 crc kubenswrapper[4844]: I0126 12:49:08.811364 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 26 12:49:08 crc kubenswrapper[4844]: I0126 12:49:08.831563 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 26 12:49:08 crc kubenswrapper[4844]: I0126 12:49:08.839318 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 26 12:49:08 crc kubenswrapper[4844]: I0126 12:49:08.848840 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 26 12:49:08 crc kubenswrapper[4844]: I0126 12:49:08.854488 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 26 12:49:08 crc kubenswrapper[4844]: I0126 12:49:08.874887 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 26 12:49:08 crc kubenswrapper[4844]: I0126 12:49:08.879040 4844 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 26 12:49:08 crc kubenswrapper[4844]: I0126 12:49:08.934105 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 26 12:49:08 crc kubenswrapper[4844]: I0126 12:49:08.957653 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 26 12:49:09 crc kubenswrapper[4844]: I0126 12:49:09.111971 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 26 12:49:09 crc kubenswrapper[4844]: I0126 12:49:09.124221 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 26 12:49:09 crc kubenswrapper[4844]: I0126 12:49:09.141706 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 26 12:49:09 crc kubenswrapper[4844]: I0126 12:49:09.199311 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 26 12:49:09 crc kubenswrapper[4844]: I0126 12:49:09.365586 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 26 12:49:09 crc kubenswrapper[4844]: I0126 12:49:09.368794 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 26 12:49:09 crc kubenswrapper[4844]: I0126 12:49:09.409444 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 26 12:49:09 crc kubenswrapper[4844]: I0126 12:49:09.499570 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 26 12:49:09 crc kubenswrapper[4844]: I0126 12:49:09.550953 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 26 12:49:09 crc kubenswrapper[4844]: I0126 12:49:09.594064 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 26 12:49:09 crc kubenswrapper[4844]: I0126 12:49:09.611920 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 26 12:49:09 crc kubenswrapper[4844]: I0126 12:49:09.755145 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 26 12:49:09 crc kubenswrapper[4844]: I0126 12:49:09.767590 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 26 12:49:09 crc kubenswrapper[4844]: I0126 12:49:09.853113 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 26 12:49:09 crc kubenswrapper[4844]: I0126 12:49:09.931053 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 26 12:49:09 crc kubenswrapper[4844]: I0126 12:49:09.946412 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 26 12:49:10 crc 
kubenswrapper[4844]: I0126 12:49:10.095722 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 26 12:49:10 crc kubenswrapper[4844]: I0126 12:49:10.104541 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 26 12:49:10 crc kubenswrapper[4844]: I0126 12:49:10.190319 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 26 12:49:10 crc kubenswrapper[4844]: I0126 12:49:10.241644 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 26 12:49:10 crc kubenswrapper[4844]: I0126 12:49:10.262742 4844 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 26 12:49:10 crc kubenswrapper[4844]: I0126 12:49:10.280378 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 26 12:49:10 crc kubenswrapper[4844]: I0126 12:49:10.468674 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 26 12:49:10 crc kubenswrapper[4844]: I0126 12:49:10.475348 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 26 12:49:10 crc kubenswrapper[4844]: I0126 12:49:10.507763 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 26 12:49:10 crc kubenswrapper[4844]: I0126 12:49:10.518533 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 26 12:49:10 crc kubenswrapper[4844]: I0126 12:49:10.685743 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 26 12:49:10 crc kubenswrapper[4844]: I0126 12:49:10.716529 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 26 12:49:10 crc kubenswrapper[4844]: I0126 12:49:10.718800 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 26 12:49:10 crc kubenswrapper[4844]: I0126 12:49:10.765268 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 26 12:49:10 crc kubenswrapper[4844]: I0126 12:49:10.781227 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 26 12:49:10 crc kubenswrapper[4844]: I0126 12:49:10.786858 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 26 12:49:10 crc kubenswrapper[4844]: I0126 12:49:10.941841 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 26 12:49:10 crc kubenswrapper[4844]: I0126 12:49:10.953287 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 26 12:49:11 crc kubenswrapper[4844]: I0126 12:49:11.046030 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 26 12:49:11 crc 
kubenswrapper[4844]: I0126 12:49:11.179253 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 26 12:49:11 crc kubenswrapper[4844]: I0126 12:49:11.331999 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 26 12:49:11 crc kubenswrapper[4844]: I0126 12:49:11.754169 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 26 12:49:12 crc kubenswrapper[4844]: I0126 12:49:12.017540 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 26 12:49:12 crc kubenswrapper[4844]: I0126 12:49:12.122190 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 26 12:49:12 crc kubenswrapper[4844]: I0126 12:49:12.186233 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 26 12:49:12 crc kubenswrapper[4844]: I0126 12:49:12.221188 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 26 12:49:12 crc kubenswrapper[4844]: I0126 12:49:12.384026 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 26 12:49:12 crc kubenswrapper[4844]: I0126 12:49:12.596397 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 26 12:49:12 crc kubenswrapper[4844]: I0126 12:49:12.596449 4844 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="5aa26b8e17d5c95e2b540b7cf1fafffdc854885737d228d00892f4c8f14a13fb" exitCode=137 Jan 26 12:49:12 crc kubenswrapper[4844]: I0126 12:49:12.714505 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 26 12:49:12 crc kubenswrapper[4844]: I0126 12:49:12.714629 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 12:49:12 crc kubenswrapper[4844]: I0126 12:49:12.865459 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 26 12:49:12 crc kubenswrapper[4844]: I0126 12:49:12.865743 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 26 12:49:12 crc kubenswrapper[4844]: I0126 12:49:12.865783 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 26 12:49:12 crc kubenswrapper[4844]: I0126 12:49:12.865818 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 26 12:49:12 crc kubenswrapper[4844]: I0126 12:49:12.865895 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 26 12:49:12 crc kubenswrapper[4844]: I0126 12:49:12.865880 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 12:49:12 crc kubenswrapper[4844]: I0126 12:49:12.865880 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 12:49:12 crc kubenswrapper[4844]: I0126 12:49:12.865960 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 12:49:12 crc kubenswrapper[4844]: I0126 12:49:12.866074 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 12:49:12 crc kubenswrapper[4844]: I0126 12:49:12.866239 4844 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 26 12:49:12 crc kubenswrapper[4844]: I0126 12:49:12.866262 4844 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 26 12:49:12 crc kubenswrapper[4844]: I0126 12:49:12.866279 4844 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 26 12:49:12 crc kubenswrapper[4844]: I0126 12:49:12.866294 4844 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 26 12:49:12 crc kubenswrapper[4844]: I0126 12:49:12.874373 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 12:49:12 crc kubenswrapper[4844]: I0126 12:49:12.967082 4844 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 26 12:49:13 crc kubenswrapper[4844]: I0126 12:49:13.294624 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 26 12:49:13 crc kubenswrapper[4844]: I0126 12:49:13.324542 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 26 12:49:13 crc kubenswrapper[4844]: I0126 12:49:13.326257 4844 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Jan 26 12:49:13 crc kubenswrapper[4844]: I0126 12:49:13.343538 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 26 12:49:13 crc kubenswrapper[4844]: I0126 12:49:13.343590 4844 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="89f14aba-346d-4095-83b1-270d41f6c7c3" Jan 26 12:49:13 crc kubenswrapper[4844]: I0126 12:49:13.352588 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 26 12:49:13 crc kubenswrapper[4844]: I0126 12:49:13.352671 4844 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="89f14aba-346d-4095-83b1-270d41f6c7c3" Jan 26 12:49:13 crc kubenswrapper[4844]: I0126 12:49:13.607438 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 26 12:49:13 crc 
kubenswrapper[4844]: I0126 12:49:13.607872 4844 scope.go:117] "RemoveContainer" containerID="5aa26b8e17d5c95e2b540b7cf1fafffdc854885737d228d00892f4c8f14a13fb" Jan 26 12:49:13 crc kubenswrapper[4844]: I0126 12:49:13.607955 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 12:49:35 crc kubenswrapper[4844]: I0126 12:49:35.390405 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-rlnfh"] Jan 26 12:49:35 crc kubenswrapper[4844]: I0126 12:49:35.391185 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-rlnfh" podUID="03f3ecc3-1b4d-4016-bc08-2b29d1b03d63" containerName="controller-manager" containerID="cri-o://7ac67dd3568804ad7677521b855982b1b7a3496504dbac50e11b95737c4cac8a" gracePeriod=30 Jan 26 12:49:35 crc kubenswrapper[4844]: I0126 12:49:35.406036 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-f5gx4"] Jan 26 12:49:35 crc kubenswrapper[4844]: I0126 12:49:35.406417 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-f5gx4" podUID="b21e7f91-3226-493e-bbfb-89b33296e74e" containerName="route-controller-manager" containerID="cri-o://3eaaa8d93d73a23ee10f80981fbfddf5bdeee6e89b8a5e1531d3379c4bd383a8" gracePeriod=30 Jan 26 12:49:36 crc kubenswrapper[4844]: I0126 12:49:36.765533 4844 generic.go:334] "Generic (PLEG): container finished" podID="03f3ecc3-1b4d-4016-bc08-2b29d1b03d63" containerID="7ac67dd3568804ad7677521b855982b1b7a3496504dbac50e11b95737c4cac8a" exitCode=0 Jan 26 12:49:36 crc kubenswrapper[4844]: I0126 12:49:36.765943 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-rlnfh" event={"ID":"03f3ecc3-1b4d-4016-bc08-2b29d1b03d63","Type":"ContainerDied","Data":"7ac67dd3568804ad7677521b855982b1b7a3496504dbac50e11b95737c4cac8a"} Jan 26 12:49:36 crc kubenswrapper[4844]: I0126 12:49:36.769006 4844 generic.go:334] "Generic (PLEG): container finished" podID="b21e7f91-3226-493e-bbfb-89b33296e74e" containerID="3eaaa8d93d73a23ee10f80981fbfddf5bdeee6e89b8a5e1531d3379c4bd383a8" exitCode=0 Jan 26 12:49:36 crc kubenswrapper[4844]: I0126 12:49:36.769105 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-f5gx4" event={"ID":"b21e7f91-3226-493e-bbfb-89b33296e74e","Type":"ContainerDied","Data":"3eaaa8d93d73a23ee10f80981fbfddf5bdeee6e89b8a5e1531d3379c4bd383a8"} Jan 26 12:49:37 crc kubenswrapper[4844]: I0126 12:49:37.775262 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-f5gx4" event={"ID":"b21e7f91-3226-493e-bbfb-89b33296e74e","Type":"ContainerDied","Data":"b2bd760e1173b6b082e854155bf7ce95ab95e14d2be93f563790828532165ec6"} Jan 26 12:49:37 crc kubenswrapper[4844]: I0126 12:49:37.775574 4844 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b2bd760e1173b6b082e854155bf7ce95ab95e14d2be93f563790828532165ec6" Jan 26 12:49:37 crc kubenswrapper[4844]: I0126 12:49:37.779239 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-rlnfh" 
event={"ID":"03f3ecc3-1b4d-4016-bc08-2b29d1b03d63","Type":"ContainerDied","Data":"d04ed1d6ffdc3a4919245dc5be84ea3c2b9f3627f238b4cb92e786056562adeb"} Jan 26 12:49:37 crc kubenswrapper[4844]: I0126 12:49:37.779268 4844 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d04ed1d6ffdc3a4919245dc5be84ea3c2b9f3627f238b4cb92e786056562adeb" Jan 26 12:49:37 crc kubenswrapper[4844]: I0126 12:49:37.794304 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-f5gx4" Jan 26 12:49:37 crc kubenswrapper[4844]: I0126 12:49:37.806317 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-rlnfh" Jan 26 12:49:37 crc kubenswrapper[4844]: I0126 12:49:37.824007 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57d4b786c5-zbb2l"] Jan 26 12:49:37 crc kubenswrapper[4844]: E0126 12:49:37.824272 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a0ca290-d48e-4c46-8c36-1e414126c42f" containerName="extract-utilities" Jan 26 12:49:37 crc kubenswrapper[4844]: I0126 12:49:37.824296 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a0ca290-d48e-4c46-8c36-1e414126c42f" containerName="extract-utilities" Jan 26 12:49:37 crc kubenswrapper[4844]: E0126 12:49:37.824316 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a0ca290-d48e-4c46-8c36-1e414126c42f" containerName="extract-content" Jan 26 12:49:37 crc kubenswrapper[4844]: I0126 12:49:37.824329 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a0ca290-d48e-4c46-8c36-1e414126c42f" containerName="extract-content" Jan 26 12:49:37 crc kubenswrapper[4844]: E0126 12:49:37.824344 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a0ca290-d48e-4c46-8c36-1e414126c42f" containerName="registry-server" Jan 26 12:49:37 crc kubenswrapper[4844]: I0126 12:49:37.824355 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a0ca290-d48e-4c46-8c36-1e414126c42f" containerName="registry-server" Jan 26 12:49:37 crc kubenswrapper[4844]: E0126 12:49:37.824371 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="940c9b8a-a28e-4fb7-be00-c2f6f4bba416" containerName="installer" Jan 26 12:49:37 crc kubenswrapper[4844]: I0126 12:49:37.824381 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="940c9b8a-a28e-4fb7-be00-c2f6f4bba416" containerName="installer" Jan 26 12:49:37 crc kubenswrapper[4844]: E0126 12:49:37.824448 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03f3ecc3-1b4d-4016-bc08-2b29d1b03d63" containerName="controller-manager" Jan 26 12:49:37 crc kubenswrapper[4844]: I0126 12:49:37.824460 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="03f3ecc3-1b4d-4016-bc08-2b29d1b03d63" containerName="controller-manager" Jan 26 12:49:37 crc kubenswrapper[4844]: E0126 12:49:37.824476 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b21e7f91-3226-493e-bbfb-89b33296e74e" containerName="route-controller-manager" Jan 26 12:49:37 crc kubenswrapper[4844]: I0126 12:49:37.824487 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="b21e7f91-3226-493e-bbfb-89b33296e74e" containerName="route-controller-manager" Jan 26 12:49:37 crc kubenswrapper[4844]: E0126 12:49:37.824507 4844 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 26 12:49:37 crc kubenswrapper[4844]: I0126 12:49:37.824516 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 26 12:49:37 crc kubenswrapper[4844]: I0126 12:49:37.824967 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 26 12:49:37 crc kubenswrapper[4844]: I0126 12:49:37.824994 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="03f3ecc3-1b4d-4016-bc08-2b29d1b03d63" containerName="controller-manager" Jan 26 12:49:37 crc kubenswrapper[4844]: I0126 12:49:37.825004 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a0ca290-d48e-4c46-8c36-1e414126c42f" containerName="registry-server" Jan 26 12:49:37 crc kubenswrapper[4844]: I0126 12:49:37.825017 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="940c9b8a-a28e-4fb7-be00-c2f6f4bba416" containerName="installer" Jan 26 12:49:37 crc kubenswrapper[4844]: I0126 12:49:37.825028 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="b21e7f91-3226-493e-bbfb-89b33296e74e" containerName="route-controller-manager" Jan 26 12:49:37 crc kubenswrapper[4844]: I0126 12:49:37.825415 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-57d4b786c5-zbb2l" Jan 26 12:49:37 crc kubenswrapper[4844]: I0126 12:49:37.861625 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57d4b786c5-zbb2l"] Jan 26 12:49:37 crc kubenswrapper[4844]: I0126 12:49:37.910738 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sgq6v\" (UniqueName: \"kubernetes.io/projected/03f3ecc3-1b4d-4016-bc08-2b29d1b03d63-kube-api-access-sgq6v\") pod \"03f3ecc3-1b4d-4016-bc08-2b29d1b03d63\" (UID: \"03f3ecc3-1b4d-4016-bc08-2b29d1b03d63\") " Jan 26 12:49:37 crc kubenswrapper[4844]: I0126 12:49:37.910810 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/03f3ecc3-1b4d-4016-bc08-2b29d1b03d63-serving-cert\") pod \"03f3ecc3-1b4d-4016-bc08-2b29d1b03d63\" (UID: \"03f3ecc3-1b4d-4016-bc08-2b29d1b03d63\") " Jan 26 12:49:37 crc kubenswrapper[4844]: I0126 12:49:37.910845 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b21e7f91-3226-493e-bbfb-89b33296e74e-config\") pod \"b21e7f91-3226-493e-bbfb-89b33296e74e\" (UID: \"b21e7f91-3226-493e-bbfb-89b33296e74e\") " Jan 26 12:49:37 crc kubenswrapper[4844]: I0126 12:49:37.910884 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03f3ecc3-1b4d-4016-bc08-2b29d1b03d63-config\") pod \"03f3ecc3-1b4d-4016-bc08-2b29d1b03d63\" (UID: \"03f3ecc3-1b4d-4016-bc08-2b29d1b03d63\") " Jan 26 12:49:37 crc kubenswrapper[4844]: I0126 12:49:37.910907 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b21e7f91-3226-493e-bbfb-89b33296e74e-client-ca\") pod \"b21e7f91-3226-493e-bbfb-89b33296e74e\" (UID: \"b21e7f91-3226-493e-bbfb-89b33296e74e\") " Jan 26 12:49:37 crc kubenswrapper[4844]: I0126 12:49:37.910934 4844 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/03f3ecc3-1b4d-4016-bc08-2b29d1b03d63-proxy-ca-bundles\") pod \"03f3ecc3-1b4d-4016-bc08-2b29d1b03d63\" (UID: \"03f3ecc3-1b4d-4016-bc08-2b29d1b03d63\") " Jan 26 12:49:37 crc kubenswrapper[4844]: I0126 12:49:37.910972 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/03f3ecc3-1b4d-4016-bc08-2b29d1b03d63-client-ca\") pod \"03f3ecc3-1b4d-4016-bc08-2b29d1b03d63\" (UID: \"03f3ecc3-1b4d-4016-bc08-2b29d1b03d63\") " Jan 26 12:49:37 crc kubenswrapper[4844]: I0126 12:49:37.911026 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mn87w\" (UniqueName: \"kubernetes.io/projected/b21e7f91-3226-493e-bbfb-89b33296e74e-kube-api-access-mn87w\") pod \"b21e7f91-3226-493e-bbfb-89b33296e74e\" (UID: \"b21e7f91-3226-493e-bbfb-89b33296e74e\") " Jan 26 12:49:37 crc kubenswrapper[4844]: I0126 12:49:37.911048 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b21e7f91-3226-493e-bbfb-89b33296e74e-serving-cert\") pod \"b21e7f91-3226-493e-bbfb-89b33296e74e\" (UID: \"b21e7f91-3226-493e-bbfb-89b33296e74e\") " Jan 26 12:49:37 crc kubenswrapper[4844]: I0126 12:49:37.911234 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d073e682-e22b-43c7-9f3d-a04e49a8f1f3-client-ca\") pod \"route-controller-manager-57d4b786c5-zbb2l\" (UID: \"d073e682-e22b-43c7-9f3d-a04e49a8f1f3\") " pod="openshift-route-controller-manager/route-controller-manager-57d4b786c5-zbb2l" Jan 26 12:49:37 crc kubenswrapper[4844]: I0126 12:49:37.911278 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d073e682-e22b-43c7-9f3d-a04e49a8f1f3-config\") pod \"route-controller-manager-57d4b786c5-zbb2l\" (UID: \"d073e682-e22b-43c7-9f3d-a04e49a8f1f3\") " pod="openshift-route-controller-manager/route-controller-manager-57d4b786c5-zbb2l" Jan 26 12:49:37 crc kubenswrapper[4844]: I0126 12:49:37.911300 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d073e682-e22b-43c7-9f3d-a04e49a8f1f3-serving-cert\") pod \"route-controller-manager-57d4b786c5-zbb2l\" (UID: \"d073e682-e22b-43c7-9f3d-a04e49a8f1f3\") " pod="openshift-route-controller-manager/route-controller-manager-57d4b786c5-zbb2l" Jan 26 12:49:37 crc kubenswrapper[4844]: I0126 12:49:37.911329 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dltf\" (UniqueName: \"kubernetes.io/projected/d073e682-e22b-43c7-9f3d-a04e49a8f1f3-kube-api-access-4dltf\") pod \"route-controller-manager-57d4b786c5-zbb2l\" (UID: \"d073e682-e22b-43c7-9f3d-a04e49a8f1f3\") " pod="openshift-route-controller-manager/route-controller-manager-57d4b786c5-zbb2l" Jan 26 12:49:37 crc kubenswrapper[4844]: I0126 12:49:37.911946 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03f3ecc3-1b4d-4016-bc08-2b29d1b03d63-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "03f3ecc3-1b4d-4016-bc08-2b29d1b03d63" (UID: "03f3ecc3-1b4d-4016-bc08-2b29d1b03d63"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:49:37 crc kubenswrapper[4844]: I0126 12:49:37.912275 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b21e7f91-3226-493e-bbfb-89b33296e74e-client-ca" (OuterVolumeSpecName: "client-ca") pod "b21e7f91-3226-493e-bbfb-89b33296e74e" (UID: "b21e7f91-3226-493e-bbfb-89b33296e74e"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:49:37 crc kubenswrapper[4844]: I0126 12:49:37.912347 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b21e7f91-3226-493e-bbfb-89b33296e74e-config" (OuterVolumeSpecName: "config") pod "b21e7f91-3226-493e-bbfb-89b33296e74e" (UID: "b21e7f91-3226-493e-bbfb-89b33296e74e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:49:37 crc kubenswrapper[4844]: I0126 12:49:37.912516 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03f3ecc3-1b4d-4016-bc08-2b29d1b03d63-config" (OuterVolumeSpecName: "config") pod "03f3ecc3-1b4d-4016-bc08-2b29d1b03d63" (UID: "03f3ecc3-1b4d-4016-bc08-2b29d1b03d63"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:49:37 crc kubenswrapper[4844]: I0126 12:49:37.912696 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03f3ecc3-1b4d-4016-bc08-2b29d1b03d63-client-ca" (OuterVolumeSpecName: "client-ca") pod "03f3ecc3-1b4d-4016-bc08-2b29d1b03d63" (UID: "03f3ecc3-1b4d-4016-bc08-2b29d1b03d63"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:49:37 crc kubenswrapper[4844]: I0126 12:49:37.916517 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03f3ecc3-1b4d-4016-bc08-2b29d1b03d63-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "03f3ecc3-1b4d-4016-bc08-2b29d1b03d63" (UID: "03f3ecc3-1b4d-4016-bc08-2b29d1b03d63"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:49:37 crc kubenswrapper[4844]: I0126 12:49:37.916790 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b21e7f91-3226-493e-bbfb-89b33296e74e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b21e7f91-3226-493e-bbfb-89b33296e74e" (UID: "b21e7f91-3226-493e-bbfb-89b33296e74e"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:49:37 crc kubenswrapper[4844]: I0126 12:49:37.918674 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03f3ecc3-1b4d-4016-bc08-2b29d1b03d63-kube-api-access-sgq6v" (OuterVolumeSpecName: "kube-api-access-sgq6v") pod "03f3ecc3-1b4d-4016-bc08-2b29d1b03d63" (UID: "03f3ecc3-1b4d-4016-bc08-2b29d1b03d63"). InnerVolumeSpecName "kube-api-access-sgq6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:49:37 crc kubenswrapper[4844]: I0126 12:49:37.922911 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b21e7f91-3226-493e-bbfb-89b33296e74e-kube-api-access-mn87w" (OuterVolumeSpecName: "kube-api-access-mn87w") pod "b21e7f91-3226-493e-bbfb-89b33296e74e" (UID: "b21e7f91-3226-493e-bbfb-89b33296e74e"). InnerVolumeSpecName "kube-api-access-mn87w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:49:38 crc kubenswrapper[4844]: I0126 12:49:38.012170 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dltf\" (UniqueName: \"kubernetes.io/projected/d073e682-e22b-43c7-9f3d-a04e49a8f1f3-kube-api-access-4dltf\") pod \"route-controller-manager-57d4b786c5-zbb2l\" (UID: \"d073e682-e22b-43c7-9f3d-a04e49a8f1f3\") " pod="openshift-route-controller-manager/route-controller-manager-57d4b786c5-zbb2l" Jan 26 12:49:38 crc kubenswrapper[4844]: I0126 12:49:38.012317 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d073e682-e22b-43c7-9f3d-a04e49a8f1f3-client-ca\") pod \"route-controller-manager-57d4b786c5-zbb2l\" (UID: \"d073e682-e22b-43c7-9f3d-a04e49a8f1f3\") " pod="openshift-route-controller-manager/route-controller-manager-57d4b786c5-zbb2l" Jan 26 12:49:38 crc kubenswrapper[4844]: I0126 12:49:38.012371 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d073e682-e22b-43c7-9f3d-a04e49a8f1f3-config\") pod \"route-controller-manager-57d4b786c5-zbb2l\" (UID: \"d073e682-e22b-43c7-9f3d-a04e49a8f1f3\") " pod="openshift-route-controller-manager/route-controller-manager-57d4b786c5-zbb2l" Jan 26 12:49:38 crc kubenswrapper[4844]: I0126 12:49:38.012395 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d073e682-e22b-43c7-9f3d-a04e49a8f1f3-serving-cert\") pod \"route-controller-manager-57d4b786c5-zbb2l\" (UID: \"d073e682-e22b-43c7-9f3d-a04e49a8f1f3\") " pod="openshift-route-controller-manager/route-controller-manager-57d4b786c5-zbb2l" Jan 26 12:49:38 crc kubenswrapper[4844]: I0126 12:49:38.012432 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mn87w\" (UniqueName: \"kubernetes.io/projected/b21e7f91-3226-493e-bbfb-89b33296e74e-kube-api-access-mn87w\") on node \"crc\" DevicePath \"\"" Jan 26 12:49:38 crc kubenswrapper[4844]: I0126 12:49:38.012443 4844 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b21e7f91-3226-493e-bbfb-89b33296e74e-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 12:49:38 crc kubenswrapper[4844]: I0126 12:49:38.012453 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sgq6v\" (UniqueName: \"kubernetes.io/projected/03f3ecc3-1b4d-4016-bc08-2b29d1b03d63-kube-api-access-sgq6v\") on node \"crc\" DevicePath \"\"" Jan 26 12:49:38 crc kubenswrapper[4844]: I0126 12:49:38.012461 4844 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/03f3ecc3-1b4d-4016-bc08-2b29d1b03d63-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 12:49:38 crc kubenswrapper[4844]: I0126 12:49:38.012470 4844 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b21e7f91-3226-493e-bbfb-89b33296e74e-config\") on node \"crc\" DevicePath \"\"" Jan 26 12:49:38 crc kubenswrapper[4844]: I0126 12:49:38.012478 4844 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b21e7f91-3226-493e-bbfb-89b33296e74e-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 12:49:38 crc kubenswrapper[4844]: I0126 12:49:38.012486 4844 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/03f3ecc3-1b4d-4016-bc08-2b29d1b03d63-config\") on node \"crc\" DevicePath \"\"" Jan 26 12:49:38 crc kubenswrapper[4844]: I0126 12:49:38.012493 4844 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/03f3ecc3-1b4d-4016-bc08-2b29d1b03d63-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 12:49:38 crc kubenswrapper[4844]: I0126 12:49:38.012501 4844 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/03f3ecc3-1b4d-4016-bc08-2b29d1b03d63-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 12:49:38 crc kubenswrapper[4844]: I0126 12:49:38.015172 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d073e682-e22b-43c7-9f3d-a04e49a8f1f3-config\") pod \"route-controller-manager-57d4b786c5-zbb2l\" (UID: \"d073e682-e22b-43c7-9f3d-a04e49a8f1f3\") " pod="openshift-route-controller-manager/route-controller-manager-57d4b786c5-zbb2l" Jan 26 12:49:38 crc kubenswrapper[4844]: I0126 12:49:38.015379 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d073e682-e22b-43c7-9f3d-a04e49a8f1f3-client-ca\") pod \"route-controller-manager-57d4b786c5-zbb2l\" (UID: \"d073e682-e22b-43c7-9f3d-a04e49a8f1f3\") " pod="openshift-route-controller-manager/route-controller-manager-57d4b786c5-zbb2l" Jan 26 12:49:38 crc kubenswrapper[4844]: I0126 12:49:38.017738 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d073e682-e22b-43c7-9f3d-a04e49a8f1f3-serving-cert\") pod \"route-controller-manager-57d4b786c5-zbb2l\" (UID: \"d073e682-e22b-43c7-9f3d-a04e49a8f1f3\") " pod="openshift-route-controller-manager/route-controller-manager-57d4b786c5-zbb2l" Jan 26 12:49:38 crc kubenswrapper[4844]: I0126 12:49:38.031619 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dltf\" (UniqueName: \"kubernetes.io/projected/d073e682-e22b-43c7-9f3d-a04e49a8f1f3-kube-api-access-4dltf\") pod \"route-controller-manager-57d4b786c5-zbb2l\" (UID: \"d073e682-e22b-43c7-9f3d-a04e49a8f1f3\") " pod="openshift-route-controller-manager/route-controller-manager-57d4b786c5-zbb2l" Jan 26 12:49:38 crc kubenswrapper[4844]: I0126 12:49:38.147397 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-57d4b786c5-zbb2l" Jan 26 12:49:38 crc kubenswrapper[4844]: I0126 12:49:38.327926 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57d4b786c5-zbb2l"] Jan 26 12:49:38 crc kubenswrapper[4844]: I0126 12:49:38.784561 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-57d4b786c5-zbb2l" event={"ID":"d073e682-e22b-43c7-9f3d-a04e49a8f1f3","Type":"ContainerStarted","Data":"285982644c0bb9fad54a21e5a6713e5f26d85ba7919f0ca37905685d719ca9e8"} Jan 26 12:49:38 crc kubenswrapper[4844]: I0126 12:49:38.784910 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-57d4b786c5-zbb2l" event={"ID":"d073e682-e22b-43c7-9f3d-a04e49a8f1f3","Type":"ContainerStarted","Data":"27465e3c64d42a7cb3cd2d5eb1f1aa02051dc004651d29585387121d983f8346"} Jan 26 12:49:38 crc kubenswrapper[4844]: I0126 12:49:38.784663 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-rlnfh" Jan 26 12:49:38 crc kubenswrapper[4844]: I0126 12:49:38.784622 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-f5gx4" Jan 26 12:49:38 crc kubenswrapper[4844]: I0126 12:49:38.785029 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-57d4b786c5-zbb2l" Jan 26 12:49:38 crc kubenswrapper[4844]: I0126 12:49:38.793074 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-57d4b786c5-zbb2l" Jan 26 12:49:38 crc kubenswrapper[4844]: I0126 12:49:38.811450 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-57d4b786c5-zbb2l" podStartSLOduration=3.811434262 podStartE2EDuration="3.811434262s" podCreationTimestamp="2026-01-26 12:49:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:49:38.810710684 +0000 UTC m=+355.744078296" watchObservedRunningTime="2026-01-26 12:49:38.811434262 +0000 UTC m=+355.744801874" Jan 26 12:49:38 crc kubenswrapper[4844]: I0126 12:49:38.826573 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-f5gx4"] Jan 26 12:49:38 crc kubenswrapper[4844]: I0126 12:49:38.837782 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-f5gx4"] Jan 26 12:49:38 crc kubenswrapper[4844]: I0126 12:49:38.846288 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-rlnfh"] Jan 26 12:49:38 crc kubenswrapper[4844]: I0126 12:49:38.849908 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-rlnfh"] Jan 26 12:49:39 crc kubenswrapper[4844]: I0126 12:49:39.319883 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03f3ecc3-1b4d-4016-bc08-2b29d1b03d63" path="/var/lib/kubelet/pods/03f3ecc3-1b4d-4016-bc08-2b29d1b03d63/volumes" Jan 26 12:49:39 crc 
kubenswrapper[4844]: I0126 12:49:39.320704 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b21e7f91-3226-493e-bbfb-89b33296e74e" path="/var/lib/kubelet/pods/b21e7f91-3226-493e-bbfb-89b33296e74e/volumes" Jan 26 12:49:40 crc kubenswrapper[4844]: I0126 12:49:40.755851 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5f4c649f4c-fs7hp"] Jan 26 12:49:40 crc kubenswrapper[4844]: I0126 12:49:40.756964 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5f4c649f4c-fs7hp" Jan 26 12:49:40 crc kubenswrapper[4844]: I0126 12:49:40.759841 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 26 12:49:40 crc kubenswrapper[4844]: I0126 12:49:40.760086 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 26 12:49:40 crc kubenswrapper[4844]: I0126 12:49:40.760244 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 26 12:49:40 crc kubenswrapper[4844]: I0126 12:49:40.760834 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 26 12:49:40 crc kubenswrapper[4844]: I0126 12:49:40.761124 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 26 12:49:40 crc kubenswrapper[4844]: I0126 12:49:40.764584 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 26 12:49:40 crc kubenswrapper[4844]: I0126 12:49:40.770233 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 26 12:49:40 crc kubenswrapper[4844]: I0126 12:49:40.779534 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5f4c649f4c-fs7hp"] Jan 26 12:49:40 crc kubenswrapper[4844]: I0126 12:49:40.850098 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/da0b6423-078b-4291-bbdf-e35a4a0a54c4-serving-cert\") pod \"controller-manager-5f4c649f4c-fs7hp\" (UID: \"da0b6423-078b-4291-bbdf-e35a4a0a54c4\") " pod="openshift-controller-manager/controller-manager-5f4c649f4c-fs7hp" Jan 26 12:49:40 crc kubenswrapper[4844]: I0126 12:49:40.850141 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/da0b6423-078b-4291-bbdf-e35a4a0a54c4-proxy-ca-bundles\") pod \"controller-manager-5f4c649f4c-fs7hp\" (UID: \"da0b6423-078b-4291-bbdf-e35a4a0a54c4\") " pod="openshift-controller-manager/controller-manager-5f4c649f4c-fs7hp" Jan 26 12:49:40 crc kubenswrapper[4844]: I0126 12:49:40.850182 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgdv7\" (UniqueName: \"kubernetes.io/projected/da0b6423-078b-4291-bbdf-e35a4a0a54c4-kube-api-access-pgdv7\") pod \"controller-manager-5f4c649f4c-fs7hp\" (UID: \"da0b6423-078b-4291-bbdf-e35a4a0a54c4\") " pod="openshift-controller-manager/controller-manager-5f4c649f4c-fs7hp" Jan 26 12:49:40 crc kubenswrapper[4844]: I0126 12:49:40.850229 4844 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da0b6423-078b-4291-bbdf-e35a4a0a54c4-config\") pod \"controller-manager-5f4c649f4c-fs7hp\" (UID: \"da0b6423-078b-4291-bbdf-e35a4a0a54c4\") " pod="openshift-controller-manager/controller-manager-5f4c649f4c-fs7hp" Jan 26 12:49:40 crc kubenswrapper[4844]: I0126 12:49:40.850302 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/da0b6423-078b-4291-bbdf-e35a4a0a54c4-client-ca\") pod \"controller-manager-5f4c649f4c-fs7hp\" (UID: \"da0b6423-078b-4291-bbdf-e35a4a0a54c4\") " pod="openshift-controller-manager/controller-manager-5f4c649f4c-fs7hp" Jan 26 12:49:40 crc kubenswrapper[4844]: I0126 12:49:40.951692 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pgdv7\" (UniqueName: \"kubernetes.io/projected/da0b6423-078b-4291-bbdf-e35a4a0a54c4-kube-api-access-pgdv7\") pod \"controller-manager-5f4c649f4c-fs7hp\" (UID: \"da0b6423-078b-4291-bbdf-e35a4a0a54c4\") " pod="openshift-controller-manager/controller-manager-5f4c649f4c-fs7hp" Jan 26 12:49:40 crc kubenswrapper[4844]: I0126 12:49:40.951765 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da0b6423-078b-4291-bbdf-e35a4a0a54c4-config\") pod \"controller-manager-5f4c649f4c-fs7hp\" (UID: \"da0b6423-078b-4291-bbdf-e35a4a0a54c4\") " pod="openshift-controller-manager/controller-manager-5f4c649f4c-fs7hp" Jan 26 12:49:40 crc kubenswrapper[4844]: I0126 12:49:40.951806 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/da0b6423-078b-4291-bbdf-e35a4a0a54c4-client-ca\") pod \"controller-manager-5f4c649f4c-fs7hp\" (UID: \"da0b6423-078b-4291-bbdf-e35a4a0a54c4\") " pod="openshift-controller-manager/controller-manager-5f4c649f4c-fs7hp" Jan 26 12:49:40 crc kubenswrapper[4844]: I0126 12:49:40.951835 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/da0b6423-078b-4291-bbdf-e35a4a0a54c4-serving-cert\") pod \"controller-manager-5f4c649f4c-fs7hp\" (UID: \"da0b6423-078b-4291-bbdf-e35a4a0a54c4\") " pod="openshift-controller-manager/controller-manager-5f4c649f4c-fs7hp" Jan 26 12:49:40 crc kubenswrapper[4844]: I0126 12:49:40.951850 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/da0b6423-078b-4291-bbdf-e35a4a0a54c4-proxy-ca-bundles\") pod \"controller-manager-5f4c649f4c-fs7hp\" (UID: \"da0b6423-078b-4291-bbdf-e35a4a0a54c4\") " pod="openshift-controller-manager/controller-manager-5f4c649f4c-fs7hp" Jan 26 12:49:40 crc kubenswrapper[4844]: I0126 12:49:40.953199 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/da0b6423-078b-4291-bbdf-e35a4a0a54c4-proxy-ca-bundles\") pod \"controller-manager-5f4c649f4c-fs7hp\" (UID: \"da0b6423-078b-4291-bbdf-e35a4a0a54c4\") " pod="openshift-controller-manager/controller-manager-5f4c649f4c-fs7hp" Jan 26 12:49:40 crc kubenswrapper[4844]: I0126 12:49:40.953248 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/da0b6423-078b-4291-bbdf-e35a4a0a54c4-client-ca\") pod 
\"controller-manager-5f4c649f4c-fs7hp\" (UID: \"da0b6423-078b-4291-bbdf-e35a4a0a54c4\") " pod="openshift-controller-manager/controller-manager-5f4c649f4c-fs7hp" Jan 26 12:49:40 crc kubenswrapper[4844]: I0126 12:49:40.953820 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da0b6423-078b-4291-bbdf-e35a4a0a54c4-config\") pod \"controller-manager-5f4c649f4c-fs7hp\" (UID: \"da0b6423-078b-4291-bbdf-e35a4a0a54c4\") " pod="openshift-controller-manager/controller-manager-5f4c649f4c-fs7hp" Jan 26 12:49:40 crc kubenswrapper[4844]: I0126 12:49:40.964836 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/da0b6423-078b-4291-bbdf-e35a4a0a54c4-serving-cert\") pod \"controller-manager-5f4c649f4c-fs7hp\" (UID: \"da0b6423-078b-4291-bbdf-e35a4a0a54c4\") " pod="openshift-controller-manager/controller-manager-5f4c649f4c-fs7hp" Jan 26 12:49:40 crc kubenswrapper[4844]: I0126 12:49:40.978463 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pgdv7\" (UniqueName: \"kubernetes.io/projected/da0b6423-078b-4291-bbdf-e35a4a0a54c4-kube-api-access-pgdv7\") pod \"controller-manager-5f4c649f4c-fs7hp\" (UID: \"da0b6423-078b-4291-bbdf-e35a4a0a54c4\") " pod="openshift-controller-manager/controller-manager-5f4c649f4c-fs7hp" Jan 26 12:49:41 crc kubenswrapper[4844]: I0126 12:49:41.093949 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5f4c649f4c-fs7hp" Jan 26 12:49:41 crc kubenswrapper[4844]: I0126 12:49:41.541737 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5f4c649f4c-fs7hp"] Jan 26 12:49:41 crc kubenswrapper[4844]: W0126 12:49:41.549172 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podda0b6423_078b_4291_bbdf_e35a4a0a54c4.slice/crio-632559d33c9c632b72c9af6724e4430d96e1737af51c3b495e0b5ef1cc29a4a9 WatchSource:0}: Error finding container 632559d33c9c632b72c9af6724e4430d96e1737af51c3b495e0b5ef1cc29a4a9: Status 404 returned error can't find the container with id 632559d33c9c632b72c9af6724e4430d96e1737af51c3b495e0b5ef1cc29a4a9 Jan 26 12:49:41 crc kubenswrapper[4844]: I0126 12:49:41.807177 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5f4c649f4c-fs7hp" event={"ID":"da0b6423-078b-4291-bbdf-e35a4a0a54c4","Type":"ContainerStarted","Data":"37bc66ec3f779e906cfdd8342cd62198c99adb337c1004f843b837aff343d6e5"} Jan 26 12:49:41 crc kubenswrapper[4844]: I0126 12:49:41.808427 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5f4c649f4c-fs7hp" Jan 26 12:49:41 crc kubenswrapper[4844]: I0126 12:49:41.808501 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5f4c649f4c-fs7hp" event={"ID":"da0b6423-078b-4291-bbdf-e35a4a0a54c4","Type":"ContainerStarted","Data":"632559d33c9c632b72c9af6724e4430d96e1737af51c3b495e0b5ef1cc29a4a9"} Jan 26 12:49:41 crc kubenswrapper[4844]: I0126 12:49:41.812447 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5f4c649f4c-fs7hp" Jan 26 12:49:41 crc kubenswrapper[4844]: I0126 12:49:41.829535 4844 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-controller-manager/controller-manager-5f4c649f4c-fs7hp" podStartSLOduration=6.829510722 podStartE2EDuration="6.829510722s" podCreationTimestamp="2026-01-26 12:49:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:49:41.825502844 +0000 UTC m=+358.758870466" watchObservedRunningTime="2026-01-26 12:49:41.829510722 +0000 UTC m=+358.762878344" Jan 26 12:49:55 crc kubenswrapper[4844]: I0126 12:49:55.309762 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57d4b786c5-zbb2l"] Jan 26 12:49:55 crc kubenswrapper[4844]: I0126 12:49:55.310576 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-57d4b786c5-zbb2l" podUID="d073e682-e22b-43c7-9f3d-a04e49a8f1f3" containerName="route-controller-manager" containerID="cri-o://285982644c0bb9fad54a21e5a6713e5f26d85ba7919f0ca37905685d719ca9e8" gracePeriod=30 Jan 26 12:49:55 crc kubenswrapper[4844]: I0126 12:49:55.755394 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-57d4b786c5-zbb2l" Jan 26 12:49:55 crc kubenswrapper[4844]: I0126 12:49:55.840407 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d073e682-e22b-43c7-9f3d-a04e49a8f1f3-config\") pod \"d073e682-e22b-43c7-9f3d-a04e49a8f1f3\" (UID: \"d073e682-e22b-43c7-9f3d-a04e49a8f1f3\") " Jan 26 12:49:55 crc kubenswrapper[4844]: I0126 12:49:55.840473 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d073e682-e22b-43c7-9f3d-a04e49a8f1f3-serving-cert\") pod \"d073e682-e22b-43c7-9f3d-a04e49a8f1f3\" (UID: \"d073e682-e22b-43c7-9f3d-a04e49a8f1f3\") " Jan 26 12:49:55 crc kubenswrapper[4844]: I0126 12:49:55.840544 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4dltf\" (UniqueName: \"kubernetes.io/projected/d073e682-e22b-43c7-9f3d-a04e49a8f1f3-kube-api-access-4dltf\") pod \"d073e682-e22b-43c7-9f3d-a04e49a8f1f3\" (UID: \"d073e682-e22b-43c7-9f3d-a04e49a8f1f3\") " Jan 26 12:49:55 crc kubenswrapper[4844]: I0126 12:49:55.840588 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d073e682-e22b-43c7-9f3d-a04e49a8f1f3-client-ca\") pod \"d073e682-e22b-43c7-9f3d-a04e49a8f1f3\" (UID: \"d073e682-e22b-43c7-9f3d-a04e49a8f1f3\") " Jan 26 12:49:55 crc kubenswrapper[4844]: I0126 12:49:55.841311 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d073e682-e22b-43c7-9f3d-a04e49a8f1f3-client-ca" (OuterVolumeSpecName: "client-ca") pod "d073e682-e22b-43c7-9f3d-a04e49a8f1f3" (UID: "d073e682-e22b-43c7-9f3d-a04e49a8f1f3"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:49:55 crc kubenswrapper[4844]: I0126 12:49:55.841912 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d073e682-e22b-43c7-9f3d-a04e49a8f1f3-config" (OuterVolumeSpecName: "config") pod "d073e682-e22b-43c7-9f3d-a04e49a8f1f3" (UID: "d073e682-e22b-43c7-9f3d-a04e49a8f1f3"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:49:55 crc kubenswrapper[4844]: I0126 12:49:55.845379 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d073e682-e22b-43c7-9f3d-a04e49a8f1f3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d073e682-e22b-43c7-9f3d-a04e49a8f1f3" (UID: "d073e682-e22b-43c7-9f3d-a04e49a8f1f3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:49:55 crc kubenswrapper[4844]: I0126 12:49:55.845385 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d073e682-e22b-43c7-9f3d-a04e49a8f1f3-kube-api-access-4dltf" (OuterVolumeSpecName: "kube-api-access-4dltf") pod "d073e682-e22b-43c7-9f3d-a04e49a8f1f3" (UID: "d073e682-e22b-43c7-9f3d-a04e49a8f1f3"). InnerVolumeSpecName "kube-api-access-4dltf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:49:55 crc kubenswrapper[4844]: I0126 12:49:55.902702 4844 generic.go:334] "Generic (PLEG): container finished" podID="d073e682-e22b-43c7-9f3d-a04e49a8f1f3" containerID="285982644c0bb9fad54a21e5a6713e5f26d85ba7919f0ca37905685d719ca9e8" exitCode=0 Jan 26 12:49:55 crc kubenswrapper[4844]: I0126 12:49:55.902745 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-57d4b786c5-zbb2l" event={"ID":"d073e682-e22b-43c7-9f3d-a04e49a8f1f3","Type":"ContainerDied","Data":"285982644c0bb9fad54a21e5a6713e5f26d85ba7919f0ca37905685d719ca9e8"} Jan 26 12:49:55 crc kubenswrapper[4844]: I0126 12:49:55.902771 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-57d4b786c5-zbb2l" Jan 26 12:49:55 crc kubenswrapper[4844]: I0126 12:49:55.902786 4844 scope.go:117] "RemoveContainer" containerID="285982644c0bb9fad54a21e5a6713e5f26d85ba7919f0ca37905685d719ca9e8" Jan 26 12:49:55 crc kubenswrapper[4844]: I0126 12:49:55.902776 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-57d4b786c5-zbb2l" event={"ID":"d073e682-e22b-43c7-9f3d-a04e49a8f1f3","Type":"ContainerDied","Data":"27465e3c64d42a7cb3cd2d5eb1f1aa02051dc004651d29585387121d983f8346"} Jan 26 12:49:55 crc kubenswrapper[4844]: I0126 12:49:55.917207 4844 scope.go:117] "RemoveContainer" containerID="285982644c0bb9fad54a21e5a6713e5f26d85ba7919f0ca37905685d719ca9e8" Jan 26 12:49:55 crc kubenswrapper[4844]: E0126 12:49:55.917774 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"285982644c0bb9fad54a21e5a6713e5f26d85ba7919f0ca37905685d719ca9e8\": container with ID starting with 285982644c0bb9fad54a21e5a6713e5f26d85ba7919f0ca37905685d719ca9e8 not found: ID does not exist" containerID="285982644c0bb9fad54a21e5a6713e5f26d85ba7919f0ca37905685d719ca9e8" Jan 26 12:49:55 crc kubenswrapper[4844]: I0126 12:49:55.917855 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"285982644c0bb9fad54a21e5a6713e5f26d85ba7919f0ca37905685d719ca9e8"} err="failed to get container status \"285982644c0bb9fad54a21e5a6713e5f26d85ba7919f0ca37905685d719ca9e8\": rpc error: code = NotFound desc = could not find container \"285982644c0bb9fad54a21e5a6713e5f26d85ba7919f0ca37905685d719ca9e8\": container with ID starting with 285982644c0bb9fad54a21e5a6713e5f26d85ba7919f0ca37905685d719ca9e8 not found: ID does not exist" Jan 
26 12:49:55 crc kubenswrapper[4844]: I0126 12:49:55.943310 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4dltf\" (UniqueName: \"kubernetes.io/projected/d073e682-e22b-43c7-9f3d-a04e49a8f1f3-kube-api-access-4dltf\") on node \"crc\" DevicePath \"\"" Jan 26 12:49:55 crc kubenswrapper[4844]: I0126 12:49:55.943350 4844 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d073e682-e22b-43c7-9f3d-a04e49a8f1f3-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 12:49:55 crc kubenswrapper[4844]: I0126 12:49:55.946011 4844 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d073e682-e22b-43c7-9f3d-a04e49a8f1f3-config\") on node \"crc\" DevicePath \"\"" Jan 26 12:49:55 crc kubenswrapper[4844]: I0126 12:49:55.946035 4844 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d073e682-e22b-43c7-9f3d-a04e49a8f1f3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 12:49:55 crc kubenswrapper[4844]: I0126 12:49:55.946487 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57d4b786c5-zbb2l"] Jan 26 12:49:55 crc kubenswrapper[4844]: I0126 12:49:55.950566 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57d4b786c5-zbb2l"] Jan 26 12:49:56 crc kubenswrapper[4844]: I0126 12:49:56.766623 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85b99c9b7d-5f5m2"] Jan 26 12:49:56 crc kubenswrapper[4844]: E0126 12:49:56.767162 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d073e682-e22b-43c7-9f3d-a04e49a8f1f3" containerName="route-controller-manager" Jan 26 12:49:56 crc kubenswrapper[4844]: I0126 12:49:56.767178 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="d073e682-e22b-43c7-9f3d-a04e49a8f1f3" containerName="route-controller-manager" Jan 26 12:49:56 crc kubenswrapper[4844]: I0126 12:49:56.767307 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="d073e682-e22b-43c7-9f3d-a04e49a8f1f3" containerName="route-controller-manager" Jan 26 12:49:56 crc kubenswrapper[4844]: I0126 12:49:56.767790 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-85b99c9b7d-5f5m2" Jan 26 12:49:56 crc kubenswrapper[4844]: I0126 12:49:56.772007 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 26 12:49:56 crc kubenswrapper[4844]: I0126 12:49:56.772030 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 26 12:49:56 crc kubenswrapper[4844]: I0126 12:49:56.772951 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 26 12:49:56 crc kubenswrapper[4844]: I0126 12:49:56.779399 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 26 12:49:56 crc kubenswrapper[4844]: I0126 12:49:56.779682 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 26 12:49:56 crc kubenswrapper[4844]: I0126 12:49:56.782499 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 26 12:49:56 crc kubenswrapper[4844]: I0126 12:49:56.784423 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85b99c9b7d-5f5m2"] Jan 26 12:49:56 crc kubenswrapper[4844]: I0126 12:49:56.856297 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/68859ffd-a8de-45f0-90f2-642f33717a87-client-ca\") pod \"route-controller-manager-85b99c9b7d-5f5m2\" (UID: \"68859ffd-a8de-45f0-90f2-642f33717a87\") " pod="openshift-route-controller-manager/route-controller-manager-85b99c9b7d-5f5m2" Jan 26 12:49:56 crc kubenswrapper[4844]: I0126 12:49:56.856493 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68859ffd-a8de-45f0-90f2-642f33717a87-config\") pod \"route-controller-manager-85b99c9b7d-5f5m2\" (UID: \"68859ffd-a8de-45f0-90f2-642f33717a87\") " pod="openshift-route-controller-manager/route-controller-manager-85b99c9b7d-5f5m2" Jan 26 12:49:56 crc kubenswrapper[4844]: I0126 12:49:56.856551 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2tzx\" (UniqueName: \"kubernetes.io/projected/68859ffd-a8de-45f0-90f2-642f33717a87-kube-api-access-r2tzx\") pod \"route-controller-manager-85b99c9b7d-5f5m2\" (UID: \"68859ffd-a8de-45f0-90f2-642f33717a87\") " pod="openshift-route-controller-manager/route-controller-manager-85b99c9b7d-5f5m2" Jan 26 12:49:56 crc kubenswrapper[4844]: I0126 12:49:56.856859 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/68859ffd-a8de-45f0-90f2-642f33717a87-serving-cert\") pod \"route-controller-manager-85b99c9b7d-5f5m2\" (UID: \"68859ffd-a8de-45f0-90f2-642f33717a87\") " pod="openshift-route-controller-manager/route-controller-manager-85b99c9b7d-5f5m2" Jan 26 12:49:56 crc kubenswrapper[4844]: I0126 12:49:56.958200 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/68859ffd-a8de-45f0-90f2-642f33717a87-serving-cert\") pod 
\"route-controller-manager-85b99c9b7d-5f5m2\" (UID: \"68859ffd-a8de-45f0-90f2-642f33717a87\") " pod="openshift-route-controller-manager/route-controller-manager-85b99c9b7d-5f5m2" Jan 26 12:49:56 crc kubenswrapper[4844]: I0126 12:49:56.958387 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/68859ffd-a8de-45f0-90f2-642f33717a87-client-ca\") pod \"route-controller-manager-85b99c9b7d-5f5m2\" (UID: \"68859ffd-a8de-45f0-90f2-642f33717a87\") " pod="openshift-route-controller-manager/route-controller-manager-85b99c9b7d-5f5m2" Jan 26 12:49:56 crc kubenswrapper[4844]: I0126 12:49:56.958443 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68859ffd-a8de-45f0-90f2-642f33717a87-config\") pod \"route-controller-manager-85b99c9b7d-5f5m2\" (UID: \"68859ffd-a8de-45f0-90f2-642f33717a87\") " pod="openshift-route-controller-manager/route-controller-manager-85b99c9b7d-5f5m2" Jan 26 12:49:56 crc kubenswrapper[4844]: I0126 12:49:56.958489 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2tzx\" (UniqueName: \"kubernetes.io/projected/68859ffd-a8de-45f0-90f2-642f33717a87-kube-api-access-r2tzx\") pod \"route-controller-manager-85b99c9b7d-5f5m2\" (UID: \"68859ffd-a8de-45f0-90f2-642f33717a87\") " pod="openshift-route-controller-manager/route-controller-manager-85b99c9b7d-5f5m2" Jan 26 12:49:56 crc kubenswrapper[4844]: I0126 12:49:56.960035 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/68859ffd-a8de-45f0-90f2-642f33717a87-client-ca\") pod \"route-controller-manager-85b99c9b7d-5f5m2\" (UID: \"68859ffd-a8de-45f0-90f2-642f33717a87\") " pod="openshift-route-controller-manager/route-controller-manager-85b99c9b7d-5f5m2" Jan 26 12:49:56 crc kubenswrapper[4844]: I0126 12:49:56.960319 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68859ffd-a8de-45f0-90f2-642f33717a87-config\") pod \"route-controller-manager-85b99c9b7d-5f5m2\" (UID: \"68859ffd-a8de-45f0-90f2-642f33717a87\") " pod="openshift-route-controller-manager/route-controller-manager-85b99c9b7d-5f5m2" Jan 26 12:49:56 crc kubenswrapper[4844]: I0126 12:49:56.972969 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/68859ffd-a8de-45f0-90f2-642f33717a87-serving-cert\") pod \"route-controller-manager-85b99c9b7d-5f5m2\" (UID: \"68859ffd-a8de-45f0-90f2-642f33717a87\") " pod="openshift-route-controller-manager/route-controller-manager-85b99c9b7d-5f5m2" Jan 26 12:49:56 crc kubenswrapper[4844]: I0126 12:49:56.984431 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2tzx\" (UniqueName: \"kubernetes.io/projected/68859ffd-a8de-45f0-90f2-642f33717a87-kube-api-access-r2tzx\") pod \"route-controller-manager-85b99c9b7d-5f5m2\" (UID: \"68859ffd-a8de-45f0-90f2-642f33717a87\") " pod="openshift-route-controller-manager/route-controller-manager-85b99c9b7d-5f5m2" Jan 26 12:49:57 crc kubenswrapper[4844]: I0126 12:49:57.087984 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-85b99c9b7d-5f5m2" Jan 26 12:49:57 crc kubenswrapper[4844]: I0126 12:49:57.319809 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d073e682-e22b-43c7-9f3d-a04e49a8f1f3" path="/var/lib/kubelet/pods/d073e682-e22b-43c7-9f3d-a04e49a8f1f3/volumes" Jan 26 12:49:57 crc kubenswrapper[4844]: I0126 12:49:57.517934 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85b99c9b7d-5f5m2"] Jan 26 12:49:57 crc kubenswrapper[4844]: W0126 12:49:57.526243 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod68859ffd_a8de_45f0_90f2_642f33717a87.slice/crio-fa4e8b0868da4d7e768a61893217c26bec8ffba9fe9e3338d4edde893e6bb4fd WatchSource:0}: Error finding container fa4e8b0868da4d7e768a61893217c26bec8ffba9fe9e3338d4edde893e6bb4fd: Status 404 returned error can't find the container with id fa4e8b0868da4d7e768a61893217c26bec8ffba9fe9e3338d4edde893e6bb4fd Jan 26 12:49:57 crc kubenswrapper[4844]: I0126 12:49:57.918844 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-85b99c9b7d-5f5m2" event={"ID":"68859ffd-a8de-45f0-90f2-642f33717a87","Type":"ContainerStarted","Data":"721a29ed159e88ceae2f1201f5e4fd032e60bb85b32c7cb3fcffa559c515fe94"} Jan 26 12:49:57 crc kubenswrapper[4844]: I0126 12:49:57.919172 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-85b99c9b7d-5f5m2" event={"ID":"68859ffd-a8de-45f0-90f2-642f33717a87","Type":"ContainerStarted","Data":"fa4e8b0868da4d7e768a61893217c26bec8ffba9fe9e3338d4edde893e6bb4fd"} Jan 26 12:49:57 crc kubenswrapper[4844]: I0126 12:49:57.919643 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-85b99c9b7d-5f5m2" Jan 26 12:49:57 crc kubenswrapper[4844]: I0126 12:49:57.937718 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-85b99c9b7d-5f5m2" podStartSLOduration=2.937703601 podStartE2EDuration="2.937703601s" podCreationTimestamp="2026-01-26 12:49:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:49:57.933823916 +0000 UTC m=+374.867191528" watchObservedRunningTime="2026-01-26 12:49:57.937703601 +0000 UTC m=+374.871071213" Jan 26 12:49:58 crc kubenswrapper[4844]: I0126 12:49:58.046764 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-85b99c9b7d-5f5m2" Jan 26 12:50:06 crc kubenswrapper[4844]: I0126 12:50:06.364665 4844 patch_prober.go:28] interesting pod/machine-config-daemon-j7r9j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 12:50:06 crc kubenswrapper[4844]: I0126 12:50:06.365407 4844 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Jan 26 12:50:15 crc kubenswrapper[4844]: I0126 12:50:15.325354 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5f4c649f4c-fs7hp"] Jan 26 12:50:15 crc kubenswrapper[4844]: I0126 12:50:15.326178 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5f4c649f4c-fs7hp" podUID="da0b6423-078b-4291-bbdf-e35a4a0a54c4" containerName="controller-manager" containerID="cri-o://37bc66ec3f779e906cfdd8342cd62198c99adb337c1004f843b837aff343d6e5" gracePeriod=30 Jan 26 12:50:15 crc kubenswrapper[4844]: I0126 12:50:15.796518 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5f4c649f4c-fs7hp" Jan 26 12:50:15 crc kubenswrapper[4844]: I0126 12:50:15.915075 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/da0b6423-078b-4291-bbdf-e35a4a0a54c4-serving-cert\") pod \"da0b6423-078b-4291-bbdf-e35a4a0a54c4\" (UID: \"da0b6423-078b-4291-bbdf-e35a4a0a54c4\") " Jan 26 12:50:15 crc kubenswrapper[4844]: I0126 12:50:15.915180 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da0b6423-078b-4291-bbdf-e35a4a0a54c4-config\") pod \"da0b6423-078b-4291-bbdf-e35a4a0a54c4\" (UID: \"da0b6423-078b-4291-bbdf-e35a4a0a54c4\") " Jan 26 12:50:15 crc kubenswrapper[4844]: I0126 12:50:15.915204 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/da0b6423-078b-4291-bbdf-e35a4a0a54c4-client-ca\") pod \"da0b6423-078b-4291-bbdf-e35a4a0a54c4\" (UID: \"da0b6423-078b-4291-bbdf-e35a4a0a54c4\") " Jan 26 12:50:15 crc kubenswrapper[4844]: I0126 12:50:15.915246 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/da0b6423-078b-4291-bbdf-e35a4a0a54c4-proxy-ca-bundles\") pod \"da0b6423-078b-4291-bbdf-e35a4a0a54c4\" (UID: \"da0b6423-078b-4291-bbdf-e35a4a0a54c4\") " Jan 26 12:50:15 crc kubenswrapper[4844]: I0126 12:50:15.915270 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgdv7\" (UniqueName: \"kubernetes.io/projected/da0b6423-078b-4291-bbdf-e35a4a0a54c4-kube-api-access-pgdv7\") pod \"da0b6423-078b-4291-bbdf-e35a4a0a54c4\" (UID: \"da0b6423-078b-4291-bbdf-e35a4a0a54c4\") " Jan 26 12:50:15 crc kubenswrapper[4844]: I0126 12:50:15.916369 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da0b6423-078b-4291-bbdf-e35a4a0a54c4-config" (OuterVolumeSpecName: "config") pod "da0b6423-078b-4291-bbdf-e35a4a0a54c4" (UID: "da0b6423-078b-4291-bbdf-e35a4a0a54c4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:50:15 crc kubenswrapper[4844]: I0126 12:50:15.916647 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da0b6423-078b-4291-bbdf-e35a4a0a54c4-client-ca" (OuterVolumeSpecName: "client-ca") pod "da0b6423-078b-4291-bbdf-e35a4a0a54c4" (UID: "da0b6423-078b-4291-bbdf-e35a4a0a54c4"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:50:15 crc kubenswrapper[4844]: I0126 12:50:15.916988 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da0b6423-078b-4291-bbdf-e35a4a0a54c4-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "da0b6423-078b-4291-bbdf-e35a4a0a54c4" (UID: "da0b6423-078b-4291-bbdf-e35a4a0a54c4"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:50:15 crc kubenswrapper[4844]: I0126 12:50:15.924844 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da0b6423-078b-4291-bbdf-e35a4a0a54c4-kube-api-access-pgdv7" (OuterVolumeSpecName: "kube-api-access-pgdv7") pod "da0b6423-078b-4291-bbdf-e35a4a0a54c4" (UID: "da0b6423-078b-4291-bbdf-e35a4a0a54c4"). InnerVolumeSpecName "kube-api-access-pgdv7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:50:15 crc kubenswrapper[4844]: I0126 12:50:15.936770 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da0b6423-078b-4291-bbdf-e35a4a0a54c4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "da0b6423-078b-4291-bbdf-e35a4a0a54c4" (UID: "da0b6423-078b-4291-bbdf-e35a4a0a54c4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:50:15 crc kubenswrapper[4844]: I0126 12:50:15.969320 4844 generic.go:334] "Generic (PLEG): container finished" podID="da0b6423-078b-4291-bbdf-e35a4a0a54c4" containerID="37bc66ec3f779e906cfdd8342cd62198c99adb337c1004f843b837aff343d6e5" exitCode=0 Jan 26 12:50:15 crc kubenswrapper[4844]: I0126 12:50:15.969374 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5f4c649f4c-fs7hp" event={"ID":"da0b6423-078b-4291-bbdf-e35a4a0a54c4","Type":"ContainerDied","Data":"37bc66ec3f779e906cfdd8342cd62198c99adb337c1004f843b837aff343d6e5"} Jan 26 12:50:15 crc kubenswrapper[4844]: I0126 12:50:15.969412 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5f4c649f4c-fs7hp" event={"ID":"da0b6423-078b-4291-bbdf-e35a4a0a54c4","Type":"ContainerDied","Data":"632559d33c9c632b72c9af6724e4430d96e1737af51c3b495e0b5ef1cc29a4a9"} Jan 26 12:50:15 crc kubenswrapper[4844]: I0126 12:50:15.969439 4844 scope.go:117] "RemoveContainer" containerID="37bc66ec3f779e906cfdd8342cd62198c99adb337c1004f843b837aff343d6e5" Jan 26 12:50:15 crc kubenswrapper[4844]: I0126 12:50:15.969577 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5f4c649f4c-fs7hp" Jan 26 12:50:15 crc kubenswrapper[4844]: I0126 12:50:15.994760 4844 scope.go:117] "RemoveContainer" containerID="37bc66ec3f779e906cfdd8342cd62198c99adb337c1004f843b837aff343d6e5" Jan 26 12:50:15 crc kubenswrapper[4844]: E0126 12:50:15.995372 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"37bc66ec3f779e906cfdd8342cd62198c99adb337c1004f843b837aff343d6e5\": container with ID starting with 37bc66ec3f779e906cfdd8342cd62198c99adb337c1004f843b837aff343d6e5 not found: ID does not exist" containerID="37bc66ec3f779e906cfdd8342cd62198c99adb337c1004f843b837aff343d6e5" Jan 26 12:50:15 crc kubenswrapper[4844]: I0126 12:50:15.995495 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37bc66ec3f779e906cfdd8342cd62198c99adb337c1004f843b837aff343d6e5"} err="failed to get container status \"37bc66ec3f779e906cfdd8342cd62198c99adb337c1004f843b837aff343d6e5\": rpc error: code = NotFound desc = could not find container \"37bc66ec3f779e906cfdd8342cd62198c99adb337c1004f843b837aff343d6e5\": container with ID starting with 37bc66ec3f779e906cfdd8342cd62198c99adb337c1004f843b837aff343d6e5 not found: ID does not exist" Jan 26 12:50:16 crc kubenswrapper[4844]: I0126 12:50:16.016215 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5f4c649f4c-fs7hp"] Jan 26 12:50:16 crc kubenswrapper[4844]: I0126 12:50:16.017205 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pgdv7\" (UniqueName: \"kubernetes.io/projected/da0b6423-078b-4291-bbdf-e35a4a0a54c4-kube-api-access-pgdv7\") on node \"crc\" DevicePath \"\"" Jan 26 12:50:16 crc kubenswrapper[4844]: I0126 12:50:16.017440 4844 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/da0b6423-078b-4291-bbdf-e35a4a0a54c4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 12:50:16 crc kubenswrapper[4844]: I0126 12:50:16.017637 4844 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da0b6423-078b-4291-bbdf-e35a4a0a54c4-config\") on node \"crc\" DevicePath \"\"" Jan 26 12:50:16 crc kubenswrapper[4844]: I0126 12:50:16.017668 4844 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/da0b6423-078b-4291-bbdf-e35a4a0a54c4-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 12:50:16 crc kubenswrapper[4844]: I0126 12:50:16.017687 4844 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/da0b6423-078b-4291-bbdf-e35a4a0a54c4-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 12:50:16 crc kubenswrapper[4844]: I0126 12:50:16.024239 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5f4c649f4c-fs7hp"] Jan 26 12:50:16 crc kubenswrapper[4844]: I0126 12:50:16.786184 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-565d46959-h92rb"] Jan 26 12:50:16 crc kubenswrapper[4844]: E0126 12:50:16.786716 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da0b6423-078b-4291-bbdf-e35a4a0a54c4" containerName="controller-manager" Jan 26 12:50:16 crc kubenswrapper[4844]: I0126 12:50:16.786740 4844 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="da0b6423-078b-4291-bbdf-e35a4a0a54c4" containerName="controller-manager" Jan 26 12:50:16 crc kubenswrapper[4844]: I0126 12:50:16.786928 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="da0b6423-078b-4291-bbdf-e35a4a0a54c4" containerName="controller-manager" Jan 26 12:50:16 crc kubenswrapper[4844]: I0126 12:50:16.787590 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-565d46959-h92rb" Jan 26 12:50:16 crc kubenswrapper[4844]: I0126 12:50:16.792368 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 26 12:50:16 crc kubenswrapper[4844]: I0126 12:50:16.793232 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 26 12:50:16 crc kubenswrapper[4844]: I0126 12:50:16.793711 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 26 12:50:16 crc kubenswrapper[4844]: I0126 12:50:16.793770 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 26 12:50:16 crc kubenswrapper[4844]: I0126 12:50:16.794226 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 26 12:50:16 crc kubenswrapper[4844]: I0126 12:50:16.797997 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 26 12:50:16 crc kubenswrapper[4844]: I0126 12:50:16.799196 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-565d46959-h92rb"] Jan 26 12:50:16 crc kubenswrapper[4844]: I0126 12:50:16.806858 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 26 12:50:16 crc kubenswrapper[4844]: I0126 12:50:16.930202 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgwvb\" (UniqueName: \"kubernetes.io/projected/67cef31a-df5a-4bb2-bcce-36643e5f1151-kube-api-access-zgwvb\") pod \"controller-manager-565d46959-h92rb\" (UID: \"67cef31a-df5a-4bb2-bcce-36643e5f1151\") " pod="openshift-controller-manager/controller-manager-565d46959-h92rb" Jan 26 12:50:16 crc kubenswrapper[4844]: I0126 12:50:16.930407 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/67cef31a-df5a-4bb2-bcce-36643e5f1151-proxy-ca-bundles\") pod \"controller-manager-565d46959-h92rb\" (UID: \"67cef31a-df5a-4bb2-bcce-36643e5f1151\") " pod="openshift-controller-manager/controller-manager-565d46959-h92rb" Jan 26 12:50:16 crc kubenswrapper[4844]: I0126 12:50:16.930530 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/67cef31a-df5a-4bb2-bcce-36643e5f1151-client-ca\") pod \"controller-manager-565d46959-h92rb\" (UID: \"67cef31a-df5a-4bb2-bcce-36643e5f1151\") " pod="openshift-controller-manager/controller-manager-565d46959-h92rb" Jan 26 12:50:16 crc kubenswrapper[4844]: I0126 12:50:16.930573 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/67cef31a-df5a-4bb2-bcce-36643e5f1151-serving-cert\") pod \"controller-manager-565d46959-h92rb\" (UID: \"67cef31a-df5a-4bb2-bcce-36643e5f1151\") " pod="openshift-controller-manager/controller-manager-565d46959-h92rb" Jan 26 12:50:16 crc kubenswrapper[4844]: I0126 12:50:16.930642 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67cef31a-df5a-4bb2-bcce-36643e5f1151-config\") pod \"controller-manager-565d46959-h92rb\" (UID: \"67cef31a-df5a-4bb2-bcce-36643e5f1151\") " pod="openshift-controller-manager/controller-manager-565d46959-h92rb" Jan 26 12:50:17 crc kubenswrapper[4844]: I0126 12:50:17.032114 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/67cef31a-df5a-4bb2-bcce-36643e5f1151-client-ca\") pod \"controller-manager-565d46959-h92rb\" (UID: \"67cef31a-df5a-4bb2-bcce-36643e5f1151\") " pod="openshift-controller-manager/controller-manager-565d46959-h92rb" Jan 26 12:50:17 crc kubenswrapper[4844]: I0126 12:50:17.032182 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/67cef31a-df5a-4bb2-bcce-36643e5f1151-serving-cert\") pod \"controller-manager-565d46959-h92rb\" (UID: \"67cef31a-df5a-4bb2-bcce-36643e5f1151\") " pod="openshift-controller-manager/controller-manager-565d46959-h92rb" Jan 26 12:50:17 crc kubenswrapper[4844]: I0126 12:50:17.032211 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67cef31a-df5a-4bb2-bcce-36643e5f1151-config\") pod \"controller-manager-565d46959-h92rb\" (UID: \"67cef31a-df5a-4bb2-bcce-36643e5f1151\") " pod="openshift-controller-manager/controller-manager-565d46959-h92rb" Jan 26 12:50:17 crc kubenswrapper[4844]: I0126 12:50:17.032250 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgwvb\" (UniqueName: \"kubernetes.io/projected/67cef31a-df5a-4bb2-bcce-36643e5f1151-kube-api-access-zgwvb\") pod \"controller-manager-565d46959-h92rb\" (UID: \"67cef31a-df5a-4bb2-bcce-36643e5f1151\") " pod="openshift-controller-manager/controller-manager-565d46959-h92rb" Jan 26 12:50:17 crc kubenswrapper[4844]: I0126 12:50:17.032307 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/67cef31a-df5a-4bb2-bcce-36643e5f1151-proxy-ca-bundles\") pod \"controller-manager-565d46959-h92rb\" (UID: \"67cef31a-df5a-4bb2-bcce-36643e5f1151\") " pod="openshift-controller-manager/controller-manager-565d46959-h92rb" Jan 26 12:50:17 crc kubenswrapper[4844]: I0126 12:50:17.033584 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/67cef31a-df5a-4bb2-bcce-36643e5f1151-proxy-ca-bundles\") pod \"controller-manager-565d46959-h92rb\" (UID: \"67cef31a-df5a-4bb2-bcce-36643e5f1151\") " pod="openshift-controller-manager/controller-manager-565d46959-h92rb" Jan 26 12:50:17 crc kubenswrapper[4844]: I0126 12:50:17.033842 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67cef31a-df5a-4bb2-bcce-36643e5f1151-config\") pod \"controller-manager-565d46959-h92rb\" (UID: \"67cef31a-df5a-4bb2-bcce-36643e5f1151\") " 
pod="openshift-controller-manager/controller-manager-565d46959-h92rb" Jan 26 12:50:17 crc kubenswrapper[4844]: I0126 12:50:17.033921 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/67cef31a-df5a-4bb2-bcce-36643e5f1151-client-ca\") pod \"controller-manager-565d46959-h92rb\" (UID: \"67cef31a-df5a-4bb2-bcce-36643e5f1151\") " pod="openshift-controller-manager/controller-manager-565d46959-h92rb" Jan 26 12:50:17 crc kubenswrapper[4844]: I0126 12:50:17.038656 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/67cef31a-df5a-4bb2-bcce-36643e5f1151-serving-cert\") pod \"controller-manager-565d46959-h92rb\" (UID: \"67cef31a-df5a-4bb2-bcce-36643e5f1151\") " pod="openshift-controller-manager/controller-manager-565d46959-h92rb" Jan 26 12:50:17 crc kubenswrapper[4844]: I0126 12:50:17.060251 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgwvb\" (UniqueName: \"kubernetes.io/projected/67cef31a-df5a-4bb2-bcce-36643e5f1151-kube-api-access-zgwvb\") pod \"controller-manager-565d46959-h92rb\" (UID: \"67cef31a-df5a-4bb2-bcce-36643e5f1151\") " pod="openshift-controller-manager/controller-manager-565d46959-h92rb" Jan 26 12:50:17 crc kubenswrapper[4844]: I0126 12:50:17.126028 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-565d46959-h92rb" Jan 26 12:50:17 crc kubenswrapper[4844]: I0126 12:50:17.320987 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da0b6423-078b-4291-bbdf-e35a4a0a54c4" path="/var/lib/kubelet/pods/da0b6423-078b-4291-bbdf-e35a4a0a54c4/volumes" Jan 26 12:50:17 crc kubenswrapper[4844]: I0126 12:50:17.619818 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-565d46959-h92rb"] Jan 26 12:50:17 crc kubenswrapper[4844]: I0126 12:50:17.988435 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-565d46959-h92rb" event={"ID":"67cef31a-df5a-4bb2-bcce-36643e5f1151","Type":"ContainerStarted","Data":"fb867795bbc5fa34f18f2532f8205853680309c01f2ff2ed87d4642558d8095a"} Jan 26 12:50:17 crc kubenswrapper[4844]: I0126 12:50:17.988476 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-565d46959-h92rb" event={"ID":"67cef31a-df5a-4bb2-bcce-36643e5f1151","Type":"ContainerStarted","Data":"f71768a81fa3d4f359173aa2b56dc7a1dca0ba6a25c01b382c8952a5ef1cb3fd"} Jan 26 12:50:17 crc kubenswrapper[4844]: I0126 12:50:17.988776 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-565d46959-h92rb" Jan 26 12:50:17 crc kubenswrapper[4844]: I0126 12:50:17.993896 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-565d46959-h92rb" Jan 26 12:50:18 crc kubenswrapper[4844]: I0126 12:50:18.054903 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-565d46959-h92rb" podStartSLOduration=3.054884642 podStartE2EDuration="3.054884642s" podCreationTimestamp="2026-01-26 12:50:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:50:18.028495538 +0000 UTC m=+394.961863160" 
watchObservedRunningTime="2026-01-26 12:50:18.054884642 +0000 UTC m=+394.988252244" Jan 26 12:50:36 crc kubenswrapper[4844]: I0126 12:50:36.364772 4844 patch_prober.go:28] interesting pod/machine-config-daemon-j7r9j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 12:50:36 crc kubenswrapper[4844]: I0126 12:50:36.365364 4844 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 12:51:06 crc kubenswrapper[4844]: I0126 12:51:06.364496 4844 patch_prober.go:28] interesting pod/machine-config-daemon-j7r9j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 12:51:06 crc kubenswrapper[4844]: I0126 12:51:06.365190 4844 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 12:51:06 crc kubenswrapper[4844]: I0126 12:51:06.365254 4844 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" Jan 26 12:51:06 crc kubenswrapper[4844]: I0126 12:51:06.366013 4844 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"259eaafa3e05165d5d7e0a880f0cf0745986b838a34c0b0ee82a10c9bd689fed"} pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 12:51:06 crc kubenswrapper[4844]: I0126 12:51:06.366095 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" containerName="machine-config-daemon" containerID="cri-o://259eaafa3e05165d5d7e0a880f0cf0745986b838a34c0b0ee82a10c9bd689fed" gracePeriod=600 Jan 26 12:51:07 crc kubenswrapper[4844]: I0126 12:51:07.348225 4844 generic.go:334] "Generic (PLEG): container finished" podID="e3602fc7-397b-4d73-ab0c-45acc047397b" containerID="259eaafa3e05165d5d7e0a880f0cf0745986b838a34c0b0ee82a10c9bd689fed" exitCode=0 Jan 26 12:51:07 crc kubenswrapper[4844]: I0126 12:51:07.348301 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" event={"ID":"e3602fc7-397b-4d73-ab0c-45acc047397b","Type":"ContainerDied","Data":"259eaafa3e05165d5d7e0a880f0cf0745986b838a34c0b0ee82a10c9bd689fed"} Jan 26 12:51:07 crc kubenswrapper[4844]: I0126 12:51:07.348918 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" event={"ID":"e3602fc7-397b-4d73-ab0c-45acc047397b","Type":"ContainerStarted","Data":"870bb25aabceda2c570f02a680633d239134094919db5f044b9434a64360288d"} Jan 26 
12:51:07 crc kubenswrapper[4844]: I0126 12:51:07.348952 4844 scope.go:117] "RemoveContainer" containerID="8c964e0f9d13a855d738b028f8a2bed32fb23f4a05f0f0222a7d24e3222f44b2" Jan 26 12:51:50 crc kubenswrapper[4844]: I0126 12:51:50.016675 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-fmk5t"] Jan 26 12:52:15 crc kubenswrapper[4844]: I0126 12:52:15.043984 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-fmk5t" podUID="e6a96cc6-703f-4104-8ff8-53c3cafb2227" containerName="oauth-openshift" containerID="cri-o://6ec5f0c11c305cb8ebe7ea97640489384b1218528df6e1ed3d79bb1aea4d78f0" gracePeriod=15 Jan 26 12:52:15 crc kubenswrapper[4844]: I0126 12:52:15.754082 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-fmk5t" Jan 26 12:52:15 crc kubenswrapper[4844]: I0126 12:52:15.803327 4844 generic.go:334] "Generic (PLEG): container finished" podID="e6a96cc6-703f-4104-8ff8-53c3cafb2227" containerID="6ec5f0c11c305cb8ebe7ea97640489384b1218528df6e1ed3d79bb1aea4d78f0" exitCode=0 Jan 26 12:52:15 crc kubenswrapper[4844]: I0126 12:52:15.803380 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-fmk5t" event={"ID":"e6a96cc6-703f-4104-8ff8-53c3cafb2227","Type":"ContainerDied","Data":"6ec5f0c11c305cb8ebe7ea97640489384b1218528df6e1ed3d79bb1aea4d78f0"} Jan 26 12:52:15 crc kubenswrapper[4844]: I0126 12:52:15.803410 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-fmk5t" event={"ID":"e6a96cc6-703f-4104-8ff8-53c3cafb2227","Type":"ContainerDied","Data":"98c3de53b099ad3e627ba372ff3ee134253fdc07605c69e3e2acc5ba4d5889c9"} Jan 26 12:52:15 crc kubenswrapper[4844]: I0126 12:52:15.803431 4844 scope.go:117] "RemoveContainer" containerID="6ec5f0c11c305cb8ebe7ea97640489384b1218528df6e1ed3d79bb1aea4d78f0" Jan 26 12:52:15 crc kubenswrapper[4844]: I0126 12:52:15.803551 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-fmk5t" Jan 26 12:52:15 crc kubenswrapper[4844]: I0126 12:52:15.804445 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-846967c997-7njvr"] Jan 26 12:52:15 crc kubenswrapper[4844]: E0126 12:52:15.804831 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6a96cc6-703f-4104-8ff8-53c3cafb2227" containerName="oauth-openshift" Jan 26 12:52:15 crc kubenswrapper[4844]: I0126 12:52:15.804868 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6a96cc6-703f-4104-8ff8-53c3cafb2227" containerName="oauth-openshift" Jan 26 12:52:15 crc kubenswrapper[4844]: I0126 12:52:15.805132 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6a96cc6-703f-4104-8ff8-53c3cafb2227" containerName="oauth-openshift" Jan 26 12:52:15 crc kubenswrapper[4844]: I0126 12:52:15.805971 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-846967c997-7njvr" Jan 26 12:52:15 crc kubenswrapper[4844]: I0126 12:52:15.827222 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-846967c997-7njvr"] Jan 26 12:52:15 crc kubenswrapper[4844]: I0126 12:52:15.841413 4844 scope.go:117] "RemoveContainer" containerID="6ec5f0c11c305cb8ebe7ea97640489384b1218528df6e1ed3d79bb1aea4d78f0" Jan 26 12:52:15 crc kubenswrapper[4844]: E0126 12:52:15.842185 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ec5f0c11c305cb8ebe7ea97640489384b1218528df6e1ed3d79bb1aea4d78f0\": container with ID starting with 6ec5f0c11c305cb8ebe7ea97640489384b1218528df6e1ed3d79bb1aea4d78f0 not found: ID does not exist" containerID="6ec5f0c11c305cb8ebe7ea97640489384b1218528df6e1ed3d79bb1aea4d78f0" Jan 26 12:52:15 crc kubenswrapper[4844]: I0126 12:52:15.842269 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ec5f0c11c305cb8ebe7ea97640489384b1218528df6e1ed3d79bb1aea4d78f0"} err="failed to get container status \"6ec5f0c11c305cb8ebe7ea97640489384b1218528df6e1ed3d79bb1aea4d78f0\": rpc error: code = NotFound desc = could not find container \"6ec5f0c11c305cb8ebe7ea97640489384b1218528df6e1ed3d79bb1aea4d78f0\": container with ID starting with 6ec5f0c11c305cb8ebe7ea97640489384b1218528df6e1ed3d79bb1aea4d78f0 not found: ID does not exist" Jan 26 12:52:15 crc kubenswrapper[4844]: I0126 12:52:15.926373 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6bcgw\" (UniqueName: \"kubernetes.io/projected/e6a96cc6-703f-4104-8ff8-53c3cafb2227-kube-api-access-6bcgw\") pod \"e6a96cc6-703f-4104-8ff8-53c3cafb2227\" (UID: \"e6a96cc6-703f-4104-8ff8-53c3cafb2227\") " Jan 26 12:52:15 crc kubenswrapper[4844]: I0126 12:52:15.926460 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e6a96cc6-703f-4104-8ff8-53c3cafb2227-v4-0-config-system-router-certs\") pod \"e6a96cc6-703f-4104-8ff8-53c3cafb2227\" (UID: \"e6a96cc6-703f-4104-8ff8-53c3cafb2227\") " Jan 26 12:52:15 crc kubenswrapper[4844]: I0126 12:52:15.926526 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e6a96cc6-703f-4104-8ff8-53c3cafb2227-audit-policies\") pod \"e6a96cc6-703f-4104-8ff8-53c3cafb2227\" (UID: \"e6a96cc6-703f-4104-8ff8-53c3cafb2227\") " Jan 26 12:52:15 crc kubenswrapper[4844]: I0126 12:52:15.926564 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e6a96cc6-703f-4104-8ff8-53c3cafb2227-v4-0-config-system-ocp-branding-template\") pod \"e6a96cc6-703f-4104-8ff8-53c3cafb2227\" (UID: \"e6a96cc6-703f-4104-8ff8-53c3cafb2227\") " Jan 26 12:52:15 crc kubenswrapper[4844]: I0126 12:52:15.926669 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e6a96cc6-703f-4104-8ff8-53c3cafb2227-v4-0-config-user-template-provider-selection\") pod \"e6a96cc6-703f-4104-8ff8-53c3cafb2227\" (UID: \"e6a96cc6-703f-4104-8ff8-53c3cafb2227\") " Jan 26 12:52:15 crc kubenswrapper[4844]: I0126 12:52:15.926745 4844 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e6a96cc6-703f-4104-8ff8-53c3cafb2227-v4-0-config-system-service-ca\") pod \"e6a96cc6-703f-4104-8ff8-53c3cafb2227\" (UID: \"e6a96cc6-703f-4104-8ff8-53c3cafb2227\") " Jan 26 12:52:15 crc kubenswrapper[4844]: I0126 12:52:15.926790 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e6a96cc6-703f-4104-8ff8-53c3cafb2227-v4-0-config-system-serving-cert\") pod \"e6a96cc6-703f-4104-8ff8-53c3cafb2227\" (UID: \"e6a96cc6-703f-4104-8ff8-53c3cafb2227\") " Jan 26 12:52:15 crc kubenswrapper[4844]: I0126 12:52:15.928010 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e6a96cc6-703f-4104-8ff8-53c3cafb2227-v4-0-config-user-template-error\") pod \"e6a96cc6-703f-4104-8ff8-53c3cafb2227\" (UID: \"e6a96cc6-703f-4104-8ff8-53c3cafb2227\") " Jan 26 12:52:15 crc kubenswrapper[4844]: I0126 12:52:15.928060 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e6a96cc6-703f-4104-8ff8-53c3cafb2227-audit-dir\") pod \"e6a96cc6-703f-4104-8ff8-53c3cafb2227\" (UID: \"e6a96cc6-703f-4104-8ff8-53c3cafb2227\") " Jan 26 12:52:15 crc kubenswrapper[4844]: I0126 12:52:15.928106 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e6a96cc6-703f-4104-8ff8-53c3cafb2227-v4-0-config-user-template-login\") pod \"e6a96cc6-703f-4104-8ff8-53c3cafb2227\" (UID: \"e6a96cc6-703f-4104-8ff8-53c3cafb2227\") " Jan 26 12:52:15 crc kubenswrapper[4844]: I0126 12:52:15.928144 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e6a96cc6-703f-4104-8ff8-53c3cafb2227-v4-0-config-system-cliconfig\") pod \"e6a96cc6-703f-4104-8ff8-53c3cafb2227\" (UID: \"e6a96cc6-703f-4104-8ff8-53c3cafb2227\") " Jan 26 12:52:15 crc kubenswrapper[4844]: I0126 12:52:15.928144 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6a96cc6-703f-4104-8ff8-53c3cafb2227-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "e6a96cc6-703f-4104-8ff8-53c3cafb2227" (UID: "e6a96cc6-703f-4104-8ff8-53c3cafb2227"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:52:15 crc kubenswrapper[4844]: I0126 12:52:15.928185 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e6a96cc6-703f-4104-8ff8-53c3cafb2227-v4-0-config-system-trusted-ca-bundle\") pod \"e6a96cc6-703f-4104-8ff8-53c3cafb2227\" (UID: \"e6a96cc6-703f-4104-8ff8-53c3cafb2227\") " Jan 26 12:52:15 crc kubenswrapper[4844]: I0126 12:52:15.928246 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e6a96cc6-703f-4104-8ff8-53c3cafb2227-v4-0-config-user-idp-0-file-data\") pod \"e6a96cc6-703f-4104-8ff8-53c3cafb2227\" (UID: \"e6a96cc6-703f-4104-8ff8-53c3cafb2227\") " Jan 26 12:52:15 crc kubenswrapper[4844]: I0126 12:52:15.928287 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e6a96cc6-703f-4104-8ff8-53c3cafb2227-v4-0-config-system-session\") pod \"e6a96cc6-703f-4104-8ff8-53c3cafb2227\" (UID: \"e6a96cc6-703f-4104-8ff8-53c3cafb2227\") " Jan 26 12:52:15 crc kubenswrapper[4844]: I0126 12:52:15.928480 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f41db1b2-c62c-40a0-b86c-a6284a9351fa-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-846967c997-7njvr\" (UID: \"f41db1b2-c62c-40a0-b86c-a6284a9351fa\") " pod="openshift-authentication/oauth-openshift-846967c997-7njvr" Jan 26 12:52:15 crc kubenswrapper[4844]: I0126 12:52:15.928522 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f41db1b2-c62c-40a0-b86c-a6284a9351fa-v4-0-config-system-serving-cert\") pod \"oauth-openshift-846967c997-7njvr\" (UID: \"f41db1b2-c62c-40a0-b86c-a6284a9351fa\") " pod="openshift-authentication/oauth-openshift-846967c997-7njvr" Jan 26 12:52:15 crc kubenswrapper[4844]: I0126 12:52:15.928507 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e6a96cc6-703f-4104-8ff8-53c3cafb2227-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "e6a96cc6-703f-4104-8ff8-53c3cafb2227" (UID: "e6a96cc6-703f-4104-8ff8-53c3cafb2227"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 12:52:15 crc kubenswrapper[4844]: I0126 12:52:15.928556 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f41db1b2-c62c-40a0-b86c-a6284a9351fa-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-846967c997-7njvr\" (UID: \"f41db1b2-c62c-40a0-b86c-a6284a9351fa\") " pod="openshift-authentication/oauth-openshift-846967c997-7njvr" Jan 26 12:52:15 crc kubenswrapper[4844]: I0126 12:52:15.928652 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f41db1b2-c62c-40a0-b86c-a6284a9351fa-v4-0-config-system-router-certs\") pod \"oauth-openshift-846967c997-7njvr\" (UID: \"f41db1b2-c62c-40a0-b86c-a6284a9351fa\") " pod="openshift-authentication/oauth-openshift-846967c997-7njvr" Jan 26 12:52:15 crc kubenswrapper[4844]: I0126 12:52:15.928685 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f41db1b2-c62c-40a0-b86c-a6284a9351fa-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-846967c997-7njvr\" (UID: \"f41db1b2-c62c-40a0-b86c-a6284a9351fa\") " pod="openshift-authentication/oauth-openshift-846967c997-7njvr" Jan 26 12:52:15 crc kubenswrapper[4844]: I0126 12:52:15.928719 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f41db1b2-c62c-40a0-b86c-a6284a9351fa-audit-policies\") pod \"oauth-openshift-846967c997-7njvr\" (UID: \"f41db1b2-c62c-40a0-b86c-a6284a9351fa\") " pod="openshift-authentication/oauth-openshift-846967c997-7njvr" Jan 26 12:52:15 crc kubenswrapper[4844]: I0126 12:52:15.928753 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f41db1b2-c62c-40a0-b86c-a6284a9351fa-audit-dir\") pod \"oauth-openshift-846967c997-7njvr\" (UID: \"f41db1b2-c62c-40a0-b86c-a6284a9351fa\") " pod="openshift-authentication/oauth-openshift-846967c997-7njvr" Jan 26 12:52:15 crc kubenswrapper[4844]: I0126 12:52:15.928789 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f41db1b2-c62c-40a0-b86c-a6284a9351fa-v4-0-config-user-template-error\") pod \"oauth-openshift-846967c997-7njvr\" (UID: \"f41db1b2-c62c-40a0-b86c-a6284a9351fa\") " pod="openshift-authentication/oauth-openshift-846967c997-7njvr" Jan 26 12:52:15 crc kubenswrapper[4844]: I0126 12:52:15.928838 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f41db1b2-c62c-40a0-b86c-a6284a9351fa-v4-0-config-user-template-login\") pod \"oauth-openshift-846967c997-7njvr\" (UID: \"f41db1b2-c62c-40a0-b86c-a6284a9351fa\") " pod="openshift-authentication/oauth-openshift-846967c997-7njvr" Jan 26 12:52:15 crc kubenswrapper[4844]: I0126 12:52:15.928869 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f41db1b2-c62c-40a0-b86c-a6284a9351fa-v4-0-config-system-cliconfig\") pod 
\"oauth-openshift-846967c997-7njvr\" (UID: \"f41db1b2-c62c-40a0-b86c-a6284a9351fa\") " pod="openshift-authentication/oauth-openshift-846967c997-7njvr" Jan 26 12:52:15 crc kubenswrapper[4844]: I0126 12:52:15.928924 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f41db1b2-c62c-40a0-b86c-a6284a9351fa-v4-0-config-system-session\") pod \"oauth-openshift-846967c997-7njvr\" (UID: \"f41db1b2-c62c-40a0-b86c-a6284a9351fa\") " pod="openshift-authentication/oauth-openshift-846967c997-7njvr" Jan 26 12:52:15 crc kubenswrapper[4844]: I0126 12:52:15.928968 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f41db1b2-c62c-40a0-b86c-a6284a9351fa-v4-0-config-system-service-ca\") pod \"oauth-openshift-846967c997-7njvr\" (UID: \"f41db1b2-c62c-40a0-b86c-a6284a9351fa\") " pod="openshift-authentication/oauth-openshift-846967c997-7njvr" Jan 26 12:52:15 crc kubenswrapper[4844]: I0126 12:52:15.929022 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f41db1b2-c62c-40a0-b86c-a6284a9351fa-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-846967c997-7njvr\" (UID: \"f41db1b2-c62c-40a0-b86c-a6284a9351fa\") " pod="openshift-authentication/oauth-openshift-846967c997-7njvr" Jan 26 12:52:15 crc kubenswrapper[4844]: I0126 12:52:15.929054 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8b7z\" (UniqueName: \"kubernetes.io/projected/f41db1b2-c62c-40a0-b86c-a6284a9351fa-kube-api-access-v8b7z\") pod \"oauth-openshift-846967c997-7njvr\" (UID: \"f41db1b2-c62c-40a0-b86c-a6284a9351fa\") " pod="openshift-authentication/oauth-openshift-846967c997-7njvr" Jan 26 12:52:15 crc kubenswrapper[4844]: I0126 12:52:15.929110 4844 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e6a96cc6-703f-4104-8ff8-53c3cafb2227-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 26 12:52:15 crc kubenswrapper[4844]: I0126 12:52:15.929131 4844 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e6a96cc6-703f-4104-8ff8-53c3cafb2227-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 26 12:52:15 crc kubenswrapper[4844]: I0126 12:52:15.929582 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6a96cc6-703f-4104-8ff8-53c3cafb2227-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "e6a96cc6-703f-4104-8ff8-53c3cafb2227" (UID: "e6a96cc6-703f-4104-8ff8-53c3cafb2227"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:52:15 crc kubenswrapper[4844]: I0126 12:52:15.930905 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6a96cc6-703f-4104-8ff8-53c3cafb2227-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "e6a96cc6-703f-4104-8ff8-53c3cafb2227" (UID: "e6a96cc6-703f-4104-8ff8-53c3cafb2227"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:52:15 crc kubenswrapper[4844]: I0126 12:52:15.931051 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6a96cc6-703f-4104-8ff8-53c3cafb2227-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "e6a96cc6-703f-4104-8ff8-53c3cafb2227" (UID: "e6a96cc6-703f-4104-8ff8-53c3cafb2227"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:52:15 crc kubenswrapper[4844]: I0126 12:52:15.935334 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6a96cc6-703f-4104-8ff8-53c3cafb2227-kube-api-access-6bcgw" (OuterVolumeSpecName: "kube-api-access-6bcgw") pod "e6a96cc6-703f-4104-8ff8-53c3cafb2227" (UID: "e6a96cc6-703f-4104-8ff8-53c3cafb2227"). InnerVolumeSpecName "kube-api-access-6bcgw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:52:15 crc kubenswrapper[4844]: I0126 12:52:15.939104 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6a96cc6-703f-4104-8ff8-53c3cafb2227-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "e6a96cc6-703f-4104-8ff8-53c3cafb2227" (UID: "e6a96cc6-703f-4104-8ff8-53c3cafb2227"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:52:15 crc kubenswrapper[4844]: I0126 12:52:15.939258 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6a96cc6-703f-4104-8ff8-53c3cafb2227-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "e6a96cc6-703f-4104-8ff8-53c3cafb2227" (UID: "e6a96cc6-703f-4104-8ff8-53c3cafb2227"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:52:15 crc kubenswrapper[4844]: I0126 12:52:15.939722 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6a96cc6-703f-4104-8ff8-53c3cafb2227-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "e6a96cc6-703f-4104-8ff8-53c3cafb2227" (UID: "e6a96cc6-703f-4104-8ff8-53c3cafb2227"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:52:15 crc kubenswrapper[4844]: I0126 12:52:15.940135 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6a96cc6-703f-4104-8ff8-53c3cafb2227-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "e6a96cc6-703f-4104-8ff8-53c3cafb2227" (UID: "e6a96cc6-703f-4104-8ff8-53c3cafb2227"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:52:15 crc kubenswrapper[4844]: I0126 12:52:15.940778 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6a96cc6-703f-4104-8ff8-53c3cafb2227-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "e6a96cc6-703f-4104-8ff8-53c3cafb2227" (UID: "e6a96cc6-703f-4104-8ff8-53c3cafb2227"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:52:15 crc kubenswrapper[4844]: I0126 12:52:15.941137 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6a96cc6-703f-4104-8ff8-53c3cafb2227-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "e6a96cc6-703f-4104-8ff8-53c3cafb2227" (UID: "e6a96cc6-703f-4104-8ff8-53c3cafb2227"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:52:15 crc kubenswrapper[4844]: I0126 12:52:15.941388 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6a96cc6-703f-4104-8ff8-53c3cafb2227-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "e6a96cc6-703f-4104-8ff8-53c3cafb2227" (UID: "e6a96cc6-703f-4104-8ff8-53c3cafb2227"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:52:15 crc kubenswrapper[4844]: I0126 12:52:15.941831 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6a96cc6-703f-4104-8ff8-53c3cafb2227-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "e6a96cc6-703f-4104-8ff8-53c3cafb2227" (UID: "e6a96cc6-703f-4104-8ff8-53c3cafb2227"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:52:16 crc kubenswrapper[4844]: I0126 12:52:16.030008 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f41db1b2-c62c-40a0-b86c-a6284a9351fa-v4-0-config-user-template-login\") pod \"oauth-openshift-846967c997-7njvr\" (UID: \"f41db1b2-c62c-40a0-b86c-a6284a9351fa\") " pod="openshift-authentication/oauth-openshift-846967c997-7njvr" Jan 26 12:52:16 crc kubenswrapper[4844]: I0126 12:52:16.030085 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f41db1b2-c62c-40a0-b86c-a6284a9351fa-v4-0-config-system-cliconfig\") pod \"oauth-openshift-846967c997-7njvr\" (UID: \"f41db1b2-c62c-40a0-b86c-a6284a9351fa\") " pod="openshift-authentication/oauth-openshift-846967c997-7njvr" Jan 26 12:52:16 crc kubenswrapper[4844]: I0126 12:52:16.030124 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f41db1b2-c62c-40a0-b86c-a6284a9351fa-v4-0-config-system-session\") pod \"oauth-openshift-846967c997-7njvr\" (UID: \"f41db1b2-c62c-40a0-b86c-a6284a9351fa\") " pod="openshift-authentication/oauth-openshift-846967c997-7njvr" Jan 26 12:52:16 crc kubenswrapper[4844]: I0126 12:52:16.030155 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f41db1b2-c62c-40a0-b86c-a6284a9351fa-v4-0-config-system-service-ca\") pod \"oauth-openshift-846967c997-7njvr\" (UID: \"f41db1b2-c62c-40a0-b86c-a6284a9351fa\") " pod="openshift-authentication/oauth-openshift-846967c997-7njvr" Jan 26 12:52:16 crc kubenswrapper[4844]: I0126 12:52:16.030193 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/f41db1b2-c62c-40a0-b86c-a6284a9351fa-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-846967c997-7njvr\" (UID: \"f41db1b2-c62c-40a0-b86c-a6284a9351fa\") " pod="openshift-authentication/oauth-openshift-846967c997-7njvr" Jan 26 12:52:16 crc kubenswrapper[4844]: I0126 12:52:16.030217 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8b7z\" (UniqueName: \"kubernetes.io/projected/f41db1b2-c62c-40a0-b86c-a6284a9351fa-kube-api-access-v8b7z\") pod \"oauth-openshift-846967c997-7njvr\" (UID: \"f41db1b2-c62c-40a0-b86c-a6284a9351fa\") " pod="openshift-authentication/oauth-openshift-846967c997-7njvr" Jan 26 12:52:16 crc kubenswrapper[4844]: I0126 12:52:16.030250 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f41db1b2-c62c-40a0-b86c-a6284a9351fa-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-846967c997-7njvr\" (UID: \"f41db1b2-c62c-40a0-b86c-a6284a9351fa\") " pod="openshift-authentication/oauth-openshift-846967c997-7njvr" Jan 26 12:52:16 crc kubenswrapper[4844]: I0126 12:52:16.030273 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f41db1b2-c62c-40a0-b86c-a6284a9351fa-v4-0-config-system-serving-cert\") pod \"oauth-openshift-846967c997-7njvr\" (UID: \"f41db1b2-c62c-40a0-b86c-a6284a9351fa\") " pod="openshift-authentication/oauth-openshift-846967c997-7njvr" Jan 26 12:52:16 crc kubenswrapper[4844]: I0126 12:52:16.030295 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f41db1b2-c62c-40a0-b86c-a6284a9351fa-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-846967c997-7njvr\" (UID: \"f41db1b2-c62c-40a0-b86c-a6284a9351fa\") " pod="openshift-authentication/oauth-openshift-846967c997-7njvr" Jan 26 12:52:16 crc kubenswrapper[4844]: I0126 12:52:16.030339 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f41db1b2-c62c-40a0-b86c-a6284a9351fa-v4-0-config-system-router-certs\") pod \"oauth-openshift-846967c997-7njvr\" (UID: \"f41db1b2-c62c-40a0-b86c-a6284a9351fa\") " pod="openshift-authentication/oauth-openshift-846967c997-7njvr" Jan 26 12:52:16 crc kubenswrapper[4844]: I0126 12:52:16.030359 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f41db1b2-c62c-40a0-b86c-a6284a9351fa-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-846967c997-7njvr\" (UID: \"f41db1b2-c62c-40a0-b86c-a6284a9351fa\") " pod="openshift-authentication/oauth-openshift-846967c997-7njvr" Jan 26 12:52:16 crc kubenswrapper[4844]: I0126 12:52:16.030381 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f41db1b2-c62c-40a0-b86c-a6284a9351fa-audit-policies\") pod \"oauth-openshift-846967c997-7njvr\" (UID: \"f41db1b2-c62c-40a0-b86c-a6284a9351fa\") " pod="openshift-authentication/oauth-openshift-846967c997-7njvr" Jan 26 12:52:16 crc kubenswrapper[4844]: I0126 12:52:16.030404 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/f41db1b2-c62c-40a0-b86c-a6284a9351fa-audit-dir\") pod \"oauth-openshift-846967c997-7njvr\" (UID: \"f41db1b2-c62c-40a0-b86c-a6284a9351fa\") " pod="openshift-authentication/oauth-openshift-846967c997-7njvr" Jan 26 12:52:16 crc kubenswrapper[4844]: I0126 12:52:16.030429 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f41db1b2-c62c-40a0-b86c-a6284a9351fa-v4-0-config-user-template-error\") pod \"oauth-openshift-846967c997-7njvr\" (UID: \"f41db1b2-c62c-40a0-b86c-a6284a9351fa\") " pod="openshift-authentication/oauth-openshift-846967c997-7njvr" Jan 26 12:52:16 crc kubenswrapper[4844]: I0126 12:52:16.030484 4844 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e6a96cc6-703f-4104-8ff8-53c3cafb2227-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 12:52:16 crc kubenswrapper[4844]: I0126 12:52:16.030499 4844 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e6a96cc6-703f-4104-8ff8-53c3cafb2227-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 26 12:52:16 crc kubenswrapper[4844]: I0126 12:52:16.030517 4844 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e6a96cc6-703f-4104-8ff8-53c3cafb2227-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 26 12:52:16 crc kubenswrapper[4844]: I0126 12:52:16.030535 4844 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e6a96cc6-703f-4104-8ff8-53c3cafb2227-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 26 12:52:16 crc kubenswrapper[4844]: I0126 12:52:16.030549 4844 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e6a96cc6-703f-4104-8ff8-53c3cafb2227-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 12:52:16 crc kubenswrapper[4844]: I0126 12:52:16.030562 4844 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e6a96cc6-703f-4104-8ff8-53c3cafb2227-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 26 12:52:16 crc kubenswrapper[4844]: I0126 12:52:16.030579 4844 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e6a96cc6-703f-4104-8ff8-53c3cafb2227-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 26 12:52:16 crc kubenswrapper[4844]: I0126 12:52:16.030618 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6bcgw\" (UniqueName: \"kubernetes.io/projected/e6a96cc6-703f-4104-8ff8-53c3cafb2227-kube-api-access-6bcgw\") on node \"crc\" DevicePath \"\"" Jan 26 12:52:16 crc kubenswrapper[4844]: I0126 12:52:16.030635 4844 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e6a96cc6-703f-4104-8ff8-53c3cafb2227-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 26 12:52:16 crc kubenswrapper[4844]: I0126 12:52:16.030675 4844 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/e6a96cc6-703f-4104-8ff8-53c3cafb2227-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 26 12:52:16 crc kubenswrapper[4844]: I0126 12:52:16.030690 4844 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e6a96cc6-703f-4104-8ff8-53c3cafb2227-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 26 12:52:16 crc kubenswrapper[4844]: I0126 12:52:16.030704 4844 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e6a96cc6-703f-4104-8ff8-53c3cafb2227-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 12:52:16 crc kubenswrapper[4844]: I0126 12:52:16.031663 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f41db1b2-c62c-40a0-b86c-a6284a9351fa-v4-0-config-system-cliconfig\") pod \"oauth-openshift-846967c997-7njvr\" (UID: \"f41db1b2-c62c-40a0-b86c-a6284a9351fa\") " pod="openshift-authentication/oauth-openshift-846967c997-7njvr" Jan 26 12:52:16 crc kubenswrapper[4844]: I0126 12:52:16.031911 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f41db1b2-c62c-40a0-b86c-a6284a9351fa-audit-dir\") pod \"oauth-openshift-846967c997-7njvr\" (UID: \"f41db1b2-c62c-40a0-b86c-a6284a9351fa\") " pod="openshift-authentication/oauth-openshift-846967c997-7njvr" Jan 26 12:52:16 crc kubenswrapper[4844]: I0126 12:52:16.032552 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f41db1b2-c62c-40a0-b86c-a6284a9351fa-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-846967c997-7njvr\" (UID: \"f41db1b2-c62c-40a0-b86c-a6284a9351fa\") " pod="openshift-authentication/oauth-openshift-846967c997-7njvr" Jan 26 12:52:16 crc kubenswrapper[4844]: I0126 12:52:16.032675 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f41db1b2-c62c-40a0-b86c-a6284a9351fa-audit-policies\") pod \"oauth-openshift-846967c997-7njvr\" (UID: \"f41db1b2-c62c-40a0-b86c-a6284a9351fa\") " pod="openshift-authentication/oauth-openshift-846967c997-7njvr" Jan 26 12:52:16 crc kubenswrapper[4844]: I0126 12:52:16.032763 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f41db1b2-c62c-40a0-b86c-a6284a9351fa-v4-0-config-system-service-ca\") pod \"oauth-openshift-846967c997-7njvr\" (UID: \"f41db1b2-c62c-40a0-b86c-a6284a9351fa\") " pod="openshift-authentication/oauth-openshift-846967c997-7njvr" Jan 26 12:52:16 crc kubenswrapper[4844]: I0126 12:52:16.034379 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f41db1b2-c62c-40a0-b86c-a6284a9351fa-v4-0-config-user-template-login\") pod \"oauth-openshift-846967c997-7njvr\" (UID: \"f41db1b2-c62c-40a0-b86c-a6284a9351fa\") " pod="openshift-authentication/oauth-openshift-846967c997-7njvr" Jan 26 12:52:16 crc kubenswrapper[4844]: I0126 12:52:16.036399 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/f41db1b2-c62c-40a0-b86c-a6284a9351fa-v4-0-config-system-serving-cert\") pod \"oauth-openshift-846967c997-7njvr\" (UID: \"f41db1b2-c62c-40a0-b86c-a6284a9351fa\") " pod="openshift-authentication/oauth-openshift-846967c997-7njvr" Jan 26 12:52:16 crc kubenswrapper[4844]: I0126 12:52:16.036518 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f41db1b2-c62c-40a0-b86c-a6284a9351fa-v4-0-config-user-template-error\") pod \"oauth-openshift-846967c997-7njvr\" (UID: \"f41db1b2-c62c-40a0-b86c-a6284a9351fa\") " pod="openshift-authentication/oauth-openshift-846967c997-7njvr" Jan 26 12:52:16 crc kubenswrapper[4844]: I0126 12:52:16.036799 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f41db1b2-c62c-40a0-b86c-a6284a9351fa-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-846967c997-7njvr\" (UID: \"f41db1b2-c62c-40a0-b86c-a6284a9351fa\") " pod="openshift-authentication/oauth-openshift-846967c997-7njvr" Jan 26 12:52:16 crc kubenswrapper[4844]: I0126 12:52:16.036930 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f41db1b2-c62c-40a0-b86c-a6284a9351fa-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-846967c997-7njvr\" (UID: \"f41db1b2-c62c-40a0-b86c-a6284a9351fa\") " pod="openshift-authentication/oauth-openshift-846967c997-7njvr" Jan 26 12:52:16 crc kubenswrapper[4844]: I0126 12:52:16.037065 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f41db1b2-c62c-40a0-b86c-a6284a9351fa-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-846967c997-7njvr\" (UID: \"f41db1b2-c62c-40a0-b86c-a6284a9351fa\") " pod="openshift-authentication/oauth-openshift-846967c997-7njvr" Jan 26 12:52:16 crc kubenswrapper[4844]: I0126 12:52:16.037695 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f41db1b2-c62c-40a0-b86c-a6284a9351fa-v4-0-config-system-router-certs\") pod \"oauth-openshift-846967c997-7njvr\" (UID: \"f41db1b2-c62c-40a0-b86c-a6284a9351fa\") " pod="openshift-authentication/oauth-openshift-846967c997-7njvr" Jan 26 12:52:16 crc kubenswrapper[4844]: I0126 12:52:16.038392 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f41db1b2-c62c-40a0-b86c-a6284a9351fa-v4-0-config-system-session\") pod \"oauth-openshift-846967c997-7njvr\" (UID: \"f41db1b2-c62c-40a0-b86c-a6284a9351fa\") " pod="openshift-authentication/oauth-openshift-846967c997-7njvr" Jan 26 12:52:16 crc kubenswrapper[4844]: I0126 12:52:16.046529 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8b7z\" (UniqueName: \"kubernetes.io/projected/f41db1b2-c62c-40a0-b86c-a6284a9351fa-kube-api-access-v8b7z\") pod \"oauth-openshift-846967c997-7njvr\" (UID: \"f41db1b2-c62c-40a0-b86c-a6284a9351fa\") " pod="openshift-authentication/oauth-openshift-846967c997-7njvr" Jan 26 12:52:16 crc kubenswrapper[4844]: I0126 12:52:16.130527 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-846967c997-7njvr" Jan 26 12:52:16 crc kubenswrapper[4844]: I0126 12:52:16.158383 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-fmk5t"] Jan 26 12:52:16 crc kubenswrapper[4844]: I0126 12:52:16.177491 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-fmk5t"] Jan 26 12:52:16 crc kubenswrapper[4844]: I0126 12:52:16.599012 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-846967c997-7njvr"] Jan 26 12:52:16 crc kubenswrapper[4844]: W0126 12:52:16.609136 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf41db1b2_c62c_40a0_b86c_a6284a9351fa.slice/crio-4f1309deb56e283abadee5e066523e89022d73a09384c10c7659634c110616a2 WatchSource:0}: Error finding container 4f1309deb56e283abadee5e066523e89022d73a09384c10c7659634c110616a2: Status 404 returned error can't find the container with id 4f1309deb56e283abadee5e066523e89022d73a09384c10c7659634c110616a2 Jan 26 12:52:16 crc kubenswrapper[4844]: I0126 12:52:16.821038 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-846967c997-7njvr" event={"ID":"f41db1b2-c62c-40a0-b86c-a6284a9351fa","Type":"ContainerStarted","Data":"4f1309deb56e283abadee5e066523e89022d73a09384c10c7659634c110616a2"} Jan 26 12:52:17 crc kubenswrapper[4844]: I0126 12:52:17.327402 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6a96cc6-703f-4104-8ff8-53c3cafb2227" path="/var/lib/kubelet/pods/e6a96cc6-703f-4104-8ff8-53c3cafb2227/volumes" Jan 26 12:52:17 crc kubenswrapper[4844]: I0126 12:52:17.829804 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-846967c997-7njvr" event={"ID":"f41db1b2-c62c-40a0-b86c-a6284a9351fa","Type":"ContainerStarted","Data":"970fd939fac4f7ab5e218b1424aeeaffc4cf0de34c8e015e0b275466a84eeeba"} Jan 26 12:52:17 crc kubenswrapper[4844]: I0126 12:52:17.830049 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-846967c997-7njvr" Jan 26 12:52:17 crc kubenswrapper[4844]: I0126 12:52:17.835398 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-846967c997-7njvr" Jan 26 12:52:17 crc kubenswrapper[4844]: I0126 12:52:17.852251 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-846967c997-7njvr" podStartSLOduration=27.852228166 podStartE2EDuration="27.852228166s" podCreationTimestamp="2026-01-26 12:51:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:52:17.848284239 +0000 UTC m=+514.781651891" watchObservedRunningTime="2026-01-26 12:52:17.852228166 +0000 UTC m=+514.785595778" Jan 26 12:52:43 crc kubenswrapper[4844]: I0126 12:52:43.467399 4844 scope.go:117] "RemoveContainer" containerID="7ac67dd3568804ad7677521b855982b1b7a3496504dbac50e11b95737c4cac8a" Jan 26 12:52:43 crc kubenswrapper[4844]: I0126 12:52:43.499879 4844 scope.go:117] "RemoveContainer" containerID="938d678e79dc74debbf11928bc5ab4b890aebb43e137ea8049db0561bd0b2da2" Jan 26 12:52:43 crc kubenswrapper[4844]: I0126 12:52:43.529262 4844 scope.go:117] "RemoveContainer" 
containerID="3eaaa8d93d73a23ee10f80981fbfddf5bdeee6e89b8a5e1531d3379c4bd383a8" Jan 26 12:53:06 crc kubenswrapper[4844]: I0126 12:53:06.365280 4844 patch_prober.go:28] interesting pod/machine-config-daemon-j7r9j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 12:53:06 crc kubenswrapper[4844]: I0126 12:53:06.366579 4844 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 12:53:36 crc kubenswrapper[4844]: I0126 12:53:36.365518 4844 patch_prober.go:28] interesting pod/machine-config-daemon-j7r9j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 12:53:36 crc kubenswrapper[4844]: I0126 12:53:36.366364 4844 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 12:54:06 crc kubenswrapper[4844]: I0126 12:54:06.364952 4844 patch_prober.go:28] interesting pod/machine-config-daemon-j7r9j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 12:54:06 crc kubenswrapper[4844]: I0126 12:54:06.365548 4844 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 12:54:06 crc kubenswrapper[4844]: I0126 12:54:06.365667 4844 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" Jan 26 12:54:06 crc kubenswrapper[4844]: I0126 12:54:06.366447 4844 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"870bb25aabceda2c570f02a680633d239134094919db5f044b9434a64360288d"} pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 12:54:06 crc kubenswrapper[4844]: I0126 12:54:06.366531 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" containerName="machine-config-daemon" containerID="cri-o://870bb25aabceda2c570f02a680633d239134094919db5f044b9434a64360288d" gracePeriod=600 Jan 26 12:54:06 crc kubenswrapper[4844]: I0126 12:54:06.658300 4844 generic.go:334] "Generic (PLEG): container finished" podID="e3602fc7-397b-4d73-ab0c-45acc047397b" 
containerID="870bb25aabceda2c570f02a680633d239134094919db5f044b9434a64360288d" exitCode=0 Jan 26 12:54:06 crc kubenswrapper[4844]: I0126 12:54:06.658347 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" event={"ID":"e3602fc7-397b-4d73-ab0c-45acc047397b","Type":"ContainerDied","Data":"870bb25aabceda2c570f02a680633d239134094919db5f044b9434a64360288d"} Jan 26 12:54:06 crc kubenswrapper[4844]: I0126 12:54:06.658867 4844 scope.go:117] "RemoveContainer" containerID="259eaafa3e05165d5d7e0a880f0cf0745986b838a34c0b0ee82a10c9bd689fed" Jan 26 12:54:07 crc kubenswrapper[4844]: I0126 12:54:07.669476 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" event={"ID":"e3602fc7-397b-4d73-ab0c-45acc047397b","Type":"ContainerStarted","Data":"6036e032f544da01ca860cf2f64b83a1de4c715f98d7954c6a55f13c7ae044df"} Jan 26 12:54:43 crc kubenswrapper[4844]: I0126 12:54:43.624197 4844 scope.go:117] "RemoveContainer" containerID="507efa4e6b84e32ba2b163ab197dc86f23492f40d8223ecca927c1f8294538f3" Jan 26 12:54:43 crc kubenswrapper[4844]: I0126 12:54:43.660143 4844 scope.go:117] "RemoveContainer" containerID="52b24656eb293c56278e9835f8abc0ba0024bfe3e7c2b17e9337708f0558813f" Jan 26 12:56:06 crc kubenswrapper[4844]: I0126 12:56:06.365285 4844 patch_prober.go:28] interesting pod/machine-config-daemon-j7r9j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 12:56:06 crc kubenswrapper[4844]: I0126 12:56:06.365979 4844 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 12:56:36 crc kubenswrapper[4844]: I0126 12:56:36.364481 4844 patch_prober.go:28] interesting pod/machine-config-daemon-j7r9j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 12:56:36 crc kubenswrapper[4844]: I0126 12:56:36.365141 4844 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 12:56:43 crc kubenswrapper[4844]: I0126 12:56:43.485890 4844 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 26 12:57:06 crc kubenswrapper[4844]: I0126 12:57:06.364961 4844 patch_prober.go:28] interesting pod/machine-config-daemon-j7r9j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 12:57:06 crc kubenswrapper[4844]: I0126 12:57:06.365751 4844 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" 
podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 12:57:06 crc kubenswrapper[4844]: I0126 12:57:06.365833 4844 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" Jan 26 12:57:06 crc kubenswrapper[4844]: I0126 12:57:06.366518 4844 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6036e032f544da01ca860cf2f64b83a1de4c715f98d7954c6a55f13c7ae044df"} pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 12:57:06 crc kubenswrapper[4844]: I0126 12:57:06.366649 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" containerName="machine-config-daemon" containerID="cri-o://6036e032f544da01ca860cf2f64b83a1de4c715f98d7954c6a55f13c7ae044df" gracePeriod=600 Jan 26 12:57:07 crc kubenswrapper[4844]: I0126 12:57:07.006480 4844 generic.go:334] "Generic (PLEG): container finished" podID="e3602fc7-397b-4d73-ab0c-45acc047397b" containerID="6036e032f544da01ca860cf2f64b83a1de4c715f98d7954c6a55f13c7ae044df" exitCode=0 Jan 26 12:57:07 crc kubenswrapper[4844]: I0126 12:57:07.006997 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" event={"ID":"e3602fc7-397b-4d73-ab0c-45acc047397b","Type":"ContainerDied","Data":"6036e032f544da01ca860cf2f64b83a1de4c715f98d7954c6a55f13c7ae044df"} Jan 26 12:57:07 crc kubenswrapper[4844]: I0126 12:57:07.007158 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" event={"ID":"e3602fc7-397b-4d73-ab0c-45acc047397b","Type":"ContainerStarted","Data":"168fb0438abc387a38960b9c5a893cdb9d7d45ce1d189f5af498314adae7a5ca"} Jan 26 12:57:07 crc kubenswrapper[4844]: I0126 12:57:07.007202 4844 scope.go:117] "RemoveContainer" containerID="870bb25aabceda2c570f02a680633d239134094919db5f044b9434a64360288d" Jan 26 12:58:02 crc kubenswrapper[4844]: I0126 12:58:02.217705 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-x2dvz"] Jan 26 12:58:02 crc kubenswrapper[4844]: I0126 12:58:02.219939 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-x2dvz" Jan 26 12:58:02 crc kubenswrapper[4844]: I0126 12:58:02.234565 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-x2dvz"] Jan 26 12:58:02 crc kubenswrapper[4844]: I0126 12:58:02.345173 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f75f54ce-5ee7-4904-8dee-cc9c6afba45a-bound-sa-token\") pod \"image-registry-66df7c8f76-x2dvz\" (UID: \"f75f54ce-5ee7-4904-8dee-cc9c6afba45a\") " pod="openshift-image-registry/image-registry-66df7c8f76-x2dvz" Jan 26 12:58:02 crc kubenswrapper[4844]: I0126 12:58:02.345246 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f75f54ce-5ee7-4904-8dee-cc9c6afba45a-registry-tls\") pod \"image-registry-66df7c8f76-x2dvz\" (UID: \"f75f54ce-5ee7-4904-8dee-cc9c6afba45a\") " pod="openshift-image-registry/image-registry-66df7c8f76-x2dvz" Jan 26 12:58:02 crc kubenswrapper[4844]: I0126 12:58:02.345270 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f75f54ce-5ee7-4904-8dee-cc9c6afba45a-trusted-ca\") pod \"image-registry-66df7c8f76-x2dvz\" (UID: \"f75f54ce-5ee7-4904-8dee-cc9c6afba45a\") " pod="openshift-image-registry/image-registry-66df7c8f76-x2dvz" Jan 26 12:58:02 crc kubenswrapper[4844]: I0126 12:58:02.345449 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f75f54ce-5ee7-4904-8dee-cc9c6afba45a-ca-trust-extracted\") pod \"image-registry-66df7c8f76-x2dvz\" (UID: \"f75f54ce-5ee7-4904-8dee-cc9c6afba45a\") " pod="openshift-image-registry/image-registry-66df7c8f76-x2dvz" Jan 26 12:58:02 crc kubenswrapper[4844]: I0126 12:58:02.345494 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f75f54ce-5ee7-4904-8dee-cc9c6afba45a-registry-certificates\") pod \"image-registry-66df7c8f76-x2dvz\" (UID: \"f75f54ce-5ee7-4904-8dee-cc9c6afba45a\") " pod="openshift-image-registry/image-registry-66df7c8f76-x2dvz" Jan 26 12:58:02 crc kubenswrapper[4844]: I0126 12:58:02.345519 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f75f54ce-5ee7-4904-8dee-cc9c6afba45a-installation-pull-secrets\") pod \"image-registry-66df7c8f76-x2dvz\" (UID: \"f75f54ce-5ee7-4904-8dee-cc9c6afba45a\") " pod="openshift-image-registry/image-registry-66df7c8f76-x2dvz" Jan 26 12:58:02 crc kubenswrapper[4844]: I0126 12:58:02.345652 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-x2dvz\" (UID: \"f75f54ce-5ee7-4904-8dee-cc9c6afba45a\") " pod="openshift-image-registry/image-registry-66df7c8f76-x2dvz" Jan 26 12:58:02 crc kubenswrapper[4844]: I0126 12:58:02.345748 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hfd8\" (UniqueName: 
\"kubernetes.io/projected/f75f54ce-5ee7-4904-8dee-cc9c6afba45a-kube-api-access-9hfd8\") pod \"image-registry-66df7c8f76-x2dvz\" (UID: \"f75f54ce-5ee7-4904-8dee-cc9c6afba45a\") " pod="openshift-image-registry/image-registry-66df7c8f76-x2dvz" Jan 26 12:58:02 crc kubenswrapper[4844]: I0126 12:58:02.380535 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-x2dvz\" (UID: \"f75f54ce-5ee7-4904-8dee-cc9c6afba45a\") " pod="openshift-image-registry/image-registry-66df7c8f76-x2dvz" Jan 26 12:58:02 crc kubenswrapper[4844]: I0126 12:58:02.446940 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f75f54ce-5ee7-4904-8dee-cc9c6afba45a-ca-trust-extracted\") pod \"image-registry-66df7c8f76-x2dvz\" (UID: \"f75f54ce-5ee7-4904-8dee-cc9c6afba45a\") " pod="openshift-image-registry/image-registry-66df7c8f76-x2dvz" Jan 26 12:58:02 crc kubenswrapper[4844]: I0126 12:58:02.447017 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f75f54ce-5ee7-4904-8dee-cc9c6afba45a-registry-certificates\") pod \"image-registry-66df7c8f76-x2dvz\" (UID: \"f75f54ce-5ee7-4904-8dee-cc9c6afba45a\") " pod="openshift-image-registry/image-registry-66df7c8f76-x2dvz" Jan 26 12:58:02 crc kubenswrapper[4844]: I0126 12:58:02.447047 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f75f54ce-5ee7-4904-8dee-cc9c6afba45a-installation-pull-secrets\") pod \"image-registry-66df7c8f76-x2dvz\" (UID: \"f75f54ce-5ee7-4904-8dee-cc9c6afba45a\") " pod="openshift-image-registry/image-registry-66df7c8f76-x2dvz" Jan 26 12:58:02 crc kubenswrapper[4844]: I0126 12:58:02.447086 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hfd8\" (UniqueName: \"kubernetes.io/projected/f75f54ce-5ee7-4904-8dee-cc9c6afba45a-kube-api-access-9hfd8\") pod \"image-registry-66df7c8f76-x2dvz\" (UID: \"f75f54ce-5ee7-4904-8dee-cc9c6afba45a\") " pod="openshift-image-registry/image-registry-66df7c8f76-x2dvz" Jan 26 12:58:02 crc kubenswrapper[4844]: I0126 12:58:02.447138 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f75f54ce-5ee7-4904-8dee-cc9c6afba45a-bound-sa-token\") pod \"image-registry-66df7c8f76-x2dvz\" (UID: \"f75f54ce-5ee7-4904-8dee-cc9c6afba45a\") " pod="openshift-image-registry/image-registry-66df7c8f76-x2dvz" Jan 26 12:58:02 crc kubenswrapper[4844]: I0126 12:58:02.447179 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f75f54ce-5ee7-4904-8dee-cc9c6afba45a-registry-tls\") pod \"image-registry-66df7c8f76-x2dvz\" (UID: \"f75f54ce-5ee7-4904-8dee-cc9c6afba45a\") " pod="openshift-image-registry/image-registry-66df7c8f76-x2dvz" Jan 26 12:58:02 crc kubenswrapper[4844]: I0126 12:58:02.447197 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f75f54ce-5ee7-4904-8dee-cc9c6afba45a-trusted-ca\") pod \"image-registry-66df7c8f76-x2dvz\" (UID: \"f75f54ce-5ee7-4904-8dee-cc9c6afba45a\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-x2dvz" Jan 26 12:58:02 crc kubenswrapper[4844]: I0126 12:58:02.447953 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f75f54ce-5ee7-4904-8dee-cc9c6afba45a-ca-trust-extracted\") pod \"image-registry-66df7c8f76-x2dvz\" (UID: \"f75f54ce-5ee7-4904-8dee-cc9c6afba45a\") " pod="openshift-image-registry/image-registry-66df7c8f76-x2dvz" Jan 26 12:58:02 crc kubenswrapper[4844]: I0126 12:58:02.449101 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f75f54ce-5ee7-4904-8dee-cc9c6afba45a-trusted-ca\") pod \"image-registry-66df7c8f76-x2dvz\" (UID: \"f75f54ce-5ee7-4904-8dee-cc9c6afba45a\") " pod="openshift-image-registry/image-registry-66df7c8f76-x2dvz" Jan 26 12:58:02 crc kubenswrapper[4844]: I0126 12:58:02.449169 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f75f54ce-5ee7-4904-8dee-cc9c6afba45a-registry-certificates\") pod \"image-registry-66df7c8f76-x2dvz\" (UID: \"f75f54ce-5ee7-4904-8dee-cc9c6afba45a\") " pod="openshift-image-registry/image-registry-66df7c8f76-x2dvz" Jan 26 12:58:02 crc kubenswrapper[4844]: I0126 12:58:02.457659 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f75f54ce-5ee7-4904-8dee-cc9c6afba45a-installation-pull-secrets\") pod \"image-registry-66df7c8f76-x2dvz\" (UID: \"f75f54ce-5ee7-4904-8dee-cc9c6afba45a\") " pod="openshift-image-registry/image-registry-66df7c8f76-x2dvz" Jan 26 12:58:02 crc kubenswrapper[4844]: I0126 12:58:02.457792 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f75f54ce-5ee7-4904-8dee-cc9c6afba45a-registry-tls\") pod \"image-registry-66df7c8f76-x2dvz\" (UID: \"f75f54ce-5ee7-4904-8dee-cc9c6afba45a\") " pod="openshift-image-registry/image-registry-66df7c8f76-x2dvz" Jan 26 12:58:02 crc kubenswrapper[4844]: I0126 12:58:02.465117 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f75f54ce-5ee7-4904-8dee-cc9c6afba45a-bound-sa-token\") pod \"image-registry-66df7c8f76-x2dvz\" (UID: \"f75f54ce-5ee7-4904-8dee-cc9c6afba45a\") " pod="openshift-image-registry/image-registry-66df7c8f76-x2dvz" Jan 26 12:58:02 crc kubenswrapper[4844]: I0126 12:58:02.475543 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hfd8\" (UniqueName: \"kubernetes.io/projected/f75f54ce-5ee7-4904-8dee-cc9c6afba45a-kube-api-access-9hfd8\") pod \"image-registry-66df7c8f76-x2dvz\" (UID: \"f75f54ce-5ee7-4904-8dee-cc9c6afba45a\") " pod="openshift-image-registry/image-registry-66df7c8f76-x2dvz" Jan 26 12:58:02 crc kubenswrapper[4844]: I0126 12:58:02.547795 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-x2dvz" Jan 26 12:58:02 crc kubenswrapper[4844]: I0126 12:58:02.832165 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-x2dvz"] Jan 26 12:58:03 crc kubenswrapper[4844]: I0126 12:58:03.382968 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-x2dvz" event={"ID":"f75f54ce-5ee7-4904-8dee-cc9c6afba45a","Type":"ContainerStarted","Data":"503c68c10378379c2f0b2fb9f3fc9f980945232102869403837cfbc646a102b5"} Jan 26 12:58:03 crc kubenswrapper[4844]: I0126 12:58:03.383064 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-x2dvz" event={"ID":"f75f54ce-5ee7-4904-8dee-cc9c6afba45a","Type":"ContainerStarted","Data":"7c9f33ff786e275eab3abce8226497841b60ba4e717a30d1935afc9fadff4167"} Jan 26 12:58:03 crc kubenswrapper[4844]: I0126 12:58:03.383143 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-x2dvz" Jan 26 12:58:03 crc kubenswrapper[4844]: I0126 12:58:03.407905 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-x2dvz" podStartSLOduration=1.407883743 podStartE2EDuration="1.407883743s" podCreationTimestamp="2026-01-26 12:58:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:58:03.406275713 +0000 UTC m=+860.339643315" watchObservedRunningTime="2026-01-26 12:58:03.407883743 +0000 UTC m=+860.341251375" Jan 26 12:58:22 crc kubenswrapper[4844]: I0126 12:58:22.557557 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-x2dvz" Jan 26 12:58:22 crc kubenswrapper[4844]: I0126 12:58:22.630714 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-dwwm9"] Jan 26 12:58:23 crc kubenswrapper[4844]: I0126 12:58:23.489659 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-982kx"] Jan 26 12:58:23 crc kubenswrapper[4844]: I0126 12:58:23.490920 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-982kx" podUID="1b7b1cea-f94c-4750-8db8-18d9b7f9fb70" containerName="registry-server" containerID="cri-o://75837018c1ec0a5f226f3ed48de9b1c248d7aecb4fdbaec9bf992ef3130dcd21" gracePeriod=30 Jan 26 12:58:23 crc kubenswrapper[4844]: I0126 12:58:23.504553 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lhjls"] Jan 26 12:58:23 crc kubenswrapper[4844]: I0126 12:58:23.504868 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-lhjls" podUID="a37a9c59-7c20-4326-b280-9dbd2d633e0b" containerName="registry-server" containerID="cri-o://42a78e03542d65f23fc8a5831e890c81922e19014aacd781c69a43ce23f71f5f" gracePeriod=30 Jan 26 12:58:23 crc kubenswrapper[4844]: I0126 12:58:23.519451 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-9cmnk"] Jan 26 12:58:23 crc kubenswrapper[4844]: I0126 12:58:23.519936 4844 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/marketplace-operator-79b997595-9cmnk" podUID="8f3783e9-776b-434b-8298-59283076969f" containerName="marketplace-operator" containerID="cri-o://ce3f5d3b958e81b6a86db456f732174111485edf0d6c46d6c5bd56abad10844d" gracePeriod=30 Jan 26 12:58:23 crc kubenswrapper[4844]: I0126 12:58:23.532554 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-djrt9"] Jan 26 12:58:23 crc kubenswrapper[4844]: I0126 12:58:23.533057 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-djrt9" podUID="637c7ba4-2cae-4d56-860f-ab82722169a2" containerName="registry-server" containerID="cri-o://85bb5de5b055d83bd1e007d1bac7699f58e8bb5785ec40e961cd2624a3a35964" gracePeriod=30 Jan 26 12:58:23 crc kubenswrapper[4844]: I0126 12:58:23.540513 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8hdq2"] Jan 26 12:58:23 crc kubenswrapper[4844]: I0126 12:58:23.542299 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8hdq2" podUID="d60e5f01-76f1-47a0-8a7d-390457ce1b47" containerName="registry-server" containerID="cri-o://1065967f7b2abda19bab9f01f363f18504bd76dc4ee78f25e51a2db69e0423b7" gracePeriod=30 Jan 26 12:58:23 crc kubenswrapper[4844]: I0126 12:58:23.552117 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-q4p7z"] Jan 26 12:58:23 crc kubenswrapper[4844]: I0126 12:58:23.553057 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-q4p7z" Jan 26 12:58:23 crc kubenswrapper[4844]: I0126 12:58:23.566798 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-q4p7z"] Jan 26 12:58:23 crc kubenswrapper[4844]: I0126 12:58:23.610126 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5374369b-4aee-4c66-98fe-7bb183b4fdfa-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-q4p7z\" (UID: \"5374369b-4aee-4c66-98fe-7bb183b4fdfa\") " pod="openshift-marketplace/marketplace-operator-79b997595-q4p7z" Jan 26 12:58:23 crc kubenswrapper[4844]: I0126 12:58:23.610184 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5374369b-4aee-4c66-98fe-7bb183b4fdfa-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-q4p7z\" (UID: \"5374369b-4aee-4c66-98fe-7bb183b4fdfa\") " pod="openshift-marketplace/marketplace-operator-79b997595-q4p7z" Jan 26 12:58:23 crc kubenswrapper[4844]: I0126 12:58:23.610217 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvvfb\" (UniqueName: \"kubernetes.io/projected/5374369b-4aee-4c66-98fe-7bb183b4fdfa-kube-api-access-vvvfb\") pod \"marketplace-operator-79b997595-q4p7z\" (UID: \"5374369b-4aee-4c66-98fe-7bb183b4fdfa\") " pod="openshift-marketplace/marketplace-operator-79b997595-q4p7z" Jan 26 12:58:23 crc kubenswrapper[4844]: I0126 12:58:23.711641 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5374369b-4aee-4c66-98fe-7bb183b4fdfa-marketplace-operator-metrics\") pod 
\"marketplace-operator-79b997595-q4p7z\" (UID: \"5374369b-4aee-4c66-98fe-7bb183b4fdfa\") " pod="openshift-marketplace/marketplace-operator-79b997595-q4p7z" Jan 26 12:58:23 crc kubenswrapper[4844]: I0126 12:58:23.711691 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5374369b-4aee-4c66-98fe-7bb183b4fdfa-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-q4p7z\" (UID: \"5374369b-4aee-4c66-98fe-7bb183b4fdfa\") " pod="openshift-marketplace/marketplace-operator-79b997595-q4p7z" Jan 26 12:58:23 crc kubenswrapper[4844]: I0126 12:58:23.711733 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvvfb\" (UniqueName: \"kubernetes.io/projected/5374369b-4aee-4c66-98fe-7bb183b4fdfa-kube-api-access-vvvfb\") pod \"marketplace-operator-79b997595-q4p7z\" (UID: \"5374369b-4aee-4c66-98fe-7bb183b4fdfa\") " pod="openshift-marketplace/marketplace-operator-79b997595-q4p7z" Jan 26 12:58:23 crc kubenswrapper[4844]: I0126 12:58:23.713325 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-kz7n9"] Jan 26 12:58:23 crc kubenswrapper[4844]: I0126 12:58:23.713489 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5374369b-4aee-4c66-98fe-7bb183b4fdfa-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-q4p7z\" (UID: \"5374369b-4aee-4c66-98fe-7bb183b4fdfa\") " pod="openshift-marketplace/marketplace-operator-79b997595-q4p7z" Jan 26 12:58:23 crc kubenswrapper[4844]: I0126 12:58:23.715137 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kz7n9" Jan 26 12:58:23 crc kubenswrapper[4844]: I0126 12:58:23.720011 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5374369b-4aee-4c66-98fe-7bb183b4fdfa-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-q4p7z\" (UID: \"5374369b-4aee-4c66-98fe-7bb183b4fdfa\") " pod="openshift-marketplace/marketplace-operator-79b997595-q4p7z" Jan 26 12:58:23 crc kubenswrapper[4844]: I0126 12:58:23.728891 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kz7n9"] Jan 26 12:58:23 crc kubenswrapper[4844]: I0126 12:58:23.740521 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvvfb\" (UniqueName: \"kubernetes.io/projected/5374369b-4aee-4c66-98fe-7bb183b4fdfa-kube-api-access-vvvfb\") pod \"marketplace-operator-79b997595-q4p7z\" (UID: \"5374369b-4aee-4c66-98fe-7bb183b4fdfa\") " pod="openshift-marketplace/marketplace-operator-79b997595-q4p7z" Jan 26 12:58:23 crc kubenswrapper[4844]: I0126 12:58:23.813134 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vknh\" (UniqueName: \"kubernetes.io/projected/4e419ec9-0814-4199-ae59-f47408ec961d-kube-api-access-9vknh\") pod \"certified-operators-kz7n9\" (UID: \"4e419ec9-0814-4199-ae59-f47408ec961d\") " pod="openshift-marketplace/certified-operators-kz7n9" Jan 26 12:58:23 crc kubenswrapper[4844]: I0126 12:58:23.813179 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e419ec9-0814-4199-ae59-f47408ec961d-catalog-content\") pod 
\"certified-operators-kz7n9\" (UID: \"4e419ec9-0814-4199-ae59-f47408ec961d\") " pod="openshift-marketplace/certified-operators-kz7n9" Jan 26 12:58:23 crc kubenswrapper[4844]: I0126 12:58:23.813221 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e419ec9-0814-4199-ae59-f47408ec961d-utilities\") pod \"certified-operators-kz7n9\" (UID: \"4e419ec9-0814-4199-ae59-f47408ec961d\") " pod="openshift-marketplace/certified-operators-kz7n9" Jan 26 12:58:23 crc kubenswrapper[4844]: I0126 12:58:23.911463 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-nzgvx"] Jan 26 12:58:23 crc kubenswrapper[4844]: I0126 12:58:23.913537 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nzgvx" Jan 26 12:58:23 crc kubenswrapper[4844]: I0126 12:58:23.914562 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vknh\" (UniqueName: \"kubernetes.io/projected/4e419ec9-0814-4199-ae59-f47408ec961d-kube-api-access-9vknh\") pod \"certified-operators-kz7n9\" (UID: \"4e419ec9-0814-4199-ae59-f47408ec961d\") " pod="openshift-marketplace/certified-operators-kz7n9" Jan 26 12:58:23 crc kubenswrapper[4844]: I0126 12:58:23.914653 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e419ec9-0814-4199-ae59-f47408ec961d-catalog-content\") pod \"certified-operators-kz7n9\" (UID: \"4e419ec9-0814-4199-ae59-f47408ec961d\") " pod="openshift-marketplace/certified-operators-kz7n9" Jan 26 12:58:23 crc kubenswrapper[4844]: I0126 12:58:23.914695 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e419ec9-0814-4199-ae59-f47408ec961d-utilities\") pod \"certified-operators-kz7n9\" (UID: \"4e419ec9-0814-4199-ae59-f47408ec961d\") " pod="openshift-marketplace/certified-operators-kz7n9" Jan 26 12:58:23 crc kubenswrapper[4844]: I0126 12:58:23.916065 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e419ec9-0814-4199-ae59-f47408ec961d-utilities\") pod \"certified-operators-kz7n9\" (UID: \"4e419ec9-0814-4199-ae59-f47408ec961d\") " pod="openshift-marketplace/certified-operators-kz7n9" Jan 26 12:58:23 crc kubenswrapper[4844]: I0126 12:58:23.916288 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e419ec9-0814-4199-ae59-f47408ec961d-catalog-content\") pod \"certified-operators-kz7n9\" (UID: \"4e419ec9-0814-4199-ae59-f47408ec961d\") " pod="openshift-marketplace/certified-operators-kz7n9" Jan 26 12:58:23 crc kubenswrapper[4844]: I0126 12:58:23.929183 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nzgvx"] Jan 26 12:58:23 crc kubenswrapper[4844]: I0126 12:58:23.950579 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-q4p7z" Jan 26 12:58:23 crc kubenswrapper[4844]: I0126 12:58:23.951112 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vknh\" (UniqueName: \"kubernetes.io/projected/4e419ec9-0814-4199-ae59-f47408ec961d-kube-api-access-9vknh\") pod \"certified-operators-kz7n9\" (UID: \"4e419ec9-0814-4199-ae59-f47408ec961d\") " pod="openshift-marketplace/certified-operators-kz7n9" Jan 26 12:58:23 crc kubenswrapper[4844]: I0126 12:58:23.967375 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kz7n9" Jan 26 12:58:23 crc kubenswrapper[4844]: I0126 12:58:23.971072 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-982kx" Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.003583 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-djrt9" Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.005342 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lhjls" Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.007485 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-9cmnk" Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.016840 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b7b1cea-f94c-4750-8db8-18d9b7f9fb70-catalog-content\") pod \"1b7b1cea-f94c-4750-8db8-18d9b7f9fb70\" (UID: \"1b7b1cea-f94c-4750-8db8-18d9b7f9fb70\") " Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.016969 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/637c7ba4-2cae-4d56-860f-ab82722169a2-catalog-content\") pod \"637c7ba4-2cae-4d56-860f-ab82722169a2\" (UID: \"637c7ba4-2cae-4d56-860f-ab82722169a2\") " Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.017056 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/637c7ba4-2cae-4d56-860f-ab82722169a2-utilities\") pod \"637c7ba4-2cae-4d56-860f-ab82722169a2\" (UID: \"637c7ba4-2cae-4d56-860f-ab82722169a2\") " Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.017135 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b7b1cea-f94c-4750-8db8-18d9b7f9fb70-utilities\") pod \"1b7b1cea-f94c-4750-8db8-18d9b7f9fb70\" (UID: \"1b7b1cea-f94c-4750-8db8-18d9b7f9fb70\") " Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.017181 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cqvbp\" (UniqueName: \"kubernetes.io/projected/1b7b1cea-f94c-4750-8db8-18d9b7f9fb70-kube-api-access-cqvbp\") pod \"1b7b1cea-f94c-4750-8db8-18d9b7f9fb70\" (UID: \"1b7b1cea-f94c-4750-8db8-18d9b7f9fb70\") " Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.017247 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t96w4\" (UniqueName: \"kubernetes.io/projected/637c7ba4-2cae-4d56-860f-ab82722169a2-kube-api-access-t96w4\") pod 
\"637c7ba4-2cae-4d56-860f-ab82722169a2\" (UID: \"637c7ba4-2cae-4d56-860f-ab82722169a2\") " Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.018711 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/637c7ba4-2cae-4d56-860f-ab82722169a2-utilities" (OuterVolumeSpecName: "utilities") pod "637c7ba4-2cae-4d56-860f-ab82722169a2" (UID: "637c7ba4-2cae-4d56-860f-ab82722169a2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.019107 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/740f9914-7e12-4cdc-b61f-4ce2f43a5e8d-utilities\") pod \"community-operators-nzgvx\" (UID: \"740f9914-7e12-4cdc-b61f-4ce2f43a5e8d\") " pod="openshift-marketplace/community-operators-nzgvx" Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.022191 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9s5n\" (UniqueName: \"kubernetes.io/projected/740f9914-7e12-4cdc-b61f-4ce2f43a5e8d-kube-api-access-j9s5n\") pod \"community-operators-nzgvx\" (UID: \"740f9914-7e12-4cdc-b61f-4ce2f43a5e8d\") " pod="openshift-marketplace/community-operators-nzgvx" Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.022361 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/740f9914-7e12-4cdc-b61f-4ce2f43a5e8d-catalog-content\") pod \"community-operators-nzgvx\" (UID: \"740f9914-7e12-4cdc-b61f-4ce2f43a5e8d\") " pod="openshift-marketplace/community-operators-nzgvx" Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.022444 4844 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/637c7ba4-2cae-4d56-860f-ab82722169a2-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.029346 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b7b1cea-f94c-4750-8db8-18d9b7f9fb70-utilities" (OuterVolumeSpecName: "utilities") pod "1b7b1cea-f94c-4750-8db8-18d9b7f9fb70" (UID: "1b7b1cea-f94c-4750-8db8-18d9b7f9fb70"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.029697 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b7b1cea-f94c-4750-8db8-18d9b7f9fb70-kube-api-access-cqvbp" (OuterVolumeSpecName: "kube-api-access-cqvbp") pod "1b7b1cea-f94c-4750-8db8-18d9b7f9fb70" (UID: "1b7b1cea-f94c-4750-8db8-18d9b7f9fb70"). InnerVolumeSpecName "kube-api-access-cqvbp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.032302 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/637c7ba4-2cae-4d56-860f-ab82722169a2-kube-api-access-t96w4" (OuterVolumeSpecName: "kube-api-access-t96w4") pod "637c7ba4-2cae-4d56-860f-ab82722169a2" (UID: "637c7ba4-2cae-4d56-860f-ab82722169a2"). InnerVolumeSpecName "kube-api-access-t96w4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.088319 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/637c7ba4-2cae-4d56-860f-ab82722169a2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "637c7ba4-2cae-4d56-860f-ab82722169a2" (UID: "637c7ba4-2cae-4d56-860f-ab82722169a2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.097986 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b7b1cea-f94c-4750-8db8-18d9b7f9fb70-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1b7b1cea-f94c-4750-8db8-18d9b7f9fb70" (UID: "1b7b1cea-f94c-4750-8db8-18d9b7f9fb70"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.126361 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hljrl\" (UniqueName: \"kubernetes.io/projected/a37a9c59-7c20-4326-b280-9dbd2d633e0b-kube-api-access-hljrl\") pod \"a37a9c59-7c20-4326-b280-9dbd2d633e0b\" (UID: \"a37a9c59-7c20-4326-b280-9dbd2d633e0b\") " Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.126483 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a37a9c59-7c20-4326-b280-9dbd2d633e0b-utilities\") pod \"a37a9c59-7c20-4326-b280-9dbd2d633e0b\" (UID: \"a37a9c59-7c20-4326-b280-9dbd2d633e0b\") " Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.126553 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a37a9c59-7c20-4326-b280-9dbd2d633e0b-catalog-content\") pod \"a37a9c59-7c20-4326-b280-9dbd2d633e0b\" (UID: \"a37a9c59-7c20-4326-b280-9dbd2d633e0b\") " Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.126624 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tl4s4\" (UniqueName: \"kubernetes.io/projected/8f3783e9-776b-434b-8298-59283076969f-kube-api-access-tl4s4\") pod \"8f3783e9-776b-434b-8298-59283076969f\" (UID: \"8f3783e9-776b-434b-8298-59283076969f\") " Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.126666 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f3783e9-776b-434b-8298-59283076969f-marketplace-trusted-ca\") pod \"8f3783e9-776b-434b-8298-59283076969f\" (UID: \"8f3783e9-776b-434b-8298-59283076969f\") " Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.126719 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8f3783e9-776b-434b-8298-59283076969f-marketplace-operator-metrics\") pod \"8f3783e9-776b-434b-8298-59283076969f\" (UID: \"8f3783e9-776b-434b-8298-59283076969f\") " Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.127310 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a37a9c59-7c20-4326-b280-9dbd2d633e0b-utilities" (OuterVolumeSpecName: "utilities") pod "a37a9c59-7c20-4326-b280-9dbd2d633e0b" (UID: "a37a9c59-7c20-4326-b280-9dbd2d633e0b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.127742 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/740f9914-7e12-4cdc-b61f-4ce2f43a5e8d-catalog-content\") pod \"community-operators-nzgvx\" (UID: \"740f9914-7e12-4cdc-b61f-4ce2f43a5e8d\") " pod="openshift-marketplace/community-operators-nzgvx" Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.127945 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/740f9914-7e12-4cdc-b61f-4ce2f43a5e8d-utilities\") pod \"community-operators-nzgvx\" (UID: \"740f9914-7e12-4cdc-b61f-4ce2f43a5e8d\") " pod="openshift-marketplace/community-operators-nzgvx" Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.128013 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9s5n\" (UniqueName: \"kubernetes.io/projected/740f9914-7e12-4cdc-b61f-4ce2f43a5e8d-kube-api-access-j9s5n\") pod \"community-operators-nzgvx\" (UID: \"740f9914-7e12-4cdc-b61f-4ce2f43a5e8d\") " pod="openshift-marketplace/community-operators-nzgvx" Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.128129 4844 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a37a9c59-7c20-4326-b280-9dbd2d633e0b-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.128141 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t96w4\" (UniqueName: \"kubernetes.io/projected/637c7ba4-2cae-4d56-860f-ab82722169a2-kube-api-access-t96w4\") on node \"crc\" DevicePath \"\"" Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.128151 4844 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b7b1cea-f94c-4750-8db8-18d9b7f9fb70-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.128162 4844 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/637c7ba4-2cae-4d56-860f-ab82722169a2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.128172 4844 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b7b1cea-f94c-4750-8db8-18d9b7f9fb70-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.128182 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cqvbp\" (UniqueName: \"kubernetes.io/projected/1b7b1cea-f94c-4750-8db8-18d9b7f9fb70-kube-api-access-cqvbp\") on node \"crc\" DevicePath \"\"" Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.128623 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/740f9914-7e12-4cdc-b61f-4ce2f43a5e8d-catalog-content\") pod \"community-operators-nzgvx\" (UID: \"740f9914-7e12-4cdc-b61f-4ce2f43a5e8d\") " pod="openshift-marketplace/community-operators-nzgvx" Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.129838 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/740f9914-7e12-4cdc-b61f-4ce2f43a5e8d-utilities\") pod \"community-operators-nzgvx\" (UID: 
\"740f9914-7e12-4cdc-b61f-4ce2f43a5e8d\") " pod="openshift-marketplace/community-operators-nzgvx" Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.134481 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f3783e9-776b-434b-8298-59283076969f-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "8f3783e9-776b-434b-8298-59283076969f" (UID: "8f3783e9-776b-434b-8298-59283076969f"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.135830 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a37a9c59-7c20-4326-b280-9dbd2d633e0b-kube-api-access-hljrl" (OuterVolumeSpecName: "kube-api-access-hljrl") pod "a37a9c59-7c20-4326-b280-9dbd2d633e0b" (UID: "a37a9c59-7c20-4326-b280-9dbd2d633e0b"). InnerVolumeSpecName "kube-api-access-hljrl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.136426 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f3783e9-776b-434b-8298-59283076969f-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "8f3783e9-776b-434b-8298-59283076969f" (UID: "8f3783e9-776b-434b-8298-59283076969f"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.136575 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f3783e9-776b-434b-8298-59283076969f-kube-api-access-tl4s4" (OuterVolumeSpecName: "kube-api-access-tl4s4") pod "8f3783e9-776b-434b-8298-59283076969f" (UID: "8f3783e9-776b-434b-8298-59283076969f"). InnerVolumeSpecName "kube-api-access-tl4s4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.147110 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9s5n\" (UniqueName: \"kubernetes.io/projected/740f9914-7e12-4cdc-b61f-4ce2f43a5e8d-kube-api-access-j9s5n\") pod \"community-operators-nzgvx\" (UID: \"740f9914-7e12-4cdc-b61f-4ce2f43a5e8d\") " pod="openshift-marketplace/community-operators-nzgvx" Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.190996 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-q4p7z"] Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.192849 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a37a9c59-7c20-4326-b280-9dbd2d633e0b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a37a9c59-7c20-4326-b280-9dbd2d633e0b" (UID: "a37a9c59-7c20-4326-b280-9dbd2d633e0b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 12:58:24 crc kubenswrapper[4844]: W0126 12:58:24.199210 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5374369b_4aee_4c66_98fe_7bb183b4fdfa.slice/crio-194ad29f72c9b516091c237581f355fdf82f284302fd991e420925ee97862158 WatchSource:0}: Error finding container 194ad29f72c9b516091c237581f355fdf82f284302fd991e420925ee97862158: Status 404 returned error can't find the container with id 194ad29f72c9b516091c237581f355fdf82f284302fd991e420925ee97862158 Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.230069 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hljrl\" (UniqueName: \"kubernetes.io/projected/a37a9c59-7c20-4326-b280-9dbd2d633e0b-kube-api-access-hljrl\") on node \"crc\" DevicePath \"\"" Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.230122 4844 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a37a9c59-7c20-4326-b280-9dbd2d633e0b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.230134 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tl4s4\" (UniqueName: \"kubernetes.io/projected/8f3783e9-776b-434b-8298-59283076969f-kube-api-access-tl4s4\") on node \"crc\" DevicePath \"\"" Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.230147 4844 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f3783e9-776b-434b-8298-59283076969f-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.230159 4844 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8f3783e9-776b-434b-8298-59283076969f-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.271047 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kz7n9"] Jan 26 12:58:24 crc kubenswrapper[4844]: W0126 12:58:24.283777 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e419ec9_0814_4199_ae59_f47408ec961d.slice/crio-2d19bacbad0295ea25b547f56f00b71e00b1371c11d748e33257560b443783a8 WatchSource:0}: Error finding container 2d19bacbad0295ea25b547f56f00b71e00b1371c11d748e33257560b443783a8: Status 404 returned error can't find the container with id 2d19bacbad0295ea25b547f56f00b71e00b1371c11d748e33257560b443783a8 Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.292644 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nzgvx" Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.365480 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8hdq2" Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.432948 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d60e5f01-76f1-47a0-8a7d-390457ce1b47-catalog-content\") pod \"d60e5f01-76f1-47a0-8a7d-390457ce1b47\" (UID: \"d60e5f01-76f1-47a0-8a7d-390457ce1b47\") " Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.432983 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7w7zk\" (UniqueName: \"kubernetes.io/projected/d60e5f01-76f1-47a0-8a7d-390457ce1b47-kube-api-access-7w7zk\") pod \"d60e5f01-76f1-47a0-8a7d-390457ce1b47\" (UID: \"d60e5f01-76f1-47a0-8a7d-390457ce1b47\") " Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.433070 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d60e5f01-76f1-47a0-8a7d-390457ce1b47-utilities\") pod \"d60e5f01-76f1-47a0-8a7d-390457ce1b47\" (UID: \"d60e5f01-76f1-47a0-8a7d-390457ce1b47\") " Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.435543 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d60e5f01-76f1-47a0-8a7d-390457ce1b47-utilities" (OuterVolumeSpecName: "utilities") pod "d60e5f01-76f1-47a0-8a7d-390457ce1b47" (UID: "d60e5f01-76f1-47a0-8a7d-390457ce1b47"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.443420 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d60e5f01-76f1-47a0-8a7d-390457ce1b47-kube-api-access-7w7zk" (OuterVolumeSpecName: "kube-api-access-7w7zk") pod "d60e5f01-76f1-47a0-8a7d-390457ce1b47" (UID: "d60e5f01-76f1-47a0-8a7d-390457ce1b47"). InnerVolumeSpecName "kube-api-access-7w7zk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.508065 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nzgvx"] Jan 26 12:58:24 crc kubenswrapper[4844]: W0126 12:58:24.518501 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod740f9914_7e12_4cdc_b61f_4ce2f43a5e8d.slice/crio-596ad8f3dc8df3ca8e5c65e8ed5dafde58f6b4ee48c829764766e6a4da046663 WatchSource:0}: Error finding container 596ad8f3dc8df3ca8e5c65e8ed5dafde58f6b4ee48c829764766e6a4da046663: Status 404 returned error can't find the container with id 596ad8f3dc8df3ca8e5c65e8ed5dafde58f6b4ee48c829764766e6a4da046663 Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.533762 4844 generic.go:334] "Generic (PLEG): container finished" podID="8f3783e9-776b-434b-8298-59283076969f" containerID="ce3f5d3b958e81b6a86db456f732174111485edf0d6c46d6c5bd56abad10844d" exitCode=0 Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.533847 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-9cmnk" event={"ID":"8f3783e9-776b-434b-8298-59283076969f","Type":"ContainerDied","Data":"ce3f5d3b958e81b6a86db456f732174111485edf0d6c46d6c5bd56abad10844d"} Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.533880 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-9cmnk" event={"ID":"8f3783e9-776b-434b-8298-59283076969f","Type":"ContainerDied","Data":"26db9da30c759a3f9966e36157826bcf2a1d507e38193de2aff8e91eb4ab4089"} Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.533905 4844 scope.go:117] "RemoveContainer" containerID="ce3f5d3b958e81b6a86db456f732174111485edf0d6c46d6c5bd56abad10844d" Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.534015 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-9cmnk" Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.535229 4844 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d60e5f01-76f1-47a0-8a7d-390457ce1b47-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.535271 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7w7zk\" (UniqueName: \"kubernetes.io/projected/d60e5f01-76f1-47a0-8a7d-390457ce1b47-kube-api-access-7w7zk\") on node \"crc\" DevicePath \"\"" Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.537644 4844 generic.go:334] "Generic (PLEG): container finished" podID="a37a9c59-7c20-4326-b280-9dbd2d633e0b" containerID="42a78e03542d65f23fc8a5831e890c81922e19014aacd781c69a43ce23f71f5f" exitCode=0 Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.537708 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lhjls" event={"ID":"a37a9c59-7c20-4326-b280-9dbd2d633e0b","Type":"ContainerDied","Data":"42a78e03542d65f23fc8a5831e890c81922e19014aacd781c69a43ce23f71f5f"} Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.537740 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lhjls" event={"ID":"a37a9c59-7c20-4326-b280-9dbd2d633e0b","Type":"ContainerDied","Data":"ddf41f6ec919716ea44abebdaa9f7bbfb57f26246beef8b7f356a28992d79336"} Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.537808 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lhjls" Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.550665 4844 generic.go:334] "Generic (PLEG): container finished" podID="637c7ba4-2cae-4d56-860f-ab82722169a2" containerID="85bb5de5b055d83bd1e007d1bac7699f58e8bb5785ec40e961cd2624a3a35964" exitCode=0 Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.550748 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-djrt9" event={"ID":"637c7ba4-2cae-4d56-860f-ab82722169a2","Type":"ContainerDied","Data":"85bb5de5b055d83bd1e007d1bac7699f58e8bb5785ec40e961cd2624a3a35964"} Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.550785 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-djrt9" event={"ID":"637c7ba4-2cae-4d56-860f-ab82722169a2","Type":"ContainerDied","Data":"2b8c0b752822432cdf6de68d71dc8c1bf82c8b4db91b6e57c38c243415ba2a9e"} Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.550871 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-djrt9" Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.551375 4844 scope.go:117] "RemoveContainer" containerID="ce3f5d3b958e81b6a86db456f732174111485edf0d6c46d6c5bd56abad10844d" Jan 26 12:58:24 crc kubenswrapper[4844]: E0126 12:58:24.554113 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce3f5d3b958e81b6a86db456f732174111485edf0d6c46d6c5bd56abad10844d\": container with ID starting with ce3f5d3b958e81b6a86db456f732174111485edf0d6c46d6c5bd56abad10844d not found: ID does not exist" containerID="ce3f5d3b958e81b6a86db456f732174111485edf0d6c46d6c5bd56abad10844d" Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.554183 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce3f5d3b958e81b6a86db456f732174111485edf0d6c46d6c5bd56abad10844d"} err="failed to get container status \"ce3f5d3b958e81b6a86db456f732174111485edf0d6c46d6c5bd56abad10844d\": rpc error: code = NotFound desc = could not find container \"ce3f5d3b958e81b6a86db456f732174111485edf0d6c46d6c5bd56abad10844d\": container with ID starting with ce3f5d3b958e81b6a86db456f732174111485edf0d6c46d6c5bd56abad10844d not found: ID does not exist" Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.554217 4844 scope.go:117] "RemoveContainer" containerID="42a78e03542d65f23fc8a5831e890c81922e19014aacd781c69a43ce23f71f5f" Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.556771 4844 generic.go:334] "Generic (PLEG): container finished" podID="d60e5f01-76f1-47a0-8a7d-390457ce1b47" containerID="1065967f7b2abda19bab9f01f363f18504bd76dc4ee78f25e51a2db69e0423b7" exitCode=0 Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.556886 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8hdq2" event={"ID":"d60e5f01-76f1-47a0-8a7d-390457ce1b47","Type":"ContainerDied","Data":"1065967f7b2abda19bab9f01f363f18504bd76dc4ee78f25e51a2db69e0423b7"} Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.556942 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8hdq2" event={"ID":"d60e5f01-76f1-47a0-8a7d-390457ce1b47","Type":"ContainerDied","Data":"58e1012f91986119fa18986fc54d6c3054e57becf30854dc277b3bc2306a0315"} Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.557049 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8hdq2" Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.561100 4844 generic.go:334] "Generic (PLEG): container finished" podID="4e419ec9-0814-4199-ae59-f47408ec961d" containerID="f4421ef7e97a16948b68b33037403dc73fb8c6f5dc548976faf3c1148a7c3a18" exitCode=0 Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.561260 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kz7n9" event={"ID":"4e419ec9-0814-4199-ae59-f47408ec961d","Type":"ContainerDied","Data":"f4421ef7e97a16948b68b33037403dc73fb8c6f5dc548976faf3c1148a7c3a18"} Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.561615 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kz7n9" event={"ID":"4e419ec9-0814-4199-ae59-f47408ec961d","Type":"ContainerStarted","Data":"2d19bacbad0295ea25b547f56f00b71e00b1371c11d748e33257560b443783a8"} Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.563704 4844 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.567805 4844 generic.go:334] "Generic (PLEG): container finished" podID="1b7b1cea-f94c-4750-8db8-18d9b7f9fb70" containerID="75837018c1ec0a5f226f3ed48de9b1c248d7aecb4fdbaec9bf992ef3130dcd21" exitCode=0 Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.567989 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-982kx" event={"ID":"1b7b1cea-f94c-4750-8db8-18d9b7f9fb70","Type":"ContainerDied","Data":"75837018c1ec0a5f226f3ed48de9b1c248d7aecb4fdbaec9bf992ef3130dcd21"} Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.568072 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-982kx" event={"ID":"1b7b1cea-f94c-4750-8db8-18d9b7f9fb70","Type":"ContainerDied","Data":"a72111891bff6b030c2f006af8dbcdb3dc93eeb5366108178016d8d726c69735"} Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.568098 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-982kx" Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.571049 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-q4p7z" event={"ID":"5374369b-4aee-4c66-98fe-7bb183b4fdfa","Type":"ContainerStarted","Data":"2491045224e9bb80390b998d1004c2f7ea3a29fc11238859119371647b4765de"} Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.571108 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-q4p7z" event={"ID":"5374369b-4aee-4c66-98fe-7bb183b4fdfa","Type":"ContainerStarted","Data":"194ad29f72c9b516091c237581f355fdf82f284302fd991e420925ee97862158"} Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.571945 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d60e5f01-76f1-47a0-8a7d-390457ce1b47-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d60e5f01-76f1-47a0-8a7d-390457ce1b47" (UID: "d60e5f01-76f1-47a0-8a7d-390457ce1b47"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.582885 4844 scope.go:117] "RemoveContainer" containerID="7e1fa8f2e1f7283fd46bc1920be2a595f9dcec895b40b91e507a174b1439e365" Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.604756 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lhjls"] Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.608514 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-lhjls"] Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.616934 4844 scope.go:117] "RemoveContainer" containerID="cfe1f5826adb5e70eca6e69b3a2d46e940585099c1e8c130e79e5312de77dc33" Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.637433 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-q4p7z" podStartSLOduration=1.6373808109999999 podStartE2EDuration="1.637380811s" podCreationTimestamp="2026-01-26 12:58:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:58:24.634036089 +0000 UTC m=+881.567403711" watchObservedRunningTime="2026-01-26 12:58:24.637380811 +0000 UTC m=+881.570748443" Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.647274 4844 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d60e5f01-76f1-47a0-8a7d-390457ce1b47-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.661282 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-djrt9"] Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.665858 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-djrt9"] Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.671417 4844 scope.go:117] "RemoveContainer" containerID="42a78e03542d65f23fc8a5831e890c81922e19014aacd781c69a43ce23f71f5f" Jan 26 12:58:24 crc kubenswrapper[4844]: E0126 12:58:24.672051 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42a78e03542d65f23fc8a5831e890c81922e19014aacd781c69a43ce23f71f5f\": container with ID starting with 42a78e03542d65f23fc8a5831e890c81922e19014aacd781c69a43ce23f71f5f not found: ID does not exist" containerID="42a78e03542d65f23fc8a5831e890c81922e19014aacd781c69a43ce23f71f5f" Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.672156 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42a78e03542d65f23fc8a5831e890c81922e19014aacd781c69a43ce23f71f5f"} err="failed to get container status \"42a78e03542d65f23fc8a5831e890c81922e19014aacd781c69a43ce23f71f5f\": rpc error: code = NotFound desc = could not find container \"42a78e03542d65f23fc8a5831e890c81922e19014aacd781c69a43ce23f71f5f\": container with ID starting with 42a78e03542d65f23fc8a5831e890c81922e19014aacd781c69a43ce23f71f5f not found: ID does not exist" Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.672293 4844 scope.go:117] "RemoveContainer" containerID="7e1fa8f2e1f7283fd46bc1920be2a595f9dcec895b40b91e507a174b1439e365" Jan 26 12:58:24 crc kubenswrapper[4844]: E0126 12:58:24.672566 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could 
not find container \"7e1fa8f2e1f7283fd46bc1920be2a595f9dcec895b40b91e507a174b1439e365\": container with ID starting with 7e1fa8f2e1f7283fd46bc1920be2a595f9dcec895b40b91e507a174b1439e365 not found: ID does not exist" containerID="7e1fa8f2e1f7283fd46bc1920be2a595f9dcec895b40b91e507a174b1439e365" Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.672683 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e1fa8f2e1f7283fd46bc1920be2a595f9dcec895b40b91e507a174b1439e365"} err="failed to get container status \"7e1fa8f2e1f7283fd46bc1920be2a595f9dcec895b40b91e507a174b1439e365\": rpc error: code = NotFound desc = could not find container \"7e1fa8f2e1f7283fd46bc1920be2a595f9dcec895b40b91e507a174b1439e365\": container with ID starting with 7e1fa8f2e1f7283fd46bc1920be2a595f9dcec895b40b91e507a174b1439e365 not found: ID does not exist" Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.672783 4844 scope.go:117] "RemoveContainer" containerID="cfe1f5826adb5e70eca6e69b3a2d46e940585099c1e8c130e79e5312de77dc33" Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.676266 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-9cmnk"] Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.676319 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-9cmnk"] Jan 26 12:58:24 crc kubenswrapper[4844]: E0126 12:58:24.676437 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cfe1f5826adb5e70eca6e69b3a2d46e940585099c1e8c130e79e5312de77dc33\": container with ID starting with cfe1f5826adb5e70eca6e69b3a2d46e940585099c1e8c130e79e5312de77dc33 not found: ID does not exist" containerID="cfe1f5826adb5e70eca6e69b3a2d46e940585099c1e8c130e79e5312de77dc33" Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.676475 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cfe1f5826adb5e70eca6e69b3a2d46e940585099c1e8c130e79e5312de77dc33"} err="failed to get container status \"cfe1f5826adb5e70eca6e69b3a2d46e940585099c1e8c130e79e5312de77dc33\": rpc error: code = NotFound desc = could not find container \"cfe1f5826adb5e70eca6e69b3a2d46e940585099c1e8c130e79e5312de77dc33\": container with ID starting with cfe1f5826adb5e70eca6e69b3a2d46e940585099c1e8c130e79e5312de77dc33 not found: ID does not exist" Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.676508 4844 scope.go:117] "RemoveContainer" containerID="85bb5de5b055d83bd1e007d1bac7699f58e8bb5785ec40e961cd2624a3a35964" Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.688654 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-982kx"] Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.691692 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-982kx"] Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.748571 4844 scope.go:117] "RemoveContainer" containerID="f0c16bd2a3660b20ac550315485247a49fdd58ecbdc0fd3acc52987525740e1e" Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.762848 4844 scope.go:117] "RemoveContainer" containerID="a6de43053e99ae8a42f4c96cac94a588675aeae61cfd1b879315b5c949fdccd1" Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.784170 4844 scope.go:117] "RemoveContainer" containerID="85bb5de5b055d83bd1e007d1bac7699f58e8bb5785ec40e961cd2624a3a35964" 
Jan 26 12:58:24 crc kubenswrapper[4844]: E0126 12:58:24.785055 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"85bb5de5b055d83bd1e007d1bac7699f58e8bb5785ec40e961cd2624a3a35964\": container with ID starting with 85bb5de5b055d83bd1e007d1bac7699f58e8bb5785ec40e961cd2624a3a35964 not found: ID does not exist" containerID="85bb5de5b055d83bd1e007d1bac7699f58e8bb5785ec40e961cd2624a3a35964" Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.785104 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85bb5de5b055d83bd1e007d1bac7699f58e8bb5785ec40e961cd2624a3a35964"} err="failed to get container status \"85bb5de5b055d83bd1e007d1bac7699f58e8bb5785ec40e961cd2624a3a35964\": rpc error: code = NotFound desc = could not find container \"85bb5de5b055d83bd1e007d1bac7699f58e8bb5785ec40e961cd2624a3a35964\": container with ID starting with 85bb5de5b055d83bd1e007d1bac7699f58e8bb5785ec40e961cd2624a3a35964 not found: ID does not exist" Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.785139 4844 scope.go:117] "RemoveContainer" containerID="f0c16bd2a3660b20ac550315485247a49fdd58ecbdc0fd3acc52987525740e1e" Jan 26 12:58:24 crc kubenswrapper[4844]: E0126 12:58:24.785524 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0c16bd2a3660b20ac550315485247a49fdd58ecbdc0fd3acc52987525740e1e\": container with ID starting with f0c16bd2a3660b20ac550315485247a49fdd58ecbdc0fd3acc52987525740e1e not found: ID does not exist" containerID="f0c16bd2a3660b20ac550315485247a49fdd58ecbdc0fd3acc52987525740e1e" Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.785577 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0c16bd2a3660b20ac550315485247a49fdd58ecbdc0fd3acc52987525740e1e"} err="failed to get container status \"f0c16bd2a3660b20ac550315485247a49fdd58ecbdc0fd3acc52987525740e1e\": rpc error: code = NotFound desc = could not find container \"f0c16bd2a3660b20ac550315485247a49fdd58ecbdc0fd3acc52987525740e1e\": container with ID starting with f0c16bd2a3660b20ac550315485247a49fdd58ecbdc0fd3acc52987525740e1e not found: ID does not exist" Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.785620 4844 scope.go:117] "RemoveContainer" containerID="a6de43053e99ae8a42f4c96cac94a588675aeae61cfd1b879315b5c949fdccd1" Jan 26 12:58:24 crc kubenswrapper[4844]: E0126 12:58:24.786015 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a6de43053e99ae8a42f4c96cac94a588675aeae61cfd1b879315b5c949fdccd1\": container with ID starting with a6de43053e99ae8a42f4c96cac94a588675aeae61cfd1b879315b5c949fdccd1 not found: ID does not exist" containerID="a6de43053e99ae8a42f4c96cac94a588675aeae61cfd1b879315b5c949fdccd1" Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.786048 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6de43053e99ae8a42f4c96cac94a588675aeae61cfd1b879315b5c949fdccd1"} err="failed to get container status \"a6de43053e99ae8a42f4c96cac94a588675aeae61cfd1b879315b5c949fdccd1\": rpc error: code = NotFound desc = could not find container \"a6de43053e99ae8a42f4c96cac94a588675aeae61cfd1b879315b5c949fdccd1\": container with ID starting with a6de43053e99ae8a42f4c96cac94a588675aeae61cfd1b879315b5c949fdccd1 not found: ID does not exist" Jan 26 12:58:24 crc 
kubenswrapper[4844]: I0126 12:58:24.786066 4844 scope.go:117] "RemoveContainer" containerID="1065967f7b2abda19bab9f01f363f18504bd76dc4ee78f25e51a2db69e0423b7" Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.801999 4844 scope.go:117] "RemoveContainer" containerID="eac84807bdc05230adf2521f712ba6368e54b87d69fc89a4b300dc23cdc751a6" Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.820778 4844 scope.go:117] "RemoveContainer" containerID="f8b54dd269f366df04fc16928a0bc3b77009ecace479a1dfc5409e8affd98604" Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.835847 4844 scope.go:117] "RemoveContainer" containerID="1065967f7b2abda19bab9f01f363f18504bd76dc4ee78f25e51a2db69e0423b7" Jan 26 12:58:24 crc kubenswrapper[4844]: E0126 12:58:24.837656 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1065967f7b2abda19bab9f01f363f18504bd76dc4ee78f25e51a2db69e0423b7\": container with ID starting with 1065967f7b2abda19bab9f01f363f18504bd76dc4ee78f25e51a2db69e0423b7 not found: ID does not exist" containerID="1065967f7b2abda19bab9f01f363f18504bd76dc4ee78f25e51a2db69e0423b7" Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.837700 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1065967f7b2abda19bab9f01f363f18504bd76dc4ee78f25e51a2db69e0423b7"} err="failed to get container status \"1065967f7b2abda19bab9f01f363f18504bd76dc4ee78f25e51a2db69e0423b7\": rpc error: code = NotFound desc = could not find container \"1065967f7b2abda19bab9f01f363f18504bd76dc4ee78f25e51a2db69e0423b7\": container with ID starting with 1065967f7b2abda19bab9f01f363f18504bd76dc4ee78f25e51a2db69e0423b7 not found: ID does not exist" Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.837733 4844 scope.go:117] "RemoveContainer" containerID="eac84807bdc05230adf2521f712ba6368e54b87d69fc89a4b300dc23cdc751a6" Jan 26 12:58:24 crc kubenswrapper[4844]: E0126 12:58:24.838625 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eac84807bdc05230adf2521f712ba6368e54b87d69fc89a4b300dc23cdc751a6\": container with ID starting with eac84807bdc05230adf2521f712ba6368e54b87d69fc89a4b300dc23cdc751a6 not found: ID does not exist" containerID="eac84807bdc05230adf2521f712ba6368e54b87d69fc89a4b300dc23cdc751a6" Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.838685 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eac84807bdc05230adf2521f712ba6368e54b87d69fc89a4b300dc23cdc751a6"} err="failed to get container status \"eac84807bdc05230adf2521f712ba6368e54b87d69fc89a4b300dc23cdc751a6\": rpc error: code = NotFound desc = could not find container \"eac84807bdc05230adf2521f712ba6368e54b87d69fc89a4b300dc23cdc751a6\": container with ID starting with eac84807bdc05230adf2521f712ba6368e54b87d69fc89a4b300dc23cdc751a6 not found: ID does not exist" Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.838721 4844 scope.go:117] "RemoveContainer" containerID="f8b54dd269f366df04fc16928a0bc3b77009ecace479a1dfc5409e8affd98604" Jan 26 12:58:24 crc kubenswrapper[4844]: E0126 12:58:24.839209 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8b54dd269f366df04fc16928a0bc3b77009ecace479a1dfc5409e8affd98604\": container with ID starting with f8b54dd269f366df04fc16928a0bc3b77009ecace479a1dfc5409e8affd98604 not found: ID does not 
exist" containerID="f8b54dd269f366df04fc16928a0bc3b77009ecace479a1dfc5409e8affd98604" Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.839254 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8b54dd269f366df04fc16928a0bc3b77009ecace479a1dfc5409e8affd98604"} err="failed to get container status \"f8b54dd269f366df04fc16928a0bc3b77009ecace479a1dfc5409e8affd98604\": rpc error: code = NotFound desc = could not find container \"f8b54dd269f366df04fc16928a0bc3b77009ecace479a1dfc5409e8affd98604\": container with ID starting with f8b54dd269f366df04fc16928a0bc3b77009ecace479a1dfc5409e8affd98604 not found: ID does not exist" Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.839277 4844 scope.go:117] "RemoveContainer" containerID="75837018c1ec0a5f226f3ed48de9b1c248d7aecb4fdbaec9bf992ef3130dcd21" Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.853704 4844 scope.go:117] "RemoveContainer" containerID="e6a7c8d051fb7d049c17bd3c8350d85fbfe8095a716f971036092479e889a943" Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.874348 4844 scope.go:117] "RemoveContainer" containerID="c1e1ff99d7536b1b6d1127405a72c5e21ddbb3f138c1a788fce7003e2cde1af8" Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.897359 4844 scope.go:117] "RemoveContainer" containerID="75837018c1ec0a5f226f3ed48de9b1c248d7aecb4fdbaec9bf992ef3130dcd21" Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.900481 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8hdq2"] Jan 26 12:58:24 crc kubenswrapper[4844]: E0126 12:58:24.904055 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75837018c1ec0a5f226f3ed48de9b1c248d7aecb4fdbaec9bf992ef3130dcd21\": container with ID starting with 75837018c1ec0a5f226f3ed48de9b1c248d7aecb4fdbaec9bf992ef3130dcd21 not found: ID does not exist" containerID="75837018c1ec0a5f226f3ed48de9b1c248d7aecb4fdbaec9bf992ef3130dcd21" Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.904134 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75837018c1ec0a5f226f3ed48de9b1c248d7aecb4fdbaec9bf992ef3130dcd21"} err="failed to get container status \"75837018c1ec0a5f226f3ed48de9b1c248d7aecb4fdbaec9bf992ef3130dcd21\": rpc error: code = NotFound desc = could not find container \"75837018c1ec0a5f226f3ed48de9b1c248d7aecb4fdbaec9bf992ef3130dcd21\": container with ID starting with 75837018c1ec0a5f226f3ed48de9b1c248d7aecb4fdbaec9bf992ef3130dcd21 not found: ID does not exist" Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.904182 4844 scope.go:117] "RemoveContainer" containerID="e6a7c8d051fb7d049c17bd3c8350d85fbfe8095a716f971036092479e889a943" Jan 26 12:58:24 crc kubenswrapper[4844]: E0126 12:58:24.904732 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e6a7c8d051fb7d049c17bd3c8350d85fbfe8095a716f971036092479e889a943\": container with ID starting with e6a7c8d051fb7d049c17bd3c8350d85fbfe8095a716f971036092479e889a943 not found: ID does not exist" containerID="e6a7c8d051fb7d049c17bd3c8350d85fbfe8095a716f971036092479e889a943" Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.904813 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6a7c8d051fb7d049c17bd3c8350d85fbfe8095a716f971036092479e889a943"} err="failed to get container status 
\"e6a7c8d051fb7d049c17bd3c8350d85fbfe8095a716f971036092479e889a943\": rpc error: code = NotFound desc = could not find container \"e6a7c8d051fb7d049c17bd3c8350d85fbfe8095a716f971036092479e889a943\": container with ID starting with e6a7c8d051fb7d049c17bd3c8350d85fbfe8095a716f971036092479e889a943 not found: ID does not exist" Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.904846 4844 scope.go:117] "RemoveContainer" containerID="c1e1ff99d7536b1b6d1127405a72c5e21ddbb3f138c1a788fce7003e2cde1af8" Jan 26 12:58:24 crc kubenswrapper[4844]: E0126 12:58:24.905366 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1e1ff99d7536b1b6d1127405a72c5e21ddbb3f138c1a788fce7003e2cde1af8\": container with ID starting with c1e1ff99d7536b1b6d1127405a72c5e21ddbb3f138c1a788fce7003e2cde1af8 not found: ID does not exist" containerID="c1e1ff99d7536b1b6d1127405a72c5e21ddbb3f138c1a788fce7003e2cde1af8" Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.905437 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1e1ff99d7536b1b6d1127405a72c5e21ddbb3f138c1a788fce7003e2cde1af8"} err="failed to get container status \"c1e1ff99d7536b1b6d1127405a72c5e21ddbb3f138c1a788fce7003e2cde1af8\": rpc error: code = NotFound desc = could not find container \"c1e1ff99d7536b1b6d1127405a72c5e21ddbb3f138c1a788fce7003e2cde1af8\": container with ID starting with c1e1ff99d7536b1b6d1127405a72c5e21ddbb3f138c1a788fce7003e2cde1af8 not found: ID does not exist" Jan 26 12:58:24 crc kubenswrapper[4844]: I0126 12:58:24.905990 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8hdq2"] Jan 26 12:58:24 crc kubenswrapper[4844]: E0126 12:58:24.941341 4844 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd60e5f01_76f1_47a0_8a7d_390457ce1b47.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd60e5f01_76f1_47a0_8a7d_390457ce1b47.slice/crio-58e1012f91986119fa18986fc54d6c3054e57becf30854dc277b3bc2306a0315\": RecentStats: unable to find data in memory cache]" Jan 26 12:58:25 crc kubenswrapper[4844]: I0126 12:58:25.327539 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b7b1cea-f94c-4750-8db8-18d9b7f9fb70" path="/var/lib/kubelet/pods/1b7b1cea-f94c-4750-8db8-18d9b7f9fb70/volumes" Jan 26 12:58:25 crc kubenswrapper[4844]: I0126 12:58:25.328636 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="637c7ba4-2cae-4d56-860f-ab82722169a2" path="/var/lib/kubelet/pods/637c7ba4-2cae-4d56-860f-ab82722169a2/volumes" Jan 26 12:58:25 crc kubenswrapper[4844]: I0126 12:58:25.329315 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f3783e9-776b-434b-8298-59283076969f" path="/var/lib/kubelet/pods/8f3783e9-776b-434b-8298-59283076969f/volumes" Jan 26 12:58:25 crc kubenswrapper[4844]: I0126 12:58:25.330187 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a37a9c59-7c20-4326-b280-9dbd2d633e0b" path="/var/lib/kubelet/pods/a37a9c59-7c20-4326-b280-9dbd2d633e0b/volumes" Jan 26 12:58:25 crc kubenswrapper[4844]: I0126 12:58:25.330775 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d60e5f01-76f1-47a0-8a7d-390457ce1b47" path="/var/lib/kubelet/pods/d60e5f01-76f1-47a0-8a7d-390457ce1b47/volumes" 
Jan 26 12:58:25 crc kubenswrapper[4844]: I0126 12:58:25.602980 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kz7n9" event={"ID":"4e419ec9-0814-4199-ae59-f47408ec961d","Type":"ContainerStarted","Data":"9018dcd354fb99a9545b458e2e0a6c93f094418ad2c7a46e7ef6ae2ffcab62dd"} Jan 26 12:58:25 crc kubenswrapper[4844]: I0126 12:58:25.609081 4844 generic.go:334] "Generic (PLEG): container finished" podID="740f9914-7e12-4cdc-b61f-4ce2f43a5e8d" containerID="5a4f01df13129d9a68917c11324c2291f2e6b0521af1f08de8050dcdd2669327" exitCode=0 Jan 26 12:58:25 crc kubenswrapper[4844]: I0126 12:58:25.609189 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nzgvx" event={"ID":"740f9914-7e12-4cdc-b61f-4ce2f43a5e8d","Type":"ContainerDied","Data":"5a4f01df13129d9a68917c11324c2291f2e6b0521af1f08de8050dcdd2669327"} Jan 26 12:58:25 crc kubenswrapper[4844]: I0126 12:58:25.609237 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nzgvx" event={"ID":"740f9914-7e12-4cdc-b61f-4ce2f43a5e8d","Type":"ContainerStarted","Data":"596ad8f3dc8df3ca8e5c65e8ed5dafde58f6b4ee48c829764766e6a4da046663"} Jan 26 12:58:25 crc kubenswrapper[4844]: I0126 12:58:25.609903 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-q4p7z" Jan 26 12:58:25 crc kubenswrapper[4844]: I0126 12:58:25.620997 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-q4p7z" Jan 26 12:58:26 crc kubenswrapper[4844]: I0126 12:58:26.111353 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jjx57"] Jan 26 12:58:26 crc kubenswrapper[4844]: E0126 12:58:26.111684 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a37a9c59-7c20-4326-b280-9dbd2d633e0b" containerName="extract-content" Jan 26 12:58:26 crc kubenswrapper[4844]: I0126 12:58:26.111700 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="a37a9c59-7c20-4326-b280-9dbd2d633e0b" containerName="extract-content" Jan 26 12:58:26 crc kubenswrapper[4844]: E0126 12:58:26.111710 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d60e5f01-76f1-47a0-8a7d-390457ce1b47" containerName="extract-content" Jan 26 12:58:26 crc kubenswrapper[4844]: I0126 12:58:26.111719 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="d60e5f01-76f1-47a0-8a7d-390457ce1b47" containerName="extract-content" Jan 26 12:58:26 crc kubenswrapper[4844]: E0126 12:58:26.111731 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d60e5f01-76f1-47a0-8a7d-390457ce1b47" containerName="registry-server" Jan 26 12:58:26 crc kubenswrapper[4844]: I0126 12:58:26.111738 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="d60e5f01-76f1-47a0-8a7d-390457ce1b47" containerName="registry-server" Jan 26 12:58:26 crc kubenswrapper[4844]: E0126 12:58:26.111748 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b7b1cea-f94c-4750-8db8-18d9b7f9fb70" containerName="extract-content" Jan 26 12:58:26 crc kubenswrapper[4844]: I0126 12:58:26.111754 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b7b1cea-f94c-4750-8db8-18d9b7f9fb70" containerName="extract-content" Jan 26 12:58:26 crc kubenswrapper[4844]: E0126 12:58:26.111768 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f3783e9-776b-434b-8298-59283076969f" 
containerName="marketplace-operator" Jan 26 12:58:26 crc kubenswrapper[4844]: I0126 12:58:26.111774 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f3783e9-776b-434b-8298-59283076969f" containerName="marketplace-operator" Jan 26 12:58:26 crc kubenswrapper[4844]: E0126 12:58:26.111780 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="637c7ba4-2cae-4d56-860f-ab82722169a2" containerName="extract-utilities" Jan 26 12:58:26 crc kubenswrapper[4844]: I0126 12:58:26.111786 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="637c7ba4-2cae-4d56-860f-ab82722169a2" containerName="extract-utilities" Jan 26 12:58:26 crc kubenswrapper[4844]: E0126 12:58:26.111793 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b7b1cea-f94c-4750-8db8-18d9b7f9fb70" containerName="registry-server" Jan 26 12:58:26 crc kubenswrapper[4844]: I0126 12:58:26.111799 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b7b1cea-f94c-4750-8db8-18d9b7f9fb70" containerName="registry-server" Jan 26 12:58:26 crc kubenswrapper[4844]: E0126 12:58:26.111808 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d60e5f01-76f1-47a0-8a7d-390457ce1b47" containerName="extract-utilities" Jan 26 12:58:26 crc kubenswrapper[4844]: I0126 12:58:26.111814 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="d60e5f01-76f1-47a0-8a7d-390457ce1b47" containerName="extract-utilities" Jan 26 12:58:26 crc kubenswrapper[4844]: E0126 12:58:26.111824 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a37a9c59-7c20-4326-b280-9dbd2d633e0b" containerName="extract-utilities" Jan 26 12:58:26 crc kubenswrapper[4844]: I0126 12:58:26.111830 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="a37a9c59-7c20-4326-b280-9dbd2d633e0b" containerName="extract-utilities" Jan 26 12:58:26 crc kubenswrapper[4844]: E0126 12:58:26.111842 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="637c7ba4-2cae-4d56-860f-ab82722169a2" containerName="registry-server" Jan 26 12:58:26 crc kubenswrapper[4844]: I0126 12:58:26.111848 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="637c7ba4-2cae-4d56-860f-ab82722169a2" containerName="registry-server" Jan 26 12:58:26 crc kubenswrapper[4844]: E0126 12:58:26.111861 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a37a9c59-7c20-4326-b280-9dbd2d633e0b" containerName="registry-server" Jan 26 12:58:26 crc kubenswrapper[4844]: I0126 12:58:26.111868 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="a37a9c59-7c20-4326-b280-9dbd2d633e0b" containerName="registry-server" Jan 26 12:58:26 crc kubenswrapper[4844]: E0126 12:58:26.111875 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b7b1cea-f94c-4750-8db8-18d9b7f9fb70" containerName="extract-utilities" Jan 26 12:58:26 crc kubenswrapper[4844]: I0126 12:58:26.111881 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b7b1cea-f94c-4750-8db8-18d9b7f9fb70" containerName="extract-utilities" Jan 26 12:58:26 crc kubenswrapper[4844]: E0126 12:58:26.111892 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="637c7ba4-2cae-4d56-860f-ab82722169a2" containerName="extract-content" Jan 26 12:58:26 crc kubenswrapper[4844]: I0126 12:58:26.111898 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="637c7ba4-2cae-4d56-860f-ab82722169a2" containerName="extract-content" Jan 26 12:58:26 crc kubenswrapper[4844]: I0126 12:58:26.111998 4844 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="637c7ba4-2cae-4d56-860f-ab82722169a2" containerName="registry-server" Jan 26 12:58:26 crc kubenswrapper[4844]: I0126 12:58:26.112009 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f3783e9-776b-434b-8298-59283076969f" containerName="marketplace-operator" Jan 26 12:58:26 crc kubenswrapper[4844]: I0126 12:58:26.112020 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="a37a9c59-7c20-4326-b280-9dbd2d633e0b" containerName="registry-server" Jan 26 12:58:26 crc kubenswrapper[4844]: I0126 12:58:26.112027 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b7b1cea-f94c-4750-8db8-18d9b7f9fb70" containerName="registry-server" Jan 26 12:58:26 crc kubenswrapper[4844]: I0126 12:58:26.112036 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="d60e5f01-76f1-47a0-8a7d-390457ce1b47" containerName="registry-server" Jan 26 12:58:26 crc kubenswrapper[4844]: I0126 12:58:26.112944 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jjx57" Jan 26 12:58:26 crc kubenswrapper[4844]: I0126 12:58:26.117203 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 26 12:58:26 crc kubenswrapper[4844]: I0126 12:58:26.127105 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jjx57"] Jan 26 12:58:26 crc kubenswrapper[4844]: I0126 12:58:26.171506 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4779355-4fd0-4b1d-adef-3e4ebba15903-catalog-content\") pod \"redhat-marketplace-jjx57\" (UID: \"a4779355-4fd0-4b1d-adef-3e4ebba15903\") " pod="openshift-marketplace/redhat-marketplace-jjx57" Jan 26 12:58:26 crc kubenswrapper[4844]: I0126 12:58:26.171575 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wnpr\" (UniqueName: \"kubernetes.io/projected/a4779355-4fd0-4b1d-adef-3e4ebba15903-kube-api-access-9wnpr\") pod \"redhat-marketplace-jjx57\" (UID: \"a4779355-4fd0-4b1d-adef-3e4ebba15903\") " pod="openshift-marketplace/redhat-marketplace-jjx57" Jan 26 12:58:26 crc kubenswrapper[4844]: I0126 12:58:26.171670 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4779355-4fd0-4b1d-adef-3e4ebba15903-utilities\") pod \"redhat-marketplace-jjx57\" (UID: \"a4779355-4fd0-4b1d-adef-3e4ebba15903\") " pod="openshift-marketplace/redhat-marketplace-jjx57" Jan 26 12:58:26 crc kubenswrapper[4844]: I0126 12:58:26.272856 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4779355-4fd0-4b1d-adef-3e4ebba15903-utilities\") pod \"redhat-marketplace-jjx57\" (UID: \"a4779355-4fd0-4b1d-adef-3e4ebba15903\") " pod="openshift-marketplace/redhat-marketplace-jjx57" Jan 26 12:58:26 crc kubenswrapper[4844]: I0126 12:58:26.272965 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4779355-4fd0-4b1d-adef-3e4ebba15903-catalog-content\") pod \"redhat-marketplace-jjx57\" (UID: \"a4779355-4fd0-4b1d-adef-3e4ebba15903\") " pod="openshift-marketplace/redhat-marketplace-jjx57" Jan 26 12:58:26 crc kubenswrapper[4844]: I0126 12:58:26.273057 4844 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wnpr\" (UniqueName: \"kubernetes.io/projected/a4779355-4fd0-4b1d-adef-3e4ebba15903-kube-api-access-9wnpr\") pod \"redhat-marketplace-jjx57\" (UID: \"a4779355-4fd0-4b1d-adef-3e4ebba15903\") " pod="openshift-marketplace/redhat-marketplace-jjx57" Jan 26 12:58:26 crc kubenswrapper[4844]: I0126 12:58:26.273747 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4779355-4fd0-4b1d-adef-3e4ebba15903-utilities\") pod \"redhat-marketplace-jjx57\" (UID: \"a4779355-4fd0-4b1d-adef-3e4ebba15903\") " pod="openshift-marketplace/redhat-marketplace-jjx57" Jan 26 12:58:26 crc kubenswrapper[4844]: I0126 12:58:26.273850 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4779355-4fd0-4b1d-adef-3e4ebba15903-catalog-content\") pod \"redhat-marketplace-jjx57\" (UID: \"a4779355-4fd0-4b1d-adef-3e4ebba15903\") " pod="openshift-marketplace/redhat-marketplace-jjx57" Jan 26 12:58:26 crc kubenswrapper[4844]: I0126 12:58:26.316652 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-m8rzx"] Jan 26 12:58:26 crc kubenswrapper[4844]: I0126 12:58:26.317274 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wnpr\" (UniqueName: \"kubernetes.io/projected/a4779355-4fd0-4b1d-adef-3e4ebba15903-kube-api-access-9wnpr\") pod \"redhat-marketplace-jjx57\" (UID: \"a4779355-4fd0-4b1d-adef-3e4ebba15903\") " pod="openshift-marketplace/redhat-marketplace-jjx57" Jan 26 12:58:26 crc kubenswrapper[4844]: I0126 12:58:26.317930 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-m8rzx" Jan 26 12:58:26 crc kubenswrapper[4844]: I0126 12:58:26.320727 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 26 12:58:26 crc kubenswrapper[4844]: I0126 12:58:26.333490 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-m8rzx"] Jan 26 12:58:26 crc kubenswrapper[4844]: I0126 12:58:26.374091 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9cf02a58-0976-482c-9e29-b8cb52254a3b-utilities\") pod \"redhat-operators-m8rzx\" (UID: \"9cf02a58-0976-482c-9e29-b8cb52254a3b\") " pod="openshift-marketplace/redhat-operators-m8rzx" Jan 26 12:58:26 crc kubenswrapper[4844]: I0126 12:58:26.374134 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9cf02a58-0976-482c-9e29-b8cb52254a3b-catalog-content\") pod \"redhat-operators-m8rzx\" (UID: \"9cf02a58-0976-482c-9e29-b8cb52254a3b\") " pod="openshift-marketplace/redhat-operators-m8rzx" Jan 26 12:58:26 crc kubenswrapper[4844]: I0126 12:58:26.374367 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47bmc\" (UniqueName: \"kubernetes.io/projected/9cf02a58-0976-482c-9e29-b8cb52254a3b-kube-api-access-47bmc\") pod \"redhat-operators-m8rzx\" (UID: \"9cf02a58-0976-482c-9e29-b8cb52254a3b\") " pod="openshift-marketplace/redhat-operators-m8rzx" Jan 26 12:58:26 crc kubenswrapper[4844]: I0126 12:58:26.433682 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jjx57" Jan 26 12:58:26 crc kubenswrapper[4844]: I0126 12:58:26.476738 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9cf02a58-0976-482c-9e29-b8cb52254a3b-utilities\") pod \"redhat-operators-m8rzx\" (UID: \"9cf02a58-0976-482c-9e29-b8cb52254a3b\") " pod="openshift-marketplace/redhat-operators-m8rzx" Jan 26 12:58:26 crc kubenswrapper[4844]: I0126 12:58:26.476808 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9cf02a58-0976-482c-9e29-b8cb52254a3b-catalog-content\") pod \"redhat-operators-m8rzx\" (UID: \"9cf02a58-0976-482c-9e29-b8cb52254a3b\") " pod="openshift-marketplace/redhat-operators-m8rzx" Jan 26 12:58:26 crc kubenswrapper[4844]: I0126 12:58:26.476881 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47bmc\" (UniqueName: \"kubernetes.io/projected/9cf02a58-0976-482c-9e29-b8cb52254a3b-kube-api-access-47bmc\") pod \"redhat-operators-m8rzx\" (UID: \"9cf02a58-0976-482c-9e29-b8cb52254a3b\") " pod="openshift-marketplace/redhat-operators-m8rzx" Jan 26 12:58:26 crc kubenswrapper[4844]: I0126 12:58:26.477396 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9cf02a58-0976-482c-9e29-b8cb52254a3b-utilities\") pod \"redhat-operators-m8rzx\" (UID: \"9cf02a58-0976-482c-9e29-b8cb52254a3b\") " pod="openshift-marketplace/redhat-operators-m8rzx" Jan 26 12:58:26 crc kubenswrapper[4844]: I0126 12:58:26.477727 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9cf02a58-0976-482c-9e29-b8cb52254a3b-catalog-content\") pod \"redhat-operators-m8rzx\" (UID: \"9cf02a58-0976-482c-9e29-b8cb52254a3b\") " pod="openshift-marketplace/redhat-operators-m8rzx" Jan 26 12:58:26 crc kubenswrapper[4844]: I0126 12:58:26.523518 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-nvhgf"] Jan 26 12:58:26 crc kubenswrapper[4844]: I0126 12:58:26.525237 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nvhgf" Jan 26 12:58:26 crc kubenswrapper[4844]: I0126 12:58:26.523769 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47bmc\" (UniqueName: \"kubernetes.io/projected/9cf02a58-0976-482c-9e29-b8cb52254a3b-kube-api-access-47bmc\") pod \"redhat-operators-m8rzx\" (UID: \"9cf02a58-0976-482c-9e29-b8cb52254a3b\") " pod="openshift-marketplace/redhat-operators-m8rzx" Jan 26 12:58:26 crc kubenswrapper[4844]: I0126 12:58:26.545700 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nvhgf"] Jan 26 12:58:26 crc kubenswrapper[4844]: I0126 12:58:26.578335 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd915f15-b87f-471f-94b7-cdeba3701dc6-utilities\") pod \"redhat-marketplace-nvhgf\" (UID: \"fd915f15-b87f-471f-94b7-cdeba3701dc6\") " pod="openshift-marketplace/redhat-marketplace-nvhgf" Jan 26 12:58:26 crc kubenswrapper[4844]: I0126 12:58:26.578381 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd915f15-b87f-471f-94b7-cdeba3701dc6-catalog-content\") pod \"redhat-marketplace-nvhgf\" (UID: \"fd915f15-b87f-471f-94b7-cdeba3701dc6\") " pod="openshift-marketplace/redhat-marketplace-nvhgf" Jan 26 12:58:26 crc kubenswrapper[4844]: I0126 12:58:26.578438 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zzkn\" (UniqueName: \"kubernetes.io/projected/fd915f15-b87f-471f-94b7-cdeba3701dc6-kube-api-access-7zzkn\") pod \"redhat-marketplace-nvhgf\" (UID: \"fd915f15-b87f-471f-94b7-cdeba3701dc6\") " pod="openshift-marketplace/redhat-marketplace-nvhgf" Jan 26 12:58:26 crc kubenswrapper[4844]: I0126 12:58:26.618213 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nzgvx" event={"ID":"740f9914-7e12-4cdc-b61f-4ce2f43a5e8d","Type":"ContainerStarted","Data":"11bd90e390235d47462ef1b21cbf3e34fd0cbb716a22e86cd3252e49ddb0a647"} Jan 26 12:58:26 crc kubenswrapper[4844]: I0126 12:58:26.620078 4844 generic.go:334] "Generic (PLEG): container finished" podID="4e419ec9-0814-4199-ae59-f47408ec961d" containerID="9018dcd354fb99a9545b458e2e0a6c93f094418ad2c7a46e7ef6ae2ffcab62dd" exitCode=0 Jan 26 12:58:26 crc kubenswrapper[4844]: I0126 12:58:26.620724 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kz7n9" event={"ID":"4e419ec9-0814-4199-ae59-f47408ec961d","Type":"ContainerDied","Data":"9018dcd354fb99a9545b458e2e0a6c93f094418ad2c7a46e7ef6ae2ffcab62dd"} Jan 26 12:58:26 crc kubenswrapper[4844]: I0126 12:58:26.670305 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-m8rzx" Jan 26 12:58:26 crc kubenswrapper[4844]: I0126 12:58:26.679438 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd915f15-b87f-471f-94b7-cdeba3701dc6-utilities\") pod \"redhat-marketplace-nvhgf\" (UID: \"fd915f15-b87f-471f-94b7-cdeba3701dc6\") " pod="openshift-marketplace/redhat-marketplace-nvhgf" Jan 26 12:58:26 crc kubenswrapper[4844]: I0126 12:58:26.679513 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd915f15-b87f-471f-94b7-cdeba3701dc6-catalog-content\") pod \"redhat-marketplace-nvhgf\" (UID: \"fd915f15-b87f-471f-94b7-cdeba3701dc6\") " pod="openshift-marketplace/redhat-marketplace-nvhgf" Jan 26 12:58:26 crc kubenswrapper[4844]: I0126 12:58:26.679634 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zzkn\" (UniqueName: \"kubernetes.io/projected/fd915f15-b87f-471f-94b7-cdeba3701dc6-kube-api-access-7zzkn\") pod \"redhat-marketplace-nvhgf\" (UID: \"fd915f15-b87f-471f-94b7-cdeba3701dc6\") " pod="openshift-marketplace/redhat-marketplace-nvhgf" Jan 26 12:58:26 crc kubenswrapper[4844]: I0126 12:58:26.680449 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd915f15-b87f-471f-94b7-cdeba3701dc6-utilities\") pod \"redhat-marketplace-nvhgf\" (UID: \"fd915f15-b87f-471f-94b7-cdeba3701dc6\") " pod="openshift-marketplace/redhat-marketplace-nvhgf" Jan 26 12:58:26 crc kubenswrapper[4844]: I0126 12:58:26.680472 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd915f15-b87f-471f-94b7-cdeba3701dc6-catalog-content\") pod \"redhat-marketplace-nvhgf\" (UID: \"fd915f15-b87f-471f-94b7-cdeba3701dc6\") " pod="openshift-marketplace/redhat-marketplace-nvhgf" Jan 26 12:58:26 crc kubenswrapper[4844]: I0126 12:58:26.724533 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zzkn\" (UniqueName: \"kubernetes.io/projected/fd915f15-b87f-471f-94b7-cdeba3701dc6-kube-api-access-7zzkn\") pod \"redhat-marketplace-nvhgf\" (UID: \"fd915f15-b87f-471f-94b7-cdeba3701dc6\") " pod="openshift-marketplace/redhat-marketplace-nvhgf" Jan 26 12:58:26 crc kubenswrapper[4844]: I0126 12:58:26.732731 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dr5gz"] Jan 26 12:58:26 crc kubenswrapper[4844]: I0126 12:58:26.734240 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dr5gz" Jan 26 12:58:26 crc kubenswrapper[4844]: I0126 12:58:26.737896 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dr5gz"] Jan 26 12:58:26 crc kubenswrapper[4844]: I0126 12:58:26.780937 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9hbm\" (UniqueName: \"kubernetes.io/projected/8bfa1475-8d86-4a0c-9864-79488c8832ab-kube-api-access-z9hbm\") pod \"redhat-operators-dr5gz\" (UID: \"8bfa1475-8d86-4a0c-9864-79488c8832ab\") " pod="openshift-marketplace/redhat-operators-dr5gz" Jan 26 12:58:26 crc kubenswrapper[4844]: I0126 12:58:26.781014 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bfa1475-8d86-4a0c-9864-79488c8832ab-catalog-content\") pod \"redhat-operators-dr5gz\" (UID: \"8bfa1475-8d86-4a0c-9864-79488c8832ab\") " pod="openshift-marketplace/redhat-operators-dr5gz" Jan 26 12:58:26 crc kubenswrapper[4844]: I0126 12:58:26.781049 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bfa1475-8d86-4a0c-9864-79488c8832ab-utilities\") pod \"redhat-operators-dr5gz\" (UID: \"8bfa1475-8d86-4a0c-9864-79488c8832ab\") " pod="openshift-marketplace/redhat-operators-dr5gz" Jan 26 12:58:26 crc kubenswrapper[4844]: I0126 12:58:26.840091 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jjx57"] Jan 26 12:58:26 crc kubenswrapper[4844]: I0126 12:58:26.859897 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nvhgf" Jan 26 12:58:26 crc kubenswrapper[4844]: I0126 12:58:26.882821 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9hbm\" (UniqueName: \"kubernetes.io/projected/8bfa1475-8d86-4a0c-9864-79488c8832ab-kube-api-access-z9hbm\") pod \"redhat-operators-dr5gz\" (UID: \"8bfa1475-8d86-4a0c-9864-79488c8832ab\") " pod="openshift-marketplace/redhat-operators-dr5gz" Jan 26 12:58:26 crc kubenswrapper[4844]: I0126 12:58:26.883212 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bfa1475-8d86-4a0c-9864-79488c8832ab-catalog-content\") pod \"redhat-operators-dr5gz\" (UID: \"8bfa1475-8d86-4a0c-9864-79488c8832ab\") " pod="openshift-marketplace/redhat-operators-dr5gz" Jan 26 12:58:26 crc kubenswrapper[4844]: I0126 12:58:26.883237 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bfa1475-8d86-4a0c-9864-79488c8832ab-utilities\") pod \"redhat-operators-dr5gz\" (UID: \"8bfa1475-8d86-4a0c-9864-79488c8832ab\") " pod="openshift-marketplace/redhat-operators-dr5gz" Jan 26 12:58:26 crc kubenswrapper[4844]: I0126 12:58:26.883796 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bfa1475-8d86-4a0c-9864-79488c8832ab-utilities\") pod \"redhat-operators-dr5gz\" (UID: \"8bfa1475-8d86-4a0c-9864-79488c8832ab\") " pod="openshift-marketplace/redhat-operators-dr5gz" Jan 26 12:58:26 crc kubenswrapper[4844]: I0126 12:58:26.884141 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/8bfa1475-8d86-4a0c-9864-79488c8832ab-catalog-content\") pod \"redhat-operators-dr5gz\" (UID: \"8bfa1475-8d86-4a0c-9864-79488c8832ab\") " pod="openshift-marketplace/redhat-operators-dr5gz" Jan 26 12:58:26 crc kubenswrapper[4844]: I0126 12:58:26.903248 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9hbm\" (UniqueName: \"kubernetes.io/projected/8bfa1475-8d86-4a0c-9864-79488c8832ab-kube-api-access-z9hbm\") pod \"redhat-operators-dr5gz\" (UID: \"8bfa1475-8d86-4a0c-9864-79488c8832ab\") " pod="openshift-marketplace/redhat-operators-dr5gz" Jan 26 12:58:27 crc kubenswrapper[4844]: I0126 12:58:27.051331 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nvhgf"] Jan 26 12:58:27 crc kubenswrapper[4844]: W0126 12:58:27.061271 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfd915f15_b87f_471f_94b7_cdeba3701dc6.slice/crio-0b4a2075668571c08033f9972768b0fb3fdd13bf07daeae641308414f8a08ecd WatchSource:0}: Error finding container 0b4a2075668571c08033f9972768b0fb3fdd13bf07daeae641308414f8a08ecd: Status 404 returned error can't find the container with id 0b4a2075668571c08033f9972768b0fb3fdd13bf07daeae641308414f8a08ecd Jan 26 12:58:27 crc kubenswrapper[4844]: I0126 12:58:27.096321 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dr5gz" Jan 26 12:58:27 crc kubenswrapper[4844]: I0126 12:58:27.111655 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-m8rzx"] Jan 26 12:58:27 crc kubenswrapper[4844]: W0126 12:58:27.124144 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9cf02a58_0976_482c_9e29_b8cb52254a3b.slice/crio-42bab19b530e31c58b39fb90ad7b4b2994ffa1d6c3d0c9f08102990d26a1b101 WatchSource:0}: Error finding container 42bab19b530e31c58b39fb90ad7b4b2994ffa1d6c3d0c9f08102990d26a1b101: Status 404 returned error can't find the container with id 42bab19b530e31c58b39fb90ad7b4b2994ffa1d6c3d0c9f08102990d26a1b101 Jan 26 12:58:27 crc kubenswrapper[4844]: I0126 12:58:27.321572 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dr5gz"] Jan 26 12:58:27 crc kubenswrapper[4844]: W0126 12:58:27.329988 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8bfa1475_8d86_4a0c_9864_79488c8832ab.slice/crio-a7ae75913e608544040368fdcec86cebb979c54ab8b8b7f9816c5889adbef2d4 WatchSource:0}: Error finding container a7ae75913e608544040368fdcec86cebb979c54ab8b8b7f9816c5889adbef2d4: Status 404 returned error can't find the container with id a7ae75913e608544040368fdcec86cebb979c54ab8b8b7f9816c5889adbef2d4 Jan 26 12:58:27 crc kubenswrapper[4844]: I0126 12:58:27.631143 4844 generic.go:334] "Generic (PLEG): container finished" podID="fd915f15-b87f-471f-94b7-cdeba3701dc6" containerID="7fb61af7bad3f3f44fe821fc43dc8d84af0e9238e9d43d0c6db8237e09f662b9" exitCode=0 Jan 26 12:58:27 crc kubenswrapper[4844]: I0126 12:58:27.631293 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nvhgf" event={"ID":"fd915f15-b87f-471f-94b7-cdeba3701dc6","Type":"ContainerDied","Data":"7fb61af7bad3f3f44fe821fc43dc8d84af0e9238e9d43d0c6db8237e09f662b9"} Jan 26 12:58:27 crc kubenswrapper[4844]: I0126 
12:58:27.631371 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nvhgf" event={"ID":"fd915f15-b87f-471f-94b7-cdeba3701dc6","Type":"ContainerStarted","Data":"0b4a2075668571c08033f9972768b0fb3fdd13bf07daeae641308414f8a08ecd"} Jan 26 12:58:27 crc kubenswrapper[4844]: I0126 12:58:27.632829 4844 generic.go:334] "Generic (PLEG): container finished" podID="8bfa1475-8d86-4a0c-9864-79488c8832ab" containerID="4af0d9bb48cbfa7864d834d4d688382578c6e72ba65533d701753106365cd72f" exitCode=0 Jan 26 12:58:27 crc kubenswrapper[4844]: I0126 12:58:27.632889 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dr5gz" event={"ID":"8bfa1475-8d86-4a0c-9864-79488c8832ab","Type":"ContainerDied","Data":"4af0d9bb48cbfa7864d834d4d688382578c6e72ba65533d701753106365cd72f"} Jan 26 12:58:27 crc kubenswrapper[4844]: I0126 12:58:27.632907 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dr5gz" event={"ID":"8bfa1475-8d86-4a0c-9864-79488c8832ab","Type":"ContainerStarted","Data":"a7ae75913e608544040368fdcec86cebb979c54ab8b8b7f9816c5889adbef2d4"} Jan 26 12:58:27 crc kubenswrapper[4844]: I0126 12:58:27.636662 4844 generic.go:334] "Generic (PLEG): container finished" podID="740f9914-7e12-4cdc-b61f-4ce2f43a5e8d" containerID="11bd90e390235d47462ef1b21cbf3e34fd0cbb716a22e86cd3252e49ddb0a647" exitCode=0 Jan 26 12:58:27 crc kubenswrapper[4844]: I0126 12:58:27.636711 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nzgvx" event={"ID":"740f9914-7e12-4cdc-b61f-4ce2f43a5e8d","Type":"ContainerDied","Data":"11bd90e390235d47462ef1b21cbf3e34fd0cbb716a22e86cd3252e49ddb0a647"} Jan 26 12:58:27 crc kubenswrapper[4844]: I0126 12:58:27.643536 4844 generic.go:334] "Generic (PLEG): container finished" podID="a4779355-4fd0-4b1d-adef-3e4ebba15903" containerID="4441a3ac02b8ed1765d02b46a2bfead90d354d5c032f503df829c1a08956888f" exitCode=0 Jan 26 12:58:27 crc kubenswrapper[4844]: I0126 12:58:27.643642 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jjx57" event={"ID":"a4779355-4fd0-4b1d-adef-3e4ebba15903","Type":"ContainerDied","Data":"4441a3ac02b8ed1765d02b46a2bfead90d354d5c032f503df829c1a08956888f"} Jan 26 12:58:27 crc kubenswrapper[4844]: I0126 12:58:27.643681 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jjx57" event={"ID":"a4779355-4fd0-4b1d-adef-3e4ebba15903","Type":"ContainerStarted","Data":"83f0f46baac0e73c0b6152a479fe296569c43cefc5f39096d6e608e5727f3c86"} Jan 26 12:58:27 crc kubenswrapper[4844]: I0126 12:58:27.657491 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kz7n9" event={"ID":"4e419ec9-0814-4199-ae59-f47408ec961d","Type":"ContainerStarted","Data":"b915149cab43e1f1a54d90eb7c9f2350b8f8d429ca3c6647d372eb5c92f3aa18"} Jan 26 12:58:27 crc kubenswrapper[4844]: I0126 12:58:27.660011 4844 generic.go:334] "Generic (PLEG): container finished" podID="9cf02a58-0976-482c-9e29-b8cb52254a3b" containerID="96252aac3b25e6356c8c70d9bd64e46c52b6168cf2d428d085468bbec2efd5c8" exitCode=0 Jan 26 12:58:27 crc kubenswrapper[4844]: I0126 12:58:27.660480 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m8rzx" event={"ID":"9cf02a58-0976-482c-9e29-b8cb52254a3b","Type":"ContainerDied","Data":"96252aac3b25e6356c8c70d9bd64e46c52b6168cf2d428d085468bbec2efd5c8"} Jan 26 
12:58:27 crc kubenswrapper[4844]: I0126 12:58:27.660511 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m8rzx" event={"ID":"9cf02a58-0976-482c-9e29-b8cb52254a3b","Type":"ContainerStarted","Data":"42bab19b530e31c58b39fb90ad7b4b2994ffa1d6c3d0c9f08102990d26a1b101"} Jan 26 12:58:28 crc kubenswrapper[4844]: I0126 12:58:28.666652 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nvhgf" event={"ID":"fd915f15-b87f-471f-94b7-cdeba3701dc6","Type":"ContainerStarted","Data":"305d0198e556fb3a541561bbf801a5ff0baeebb5bc8d22b22ea2d53752464590"} Jan 26 12:58:28 crc kubenswrapper[4844]: I0126 12:58:28.669184 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dr5gz" event={"ID":"8bfa1475-8d86-4a0c-9864-79488c8832ab","Type":"ContainerStarted","Data":"3ed645209c651ce8ede27ffdbe392194c5bb0e9c159873c2828881e53a1a9c3d"} Jan 26 12:58:28 crc kubenswrapper[4844]: I0126 12:58:28.674511 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nzgvx" event={"ID":"740f9914-7e12-4cdc-b61f-4ce2f43a5e8d","Type":"ContainerStarted","Data":"316814e6cb432d2a551ac8bde211984c33572728e3896781b4a41ae86b5bd231"} Jan 26 12:58:28 crc kubenswrapper[4844]: I0126 12:58:28.678234 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m8rzx" event={"ID":"9cf02a58-0976-482c-9e29-b8cb52254a3b","Type":"ContainerStarted","Data":"5450ab61e8ba5c208c871bbc7b5b74ee0a59feeef62dc8c6dda14f7e72d013a2"} Jan 26 12:58:28 crc kubenswrapper[4844]: I0126 12:58:28.690059 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-kz7n9" podStartSLOduration=3.222153896 podStartE2EDuration="5.690042769s" podCreationTimestamp="2026-01-26 12:58:23 +0000 UTC" firstStartedPulling="2026-01-26 12:58:24.563368768 +0000 UTC m=+881.496736390" lastFinishedPulling="2026-01-26 12:58:27.031257651 +0000 UTC m=+883.964625263" observedRunningTime="2026-01-26 12:58:27.770877906 +0000 UTC m=+884.704245528" watchObservedRunningTime="2026-01-26 12:58:28.690042769 +0000 UTC m=+885.623410381" Jan 26 12:58:28 crc kubenswrapper[4844]: I0126 12:58:28.748072 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-nzgvx" podStartSLOduration=3.008291546 podStartE2EDuration="5.748054116s" podCreationTimestamp="2026-01-26 12:58:23 +0000 UTC" firstStartedPulling="2026-01-26 12:58:25.610726636 +0000 UTC m=+882.544094298" lastFinishedPulling="2026-01-26 12:58:28.350489256 +0000 UTC m=+885.283856868" observedRunningTime="2026-01-26 12:58:28.746220961 +0000 UTC m=+885.679588583" watchObservedRunningTime="2026-01-26 12:58:28.748054116 +0000 UTC m=+885.681421728" Jan 26 12:58:28 crc kubenswrapper[4844]: I0126 12:58:28.906198 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bnfk2"] Jan 26 12:58:28 crc kubenswrapper[4844]: I0126 12:58:28.907199 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bnfk2" Jan 26 12:58:28 crc kubenswrapper[4844]: I0126 12:58:28.928907 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bnfk2"] Jan 26 12:58:29 crc kubenswrapper[4844]: I0126 12:58:29.016783 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be6c04bd-58fc-41e9-bdfa-facc3fc12358-catalog-content\") pod \"certified-operators-bnfk2\" (UID: \"be6c04bd-58fc-41e9-bdfa-facc3fc12358\") " pod="openshift-marketplace/certified-operators-bnfk2" Jan 26 12:58:29 crc kubenswrapper[4844]: I0126 12:58:29.016845 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be6c04bd-58fc-41e9-bdfa-facc3fc12358-utilities\") pod \"certified-operators-bnfk2\" (UID: \"be6c04bd-58fc-41e9-bdfa-facc3fc12358\") " pod="openshift-marketplace/certified-operators-bnfk2" Jan 26 12:58:29 crc kubenswrapper[4844]: I0126 12:58:29.016870 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftkts\" (UniqueName: \"kubernetes.io/projected/be6c04bd-58fc-41e9-bdfa-facc3fc12358-kube-api-access-ftkts\") pod \"certified-operators-bnfk2\" (UID: \"be6c04bd-58fc-41e9-bdfa-facc3fc12358\") " pod="openshift-marketplace/certified-operators-bnfk2" Jan 26 12:58:29 crc kubenswrapper[4844]: I0126 12:58:29.111662 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9ckcc"] Jan 26 12:58:29 crc kubenswrapper[4844]: I0126 12:58:29.113405 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9ckcc" Jan 26 12:58:29 crc kubenswrapper[4844]: I0126 12:58:29.117968 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be6c04bd-58fc-41e9-bdfa-facc3fc12358-utilities\") pod \"certified-operators-bnfk2\" (UID: \"be6c04bd-58fc-41e9-bdfa-facc3fc12358\") " pod="openshift-marketplace/certified-operators-bnfk2" Jan 26 12:58:29 crc kubenswrapper[4844]: I0126 12:58:29.118021 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftkts\" (UniqueName: \"kubernetes.io/projected/be6c04bd-58fc-41e9-bdfa-facc3fc12358-kube-api-access-ftkts\") pod \"certified-operators-bnfk2\" (UID: \"be6c04bd-58fc-41e9-bdfa-facc3fc12358\") " pod="openshift-marketplace/certified-operators-bnfk2" Jan 26 12:58:29 crc kubenswrapper[4844]: I0126 12:58:29.118105 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be6c04bd-58fc-41e9-bdfa-facc3fc12358-catalog-content\") pod \"certified-operators-bnfk2\" (UID: \"be6c04bd-58fc-41e9-bdfa-facc3fc12358\") " pod="openshift-marketplace/certified-operators-bnfk2" Jan 26 12:58:29 crc kubenswrapper[4844]: I0126 12:58:29.118525 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be6c04bd-58fc-41e9-bdfa-facc3fc12358-utilities\") pod \"certified-operators-bnfk2\" (UID: \"be6c04bd-58fc-41e9-bdfa-facc3fc12358\") " pod="openshift-marketplace/certified-operators-bnfk2" Jan 26 12:58:29 crc kubenswrapper[4844]: I0126 12:58:29.118697 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be6c04bd-58fc-41e9-bdfa-facc3fc12358-catalog-content\") pod \"certified-operators-bnfk2\" (UID: \"be6c04bd-58fc-41e9-bdfa-facc3fc12358\") " pod="openshift-marketplace/certified-operators-bnfk2" Jan 26 12:58:29 crc kubenswrapper[4844]: I0126 12:58:29.120540 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9ckcc"] Jan 26 12:58:29 crc kubenswrapper[4844]: I0126 12:58:29.154651 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftkts\" (UniqueName: \"kubernetes.io/projected/be6c04bd-58fc-41e9-bdfa-facc3fc12358-kube-api-access-ftkts\") pod \"certified-operators-bnfk2\" (UID: \"be6c04bd-58fc-41e9-bdfa-facc3fc12358\") " pod="openshift-marketplace/certified-operators-bnfk2" Jan 26 12:58:29 crc kubenswrapper[4844]: I0126 12:58:29.219175 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5fab62d0-54ca-4d28-b84b-5c66d8bf0887-utilities\") pod \"community-operators-9ckcc\" (UID: \"5fab62d0-54ca-4d28-b84b-5c66d8bf0887\") " pod="openshift-marketplace/community-operators-9ckcc" Jan 26 12:58:29 crc kubenswrapper[4844]: I0126 12:58:29.219401 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5fab62d0-54ca-4d28-b84b-5c66d8bf0887-catalog-content\") pod \"community-operators-9ckcc\" (UID: \"5fab62d0-54ca-4d28-b84b-5c66d8bf0887\") " pod="openshift-marketplace/community-operators-9ckcc" Jan 26 12:58:29 crc kubenswrapper[4844]: I0126 12:58:29.219521 4844 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8q79\" (UniqueName: \"kubernetes.io/projected/5fab62d0-54ca-4d28-b84b-5c66d8bf0887-kube-api-access-k8q79\") pod \"community-operators-9ckcc\" (UID: \"5fab62d0-54ca-4d28-b84b-5c66d8bf0887\") " pod="openshift-marketplace/community-operators-9ckcc" Jan 26 12:58:29 crc kubenswrapper[4844]: I0126 12:58:29.221912 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bnfk2" Jan 26 12:58:29 crc kubenswrapper[4844]: I0126 12:58:29.329123 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5fab62d0-54ca-4d28-b84b-5c66d8bf0887-utilities\") pod \"community-operators-9ckcc\" (UID: \"5fab62d0-54ca-4d28-b84b-5c66d8bf0887\") " pod="openshift-marketplace/community-operators-9ckcc" Jan 26 12:58:29 crc kubenswrapper[4844]: I0126 12:58:29.329225 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5fab62d0-54ca-4d28-b84b-5c66d8bf0887-catalog-content\") pod \"community-operators-9ckcc\" (UID: \"5fab62d0-54ca-4d28-b84b-5c66d8bf0887\") " pod="openshift-marketplace/community-operators-9ckcc" Jan 26 12:58:29 crc kubenswrapper[4844]: I0126 12:58:29.329319 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8q79\" (UniqueName: \"kubernetes.io/projected/5fab62d0-54ca-4d28-b84b-5c66d8bf0887-kube-api-access-k8q79\") pod \"community-operators-9ckcc\" (UID: \"5fab62d0-54ca-4d28-b84b-5c66d8bf0887\") " pod="openshift-marketplace/community-operators-9ckcc" Jan 26 12:58:29 crc kubenswrapper[4844]: I0126 12:58:29.330204 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5fab62d0-54ca-4d28-b84b-5c66d8bf0887-catalog-content\") pod \"community-operators-9ckcc\" (UID: \"5fab62d0-54ca-4d28-b84b-5c66d8bf0887\") " pod="openshift-marketplace/community-operators-9ckcc" Jan 26 12:58:29 crc kubenswrapper[4844]: I0126 12:58:29.332355 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5fab62d0-54ca-4d28-b84b-5c66d8bf0887-utilities\") pod \"community-operators-9ckcc\" (UID: \"5fab62d0-54ca-4d28-b84b-5c66d8bf0887\") " pod="openshift-marketplace/community-operators-9ckcc" Jan 26 12:58:29 crc kubenswrapper[4844]: I0126 12:58:29.356929 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8q79\" (UniqueName: \"kubernetes.io/projected/5fab62d0-54ca-4d28-b84b-5c66d8bf0887-kube-api-access-k8q79\") pod \"community-operators-9ckcc\" (UID: \"5fab62d0-54ca-4d28-b84b-5c66d8bf0887\") " pod="openshift-marketplace/community-operators-9ckcc" Jan 26 12:58:29 crc kubenswrapper[4844]: I0126 12:58:29.428701 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9ckcc" Jan 26 12:58:29 crc kubenswrapper[4844]: I0126 12:58:29.646446 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bnfk2"] Jan 26 12:58:29 crc kubenswrapper[4844]: W0126 12:58:29.655078 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbe6c04bd_58fc_41e9_bdfa_facc3fc12358.slice/crio-66f645ee561cf0240a78a03483bbfdd297704296532c07fb3c7735af1a300c22 WatchSource:0}: Error finding container 66f645ee561cf0240a78a03483bbfdd297704296532c07fb3c7735af1a300c22: Status 404 returned error can't find the container with id 66f645ee561cf0240a78a03483bbfdd297704296532c07fb3c7735af1a300c22 Jan 26 12:58:29 crc kubenswrapper[4844]: I0126 12:58:29.655367 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9ckcc"] Jan 26 12:58:29 crc kubenswrapper[4844]: I0126 12:58:29.690320 4844 generic.go:334] "Generic (PLEG): container finished" podID="9cf02a58-0976-482c-9e29-b8cb52254a3b" containerID="5450ab61e8ba5c208c871bbc7b5b74ee0a59feeef62dc8c6dda14f7e72d013a2" exitCode=0 Jan 26 12:58:29 crc kubenswrapper[4844]: I0126 12:58:29.690381 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m8rzx" event={"ID":"9cf02a58-0976-482c-9e29-b8cb52254a3b","Type":"ContainerDied","Data":"5450ab61e8ba5c208c871bbc7b5b74ee0a59feeef62dc8c6dda14f7e72d013a2"} Jan 26 12:58:29 crc kubenswrapper[4844]: I0126 12:58:29.696398 4844 generic.go:334] "Generic (PLEG): container finished" podID="fd915f15-b87f-471f-94b7-cdeba3701dc6" containerID="305d0198e556fb3a541561bbf801a5ff0baeebb5bc8d22b22ea2d53752464590" exitCode=0 Jan 26 12:58:29 crc kubenswrapper[4844]: I0126 12:58:29.696462 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nvhgf" event={"ID":"fd915f15-b87f-471f-94b7-cdeba3701dc6","Type":"ContainerDied","Data":"305d0198e556fb3a541561bbf801a5ff0baeebb5bc8d22b22ea2d53752464590"} Jan 26 12:58:29 crc kubenswrapper[4844]: I0126 12:58:29.701874 4844 generic.go:334] "Generic (PLEG): container finished" podID="8bfa1475-8d86-4a0c-9864-79488c8832ab" containerID="3ed645209c651ce8ede27ffdbe392194c5bb0e9c159873c2828881e53a1a9c3d" exitCode=0 Jan 26 12:58:29 crc kubenswrapper[4844]: I0126 12:58:29.701982 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dr5gz" event={"ID":"8bfa1475-8d86-4a0c-9864-79488c8832ab","Type":"ContainerDied","Data":"3ed645209c651ce8ede27ffdbe392194c5bb0e9c159873c2828881e53a1a9c3d"} Jan 26 12:58:29 crc kubenswrapper[4844]: I0126 12:58:29.703858 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9ckcc" event={"ID":"5fab62d0-54ca-4d28-b84b-5c66d8bf0887","Type":"ContainerStarted","Data":"ec9bed6aa26106ce0ef0f92acb78ffcddf81ee6667d0b6bef0b75477acb686b6"} Jan 26 12:58:29 crc kubenswrapper[4844]: I0126 12:58:29.710017 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bnfk2" event={"ID":"be6c04bd-58fc-41e9-bdfa-facc3fc12358","Type":"ContainerStarted","Data":"66f645ee561cf0240a78a03483bbfdd297704296532c07fb3c7735af1a300c22"} Jan 26 12:58:29 crc kubenswrapper[4844]: I0126 12:58:29.713669 4844 generic.go:334] "Generic (PLEG): container finished" podID="a4779355-4fd0-4b1d-adef-3e4ebba15903" 
containerID="464a82c3ca001aa069a1ca12c3687ec124a9c12f32c4854f589aafb80dba271c" exitCode=0 Jan 26 12:58:29 crc kubenswrapper[4844]: I0126 12:58:29.714499 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jjx57" event={"ID":"a4779355-4fd0-4b1d-adef-3e4ebba15903","Type":"ContainerDied","Data":"464a82c3ca001aa069a1ca12c3687ec124a9c12f32c4854f589aafb80dba271c"} Jan 26 12:58:30 crc kubenswrapper[4844]: I0126 12:58:30.721101 4844 generic.go:334] "Generic (PLEG): container finished" podID="be6c04bd-58fc-41e9-bdfa-facc3fc12358" containerID="8905ad7a0fa6d900cd739234d4dde7fa672b71f51072f457774971daf2c3ec48" exitCode=0 Jan 26 12:58:30 crc kubenswrapper[4844]: I0126 12:58:30.721408 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bnfk2" event={"ID":"be6c04bd-58fc-41e9-bdfa-facc3fc12358","Type":"ContainerDied","Data":"8905ad7a0fa6d900cd739234d4dde7fa672b71f51072f457774971daf2c3ec48"} Jan 26 12:58:30 crc kubenswrapper[4844]: I0126 12:58:30.726320 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jjx57" event={"ID":"a4779355-4fd0-4b1d-adef-3e4ebba15903","Type":"ContainerStarted","Data":"6d0446d47f270b01b58011abd8132202acfe9c1621190c65887429fb8c3c5bff"} Jan 26 12:58:30 crc kubenswrapper[4844]: I0126 12:58:30.728659 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m8rzx" event={"ID":"9cf02a58-0976-482c-9e29-b8cb52254a3b","Type":"ContainerStarted","Data":"087391cd6c4869b49eeb45ca495ee9665b4ada60fd9e6ed0f3f39fdc9dcc6bb6"} Jan 26 12:58:30 crc kubenswrapper[4844]: I0126 12:58:30.731200 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nvhgf" event={"ID":"fd915f15-b87f-471f-94b7-cdeba3701dc6","Type":"ContainerStarted","Data":"b7b32950588d635a2cabf7ee2bdb7994abce1869b64bb7b2643e3e530cf28fe1"} Jan 26 12:58:30 crc kubenswrapper[4844]: I0126 12:58:30.733628 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dr5gz" event={"ID":"8bfa1475-8d86-4a0c-9864-79488c8832ab","Type":"ContainerStarted","Data":"92a34f333a7d8e4e7540948683c81b14ed20e254b273a5d5d9be59088bee1f6e"} Jan 26 12:58:30 crc kubenswrapper[4844]: I0126 12:58:30.734748 4844 generic.go:334] "Generic (PLEG): container finished" podID="5fab62d0-54ca-4d28-b84b-5c66d8bf0887" containerID="2243927c5402217fec72a2e89ffaa122089e7ec8541184fb8ea5d2e76dcfeec9" exitCode=0 Jan 26 12:58:30 crc kubenswrapper[4844]: I0126 12:58:30.734778 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9ckcc" event={"ID":"5fab62d0-54ca-4d28-b84b-5c66d8bf0887","Type":"ContainerDied","Data":"2243927c5402217fec72a2e89ffaa122089e7ec8541184fb8ea5d2e76dcfeec9"} Jan 26 12:58:30 crc kubenswrapper[4844]: I0126 12:58:30.763779 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-m8rzx" podStartSLOduration=2.270486703 podStartE2EDuration="4.763760196s" podCreationTimestamp="2026-01-26 12:58:26 +0000 UTC" firstStartedPulling="2026-01-26 12:58:27.662024338 +0000 UTC m=+884.595391960" lastFinishedPulling="2026-01-26 12:58:30.155297841 +0000 UTC m=+887.088665453" observedRunningTime="2026-01-26 12:58:30.760740202 +0000 UTC m=+887.694107814" watchObservedRunningTime="2026-01-26 12:58:30.763760196 +0000 UTC m=+887.697127808" Jan 26 12:58:30 crc kubenswrapper[4844]: I0126 12:58:30.797718 4844 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-nvhgf" podStartSLOduration=2.32681064 podStartE2EDuration="4.797694437s" podCreationTimestamp="2026-01-26 12:58:26 +0000 UTC" firstStartedPulling="2026-01-26 12:58:27.63300179 +0000 UTC m=+884.566369412" lastFinishedPulling="2026-01-26 12:58:30.103885587 +0000 UTC m=+887.037253209" observedRunningTime="2026-01-26 12:58:30.796666511 +0000 UTC m=+887.730034133" watchObservedRunningTime="2026-01-26 12:58:30.797694437 +0000 UTC m=+887.731062049" Jan 26 12:58:30 crc kubenswrapper[4844]: I0126 12:58:30.801658 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jjx57" podStartSLOduration=2.242729186 podStartE2EDuration="4.801650625s" podCreationTimestamp="2026-01-26 12:58:26 +0000 UTC" firstStartedPulling="2026-01-26 12:58:27.64632606 +0000 UTC m=+884.579693692" lastFinishedPulling="2026-01-26 12:58:30.205247509 +0000 UTC m=+887.138615131" observedRunningTime="2026-01-26 12:58:30.782117061 +0000 UTC m=+887.715484673" watchObservedRunningTime="2026-01-26 12:58:30.801650625 +0000 UTC m=+887.735018227" Jan 26 12:58:30 crc kubenswrapper[4844]: I0126 12:58:30.865135 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dr5gz" podStartSLOduration=2.381953786 podStartE2EDuration="4.865109517s" podCreationTimestamp="2026-01-26 12:58:26 +0000 UTC" firstStartedPulling="2026-01-26 12:58:27.634523748 +0000 UTC m=+884.567891360" lastFinishedPulling="2026-01-26 12:58:30.117679469 +0000 UTC m=+887.051047091" observedRunningTime="2026-01-26 12:58:30.861395475 +0000 UTC m=+887.794763087" watchObservedRunningTime="2026-01-26 12:58:30.865109517 +0000 UTC m=+887.798477129" Jan 26 12:58:32 crc kubenswrapper[4844]: I0126 12:58:32.749711 4844 generic.go:334] "Generic (PLEG): container finished" podID="5fab62d0-54ca-4d28-b84b-5c66d8bf0887" containerID="5c6bf7055bac86405e5a127c7a47511cd0775fcc879d7b43515246428e23d634" exitCode=0 Jan 26 12:58:32 crc kubenswrapper[4844]: I0126 12:58:32.749834 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9ckcc" event={"ID":"5fab62d0-54ca-4d28-b84b-5c66d8bf0887","Type":"ContainerDied","Data":"5c6bf7055bac86405e5a127c7a47511cd0775fcc879d7b43515246428e23d634"} Jan 26 12:58:32 crc kubenswrapper[4844]: I0126 12:58:32.754584 4844 generic.go:334] "Generic (PLEG): container finished" podID="be6c04bd-58fc-41e9-bdfa-facc3fc12358" containerID="24d40f4026be2463300ef27f3d58daf4823b12546ee00f7d4e7eac62a8d5db27" exitCode=0 Jan 26 12:58:32 crc kubenswrapper[4844]: I0126 12:58:32.754664 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bnfk2" event={"ID":"be6c04bd-58fc-41e9-bdfa-facc3fc12358","Type":"ContainerDied","Data":"24d40f4026be2463300ef27f3d58daf4823b12546ee00f7d4e7eac62a8d5db27"} Jan 26 12:58:33 crc kubenswrapper[4844]: I0126 12:58:33.774180 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bnfk2" event={"ID":"be6c04bd-58fc-41e9-bdfa-facc3fc12358","Type":"ContainerStarted","Data":"a198db236bb8188785b7f608c34862e038684594008ae76aab6dba61f984b3ce"} Jan 26 12:58:33 crc kubenswrapper[4844]: I0126 12:58:33.777462 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9ckcc" 
event={"ID":"5fab62d0-54ca-4d28-b84b-5c66d8bf0887","Type":"ContainerStarted","Data":"bb0d8c8d0c798c6cc19cffe138ce8b3b364ffbc4ae876040592c691070c0ebfa"} Jan 26 12:58:33 crc kubenswrapper[4844]: I0126 12:58:33.803064 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-bnfk2" podStartSLOduration=3.308789778 podStartE2EDuration="5.803042165s" podCreationTimestamp="2026-01-26 12:58:28 +0000 UTC" firstStartedPulling="2026-01-26 12:58:30.722609226 +0000 UTC m=+887.655976848" lastFinishedPulling="2026-01-26 12:58:33.216861623 +0000 UTC m=+890.150229235" observedRunningTime="2026-01-26 12:58:33.799368115 +0000 UTC m=+890.732735737" watchObservedRunningTime="2026-01-26 12:58:33.803042165 +0000 UTC m=+890.736409787" Jan 26 12:58:33 crc kubenswrapper[4844]: I0126 12:58:33.821422 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9ckcc" podStartSLOduration=2.418302523 podStartE2EDuration="4.821404801s" podCreationTimestamp="2026-01-26 12:58:29 +0000 UTC" firstStartedPulling="2026-01-26 12:58:30.735733712 +0000 UTC m=+887.669101314" lastFinishedPulling="2026-01-26 12:58:33.13883594 +0000 UTC m=+890.072203592" observedRunningTime="2026-01-26 12:58:33.817371231 +0000 UTC m=+890.750738853" watchObservedRunningTime="2026-01-26 12:58:33.821404801 +0000 UTC m=+890.754772413" Jan 26 12:58:33 crc kubenswrapper[4844]: I0126 12:58:33.968983 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-kz7n9" Jan 26 12:58:33 crc kubenswrapper[4844]: I0126 12:58:33.970445 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-kz7n9" Jan 26 12:58:34 crc kubenswrapper[4844]: I0126 12:58:34.040698 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-kz7n9" Jan 26 12:58:34 crc kubenswrapper[4844]: I0126 12:58:34.293634 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-nzgvx" Jan 26 12:58:34 crc kubenswrapper[4844]: I0126 12:58:34.293688 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-nzgvx" Jan 26 12:58:34 crc kubenswrapper[4844]: I0126 12:58:34.342647 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-nzgvx" Jan 26 12:58:34 crc kubenswrapper[4844]: I0126 12:58:34.818370 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-kz7n9" Jan 26 12:58:34 crc kubenswrapper[4844]: I0126 12:58:34.831591 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-nzgvx" Jan 26 12:58:36 crc kubenswrapper[4844]: I0126 12:58:36.433982 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-jjx57" Jan 26 12:58:36 crc kubenswrapper[4844]: I0126 12:58:36.434060 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jjx57" Jan 26 12:58:36 crc kubenswrapper[4844]: I0126 12:58:36.491408 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jjx57" Jan 26 12:58:36 crc kubenswrapper[4844]: I0126 
12:58:36.670943 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-m8rzx" Jan 26 12:58:36 crc kubenswrapper[4844]: I0126 12:58:36.671029 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-m8rzx" Jan 26 12:58:36 crc kubenswrapper[4844]: I0126 12:58:36.860436 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-nvhgf" Jan 26 12:58:36 crc kubenswrapper[4844]: I0126 12:58:36.860494 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-nvhgf" Jan 26 12:58:36 crc kubenswrapper[4844]: I0126 12:58:36.862592 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jjx57" Jan 26 12:58:36 crc kubenswrapper[4844]: I0126 12:58:36.955590 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-nvhgf" Jan 26 12:58:37 crc kubenswrapper[4844]: I0126 12:58:37.097455 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dr5gz" Jan 26 12:58:37 crc kubenswrapper[4844]: I0126 12:58:37.097547 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-dr5gz" Jan 26 12:58:37 crc kubenswrapper[4844]: I0126 12:58:37.144626 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dr5gz" Jan 26 12:58:37 crc kubenswrapper[4844]: I0126 12:58:37.305764 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kz7n9"] Jan 26 12:58:37 crc kubenswrapper[4844]: I0126 12:58:37.509052 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nzgvx"] Jan 26 12:58:37 crc kubenswrapper[4844]: I0126 12:58:37.509509 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-nzgvx" podUID="740f9914-7e12-4cdc-b61f-4ce2f43a5e8d" containerName="registry-server" containerID="cri-o://316814e6cb432d2a551ac8bde211984c33572728e3896781b4a41ae86b5bd231" gracePeriod=2 Jan 26 12:58:37 crc kubenswrapper[4844]: I0126 12:58:37.735918 4844 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-m8rzx" podUID="9cf02a58-0976-482c-9e29-b8cb52254a3b" containerName="registry-server" probeResult="failure" output=< Jan 26 12:58:37 crc kubenswrapper[4844]: timeout: failed to connect service ":50051" within 1s Jan 26 12:58:37 crc kubenswrapper[4844]: > Jan 26 12:58:37 crc kubenswrapper[4844]: I0126 12:58:37.802239 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-kz7n9" podUID="4e419ec9-0814-4199-ae59-f47408ec961d" containerName="registry-server" containerID="cri-o://b915149cab43e1f1a54d90eb7c9f2350b8f8d429ca3c6647d372eb5c92f3aa18" gracePeriod=2 Jan 26 12:58:37 crc kubenswrapper[4844]: I0126 12:58:37.847208 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-nvhgf" Jan 26 12:58:37 crc kubenswrapper[4844]: I0126 12:58:37.865151 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dr5gz" Jan 26 12:58:39 crc 
kubenswrapper[4844]: I0126 12:58:39.222931 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bnfk2" Jan 26 12:58:39 crc kubenswrapper[4844]: I0126 12:58:39.222996 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-bnfk2" Jan 26 12:58:39 crc kubenswrapper[4844]: I0126 12:58:39.291224 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bnfk2" Jan 26 12:58:39 crc kubenswrapper[4844]: I0126 12:58:39.429707 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9ckcc" Jan 26 12:58:39 crc kubenswrapper[4844]: I0126 12:58:39.429783 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9ckcc" Jan 26 12:58:39 crc kubenswrapper[4844]: I0126 12:58:39.494703 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9ckcc" Jan 26 12:58:39 crc kubenswrapper[4844]: I0126 12:58:39.708352 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nvhgf"] Jan 26 12:58:39 crc kubenswrapper[4844]: I0126 12:58:39.827542 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-nvhgf" podUID="fd915f15-b87f-471f-94b7-cdeba3701dc6" containerName="registry-server" containerID="cri-o://b7b32950588d635a2cabf7ee2bdb7994abce1869b64bb7b2643e3e530cf28fe1" gracePeriod=2 Jan 26 12:58:39 crc kubenswrapper[4844]: I0126 12:58:39.871229 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9ckcc" Jan 26 12:58:39 crc kubenswrapper[4844]: I0126 12:58:39.906195 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bnfk2" Jan 26 12:58:39 crc kubenswrapper[4844]: I0126 12:58:39.906313 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dr5gz"] Jan 26 12:58:39 crc kubenswrapper[4844]: I0126 12:58:39.906571 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-dr5gz" podUID="8bfa1475-8d86-4a0c-9864-79488c8832ab" containerName="registry-server" containerID="cri-o://92a34f333a7d8e4e7540948683c81b14ed20e254b273a5d5d9be59088bee1f6e" gracePeriod=2 Jan 26 12:58:40 crc kubenswrapper[4844]: I0126 12:58:40.836928 4844 generic.go:334] "Generic (PLEG): container finished" podID="740f9914-7e12-4cdc-b61f-4ce2f43a5e8d" containerID="316814e6cb432d2a551ac8bde211984c33572728e3896781b4a41ae86b5bd231" exitCode=0 Jan 26 12:58:40 crc kubenswrapper[4844]: I0126 12:58:40.837018 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nzgvx" event={"ID":"740f9914-7e12-4cdc-b61f-4ce2f43a5e8d","Type":"ContainerDied","Data":"316814e6cb432d2a551ac8bde211984c33572728e3896781b4a41ae86b5bd231"} Jan 26 12:58:40 crc kubenswrapper[4844]: I0126 12:58:40.840466 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-kz7n9_4e419ec9-0814-4199-ae59-f47408ec961d/registry-server/0.log" Jan 26 12:58:40 crc kubenswrapper[4844]: I0126 12:58:40.841503 4844 generic.go:334] "Generic (PLEG): container finished" 
podID="4e419ec9-0814-4199-ae59-f47408ec961d" containerID="b915149cab43e1f1a54d90eb7c9f2350b8f8d429ca3c6647d372eb5c92f3aa18" exitCode=137 Jan 26 12:58:40 crc kubenswrapper[4844]: I0126 12:58:40.842474 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kz7n9" event={"ID":"4e419ec9-0814-4199-ae59-f47408ec961d","Type":"ContainerDied","Data":"b915149cab43e1f1a54d90eb7c9f2350b8f8d429ca3c6647d372eb5c92f3aa18"} Jan 26 12:58:41 crc kubenswrapper[4844]: I0126 12:58:41.260769 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-kz7n9_4e419ec9-0814-4199-ae59-f47408ec961d/registry-server/0.log" Jan 26 12:58:41 crc kubenswrapper[4844]: I0126 12:58:41.261579 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kz7n9" Jan 26 12:58:41 crc kubenswrapper[4844]: I0126 12:58:41.326120 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e419ec9-0814-4199-ae59-f47408ec961d-catalog-content\") pod \"4e419ec9-0814-4199-ae59-f47408ec961d\" (UID: \"4e419ec9-0814-4199-ae59-f47408ec961d\") " Jan 26 12:58:41 crc kubenswrapper[4844]: I0126 12:58:41.326292 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e419ec9-0814-4199-ae59-f47408ec961d-utilities\") pod \"4e419ec9-0814-4199-ae59-f47408ec961d\" (UID: \"4e419ec9-0814-4199-ae59-f47408ec961d\") " Jan 26 12:58:41 crc kubenswrapper[4844]: I0126 12:58:41.326343 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vknh\" (UniqueName: \"kubernetes.io/projected/4e419ec9-0814-4199-ae59-f47408ec961d-kube-api-access-9vknh\") pod \"4e419ec9-0814-4199-ae59-f47408ec961d\" (UID: \"4e419ec9-0814-4199-ae59-f47408ec961d\") " Jan 26 12:58:41 crc kubenswrapper[4844]: I0126 12:58:41.327101 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e419ec9-0814-4199-ae59-f47408ec961d-utilities" (OuterVolumeSpecName: "utilities") pod "4e419ec9-0814-4199-ae59-f47408ec961d" (UID: "4e419ec9-0814-4199-ae59-f47408ec961d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 12:58:41 crc kubenswrapper[4844]: I0126 12:58:41.333241 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e419ec9-0814-4199-ae59-f47408ec961d-kube-api-access-9vknh" (OuterVolumeSpecName: "kube-api-access-9vknh") pod "4e419ec9-0814-4199-ae59-f47408ec961d" (UID: "4e419ec9-0814-4199-ae59-f47408ec961d"). InnerVolumeSpecName "kube-api-access-9vknh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:58:41 crc kubenswrapper[4844]: I0126 12:58:41.375484 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e419ec9-0814-4199-ae59-f47408ec961d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4e419ec9-0814-4199-ae59-f47408ec961d" (UID: "4e419ec9-0814-4199-ae59-f47408ec961d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 12:58:41 crc kubenswrapper[4844]: I0126 12:58:41.427912 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9vknh\" (UniqueName: \"kubernetes.io/projected/4e419ec9-0814-4199-ae59-f47408ec961d-kube-api-access-9vknh\") on node \"crc\" DevicePath \"\"" Jan 26 12:58:41 crc kubenswrapper[4844]: I0126 12:58:41.427941 4844 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e419ec9-0814-4199-ae59-f47408ec961d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 12:58:41 crc kubenswrapper[4844]: I0126 12:58:41.427951 4844 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e419ec9-0814-4199-ae59-f47408ec961d-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 12:58:41 crc kubenswrapper[4844]: I0126 12:58:41.853317 4844 generic.go:334] "Generic (PLEG): container finished" podID="8bfa1475-8d86-4a0c-9864-79488c8832ab" containerID="92a34f333a7d8e4e7540948683c81b14ed20e254b273a5d5d9be59088bee1f6e" exitCode=0 Jan 26 12:58:41 crc kubenswrapper[4844]: I0126 12:58:41.853731 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dr5gz" event={"ID":"8bfa1475-8d86-4a0c-9864-79488c8832ab","Type":"ContainerDied","Data":"92a34f333a7d8e4e7540948683c81b14ed20e254b273a5d5d9be59088bee1f6e"} Jan 26 12:58:41 crc kubenswrapper[4844]: I0126 12:58:41.856245 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-kz7n9_4e419ec9-0814-4199-ae59-f47408ec961d/registry-server/0.log" Jan 26 12:58:41 crc kubenswrapper[4844]: I0126 12:58:41.857693 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kz7n9" event={"ID":"4e419ec9-0814-4199-ae59-f47408ec961d","Type":"ContainerDied","Data":"2d19bacbad0295ea25b547f56f00b71e00b1371c11d748e33257560b443783a8"} Jan 26 12:58:41 crc kubenswrapper[4844]: I0126 12:58:41.857830 4844 scope.go:117] "RemoveContainer" containerID="b915149cab43e1f1a54d90eb7c9f2350b8f8d429ca3c6647d372eb5c92f3aa18" Jan 26 12:58:41 crc kubenswrapper[4844]: I0126 12:58:41.858050 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kz7n9" Jan 26 12:58:41 crc kubenswrapper[4844]: I0126 12:58:41.869452 4844 generic.go:334] "Generic (PLEG): container finished" podID="fd915f15-b87f-471f-94b7-cdeba3701dc6" containerID="b7b32950588d635a2cabf7ee2bdb7994abce1869b64bb7b2643e3e530cf28fe1" exitCode=0 Jan 26 12:58:41 crc kubenswrapper[4844]: I0126 12:58:41.869494 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nvhgf" event={"ID":"fd915f15-b87f-471f-94b7-cdeba3701dc6","Type":"ContainerDied","Data":"b7b32950588d635a2cabf7ee2bdb7994abce1869b64bb7b2643e3e530cf28fe1"} Jan 26 12:58:41 crc kubenswrapper[4844]: I0126 12:58:41.913188 4844 scope.go:117] "RemoveContainer" containerID="9018dcd354fb99a9545b458e2e0a6c93f094418ad2c7a46e7ef6ae2ffcab62dd" Jan 26 12:58:41 crc kubenswrapper[4844]: I0126 12:58:41.920907 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kz7n9"] Jan 26 12:58:41 crc kubenswrapper[4844]: I0126 12:58:41.925214 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-kz7n9"] Jan 26 12:58:41 crc kubenswrapper[4844]: I0126 12:58:41.930438 4844 scope.go:117] "RemoveContainer" containerID="f4421ef7e97a16948b68b33037403dc73fb8c6f5dc548976faf3c1148a7c3a18" Jan 26 12:58:41 crc kubenswrapper[4844]: I0126 12:58:41.982418 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nvhgf" Jan 26 12:58:42 crc kubenswrapper[4844]: I0126 12:58:42.036321 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7zzkn\" (UniqueName: \"kubernetes.io/projected/fd915f15-b87f-471f-94b7-cdeba3701dc6-kube-api-access-7zzkn\") pod \"fd915f15-b87f-471f-94b7-cdeba3701dc6\" (UID: \"fd915f15-b87f-471f-94b7-cdeba3701dc6\") " Jan 26 12:58:42 crc kubenswrapper[4844]: I0126 12:58:42.036419 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd915f15-b87f-471f-94b7-cdeba3701dc6-utilities\") pod \"fd915f15-b87f-471f-94b7-cdeba3701dc6\" (UID: \"fd915f15-b87f-471f-94b7-cdeba3701dc6\") " Jan 26 12:58:42 crc kubenswrapper[4844]: I0126 12:58:42.036458 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd915f15-b87f-471f-94b7-cdeba3701dc6-catalog-content\") pod \"fd915f15-b87f-471f-94b7-cdeba3701dc6\" (UID: \"fd915f15-b87f-471f-94b7-cdeba3701dc6\") " Jan 26 12:58:42 crc kubenswrapper[4844]: I0126 12:58:42.037718 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd915f15-b87f-471f-94b7-cdeba3701dc6-utilities" (OuterVolumeSpecName: "utilities") pod "fd915f15-b87f-471f-94b7-cdeba3701dc6" (UID: "fd915f15-b87f-471f-94b7-cdeba3701dc6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 12:58:42 crc kubenswrapper[4844]: I0126 12:58:42.040554 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd915f15-b87f-471f-94b7-cdeba3701dc6-kube-api-access-7zzkn" (OuterVolumeSpecName: "kube-api-access-7zzkn") pod "fd915f15-b87f-471f-94b7-cdeba3701dc6" (UID: "fd915f15-b87f-471f-94b7-cdeba3701dc6"). InnerVolumeSpecName "kube-api-access-7zzkn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:58:42 crc kubenswrapper[4844]: I0126 12:58:42.060907 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd915f15-b87f-471f-94b7-cdeba3701dc6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fd915f15-b87f-471f-94b7-cdeba3701dc6" (UID: "fd915f15-b87f-471f-94b7-cdeba3701dc6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 12:58:42 crc kubenswrapper[4844]: I0126 12:58:42.103014 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dr5gz" Jan 26 12:58:42 crc kubenswrapper[4844]: I0126 12:58:42.124486 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nzgvx" Jan 26 12:58:42 crc kubenswrapper[4844]: I0126 12:58:42.137159 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bfa1475-8d86-4a0c-9864-79488c8832ab-utilities\") pod \"8bfa1475-8d86-4a0c-9864-79488c8832ab\" (UID: \"8bfa1475-8d86-4a0c-9864-79488c8832ab\") " Jan 26 12:58:42 crc kubenswrapper[4844]: I0126 12:58:42.137252 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z9hbm\" (UniqueName: \"kubernetes.io/projected/8bfa1475-8d86-4a0c-9864-79488c8832ab-kube-api-access-z9hbm\") pod \"8bfa1475-8d86-4a0c-9864-79488c8832ab\" (UID: \"8bfa1475-8d86-4a0c-9864-79488c8832ab\") " Jan 26 12:58:42 crc kubenswrapper[4844]: I0126 12:58:42.137302 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bfa1475-8d86-4a0c-9864-79488c8832ab-catalog-content\") pod \"8bfa1475-8d86-4a0c-9864-79488c8832ab\" (UID: \"8bfa1475-8d86-4a0c-9864-79488c8832ab\") " Jan 26 12:58:42 crc kubenswrapper[4844]: I0126 12:58:42.137620 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7zzkn\" (UniqueName: \"kubernetes.io/projected/fd915f15-b87f-471f-94b7-cdeba3701dc6-kube-api-access-7zzkn\") on node \"crc\" DevicePath \"\"" Jan 26 12:58:42 crc kubenswrapper[4844]: I0126 12:58:42.137637 4844 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd915f15-b87f-471f-94b7-cdeba3701dc6-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 12:58:42 crc kubenswrapper[4844]: I0126 12:58:42.137647 4844 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd915f15-b87f-471f-94b7-cdeba3701dc6-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 12:58:42 crc kubenswrapper[4844]: I0126 12:58:42.138016 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8bfa1475-8d86-4a0c-9864-79488c8832ab-utilities" (OuterVolumeSpecName: "utilities") pod "8bfa1475-8d86-4a0c-9864-79488c8832ab" (UID: "8bfa1475-8d86-4a0c-9864-79488c8832ab"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 12:58:42 crc kubenswrapper[4844]: I0126 12:58:42.141983 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8bfa1475-8d86-4a0c-9864-79488c8832ab-kube-api-access-z9hbm" (OuterVolumeSpecName: "kube-api-access-z9hbm") pod "8bfa1475-8d86-4a0c-9864-79488c8832ab" (UID: "8bfa1475-8d86-4a0c-9864-79488c8832ab"). InnerVolumeSpecName "kube-api-access-z9hbm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:58:42 crc kubenswrapper[4844]: I0126 12:58:42.238628 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/740f9914-7e12-4cdc-b61f-4ce2f43a5e8d-catalog-content\") pod \"740f9914-7e12-4cdc-b61f-4ce2f43a5e8d\" (UID: \"740f9914-7e12-4cdc-b61f-4ce2f43a5e8d\") " Jan 26 12:58:42 crc kubenswrapper[4844]: I0126 12:58:42.238708 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j9s5n\" (UniqueName: \"kubernetes.io/projected/740f9914-7e12-4cdc-b61f-4ce2f43a5e8d-kube-api-access-j9s5n\") pod \"740f9914-7e12-4cdc-b61f-4ce2f43a5e8d\" (UID: \"740f9914-7e12-4cdc-b61f-4ce2f43a5e8d\") " Jan 26 12:58:42 crc kubenswrapper[4844]: I0126 12:58:42.238749 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/740f9914-7e12-4cdc-b61f-4ce2f43a5e8d-utilities\") pod \"740f9914-7e12-4cdc-b61f-4ce2f43a5e8d\" (UID: \"740f9914-7e12-4cdc-b61f-4ce2f43a5e8d\") " Jan 26 12:58:42 crc kubenswrapper[4844]: I0126 12:58:42.238949 4844 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bfa1475-8d86-4a0c-9864-79488c8832ab-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 12:58:42 crc kubenswrapper[4844]: I0126 12:58:42.238966 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z9hbm\" (UniqueName: \"kubernetes.io/projected/8bfa1475-8d86-4a0c-9864-79488c8832ab-kube-api-access-z9hbm\") on node \"crc\" DevicePath \"\"" Jan 26 12:58:42 crc kubenswrapper[4844]: I0126 12:58:42.239549 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/740f9914-7e12-4cdc-b61f-4ce2f43a5e8d-utilities" (OuterVolumeSpecName: "utilities") pod "740f9914-7e12-4cdc-b61f-4ce2f43a5e8d" (UID: "740f9914-7e12-4cdc-b61f-4ce2f43a5e8d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 12:58:42 crc kubenswrapper[4844]: I0126 12:58:42.241696 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/740f9914-7e12-4cdc-b61f-4ce2f43a5e8d-kube-api-access-j9s5n" (OuterVolumeSpecName: "kube-api-access-j9s5n") pod "740f9914-7e12-4cdc-b61f-4ce2f43a5e8d" (UID: "740f9914-7e12-4cdc-b61f-4ce2f43a5e8d"). InnerVolumeSpecName "kube-api-access-j9s5n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:58:42 crc kubenswrapper[4844]: I0126 12:58:42.289888 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/740f9914-7e12-4cdc-b61f-4ce2f43a5e8d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "740f9914-7e12-4cdc-b61f-4ce2f43a5e8d" (UID: "740f9914-7e12-4cdc-b61f-4ce2f43a5e8d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 12:58:42 crc kubenswrapper[4844]: I0126 12:58:42.340422 4844 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/740f9914-7e12-4cdc-b61f-4ce2f43a5e8d-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 12:58:42 crc kubenswrapper[4844]: I0126 12:58:42.340452 4844 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/740f9914-7e12-4cdc-b61f-4ce2f43a5e8d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 12:58:42 crc kubenswrapper[4844]: I0126 12:58:42.340465 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j9s5n\" (UniqueName: \"kubernetes.io/projected/740f9914-7e12-4cdc-b61f-4ce2f43a5e8d-kube-api-access-j9s5n\") on node \"crc\" DevicePath \"\"" Jan 26 12:58:42 crc kubenswrapper[4844]: I0126 12:58:42.880148 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nvhgf" event={"ID":"fd915f15-b87f-471f-94b7-cdeba3701dc6","Type":"ContainerDied","Data":"0b4a2075668571c08033f9972768b0fb3fdd13bf07daeae641308414f8a08ecd"} Jan 26 12:58:42 crc kubenswrapper[4844]: I0126 12:58:42.880231 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nvhgf" Jan 26 12:58:42 crc kubenswrapper[4844]: I0126 12:58:42.880327 4844 scope.go:117] "RemoveContainer" containerID="b7b32950588d635a2cabf7ee2bdb7994abce1869b64bb7b2643e3e530cf28fe1" Jan 26 12:58:42 crc kubenswrapper[4844]: I0126 12:58:42.884545 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dr5gz" event={"ID":"8bfa1475-8d86-4a0c-9864-79488c8832ab","Type":"ContainerDied","Data":"a7ae75913e608544040368fdcec86cebb979c54ab8b8b7f9816c5889adbef2d4"} Jan 26 12:58:42 crc kubenswrapper[4844]: I0126 12:58:42.884569 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dr5gz" Jan 26 12:58:42 crc kubenswrapper[4844]: I0126 12:58:42.887409 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nzgvx" event={"ID":"740f9914-7e12-4cdc-b61f-4ce2f43a5e8d","Type":"ContainerDied","Data":"596ad8f3dc8df3ca8e5c65e8ed5dafde58f6b4ee48c829764766e6a4da046663"} Jan 26 12:58:42 crc kubenswrapper[4844]: I0126 12:58:42.887502 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nzgvx" Jan 26 12:58:42 crc kubenswrapper[4844]: I0126 12:58:42.899154 4844 scope.go:117] "RemoveContainer" containerID="305d0198e556fb3a541561bbf801a5ff0baeebb5bc8d22b22ea2d53752464590" Jan 26 12:58:42 crc kubenswrapper[4844]: I0126 12:58:42.919684 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nzgvx"] Jan 26 12:58:42 crc kubenswrapper[4844]: I0126 12:58:42.923053 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-nzgvx"] Jan 26 12:58:42 crc kubenswrapper[4844]: I0126 12:58:42.931747 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nvhgf"] Jan 26 12:58:42 crc kubenswrapper[4844]: I0126 12:58:42.934712 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-nvhgf"] Jan 26 12:58:42 crc kubenswrapper[4844]: I0126 12:58:42.949236 4844 scope.go:117] "RemoveContainer" containerID="7fb61af7bad3f3f44fe821fc43dc8d84af0e9238e9d43d0c6db8237e09f662b9" Jan 26 12:58:42 crc kubenswrapper[4844]: I0126 12:58:42.969023 4844 scope.go:117] "RemoveContainer" containerID="92a34f333a7d8e4e7540948683c81b14ed20e254b273a5d5d9be59088bee1f6e" Jan 26 12:58:43 crc kubenswrapper[4844]: I0126 12:58:43.000257 4844 scope.go:117] "RemoveContainer" containerID="3ed645209c651ce8ede27ffdbe392194c5bb0e9c159873c2828881e53a1a9c3d" Jan 26 12:58:43 crc kubenswrapper[4844]: I0126 12:58:43.017110 4844 scope.go:117] "RemoveContainer" containerID="4af0d9bb48cbfa7864d834d4d688382578c6e72ba65533d701753106365cd72f" Jan 26 12:58:43 crc kubenswrapper[4844]: I0126 12:58:43.037288 4844 scope.go:117] "RemoveContainer" containerID="316814e6cb432d2a551ac8bde211984c33572728e3896781b4a41ae86b5bd231" Jan 26 12:58:43 crc kubenswrapper[4844]: I0126 12:58:43.054355 4844 scope.go:117] "RemoveContainer" containerID="11bd90e390235d47462ef1b21cbf3e34fd0cbb716a22e86cd3252e49ddb0a647" Jan 26 12:58:43 crc kubenswrapper[4844]: I0126 12:58:43.070237 4844 scope.go:117] "RemoveContainer" containerID="5a4f01df13129d9a68917c11324c2291f2e6b0521af1f08de8050dcdd2669327" Jan 26 12:58:43 crc kubenswrapper[4844]: I0126 12:58:43.326246 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e419ec9-0814-4199-ae59-f47408ec961d" path="/var/lib/kubelet/pods/4e419ec9-0814-4199-ae59-f47408ec961d/volumes" Jan 26 12:58:43 crc kubenswrapper[4844]: I0126 12:58:43.328286 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="740f9914-7e12-4cdc-b61f-4ce2f43a5e8d" path="/var/lib/kubelet/pods/740f9914-7e12-4cdc-b61f-4ce2f43a5e8d/volumes" Jan 26 12:58:43 crc kubenswrapper[4844]: I0126 12:58:43.330193 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd915f15-b87f-471f-94b7-cdeba3701dc6" path="/var/lib/kubelet/pods/fd915f15-b87f-471f-94b7-cdeba3701dc6/volumes" Jan 26 12:58:44 crc kubenswrapper[4844]: I0126 12:58:44.403164 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8bfa1475-8d86-4a0c-9864-79488c8832ab-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8bfa1475-8d86-4a0c-9864-79488c8832ab" (UID: "8bfa1475-8d86-4a0c-9864-79488c8832ab"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 12:58:44 crc kubenswrapper[4844]: I0126 12:58:44.471042 4844 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bfa1475-8d86-4a0c-9864-79488c8832ab-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 12:58:44 crc kubenswrapper[4844]: I0126 12:58:44.718147 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dr5gz"] Jan 26 12:58:44 crc kubenswrapper[4844]: I0126 12:58:44.723160 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-dr5gz"] Jan 26 12:58:45 crc kubenswrapper[4844]: I0126 12:58:45.327737 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8bfa1475-8d86-4a0c-9864-79488c8832ab" path="/var/lib/kubelet/pods/8bfa1475-8d86-4a0c-9864-79488c8832ab/volumes" Jan 26 12:58:46 crc kubenswrapper[4844]: I0126 12:58:46.723193 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-m8rzx" Jan 26 12:58:46 crc kubenswrapper[4844]: I0126 12:58:46.785159 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-m8rzx" Jan 26 12:58:47 crc kubenswrapper[4844]: I0126 12:58:47.686068 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" podUID="e17e004d-fb45-4c4f-896f-6f650a0f7379" containerName="registry" containerID="cri-o://fde35df5fc2ed9d745bb4b922f6db22da8295b8ae1cab805f9aaa3d69cba6f1a" gracePeriod=30 Jan 26 12:58:48 crc kubenswrapper[4844]: I0126 12:58:48.930507 4844 generic.go:334] "Generic (PLEG): container finished" podID="e17e004d-fb45-4c4f-896f-6f650a0f7379" containerID="fde35df5fc2ed9d745bb4b922f6db22da8295b8ae1cab805f9aaa3d69cba6f1a" exitCode=0 Jan 26 12:58:48 crc kubenswrapper[4844]: I0126 12:58:48.930583 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" event={"ID":"e17e004d-fb45-4c4f-896f-6f650a0f7379","Type":"ContainerDied","Data":"fde35df5fc2ed9d745bb4b922f6db22da8295b8ae1cab805f9aaa3d69cba6f1a"} Jan 26 12:58:49 crc kubenswrapper[4844]: I0126 12:58:49.188834 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:58:49 crc kubenswrapper[4844]: I0126 12:58:49.244515 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"e17e004d-fb45-4c4f-896f-6f650a0f7379\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " Jan 26 12:58:49 crc kubenswrapper[4844]: I0126 12:58:49.244624 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e17e004d-fb45-4c4f-896f-6f650a0f7379-ca-trust-extracted\") pod \"e17e004d-fb45-4c4f-896f-6f650a0f7379\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " Jan 26 12:58:49 crc kubenswrapper[4844]: I0126 12:58:49.244687 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e17e004d-fb45-4c4f-896f-6f650a0f7379-installation-pull-secrets\") pod \"e17e004d-fb45-4c4f-896f-6f650a0f7379\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " Jan 26 12:58:49 crc kubenswrapper[4844]: I0126 12:58:49.244738 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e17e004d-fb45-4c4f-896f-6f650a0f7379-bound-sa-token\") pod \"e17e004d-fb45-4c4f-896f-6f650a0f7379\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " Jan 26 12:58:49 crc kubenswrapper[4844]: I0126 12:58:49.244768 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wwq8k\" (UniqueName: \"kubernetes.io/projected/e17e004d-fb45-4c4f-896f-6f650a0f7379-kube-api-access-wwq8k\") pod \"e17e004d-fb45-4c4f-896f-6f650a0f7379\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " Jan 26 12:58:49 crc kubenswrapper[4844]: I0126 12:58:49.244831 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e17e004d-fb45-4c4f-896f-6f650a0f7379-registry-tls\") pod \"e17e004d-fb45-4c4f-896f-6f650a0f7379\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " Jan 26 12:58:49 crc kubenswrapper[4844]: I0126 12:58:49.244865 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e17e004d-fb45-4c4f-896f-6f650a0f7379-trusted-ca\") pod \"e17e004d-fb45-4c4f-896f-6f650a0f7379\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " Jan 26 12:58:49 crc kubenswrapper[4844]: I0126 12:58:49.244901 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e17e004d-fb45-4c4f-896f-6f650a0f7379-registry-certificates\") pod \"e17e004d-fb45-4c4f-896f-6f650a0f7379\" (UID: \"e17e004d-fb45-4c4f-896f-6f650a0f7379\") " Jan 26 12:58:49 crc kubenswrapper[4844]: I0126 12:58:49.246377 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e17e004d-fb45-4c4f-896f-6f650a0f7379-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "e17e004d-fb45-4c4f-896f-6f650a0f7379" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:58:49 crc kubenswrapper[4844]: I0126 12:58:49.247856 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e17e004d-fb45-4c4f-896f-6f650a0f7379-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "e17e004d-fb45-4c4f-896f-6f650a0f7379" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:58:49 crc kubenswrapper[4844]: I0126 12:58:49.255372 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e17e004d-fb45-4c4f-896f-6f650a0f7379-kube-api-access-wwq8k" (OuterVolumeSpecName: "kube-api-access-wwq8k") pod "e17e004d-fb45-4c4f-896f-6f650a0f7379" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379"). InnerVolumeSpecName "kube-api-access-wwq8k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:58:49 crc kubenswrapper[4844]: I0126 12:58:49.255767 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e17e004d-fb45-4c4f-896f-6f650a0f7379-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "e17e004d-fb45-4c4f-896f-6f650a0f7379" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:58:49 crc kubenswrapper[4844]: I0126 12:58:49.257103 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e17e004d-fb45-4c4f-896f-6f650a0f7379-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "e17e004d-fb45-4c4f-896f-6f650a0f7379" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:58:49 crc kubenswrapper[4844]: I0126 12:58:49.264899 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e17e004d-fb45-4c4f-896f-6f650a0f7379-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "e17e004d-fb45-4c4f-896f-6f650a0f7379" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:58:49 crc kubenswrapper[4844]: I0126 12:58:49.272803 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e17e004d-fb45-4c4f-896f-6f650a0f7379-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "e17e004d-fb45-4c4f-896f-6f650a0f7379" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 12:58:49 crc kubenswrapper[4844]: I0126 12:58:49.286283 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "e17e004d-fb45-4c4f-896f-6f650a0f7379" (UID: "e17e004d-fb45-4c4f-896f-6f650a0f7379"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 26 12:58:49 crc kubenswrapper[4844]: I0126 12:58:49.346718 4844 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e17e004d-fb45-4c4f-896f-6f650a0f7379-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 26 12:58:49 crc kubenswrapper[4844]: I0126 12:58:49.346751 4844 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e17e004d-fb45-4c4f-896f-6f650a0f7379-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 26 12:58:49 crc kubenswrapper[4844]: I0126 12:58:49.346760 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wwq8k\" (UniqueName: \"kubernetes.io/projected/e17e004d-fb45-4c4f-896f-6f650a0f7379-kube-api-access-wwq8k\") on node \"crc\" DevicePath \"\"" Jan 26 12:58:49 crc kubenswrapper[4844]: I0126 12:58:49.346770 4844 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e17e004d-fb45-4c4f-896f-6f650a0f7379-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 26 12:58:49 crc kubenswrapper[4844]: I0126 12:58:49.346780 4844 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e17e004d-fb45-4c4f-896f-6f650a0f7379-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 12:58:49 crc kubenswrapper[4844]: I0126 12:58:49.346788 4844 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e17e004d-fb45-4c4f-896f-6f650a0f7379-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 26 12:58:49 crc kubenswrapper[4844]: I0126 12:58:49.346795 4844 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e17e004d-fb45-4c4f-896f-6f650a0f7379-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 26 12:58:49 crc kubenswrapper[4844]: I0126 12:58:49.938130 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" event={"ID":"e17e004d-fb45-4c4f-896f-6f650a0f7379","Type":"ContainerDied","Data":"d55b4cef7e498e926d6cda39a59add51c0022e3a128d03e6436baf21399b85e2"} Jan 26 12:58:49 crc kubenswrapper[4844]: I0126 12:58:49.938182 4844 scope.go:117] "RemoveContainer" containerID="fde35df5fc2ed9d745bb4b922f6db22da8295b8ae1cab805f9aaa3d69cba6f1a" Jan 26 12:58:49 crc kubenswrapper[4844]: I0126 12:58:49.938282 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-dwwm9" Jan 26 12:58:49 crc kubenswrapper[4844]: I0126 12:58:49.961709 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-dwwm9"] Jan 26 12:58:49 crc kubenswrapper[4844]: I0126 12:58:49.969424 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-dwwm9"] Jan 26 12:58:51 crc kubenswrapper[4844]: I0126 12:58:51.322048 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e17e004d-fb45-4c4f-896f-6f650a0f7379" path="/var/lib/kubelet/pods/e17e004d-fb45-4c4f-896f-6f650a0f7379/volumes" Jan 26 12:59:06 crc kubenswrapper[4844]: I0126 12:59:06.367132 4844 patch_prober.go:28] interesting pod/machine-config-daemon-j7r9j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 12:59:06 crc kubenswrapper[4844]: I0126 12:59:06.368101 4844 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 12:59:35 crc kubenswrapper[4844]: I0126 12:59:35.327924 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-565d46959-h92rb"] Jan 26 12:59:35 crc kubenswrapper[4844]: I0126 12:59:35.328659 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-565d46959-h92rb" podUID="67cef31a-df5a-4bb2-bcce-36643e5f1151" containerName="controller-manager" containerID="cri-o://fb867795bbc5fa34f18f2532f8205853680309c01f2ff2ed87d4642558d8095a" gracePeriod=30 Jan 26 12:59:35 crc kubenswrapper[4844]: I0126 12:59:35.429337 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85b99c9b7d-5f5m2"] Jan 26 12:59:35 crc kubenswrapper[4844]: I0126 12:59:35.429563 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-85b99c9b7d-5f5m2" podUID="68859ffd-a8de-45f0-90f2-642f33717a87" containerName="route-controller-manager" containerID="cri-o://721a29ed159e88ceae2f1201f5e4fd032e60bb85b32c7cb3fcffa559c515fe94" gracePeriod=30 Jan 26 12:59:36 crc kubenswrapper[4844]: I0126 12:59:36.252374 4844 generic.go:334] "Generic (PLEG): container finished" podID="68859ffd-a8de-45f0-90f2-642f33717a87" containerID="721a29ed159e88ceae2f1201f5e4fd032e60bb85b32c7cb3fcffa559c515fe94" exitCode=0 Jan 26 12:59:36 crc kubenswrapper[4844]: I0126 12:59:36.252450 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-85b99c9b7d-5f5m2" event={"ID":"68859ffd-a8de-45f0-90f2-642f33717a87","Type":"ContainerDied","Data":"721a29ed159e88ceae2f1201f5e4fd032e60bb85b32c7cb3fcffa559c515fe94"} Jan 26 12:59:36 crc kubenswrapper[4844]: I0126 12:59:36.254019 4844 generic.go:334] "Generic (PLEG): container finished" podID="67cef31a-df5a-4bb2-bcce-36643e5f1151" containerID="fb867795bbc5fa34f18f2532f8205853680309c01f2ff2ed87d4642558d8095a" exitCode=0 Jan 26 12:59:36 crc kubenswrapper[4844]: I0126 
12:59:36.254052 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-565d46959-h92rb" event={"ID":"67cef31a-df5a-4bb2-bcce-36643e5f1151","Type":"ContainerDied","Data":"fb867795bbc5fa34f18f2532f8205853680309c01f2ff2ed87d4642558d8095a"} Jan 26 12:59:36 crc kubenswrapper[4844]: I0126 12:59:36.366659 4844 patch_prober.go:28] interesting pod/machine-config-daemon-j7r9j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 12:59:36 crc kubenswrapper[4844]: I0126 12:59:36.367195 4844 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 12:59:36 crc kubenswrapper[4844]: I0126 12:59:36.391264 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-565d46959-h92rb" Jan 26 12:59:36 crc kubenswrapper[4844]: I0126 12:59:36.414971 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/67cef31a-df5a-4bb2-bcce-36643e5f1151-client-ca\") pod \"67cef31a-df5a-4bb2-bcce-36643e5f1151\" (UID: \"67cef31a-df5a-4bb2-bcce-36643e5f1151\") " Jan 26 12:59:36 crc kubenswrapper[4844]: I0126 12:59:36.415260 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/67cef31a-df5a-4bb2-bcce-36643e5f1151-proxy-ca-bundles\") pod \"67cef31a-df5a-4bb2-bcce-36643e5f1151\" (UID: \"67cef31a-df5a-4bb2-bcce-36643e5f1151\") " Jan 26 12:59:36 crc kubenswrapper[4844]: I0126 12:59:36.415345 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgwvb\" (UniqueName: \"kubernetes.io/projected/67cef31a-df5a-4bb2-bcce-36643e5f1151-kube-api-access-zgwvb\") pod \"67cef31a-df5a-4bb2-bcce-36643e5f1151\" (UID: \"67cef31a-df5a-4bb2-bcce-36643e5f1151\") " Jan 26 12:59:36 crc kubenswrapper[4844]: I0126 12:59:36.415507 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67cef31a-df5a-4bb2-bcce-36643e5f1151-config\") pod \"67cef31a-df5a-4bb2-bcce-36643e5f1151\" (UID: \"67cef31a-df5a-4bb2-bcce-36643e5f1151\") " Jan 26 12:59:36 crc kubenswrapper[4844]: I0126 12:59:36.415554 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/67cef31a-df5a-4bb2-bcce-36643e5f1151-serving-cert\") pod \"67cef31a-df5a-4bb2-bcce-36643e5f1151\" (UID: \"67cef31a-df5a-4bb2-bcce-36643e5f1151\") " Jan 26 12:59:36 crc kubenswrapper[4844]: I0126 12:59:36.416958 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67cef31a-df5a-4bb2-bcce-36643e5f1151-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "67cef31a-df5a-4bb2-bcce-36643e5f1151" (UID: "67cef31a-df5a-4bb2-bcce-36643e5f1151"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:59:36 crc kubenswrapper[4844]: I0126 12:59:36.417472 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67cef31a-df5a-4bb2-bcce-36643e5f1151-client-ca" (OuterVolumeSpecName: "client-ca") pod "67cef31a-df5a-4bb2-bcce-36643e5f1151" (UID: "67cef31a-df5a-4bb2-bcce-36643e5f1151"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:59:36 crc kubenswrapper[4844]: I0126 12:59:36.418305 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67cef31a-df5a-4bb2-bcce-36643e5f1151-config" (OuterVolumeSpecName: "config") pod "67cef31a-df5a-4bb2-bcce-36643e5f1151" (UID: "67cef31a-df5a-4bb2-bcce-36643e5f1151"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:59:36 crc kubenswrapper[4844]: I0126 12:59:36.422098 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67cef31a-df5a-4bb2-bcce-36643e5f1151-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "67cef31a-df5a-4bb2-bcce-36643e5f1151" (UID: "67cef31a-df5a-4bb2-bcce-36643e5f1151"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:59:36 crc kubenswrapper[4844]: I0126 12:59:36.422998 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-85b99c9b7d-5f5m2" Jan 26 12:59:36 crc kubenswrapper[4844]: I0126 12:59:36.424235 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67cef31a-df5a-4bb2-bcce-36643e5f1151-kube-api-access-zgwvb" (OuterVolumeSpecName: "kube-api-access-zgwvb") pod "67cef31a-df5a-4bb2-bcce-36643e5f1151" (UID: "67cef31a-df5a-4bb2-bcce-36643e5f1151"). InnerVolumeSpecName "kube-api-access-zgwvb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:59:36 crc kubenswrapper[4844]: I0126 12:59:36.516902 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/68859ffd-a8de-45f0-90f2-642f33717a87-client-ca\") pod \"68859ffd-a8de-45f0-90f2-642f33717a87\" (UID: \"68859ffd-a8de-45f0-90f2-642f33717a87\") " Jan 26 12:59:36 crc kubenswrapper[4844]: I0126 12:59:36.517064 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/68859ffd-a8de-45f0-90f2-642f33717a87-serving-cert\") pod \"68859ffd-a8de-45f0-90f2-642f33717a87\" (UID: \"68859ffd-a8de-45f0-90f2-642f33717a87\") " Jan 26 12:59:36 crc kubenswrapper[4844]: I0126 12:59:36.517101 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r2tzx\" (UniqueName: \"kubernetes.io/projected/68859ffd-a8de-45f0-90f2-642f33717a87-kube-api-access-r2tzx\") pod \"68859ffd-a8de-45f0-90f2-642f33717a87\" (UID: \"68859ffd-a8de-45f0-90f2-642f33717a87\") " Jan 26 12:59:36 crc kubenswrapper[4844]: I0126 12:59:36.517236 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68859ffd-a8de-45f0-90f2-642f33717a87-config\") pod \"68859ffd-a8de-45f0-90f2-642f33717a87\" (UID: \"68859ffd-a8de-45f0-90f2-642f33717a87\") " Jan 26 12:59:36 crc kubenswrapper[4844]: I0126 12:59:36.517723 4844 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/67cef31a-df5a-4bb2-bcce-36643e5f1151-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 12:59:36 crc kubenswrapper[4844]: I0126 12:59:36.517745 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgwvb\" (UniqueName: \"kubernetes.io/projected/67cef31a-df5a-4bb2-bcce-36643e5f1151-kube-api-access-zgwvb\") on node \"crc\" DevicePath \"\"" Jan 26 12:59:36 crc kubenswrapper[4844]: I0126 12:59:36.517900 4844 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67cef31a-df5a-4bb2-bcce-36643e5f1151-config\") on node \"crc\" DevicePath \"\"" Jan 26 12:59:36 crc kubenswrapper[4844]: I0126 12:59:36.517923 4844 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/67cef31a-df5a-4bb2-bcce-36643e5f1151-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 12:59:36 crc kubenswrapper[4844]: I0126 12:59:36.518102 4844 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/67cef31a-df5a-4bb2-bcce-36643e5f1151-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 12:59:36 crc kubenswrapper[4844]: I0126 12:59:36.518769 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68859ffd-a8de-45f0-90f2-642f33717a87-client-ca" (OuterVolumeSpecName: "client-ca") pod "68859ffd-a8de-45f0-90f2-642f33717a87" (UID: "68859ffd-a8de-45f0-90f2-642f33717a87"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:59:36 crc kubenswrapper[4844]: I0126 12:59:36.518849 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68859ffd-a8de-45f0-90f2-642f33717a87-config" (OuterVolumeSpecName: "config") pod "68859ffd-a8de-45f0-90f2-642f33717a87" (UID: "68859ffd-a8de-45f0-90f2-642f33717a87"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:59:36 crc kubenswrapper[4844]: I0126 12:59:36.521251 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68859ffd-a8de-45f0-90f2-642f33717a87-kube-api-access-r2tzx" (OuterVolumeSpecName: "kube-api-access-r2tzx") pod "68859ffd-a8de-45f0-90f2-642f33717a87" (UID: "68859ffd-a8de-45f0-90f2-642f33717a87"). InnerVolumeSpecName "kube-api-access-r2tzx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:59:36 crc kubenswrapper[4844]: I0126 12:59:36.522200 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68859ffd-a8de-45f0-90f2-642f33717a87-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "68859ffd-a8de-45f0-90f2-642f33717a87" (UID: "68859ffd-a8de-45f0-90f2-642f33717a87"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:59:36 crc kubenswrapper[4844]: I0126 12:59:36.619978 4844 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/68859ffd-a8de-45f0-90f2-642f33717a87-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 12:59:36 crc kubenswrapper[4844]: I0126 12:59:36.620047 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r2tzx\" (UniqueName: \"kubernetes.io/projected/68859ffd-a8de-45f0-90f2-642f33717a87-kube-api-access-r2tzx\") on node \"crc\" DevicePath \"\"" Jan 26 12:59:36 crc kubenswrapper[4844]: I0126 12:59:36.620078 4844 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68859ffd-a8de-45f0-90f2-642f33717a87-config\") on node \"crc\" DevicePath \"\"" Jan 26 12:59:36 crc kubenswrapper[4844]: I0126 12:59:36.620097 4844 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/68859ffd-a8de-45f0-90f2-642f33717a87-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 12:59:37 crc kubenswrapper[4844]: I0126 12:59:37.155756 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6bd4b6c77-nzfsf"] Jan 26 12:59:37 crc kubenswrapper[4844]: E0126 12:59:37.156086 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd915f15-b87f-471f-94b7-cdeba3701dc6" containerName="extract-utilities" Jan 26 12:59:37 crc kubenswrapper[4844]: I0126 12:59:37.156115 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd915f15-b87f-471f-94b7-cdeba3701dc6" containerName="extract-utilities" Jan 26 12:59:37 crc kubenswrapper[4844]: E0126 12:59:37.156139 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67cef31a-df5a-4bb2-bcce-36643e5f1151" containerName="controller-manager" Jan 26 12:59:37 crc kubenswrapper[4844]: I0126 12:59:37.156153 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="67cef31a-df5a-4bb2-bcce-36643e5f1151" containerName="controller-manager" Jan 26 12:59:37 crc kubenswrapper[4844]: E0126 12:59:37.156172 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bfa1475-8d86-4a0c-9864-79488c8832ab" containerName="registry-server" Jan 26 12:59:37 crc kubenswrapper[4844]: I0126 12:59:37.156186 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bfa1475-8d86-4a0c-9864-79488c8832ab" containerName="registry-server" Jan 26 12:59:37 crc kubenswrapper[4844]: E0126 12:59:37.156203 4844 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="8bfa1475-8d86-4a0c-9864-79488c8832ab" containerName="extract-content" Jan 26 12:59:37 crc kubenswrapper[4844]: I0126 12:59:37.156215 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bfa1475-8d86-4a0c-9864-79488c8832ab" containerName="extract-content" Jan 26 12:59:37 crc kubenswrapper[4844]: E0126 12:59:37.156237 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e17e004d-fb45-4c4f-896f-6f650a0f7379" containerName="registry" Jan 26 12:59:37 crc kubenswrapper[4844]: I0126 12:59:37.156250 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="e17e004d-fb45-4c4f-896f-6f650a0f7379" containerName="registry" Jan 26 12:59:37 crc kubenswrapper[4844]: E0126 12:59:37.156271 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="740f9914-7e12-4cdc-b61f-4ce2f43a5e8d" containerName="registry-server" Jan 26 12:59:37 crc kubenswrapper[4844]: I0126 12:59:37.156285 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="740f9914-7e12-4cdc-b61f-4ce2f43a5e8d" containerName="registry-server" Jan 26 12:59:37 crc kubenswrapper[4844]: E0126 12:59:37.156301 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="740f9914-7e12-4cdc-b61f-4ce2f43a5e8d" containerName="extract-content" Jan 26 12:59:37 crc kubenswrapper[4844]: I0126 12:59:37.156313 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="740f9914-7e12-4cdc-b61f-4ce2f43a5e8d" containerName="extract-content" Jan 26 12:59:37 crc kubenswrapper[4844]: E0126 12:59:37.156328 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd915f15-b87f-471f-94b7-cdeba3701dc6" containerName="registry-server" Jan 26 12:59:37 crc kubenswrapper[4844]: I0126 12:59:37.156341 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd915f15-b87f-471f-94b7-cdeba3701dc6" containerName="registry-server" Jan 26 12:59:37 crc kubenswrapper[4844]: E0126 12:59:37.156358 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e419ec9-0814-4199-ae59-f47408ec961d" containerName="extract-utilities" Jan 26 12:59:37 crc kubenswrapper[4844]: I0126 12:59:37.156371 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e419ec9-0814-4199-ae59-f47408ec961d" containerName="extract-utilities" Jan 26 12:59:37 crc kubenswrapper[4844]: E0126 12:59:37.156387 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="740f9914-7e12-4cdc-b61f-4ce2f43a5e8d" containerName="extract-utilities" Jan 26 12:59:37 crc kubenswrapper[4844]: I0126 12:59:37.156399 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="740f9914-7e12-4cdc-b61f-4ce2f43a5e8d" containerName="extract-utilities" Jan 26 12:59:37 crc kubenswrapper[4844]: E0126 12:59:37.156414 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bfa1475-8d86-4a0c-9864-79488c8832ab" containerName="extract-utilities" Jan 26 12:59:37 crc kubenswrapper[4844]: I0126 12:59:37.156426 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bfa1475-8d86-4a0c-9864-79488c8832ab" containerName="extract-utilities" Jan 26 12:59:37 crc kubenswrapper[4844]: E0126 12:59:37.156446 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd915f15-b87f-471f-94b7-cdeba3701dc6" containerName="extract-content" Jan 26 12:59:37 crc kubenswrapper[4844]: I0126 12:59:37.156457 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd915f15-b87f-471f-94b7-cdeba3701dc6" containerName="extract-content" Jan 26 12:59:37 crc kubenswrapper[4844]: E0126 12:59:37.156472 4844 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="68859ffd-a8de-45f0-90f2-642f33717a87" containerName="route-controller-manager" Jan 26 12:59:37 crc kubenswrapper[4844]: I0126 12:59:37.156484 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="68859ffd-a8de-45f0-90f2-642f33717a87" containerName="route-controller-manager" Jan 26 12:59:37 crc kubenswrapper[4844]: E0126 12:59:37.156502 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e419ec9-0814-4199-ae59-f47408ec961d" containerName="extract-content" Jan 26 12:59:37 crc kubenswrapper[4844]: I0126 12:59:37.156514 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e419ec9-0814-4199-ae59-f47408ec961d" containerName="extract-content" Jan 26 12:59:37 crc kubenswrapper[4844]: E0126 12:59:37.156536 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e419ec9-0814-4199-ae59-f47408ec961d" containerName="registry-server" Jan 26 12:59:37 crc kubenswrapper[4844]: I0126 12:59:37.156548 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e419ec9-0814-4199-ae59-f47408ec961d" containerName="registry-server" Jan 26 12:59:37 crc kubenswrapper[4844]: I0126 12:59:37.156732 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e419ec9-0814-4199-ae59-f47408ec961d" containerName="registry-server" Jan 26 12:59:37 crc kubenswrapper[4844]: I0126 12:59:37.156761 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="8bfa1475-8d86-4a0c-9864-79488c8832ab" containerName="registry-server" Jan 26 12:59:37 crc kubenswrapper[4844]: I0126 12:59:37.156779 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="e17e004d-fb45-4c4f-896f-6f650a0f7379" containerName="registry" Jan 26 12:59:37 crc kubenswrapper[4844]: I0126 12:59:37.156796 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="67cef31a-df5a-4bb2-bcce-36643e5f1151" containerName="controller-manager" Jan 26 12:59:37 crc kubenswrapper[4844]: I0126 12:59:37.156816 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="740f9914-7e12-4cdc-b61f-4ce2f43a5e8d" containerName="registry-server" Jan 26 12:59:37 crc kubenswrapper[4844]: I0126 12:59:37.156867 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd915f15-b87f-471f-94b7-cdeba3701dc6" containerName="registry-server" Jan 26 12:59:37 crc kubenswrapper[4844]: I0126 12:59:37.156889 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="68859ffd-a8de-45f0-90f2-642f33717a87" containerName="route-controller-manager" Jan 26 12:59:37 crc kubenswrapper[4844]: I0126 12:59:37.157435 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6bd4b6c77-nzfsf" Jan 26 12:59:37 crc kubenswrapper[4844]: I0126 12:59:37.162929 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-77dcdb6c6d-pjzwp"] Jan 26 12:59:37 crc kubenswrapper[4844]: I0126 12:59:37.163971 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-77dcdb6c6d-pjzwp" Jan 26 12:59:37 crc kubenswrapper[4844]: I0126 12:59:37.176122 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-77dcdb6c6d-pjzwp"] Jan 26 12:59:37 crc kubenswrapper[4844]: I0126 12:59:37.217541 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6bd4b6c77-nzfsf"] Jan 26 12:59:37 crc kubenswrapper[4844]: I0126 12:59:37.228093 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11bf7277-dcb1-4d7e-8b0b-c63a975bff0a-config\") pod \"route-controller-manager-6bd4b6c77-nzfsf\" (UID: \"11bf7277-dcb1-4d7e-8b0b-c63a975bff0a\") " pod="openshift-route-controller-manager/route-controller-manager-6bd4b6c77-nzfsf" Jan 26 12:59:37 crc kubenswrapper[4844]: I0126 12:59:37.228161 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4v7f\" (UniqueName: \"kubernetes.io/projected/b9afd7ce-3404-4c99-8a29-dd255d5f7de7-kube-api-access-t4v7f\") pod \"controller-manager-77dcdb6c6d-pjzwp\" (UID: \"b9afd7ce-3404-4c99-8a29-dd255d5f7de7\") " pod="openshift-controller-manager/controller-manager-77dcdb6c6d-pjzwp" Jan 26 12:59:37 crc kubenswrapper[4844]: I0126 12:59:37.228216 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzdjw\" (UniqueName: \"kubernetes.io/projected/11bf7277-dcb1-4d7e-8b0b-c63a975bff0a-kube-api-access-kzdjw\") pod \"route-controller-manager-6bd4b6c77-nzfsf\" (UID: \"11bf7277-dcb1-4d7e-8b0b-c63a975bff0a\") " pod="openshift-route-controller-manager/route-controller-manager-6bd4b6c77-nzfsf" Jan 26 12:59:37 crc kubenswrapper[4844]: I0126 12:59:37.228270 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b9afd7ce-3404-4c99-8a29-dd255d5f7de7-proxy-ca-bundles\") pod \"controller-manager-77dcdb6c6d-pjzwp\" (UID: \"b9afd7ce-3404-4c99-8a29-dd255d5f7de7\") " pod="openshift-controller-manager/controller-manager-77dcdb6c6d-pjzwp" Jan 26 12:59:37 crc kubenswrapper[4844]: I0126 12:59:37.228309 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/11bf7277-dcb1-4d7e-8b0b-c63a975bff0a-client-ca\") pod \"route-controller-manager-6bd4b6c77-nzfsf\" (UID: \"11bf7277-dcb1-4d7e-8b0b-c63a975bff0a\") " pod="openshift-route-controller-manager/route-controller-manager-6bd4b6c77-nzfsf" Jan 26 12:59:37 crc kubenswrapper[4844]: I0126 12:59:37.228341 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/11bf7277-dcb1-4d7e-8b0b-c63a975bff0a-serving-cert\") pod \"route-controller-manager-6bd4b6c77-nzfsf\" (UID: \"11bf7277-dcb1-4d7e-8b0b-c63a975bff0a\") " pod="openshift-route-controller-manager/route-controller-manager-6bd4b6c77-nzfsf" Jan 26 12:59:37 crc kubenswrapper[4844]: I0126 12:59:37.228387 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b9afd7ce-3404-4c99-8a29-dd255d5f7de7-client-ca\") pod \"controller-manager-77dcdb6c6d-pjzwp\" (UID: \"b9afd7ce-3404-4c99-8a29-dd255d5f7de7\") 
" pod="openshift-controller-manager/controller-manager-77dcdb6c6d-pjzwp" Jan 26 12:59:37 crc kubenswrapper[4844]: I0126 12:59:37.228431 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9afd7ce-3404-4c99-8a29-dd255d5f7de7-config\") pod \"controller-manager-77dcdb6c6d-pjzwp\" (UID: \"b9afd7ce-3404-4c99-8a29-dd255d5f7de7\") " pod="openshift-controller-manager/controller-manager-77dcdb6c6d-pjzwp" Jan 26 12:59:37 crc kubenswrapper[4844]: I0126 12:59:37.228482 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b9afd7ce-3404-4c99-8a29-dd255d5f7de7-serving-cert\") pod \"controller-manager-77dcdb6c6d-pjzwp\" (UID: \"b9afd7ce-3404-4c99-8a29-dd255d5f7de7\") " pod="openshift-controller-manager/controller-manager-77dcdb6c6d-pjzwp" Jan 26 12:59:37 crc kubenswrapper[4844]: I0126 12:59:37.265336 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-565d46959-h92rb" event={"ID":"67cef31a-df5a-4bb2-bcce-36643e5f1151","Type":"ContainerDied","Data":"f71768a81fa3d4f359173aa2b56dc7a1dca0ba6a25c01b382c8952a5ef1cb3fd"} Jan 26 12:59:37 crc kubenswrapper[4844]: I0126 12:59:37.265392 4844 scope.go:117] "RemoveContainer" containerID="fb867795bbc5fa34f18f2532f8205853680309c01f2ff2ed87d4642558d8095a" Jan 26 12:59:37 crc kubenswrapper[4844]: I0126 12:59:37.265593 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-565d46959-h92rb" Jan 26 12:59:37 crc kubenswrapper[4844]: I0126 12:59:37.268305 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-85b99c9b7d-5f5m2" event={"ID":"68859ffd-a8de-45f0-90f2-642f33717a87","Type":"ContainerDied","Data":"fa4e8b0868da4d7e768a61893217c26bec8ffba9fe9e3338d4edde893e6bb4fd"} Jan 26 12:59:37 crc kubenswrapper[4844]: I0126 12:59:37.273091 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-85b99c9b7d-5f5m2" Jan 26 12:59:37 crc kubenswrapper[4844]: I0126 12:59:37.305159 4844 scope.go:117] "RemoveContainer" containerID="721a29ed159e88ceae2f1201f5e4fd032e60bb85b32c7cb3fcffa559c515fe94" Jan 26 12:59:37 crc kubenswrapper[4844]: I0126 12:59:37.310633 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-565d46959-h92rb"] Jan 26 12:59:37 crc kubenswrapper[4844]: I0126 12:59:37.327228 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-565d46959-h92rb"] Jan 26 12:59:37 crc kubenswrapper[4844]: I0126 12:59:37.327313 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85b99c9b7d-5f5m2"] Jan 26 12:59:37 crc kubenswrapper[4844]: I0126 12:59:37.329833 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/11bf7277-dcb1-4d7e-8b0b-c63a975bff0a-client-ca\") pod \"route-controller-manager-6bd4b6c77-nzfsf\" (UID: \"11bf7277-dcb1-4d7e-8b0b-c63a975bff0a\") " pod="openshift-route-controller-manager/route-controller-manager-6bd4b6c77-nzfsf" Jan 26 12:59:37 crc kubenswrapper[4844]: I0126 12:59:37.329889 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/11bf7277-dcb1-4d7e-8b0b-c63a975bff0a-serving-cert\") pod \"route-controller-manager-6bd4b6c77-nzfsf\" (UID: \"11bf7277-dcb1-4d7e-8b0b-c63a975bff0a\") " pod="openshift-route-controller-manager/route-controller-manager-6bd4b6c77-nzfsf" Jan 26 12:59:37 crc kubenswrapper[4844]: I0126 12:59:37.329942 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b9afd7ce-3404-4c99-8a29-dd255d5f7de7-client-ca\") pod \"controller-manager-77dcdb6c6d-pjzwp\" (UID: \"b9afd7ce-3404-4c99-8a29-dd255d5f7de7\") " pod="openshift-controller-manager/controller-manager-77dcdb6c6d-pjzwp" Jan 26 12:59:37 crc kubenswrapper[4844]: I0126 12:59:37.329987 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9afd7ce-3404-4c99-8a29-dd255d5f7de7-config\") pod \"controller-manager-77dcdb6c6d-pjzwp\" (UID: \"b9afd7ce-3404-4c99-8a29-dd255d5f7de7\") " pod="openshift-controller-manager/controller-manager-77dcdb6c6d-pjzwp" Jan 26 12:59:37 crc kubenswrapper[4844]: I0126 12:59:37.330040 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b9afd7ce-3404-4c99-8a29-dd255d5f7de7-serving-cert\") pod \"controller-manager-77dcdb6c6d-pjzwp\" (UID: \"b9afd7ce-3404-4c99-8a29-dd255d5f7de7\") " pod="openshift-controller-manager/controller-manager-77dcdb6c6d-pjzwp" Jan 26 12:59:37 crc kubenswrapper[4844]: I0126 12:59:37.330104 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11bf7277-dcb1-4d7e-8b0b-c63a975bff0a-config\") pod \"route-controller-manager-6bd4b6c77-nzfsf\" (UID: \"11bf7277-dcb1-4d7e-8b0b-c63a975bff0a\") " pod="openshift-route-controller-manager/route-controller-manager-6bd4b6c77-nzfsf" Jan 26 12:59:37 crc kubenswrapper[4844]: I0126 12:59:37.330139 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4v7f\" 
(UniqueName: \"kubernetes.io/projected/b9afd7ce-3404-4c99-8a29-dd255d5f7de7-kube-api-access-t4v7f\") pod \"controller-manager-77dcdb6c6d-pjzwp\" (UID: \"b9afd7ce-3404-4c99-8a29-dd255d5f7de7\") " pod="openshift-controller-manager/controller-manager-77dcdb6c6d-pjzwp" Jan 26 12:59:37 crc kubenswrapper[4844]: I0126 12:59:37.330191 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzdjw\" (UniqueName: \"kubernetes.io/projected/11bf7277-dcb1-4d7e-8b0b-c63a975bff0a-kube-api-access-kzdjw\") pod \"route-controller-manager-6bd4b6c77-nzfsf\" (UID: \"11bf7277-dcb1-4d7e-8b0b-c63a975bff0a\") " pod="openshift-route-controller-manager/route-controller-manager-6bd4b6c77-nzfsf" Jan 26 12:59:37 crc kubenswrapper[4844]: I0126 12:59:37.330248 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b9afd7ce-3404-4c99-8a29-dd255d5f7de7-proxy-ca-bundles\") pod \"controller-manager-77dcdb6c6d-pjzwp\" (UID: \"b9afd7ce-3404-4c99-8a29-dd255d5f7de7\") " pod="openshift-controller-manager/controller-manager-77dcdb6c6d-pjzwp" Jan 26 12:59:37 crc kubenswrapper[4844]: I0126 12:59:37.331223 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/11bf7277-dcb1-4d7e-8b0b-c63a975bff0a-client-ca\") pod \"route-controller-manager-6bd4b6c77-nzfsf\" (UID: \"11bf7277-dcb1-4d7e-8b0b-c63a975bff0a\") " pod="openshift-route-controller-manager/route-controller-manager-6bd4b6c77-nzfsf" Jan 26 12:59:37 crc kubenswrapper[4844]: I0126 12:59:37.331900 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b9afd7ce-3404-4c99-8a29-dd255d5f7de7-client-ca\") pod \"controller-manager-77dcdb6c6d-pjzwp\" (UID: \"b9afd7ce-3404-4c99-8a29-dd255d5f7de7\") " pod="openshift-controller-manager/controller-manager-77dcdb6c6d-pjzwp" Jan 26 12:59:37 crc kubenswrapper[4844]: I0126 12:59:37.331964 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b9afd7ce-3404-4c99-8a29-dd255d5f7de7-proxy-ca-bundles\") pod \"controller-manager-77dcdb6c6d-pjzwp\" (UID: \"b9afd7ce-3404-4c99-8a29-dd255d5f7de7\") " pod="openshift-controller-manager/controller-manager-77dcdb6c6d-pjzwp" Jan 26 12:59:37 crc kubenswrapper[4844]: I0126 12:59:37.332397 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11bf7277-dcb1-4d7e-8b0b-c63a975bff0a-config\") pod \"route-controller-manager-6bd4b6c77-nzfsf\" (UID: \"11bf7277-dcb1-4d7e-8b0b-c63a975bff0a\") " pod="openshift-route-controller-manager/route-controller-manager-6bd4b6c77-nzfsf" Jan 26 12:59:37 crc kubenswrapper[4844]: I0126 12:59:37.332487 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85b99c9b7d-5f5m2"] Jan 26 12:59:37 crc kubenswrapper[4844]: I0126 12:59:37.333816 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9afd7ce-3404-4c99-8a29-dd255d5f7de7-config\") pod \"controller-manager-77dcdb6c6d-pjzwp\" (UID: \"b9afd7ce-3404-4c99-8a29-dd255d5f7de7\") " pod="openshift-controller-manager/controller-manager-77dcdb6c6d-pjzwp" Jan 26 12:59:37 crc kubenswrapper[4844]: I0126 12:59:37.340479 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/11bf7277-dcb1-4d7e-8b0b-c63a975bff0a-serving-cert\") pod \"route-controller-manager-6bd4b6c77-nzfsf\" (UID: \"11bf7277-dcb1-4d7e-8b0b-c63a975bff0a\") " pod="openshift-route-controller-manager/route-controller-manager-6bd4b6c77-nzfsf" Jan 26 12:59:37 crc kubenswrapper[4844]: I0126 12:59:37.340503 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b9afd7ce-3404-4c99-8a29-dd255d5f7de7-serving-cert\") pod \"controller-manager-77dcdb6c6d-pjzwp\" (UID: \"b9afd7ce-3404-4c99-8a29-dd255d5f7de7\") " pod="openshift-controller-manager/controller-manager-77dcdb6c6d-pjzwp" Jan 26 12:59:37 crc kubenswrapper[4844]: I0126 12:59:37.349555 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4v7f\" (UniqueName: \"kubernetes.io/projected/b9afd7ce-3404-4c99-8a29-dd255d5f7de7-kube-api-access-t4v7f\") pod \"controller-manager-77dcdb6c6d-pjzwp\" (UID: \"b9afd7ce-3404-4c99-8a29-dd255d5f7de7\") " pod="openshift-controller-manager/controller-manager-77dcdb6c6d-pjzwp" Jan 26 12:59:37 crc kubenswrapper[4844]: I0126 12:59:37.352359 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzdjw\" (UniqueName: \"kubernetes.io/projected/11bf7277-dcb1-4d7e-8b0b-c63a975bff0a-kube-api-access-kzdjw\") pod \"route-controller-manager-6bd4b6c77-nzfsf\" (UID: \"11bf7277-dcb1-4d7e-8b0b-c63a975bff0a\") " pod="openshift-route-controller-manager/route-controller-manager-6bd4b6c77-nzfsf" Jan 26 12:59:37 crc kubenswrapper[4844]: I0126 12:59:37.501032 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6bd4b6c77-nzfsf" Jan 26 12:59:37 crc kubenswrapper[4844]: I0126 12:59:37.513349 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-77dcdb6c6d-pjzwp" Jan 26 12:59:37 crc kubenswrapper[4844]: I0126 12:59:37.810668 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-77dcdb6c6d-pjzwp"] Jan 26 12:59:37 crc kubenswrapper[4844]: W0126 12:59:37.819069 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb9afd7ce_3404_4c99_8a29_dd255d5f7de7.slice/crio-52701a0510ecaff87e8a799632e1172220825c5bbcb8c6db37a66465857ada5a WatchSource:0}: Error finding container 52701a0510ecaff87e8a799632e1172220825c5bbcb8c6db37a66465857ada5a: Status 404 returned error can't find the container with id 52701a0510ecaff87e8a799632e1172220825c5bbcb8c6db37a66465857ada5a Jan 26 12:59:37 crc kubenswrapper[4844]: I0126 12:59:37.882859 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6bd4b6c77-nzfsf"] Jan 26 12:59:37 crc kubenswrapper[4844]: W0126 12:59:37.890149 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod11bf7277_dcb1_4d7e_8b0b_c63a975bff0a.slice/crio-ae8681cdd7eb87ec1c4594b1a6d48a0330bad5cb278a36b4c6783cda763f8fe7 WatchSource:0}: Error finding container ae8681cdd7eb87ec1c4594b1a6d48a0330bad5cb278a36b4c6783cda763f8fe7: Status 404 returned error can't find the container with id ae8681cdd7eb87ec1c4594b1a6d48a0330bad5cb278a36b4c6783cda763f8fe7 Jan 26 12:59:38 crc kubenswrapper[4844]: I0126 12:59:38.274805 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-77dcdb6c6d-pjzwp" event={"ID":"b9afd7ce-3404-4c99-8a29-dd255d5f7de7","Type":"ContainerStarted","Data":"0ecdf20e0dba5ee2184c346f705debe680ef34cb76bb9f24ba05a348c25df845"} Jan 26 12:59:38 crc kubenswrapper[4844]: I0126 12:59:38.274875 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-77dcdb6c6d-pjzwp" event={"ID":"b9afd7ce-3404-4c99-8a29-dd255d5f7de7","Type":"ContainerStarted","Data":"52701a0510ecaff87e8a799632e1172220825c5bbcb8c6db37a66465857ada5a"} Jan 26 12:59:38 crc kubenswrapper[4844]: I0126 12:59:38.274973 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-77dcdb6c6d-pjzwp" Jan 26 12:59:38 crc kubenswrapper[4844]: I0126 12:59:38.279580 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6bd4b6c77-nzfsf" event={"ID":"11bf7277-dcb1-4d7e-8b0b-c63a975bff0a","Type":"ContainerStarted","Data":"6189fdd7d9fe476dc79d985da8f7a598d93c2ddf427cb19567d1a6c194d0b794"} Jan 26 12:59:38 crc kubenswrapper[4844]: I0126 12:59:38.279624 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6bd4b6c77-nzfsf" event={"ID":"11bf7277-dcb1-4d7e-8b0b-c63a975bff0a","Type":"ContainerStarted","Data":"ae8681cdd7eb87ec1c4594b1a6d48a0330bad5cb278a36b4c6783cda763f8fe7"} Jan 26 12:59:38 crc kubenswrapper[4844]: I0126 12:59:38.279793 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-77dcdb6c6d-pjzwp" Jan 26 12:59:38 crc kubenswrapper[4844]: I0126 12:59:38.279855 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-route-controller-manager/route-controller-manager-6bd4b6c77-nzfsf" Jan 26 12:59:38 crc kubenswrapper[4844]: I0126 12:59:38.299103 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-77dcdb6c6d-pjzwp" podStartSLOduration=3.299078302 podStartE2EDuration="3.299078302s" podCreationTimestamp="2026-01-26 12:59:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:59:38.294068259 +0000 UTC m=+955.227435881" watchObservedRunningTime="2026-01-26 12:59:38.299078302 +0000 UTC m=+955.232445914" Jan 26 12:59:38 crc kubenswrapper[4844]: I0126 12:59:38.393274 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6bd4b6c77-nzfsf" Jan 26 12:59:38 crc kubenswrapper[4844]: I0126 12:59:38.414770 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6bd4b6c77-nzfsf" podStartSLOduration=3.414739528 podStartE2EDuration="3.414739528s" podCreationTimestamp="2026-01-26 12:59:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:59:38.358218398 +0000 UTC m=+955.291586010" watchObservedRunningTime="2026-01-26 12:59:38.414739528 +0000 UTC m=+955.348107140" Jan 26 12:59:39 crc kubenswrapper[4844]: I0126 12:59:39.320568 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67cef31a-df5a-4bb2-bcce-36643e5f1151" path="/var/lib/kubelet/pods/67cef31a-df5a-4bb2-bcce-36643e5f1151/volumes" Jan 26 12:59:39 crc kubenswrapper[4844]: I0126 12:59:39.321638 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68859ffd-a8de-45f0-90f2-642f33717a87" path="/var/lib/kubelet/pods/68859ffd-a8de-45f0-90f2-642f33717a87/volumes" Jan 26 13:00:00 crc kubenswrapper[4844]: I0126 13:00:00.182761 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490540-qvh5g"] Jan 26 13:00:00 crc kubenswrapper[4844]: I0126 13:00:00.183758 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490540-qvh5g" Jan 26 13:00:00 crc kubenswrapper[4844]: I0126 13:00:00.185699 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 26 13:00:00 crc kubenswrapper[4844]: I0126 13:00:00.185944 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 26 13:00:00 crc kubenswrapper[4844]: I0126 13:00:00.199942 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490540-qvh5g"] Jan 26 13:00:00 crc kubenswrapper[4844]: I0126 13:00:00.273758 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrctf\" (UniqueName: \"kubernetes.io/projected/632a6099-975b-4832-8c3a-d0dbd49c482f-kube-api-access-lrctf\") pod \"collect-profiles-29490540-qvh5g\" (UID: \"632a6099-975b-4832-8c3a-d0dbd49c482f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490540-qvh5g" Jan 26 13:00:00 crc kubenswrapper[4844]: I0126 13:00:00.273818 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/632a6099-975b-4832-8c3a-d0dbd49c482f-config-volume\") pod \"collect-profiles-29490540-qvh5g\" (UID: \"632a6099-975b-4832-8c3a-d0dbd49c482f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490540-qvh5g" Jan 26 13:00:00 crc kubenswrapper[4844]: I0126 13:00:00.273963 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/632a6099-975b-4832-8c3a-d0dbd49c482f-secret-volume\") pod \"collect-profiles-29490540-qvh5g\" (UID: \"632a6099-975b-4832-8c3a-d0dbd49c482f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490540-qvh5g" Jan 26 13:00:00 crc kubenswrapper[4844]: I0126 13:00:00.375254 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/632a6099-975b-4832-8c3a-d0dbd49c482f-secret-volume\") pod \"collect-profiles-29490540-qvh5g\" (UID: \"632a6099-975b-4832-8c3a-d0dbd49c482f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490540-qvh5g" Jan 26 13:00:00 crc kubenswrapper[4844]: I0126 13:00:00.375416 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrctf\" (UniqueName: \"kubernetes.io/projected/632a6099-975b-4832-8c3a-d0dbd49c482f-kube-api-access-lrctf\") pod \"collect-profiles-29490540-qvh5g\" (UID: \"632a6099-975b-4832-8c3a-d0dbd49c482f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490540-qvh5g" Jan 26 13:00:00 crc kubenswrapper[4844]: I0126 13:00:00.375468 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/632a6099-975b-4832-8c3a-d0dbd49c482f-config-volume\") pod \"collect-profiles-29490540-qvh5g\" (UID: \"632a6099-975b-4832-8c3a-d0dbd49c482f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490540-qvh5g" Jan 26 13:00:00 crc kubenswrapper[4844]: I0126 13:00:00.377186 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/632a6099-975b-4832-8c3a-d0dbd49c482f-config-volume\") pod 
\"collect-profiles-29490540-qvh5g\" (UID: \"632a6099-975b-4832-8c3a-d0dbd49c482f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490540-qvh5g" Jan 26 13:00:00 crc kubenswrapper[4844]: I0126 13:00:00.384217 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/632a6099-975b-4832-8c3a-d0dbd49c482f-secret-volume\") pod \"collect-profiles-29490540-qvh5g\" (UID: \"632a6099-975b-4832-8c3a-d0dbd49c482f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490540-qvh5g" Jan 26 13:00:00 crc kubenswrapper[4844]: I0126 13:00:00.406189 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrctf\" (UniqueName: \"kubernetes.io/projected/632a6099-975b-4832-8c3a-d0dbd49c482f-kube-api-access-lrctf\") pod \"collect-profiles-29490540-qvh5g\" (UID: \"632a6099-975b-4832-8c3a-d0dbd49c482f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490540-qvh5g" Jan 26 13:00:00 crc kubenswrapper[4844]: I0126 13:00:00.508163 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490540-qvh5g" Jan 26 13:00:00 crc kubenswrapper[4844]: I0126 13:00:00.955554 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490540-qvh5g"] Jan 26 13:00:01 crc kubenswrapper[4844]: I0126 13:00:01.440934 4844 generic.go:334] "Generic (PLEG): container finished" podID="632a6099-975b-4832-8c3a-d0dbd49c482f" containerID="8d2ec9a1ea23de88c7bb56a717a32f52d3430ea03c06d1e640422b042f5e7dcb" exitCode=0 Jan 26 13:00:01 crc kubenswrapper[4844]: I0126 13:00:01.441051 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490540-qvh5g" event={"ID":"632a6099-975b-4832-8c3a-d0dbd49c482f","Type":"ContainerDied","Data":"8d2ec9a1ea23de88c7bb56a717a32f52d3430ea03c06d1e640422b042f5e7dcb"} Jan 26 13:00:01 crc kubenswrapper[4844]: I0126 13:00:01.441565 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490540-qvh5g" event={"ID":"632a6099-975b-4832-8c3a-d0dbd49c482f","Type":"ContainerStarted","Data":"1aa525184899cc06d11bdefbccea39f955248ebc736aa4df25b3472184bd567a"} Jan 26 13:00:02 crc kubenswrapper[4844]: I0126 13:00:02.806966 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490540-qvh5g" Jan 26 13:00:02 crc kubenswrapper[4844]: I0126 13:00:02.917630 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/632a6099-975b-4832-8c3a-d0dbd49c482f-secret-volume\") pod \"632a6099-975b-4832-8c3a-d0dbd49c482f\" (UID: \"632a6099-975b-4832-8c3a-d0dbd49c482f\") " Jan 26 13:00:02 crc kubenswrapper[4844]: I0126 13:00:02.917879 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lrctf\" (UniqueName: \"kubernetes.io/projected/632a6099-975b-4832-8c3a-d0dbd49c482f-kube-api-access-lrctf\") pod \"632a6099-975b-4832-8c3a-d0dbd49c482f\" (UID: \"632a6099-975b-4832-8c3a-d0dbd49c482f\") " Jan 26 13:00:02 crc kubenswrapper[4844]: I0126 13:00:02.917944 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/632a6099-975b-4832-8c3a-d0dbd49c482f-config-volume\") pod \"632a6099-975b-4832-8c3a-d0dbd49c482f\" (UID: \"632a6099-975b-4832-8c3a-d0dbd49c482f\") " Jan 26 13:00:02 crc kubenswrapper[4844]: I0126 13:00:02.919490 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/632a6099-975b-4832-8c3a-d0dbd49c482f-config-volume" (OuterVolumeSpecName: "config-volume") pod "632a6099-975b-4832-8c3a-d0dbd49c482f" (UID: "632a6099-975b-4832-8c3a-d0dbd49c482f"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:00:02 crc kubenswrapper[4844]: I0126 13:00:02.927880 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/632a6099-975b-4832-8c3a-d0dbd49c482f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "632a6099-975b-4832-8c3a-d0dbd49c482f" (UID: "632a6099-975b-4832-8c3a-d0dbd49c482f"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:00:02 crc kubenswrapper[4844]: I0126 13:00:02.927874 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/632a6099-975b-4832-8c3a-d0dbd49c482f-kube-api-access-lrctf" (OuterVolumeSpecName: "kube-api-access-lrctf") pod "632a6099-975b-4832-8c3a-d0dbd49c482f" (UID: "632a6099-975b-4832-8c3a-d0dbd49c482f"). InnerVolumeSpecName "kube-api-access-lrctf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:00:03 crc kubenswrapper[4844]: I0126 13:00:03.020502 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lrctf\" (UniqueName: \"kubernetes.io/projected/632a6099-975b-4832-8c3a-d0dbd49c482f-kube-api-access-lrctf\") on node \"crc\" DevicePath \"\"" Jan 26 13:00:03 crc kubenswrapper[4844]: I0126 13:00:03.020561 4844 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/632a6099-975b-4832-8c3a-d0dbd49c482f-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 13:00:03 crc kubenswrapper[4844]: I0126 13:00:03.020582 4844 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/632a6099-975b-4832-8c3a-d0dbd49c482f-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 13:00:03 crc kubenswrapper[4844]: I0126 13:00:03.456285 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490540-qvh5g" event={"ID":"632a6099-975b-4832-8c3a-d0dbd49c482f","Type":"ContainerDied","Data":"1aa525184899cc06d11bdefbccea39f955248ebc736aa4df25b3472184bd567a"} Jan 26 13:00:03 crc kubenswrapper[4844]: I0126 13:00:03.456343 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490540-qvh5g" Jan 26 13:00:03 crc kubenswrapper[4844]: I0126 13:00:03.456346 4844 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1aa525184899cc06d11bdefbccea39f955248ebc736aa4df25b3472184bd567a" Jan 26 13:00:06 crc kubenswrapper[4844]: I0126 13:00:06.366391 4844 patch_prober.go:28] interesting pod/machine-config-daemon-j7r9j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 13:00:06 crc kubenswrapper[4844]: I0126 13:00:06.367092 4844 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 13:00:06 crc kubenswrapper[4844]: I0126 13:00:06.367275 4844 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" Jan 26 13:00:06 crc kubenswrapper[4844]: I0126 13:00:06.368267 4844 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"168fb0438abc387a38960b9c5a893cdb9d7d45ce1d189f5af498314adae7a5ca"} pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 13:00:06 crc kubenswrapper[4844]: I0126 13:00:06.368353 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" containerName="machine-config-daemon" containerID="cri-o://168fb0438abc387a38960b9c5a893cdb9d7d45ce1d189f5af498314adae7a5ca" gracePeriod=600 Jan 26 13:00:07 crc kubenswrapper[4844]: I0126 13:00:07.489091 4844 generic.go:334] "Generic (PLEG): container finished" 
podID="e3602fc7-397b-4d73-ab0c-45acc047397b" containerID="168fb0438abc387a38960b9c5a893cdb9d7d45ce1d189f5af498314adae7a5ca" exitCode=0 Jan 26 13:00:07 crc kubenswrapper[4844]: I0126 13:00:07.489183 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" event={"ID":"e3602fc7-397b-4d73-ab0c-45acc047397b","Type":"ContainerDied","Data":"168fb0438abc387a38960b9c5a893cdb9d7d45ce1d189f5af498314adae7a5ca"} Jan 26 13:00:07 crc kubenswrapper[4844]: I0126 13:00:07.489481 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" event={"ID":"e3602fc7-397b-4d73-ab0c-45acc047397b","Type":"ContainerStarted","Data":"075cf6577d89e9c42e09a4d1ad1513c85b800e67006e4d777b6952526577529a"} Jan 26 13:00:07 crc kubenswrapper[4844]: I0126 13:00:07.489505 4844 scope.go:117] "RemoveContainer" containerID="6036e032f544da01ca860cf2f64b83a1de4c715f98d7954c6a55f13c7ae044df" Jan 26 13:02:36 crc kubenswrapper[4844]: I0126 13:02:36.365127 4844 patch_prober.go:28] interesting pod/machine-config-daemon-j7r9j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 13:02:36 crc kubenswrapper[4844]: I0126 13:02:36.365728 4844 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 13:03:06 crc kubenswrapper[4844]: I0126 13:03:06.365143 4844 patch_prober.go:28] interesting pod/machine-config-daemon-j7r9j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 13:03:06 crc kubenswrapper[4844]: I0126 13:03:06.365843 4844 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 13:03:36 crc kubenswrapper[4844]: I0126 13:03:36.364760 4844 patch_prober.go:28] interesting pod/machine-config-daemon-j7r9j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 13:03:36 crc kubenswrapper[4844]: I0126 13:03:36.365425 4844 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 13:03:36 crc kubenswrapper[4844]: I0126 13:03:36.365490 4844 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" Jan 26 13:03:36 crc kubenswrapper[4844]: I0126 13:03:36.366363 4844 kuberuntime_manager.go:1027] "Message for 
Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"075cf6577d89e9c42e09a4d1ad1513c85b800e67006e4d777b6952526577529a"} pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 13:03:36 crc kubenswrapper[4844]: I0126 13:03:36.366457 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" containerName="machine-config-daemon" containerID="cri-o://075cf6577d89e9c42e09a4d1ad1513c85b800e67006e4d777b6952526577529a" gracePeriod=600 Jan 26 13:03:36 crc kubenswrapper[4844]: I0126 13:03:36.848069 4844 generic.go:334] "Generic (PLEG): container finished" podID="e3602fc7-397b-4d73-ab0c-45acc047397b" containerID="075cf6577d89e9c42e09a4d1ad1513c85b800e67006e4d777b6952526577529a" exitCode=0 Jan 26 13:03:36 crc kubenswrapper[4844]: I0126 13:03:36.848160 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" event={"ID":"e3602fc7-397b-4d73-ab0c-45acc047397b","Type":"ContainerDied","Data":"075cf6577d89e9c42e09a4d1ad1513c85b800e67006e4d777b6952526577529a"} Jan 26 13:03:36 crc kubenswrapper[4844]: I0126 13:03:36.848443 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" event={"ID":"e3602fc7-397b-4d73-ab0c-45acc047397b","Type":"ContainerStarted","Data":"40240f8fe0586673c0ae6562509c601ebaa4bbf7529663c2ad7f19e2cc1a7109"} Jan 26 13:03:36 crc kubenswrapper[4844]: I0126 13:03:36.848481 4844 scope.go:117] "RemoveContainer" containerID="168fb0438abc387a38960b9c5a893cdb9d7d45ce1d189f5af498314adae7a5ca" Jan 26 13:05:36 crc kubenswrapper[4844]: I0126 13:05:36.364904 4844 patch_prober.go:28] interesting pod/machine-config-daemon-j7r9j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 13:05:36 crc kubenswrapper[4844]: I0126 13:05:36.365653 4844 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 13:06:06 crc kubenswrapper[4844]: I0126 13:06:06.364695 4844 patch_prober.go:28] interesting pod/machine-config-daemon-j7r9j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 13:06:06 crc kubenswrapper[4844]: I0126 13:06:06.365288 4844 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 13:06:36 crc kubenswrapper[4844]: I0126 13:06:36.365315 4844 patch_prober.go:28] interesting pod/machine-config-daemon-j7r9j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 13:06:36 crc kubenswrapper[4844]: I0126 13:06:36.365912 4844 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 13:06:36 crc kubenswrapper[4844]: I0126 13:06:36.365969 4844 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" Jan 26 13:06:36 crc kubenswrapper[4844]: I0126 13:06:36.366672 4844 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"40240f8fe0586673c0ae6562509c601ebaa4bbf7529663c2ad7f19e2cc1a7109"} pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 13:06:36 crc kubenswrapper[4844]: I0126 13:06:36.366766 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" containerName="machine-config-daemon" containerID="cri-o://40240f8fe0586673c0ae6562509c601ebaa4bbf7529663c2ad7f19e2cc1a7109" gracePeriod=600 Jan 26 13:06:37 crc kubenswrapper[4844]: I0126 13:06:37.112819 4844 generic.go:334] "Generic (PLEG): container finished" podID="e3602fc7-397b-4d73-ab0c-45acc047397b" containerID="40240f8fe0586673c0ae6562509c601ebaa4bbf7529663c2ad7f19e2cc1a7109" exitCode=0 Jan 26 13:06:37 crc kubenswrapper[4844]: I0126 13:06:37.113101 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" event={"ID":"e3602fc7-397b-4d73-ab0c-45acc047397b","Type":"ContainerDied","Data":"40240f8fe0586673c0ae6562509c601ebaa4bbf7529663c2ad7f19e2cc1a7109"} Jan 26 13:06:37 crc kubenswrapper[4844]: I0126 13:06:37.113272 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" event={"ID":"e3602fc7-397b-4d73-ab0c-45acc047397b","Type":"ContainerStarted","Data":"8bd0e29f4f3396a7e270924b961a9b78ffe005995a8558400b22ba50617d8a7e"} Jan 26 13:06:37 crc kubenswrapper[4844]: I0126 13:06:37.113307 4844 scope.go:117] "RemoveContainer" containerID="075cf6577d89e9c42e09a4d1ad1513c85b800e67006e4d777b6952526577529a" Jan 26 13:08:36 crc kubenswrapper[4844]: I0126 13:08:36.365440 4844 patch_prober.go:28] interesting pod/machine-config-daemon-j7r9j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 13:08:36 crc kubenswrapper[4844]: I0126 13:08:36.366012 4844 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 13:09:06 crc kubenswrapper[4844]: I0126 13:09:06.364799 4844 patch_prober.go:28] interesting pod/machine-config-daemon-j7r9j 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 13:09:06 crc kubenswrapper[4844]: I0126 13:09:06.365388 4844 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 13:09:12 crc kubenswrapper[4844]: I0126 13:09:12.088078 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qmcfq"] Jan 26 13:09:12 crc kubenswrapper[4844]: E0126 13:09:12.088666 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="632a6099-975b-4832-8c3a-d0dbd49c482f" containerName="collect-profiles" Jan 26 13:09:12 crc kubenswrapper[4844]: I0126 13:09:12.088681 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="632a6099-975b-4832-8c3a-d0dbd49c482f" containerName="collect-profiles" Jan 26 13:09:12 crc kubenswrapper[4844]: I0126 13:09:12.088800 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="632a6099-975b-4832-8c3a-d0dbd49c482f" containerName="collect-profiles" Jan 26 13:09:12 crc kubenswrapper[4844]: I0126 13:09:12.089642 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qmcfq" Jan 26 13:09:12 crc kubenswrapper[4844]: I0126 13:09:12.102750 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qmcfq"] Jan 26 13:09:12 crc kubenswrapper[4844]: I0126 13:09:12.195196 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7d7lf\" (UniqueName: \"kubernetes.io/projected/a6278c8a-eb85-4602-a53e-4bf48b69696f-kube-api-access-7d7lf\") pod \"redhat-marketplace-qmcfq\" (UID: \"a6278c8a-eb85-4602-a53e-4bf48b69696f\") " pod="openshift-marketplace/redhat-marketplace-qmcfq" Jan 26 13:09:12 crc kubenswrapper[4844]: I0126 13:09:12.195254 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6278c8a-eb85-4602-a53e-4bf48b69696f-utilities\") pod \"redhat-marketplace-qmcfq\" (UID: \"a6278c8a-eb85-4602-a53e-4bf48b69696f\") " pod="openshift-marketplace/redhat-marketplace-qmcfq" Jan 26 13:09:12 crc kubenswrapper[4844]: I0126 13:09:12.195284 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6278c8a-eb85-4602-a53e-4bf48b69696f-catalog-content\") pod \"redhat-marketplace-qmcfq\" (UID: \"a6278c8a-eb85-4602-a53e-4bf48b69696f\") " pod="openshift-marketplace/redhat-marketplace-qmcfq" Jan 26 13:09:12 crc kubenswrapper[4844]: I0126 13:09:12.296225 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7d7lf\" (UniqueName: \"kubernetes.io/projected/a6278c8a-eb85-4602-a53e-4bf48b69696f-kube-api-access-7d7lf\") pod \"redhat-marketplace-qmcfq\" (UID: \"a6278c8a-eb85-4602-a53e-4bf48b69696f\") " pod="openshift-marketplace/redhat-marketplace-qmcfq" Jan 26 13:09:12 crc kubenswrapper[4844]: I0126 13:09:12.296312 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6278c8a-eb85-4602-a53e-4bf48b69696f-utilities\") pod \"redhat-marketplace-qmcfq\" (UID: \"a6278c8a-eb85-4602-a53e-4bf48b69696f\") " pod="openshift-marketplace/redhat-marketplace-qmcfq" Jan 26 13:09:12 crc kubenswrapper[4844]: I0126 13:09:12.296352 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6278c8a-eb85-4602-a53e-4bf48b69696f-catalog-content\") pod \"redhat-marketplace-qmcfq\" (UID: \"a6278c8a-eb85-4602-a53e-4bf48b69696f\") " pod="openshift-marketplace/redhat-marketplace-qmcfq" Jan 26 13:09:12 crc kubenswrapper[4844]: I0126 13:09:12.296871 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6278c8a-eb85-4602-a53e-4bf48b69696f-utilities\") pod \"redhat-marketplace-qmcfq\" (UID: \"a6278c8a-eb85-4602-a53e-4bf48b69696f\") " pod="openshift-marketplace/redhat-marketplace-qmcfq" Jan 26 13:09:12 crc kubenswrapper[4844]: I0126 13:09:12.297035 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6278c8a-eb85-4602-a53e-4bf48b69696f-catalog-content\") pod \"redhat-marketplace-qmcfq\" (UID: \"a6278c8a-eb85-4602-a53e-4bf48b69696f\") " pod="openshift-marketplace/redhat-marketplace-qmcfq" Jan 26 13:09:12 crc kubenswrapper[4844]: I0126 13:09:12.320242 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7d7lf\" (UniqueName: \"kubernetes.io/projected/a6278c8a-eb85-4602-a53e-4bf48b69696f-kube-api-access-7d7lf\") pod \"redhat-marketplace-qmcfq\" (UID: \"a6278c8a-eb85-4602-a53e-4bf48b69696f\") " pod="openshift-marketplace/redhat-marketplace-qmcfq" Jan 26 13:09:12 crc kubenswrapper[4844]: I0126 13:09:12.414935 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qmcfq" Jan 26 13:09:12 crc kubenswrapper[4844]: I0126 13:09:12.633608 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qmcfq"] Jan 26 13:09:13 crc kubenswrapper[4844]: I0126 13:09:13.129827 4844 generic.go:334] "Generic (PLEG): container finished" podID="a6278c8a-eb85-4602-a53e-4bf48b69696f" containerID="54b601bad35cd287900e33ed6624dae736231be9c172d1d70a0e5af1cb9a3571" exitCode=0 Jan 26 13:09:13 crc kubenswrapper[4844]: I0126 13:09:13.129941 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qmcfq" event={"ID":"a6278c8a-eb85-4602-a53e-4bf48b69696f","Type":"ContainerDied","Data":"54b601bad35cd287900e33ed6624dae736231be9c172d1d70a0e5af1cb9a3571"} Jan 26 13:09:13 crc kubenswrapper[4844]: I0126 13:09:13.130184 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qmcfq" event={"ID":"a6278c8a-eb85-4602-a53e-4bf48b69696f","Type":"ContainerStarted","Data":"75b3762aec7656d36c438fb70bf4aeda52fccccdcdf4f622d1aef8ddbf5c666f"} Jan 26 13:09:13 crc kubenswrapper[4844]: I0126 13:09:13.132095 4844 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 13:09:15 crc kubenswrapper[4844]: I0126 13:09:15.148569 4844 generic.go:334] "Generic (PLEG): container finished" podID="a6278c8a-eb85-4602-a53e-4bf48b69696f" containerID="455d0585a506175180b5ea281b8ef9c38c2f5065e18e0b227d0ea0467d7535de" exitCode=0 Jan 26 13:09:15 crc kubenswrapper[4844]: I0126 13:09:15.149025 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qmcfq" event={"ID":"a6278c8a-eb85-4602-a53e-4bf48b69696f","Type":"ContainerDied","Data":"455d0585a506175180b5ea281b8ef9c38c2f5065e18e0b227d0ea0467d7535de"} Jan 26 13:09:16 crc kubenswrapper[4844]: I0126 13:09:16.159861 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qmcfq" event={"ID":"a6278c8a-eb85-4602-a53e-4bf48b69696f","Type":"ContainerStarted","Data":"9571adbd694c2dafcd8a7bec6cb1009f038fdcfb6a4cb2e51bbcfa8cc11f6654"} Jan 26 13:09:16 crc kubenswrapper[4844]: I0126 13:09:16.194506 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qmcfq" podStartSLOduration=1.763251451 podStartE2EDuration="4.194479684s" podCreationTimestamp="2026-01-26 13:09:12 +0000 UTC" firstStartedPulling="2026-01-26 13:09:13.131574304 +0000 UTC m=+1530.064941956" lastFinishedPulling="2026-01-26 13:09:15.562802547 +0000 UTC m=+1532.496170189" observedRunningTime="2026-01-26 13:09:16.188138164 +0000 UTC m=+1533.121505816" watchObservedRunningTime="2026-01-26 13:09:16.194479684 +0000 UTC m=+1533.127847336" Jan 26 13:09:22 crc kubenswrapper[4844]: I0126 13:09:22.415655 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qmcfq" Jan 26 13:09:22 crc kubenswrapper[4844]: I0126 13:09:22.416635 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qmcfq" Jan 26 13:09:22 crc kubenswrapper[4844]: I0126 13:09:22.460781 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qmcfq" Jan 26 13:09:23 crc kubenswrapper[4844]: I0126 13:09:23.255538 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-marketplace-qmcfq" Jan 26 13:09:23 crc kubenswrapper[4844]: I0126 13:09:23.309137 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qmcfq"] Jan 26 13:09:25 crc kubenswrapper[4844]: I0126 13:09:25.219791 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-qmcfq" podUID="a6278c8a-eb85-4602-a53e-4bf48b69696f" containerName="registry-server" containerID="cri-o://9571adbd694c2dafcd8a7bec6cb1009f038fdcfb6a4cb2e51bbcfa8cc11f6654" gracePeriod=2 Jan 26 13:09:26 crc kubenswrapper[4844]: I0126 13:09:26.150852 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qmcfq" Jan 26 13:09:26 crc kubenswrapper[4844]: I0126 13:09:26.179773 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6278c8a-eb85-4602-a53e-4bf48b69696f-utilities\") pod \"a6278c8a-eb85-4602-a53e-4bf48b69696f\" (UID: \"a6278c8a-eb85-4602-a53e-4bf48b69696f\") " Jan 26 13:09:26 crc kubenswrapper[4844]: I0126 13:09:26.179874 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7d7lf\" (UniqueName: \"kubernetes.io/projected/a6278c8a-eb85-4602-a53e-4bf48b69696f-kube-api-access-7d7lf\") pod \"a6278c8a-eb85-4602-a53e-4bf48b69696f\" (UID: \"a6278c8a-eb85-4602-a53e-4bf48b69696f\") " Jan 26 13:09:26 crc kubenswrapper[4844]: I0126 13:09:26.179895 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6278c8a-eb85-4602-a53e-4bf48b69696f-catalog-content\") pod \"a6278c8a-eb85-4602-a53e-4bf48b69696f\" (UID: \"a6278c8a-eb85-4602-a53e-4bf48b69696f\") " Jan 26 13:09:26 crc kubenswrapper[4844]: I0126 13:09:26.180611 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a6278c8a-eb85-4602-a53e-4bf48b69696f-utilities" (OuterVolumeSpecName: "utilities") pod "a6278c8a-eb85-4602-a53e-4bf48b69696f" (UID: "a6278c8a-eb85-4602-a53e-4bf48b69696f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:09:26 crc kubenswrapper[4844]: I0126 13:09:26.185277 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6278c8a-eb85-4602-a53e-4bf48b69696f-kube-api-access-7d7lf" (OuterVolumeSpecName: "kube-api-access-7d7lf") pod "a6278c8a-eb85-4602-a53e-4bf48b69696f" (UID: "a6278c8a-eb85-4602-a53e-4bf48b69696f"). InnerVolumeSpecName "kube-api-access-7d7lf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:09:26 crc kubenswrapper[4844]: I0126 13:09:26.213398 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a6278c8a-eb85-4602-a53e-4bf48b69696f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a6278c8a-eb85-4602-a53e-4bf48b69696f" (UID: "a6278c8a-eb85-4602-a53e-4bf48b69696f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:09:26 crc kubenswrapper[4844]: I0126 13:09:26.227500 4844 generic.go:334] "Generic (PLEG): container finished" podID="a6278c8a-eb85-4602-a53e-4bf48b69696f" containerID="9571adbd694c2dafcd8a7bec6cb1009f038fdcfb6a4cb2e51bbcfa8cc11f6654" exitCode=0 Jan 26 13:09:26 crc kubenswrapper[4844]: I0126 13:09:26.227543 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qmcfq" Jan 26 13:09:26 crc kubenswrapper[4844]: I0126 13:09:26.227552 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qmcfq" event={"ID":"a6278c8a-eb85-4602-a53e-4bf48b69696f","Type":"ContainerDied","Data":"9571adbd694c2dafcd8a7bec6cb1009f038fdcfb6a4cb2e51bbcfa8cc11f6654"} Jan 26 13:09:26 crc kubenswrapper[4844]: I0126 13:09:26.227626 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qmcfq" event={"ID":"a6278c8a-eb85-4602-a53e-4bf48b69696f","Type":"ContainerDied","Data":"75b3762aec7656d36c438fb70bf4aeda52fccccdcdf4f622d1aef8ddbf5c666f"} Jan 26 13:09:26 crc kubenswrapper[4844]: I0126 13:09:26.227650 4844 scope.go:117] "RemoveContainer" containerID="9571adbd694c2dafcd8a7bec6cb1009f038fdcfb6a4cb2e51bbcfa8cc11f6654" Jan 26 13:09:26 crc kubenswrapper[4844]: I0126 13:09:26.247292 4844 scope.go:117] "RemoveContainer" containerID="455d0585a506175180b5ea281b8ef9c38c2f5065e18e0b227d0ea0467d7535de" Jan 26 13:09:26 crc kubenswrapper[4844]: I0126 13:09:26.264365 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qmcfq"] Jan 26 13:09:26 crc kubenswrapper[4844]: I0126 13:09:26.267789 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-qmcfq"] Jan 26 13:09:26 crc kubenswrapper[4844]: I0126 13:09:26.281054 4844 scope.go:117] "RemoveContainer" containerID="54b601bad35cd287900e33ed6624dae736231be9c172d1d70a0e5af1cb9a3571" Jan 26 13:09:26 crc kubenswrapper[4844]: I0126 13:09:26.281768 4844 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6278c8a-eb85-4602-a53e-4bf48b69696f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 13:09:26 crc kubenswrapper[4844]: I0126 13:09:26.281804 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7d7lf\" (UniqueName: \"kubernetes.io/projected/a6278c8a-eb85-4602-a53e-4bf48b69696f-kube-api-access-7d7lf\") on node \"crc\" DevicePath \"\"" Jan 26 13:09:26 crc kubenswrapper[4844]: I0126 13:09:26.281817 4844 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6278c8a-eb85-4602-a53e-4bf48b69696f-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 13:09:26 crc kubenswrapper[4844]: I0126 13:09:26.310805 4844 scope.go:117] "RemoveContainer" containerID="9571adbd694c2dafcd8a7bec6cb1009f038fdcfb6a4cb2e51bbcfa8cc11f6654" Jan 26 13:09:26 crc kubenswrapper[4844]: E0126 13:09:26.312115 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9571adbd694c2dafcd8a7bec6cb1009f038fdcfb6a4cb2e51bbcfa8cc11f6654\": container with ID starting with 9571adbd694c2dafcd8a7bec6cb1009f038fdcfb6a4cb2e51bbcfa8cc11f6654 not found: ID does not exist" containerID="9571adbd694c2dafcd8a7bec6cb1009f038fdcfb6a4cb2e51bbcfa8cc11f6654" Jan 26 13:09:26 crc kubenswrapper[4844]: I0126 13:09:26.312272 4844 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9571adbd694c2dafcd8a7bec6cb1009f038fdcfb6a4cb2e51bbcfa8cc11f6654"} err="failed to get container status \"9571adbd694c2dafcd8a7bec6cb1009f038fdcfb6a4cb2e51bbcfa8cc11f6654\": rpc error: code = NotFound desc = could not find container \"9571adbd694c2dafcd8a7bec6cb1009f038fdcfb6a4cb2e51bbcfa8cc11f6654\": container with ID starting with 9571adbd694c2dafcd8a7bec6cb1009f038fdcfb6a4cb2e51bbcfa8cc11f6654 not found: ID does not exist" Jan 26 13:09:26 crc kubenswrapper[4844]: I0126 13:09:26.312392 4844 scope.go:117] "RemoveContainer" containerID="455d0585a506175180b5ea281b8ef9c38c2f5065e18e0b227d0ea0467d7535de" Jan 26 13:09:26 crc kubenswrapper[4844]: E0126 13:09:26.313301 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"455d0585a506175180b5ea281b8ef9c38c2f5065e18e0b227d0ea0467d7535de\": container with ID starting with 455d0585a506175180b5ea281b8ef9c38c2f5065e18e0b227d0ea0467d7535de not found: ID does not exist" containerID="455d0585a506175180b5ea281b8ef9c38c2f5065e18e0b227d0ea0467d7535de" Jan 26 13:09:26 crc kubenswrapper[4844]: I0126 13:09:26.313359 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"455d0585a506175180b5ea281b8ef9c38c2f5065e18e0b227d0ea0467d7535de"} err="failed to get container status \"455d0585a506175180b5ea281b8ef9c38c2f5065e18e0b227d0ea0467d7535de\": rpc error: code = NotFound desc = could not find container \"455d0585a506175180b5ea281b8ef9c38c2f5065e18e0b227d0ea0467d7535de\": container with ID starting with 455d0585a506175180b5ea281b8ef9c38c2f5065e18e0b227d0ea0467d7535de not found: ID does not exist" Jan 26 13:09:26 crc kubenswrapper[4844]: I0126 13:09:26.313374 4844 scope.go:117] "RemoveContainer" containerID="54b601bad35cd287900e33ed6624dae736231be9c172d1d70a0e5af1cb9a3571" Jan 26 13:09:26 crc kubenswrapper[4844]: E0126 13:09:26.313790 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54b601bad35cd287900e33ed6624dae736231be9c172d1d70a0e5af1cb9a3571\": container with ID starting with 54b601bad35cd287900e33ed6624dae736231be9c172d1d70a0e5af1cb9a3571 not found: ID does not exist" containerID="54b601bad35cd287900e33ed6624dae736231be9c172d1d70a0e5af1cb9a3571" Jan 26 13:09:26 crc kubenswrapper[4844]: I0126 13:09:26.313809 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54b601bad35cd287900e33ed6624dae736231be9c172d1d70a0e5af1cb9a3571"} err="failed to get container status \"54b601bad35cd287900e33ed6624dae736231be9c172d1d70a0e5af1cb9a3571\": rpc error: code = NotFound desc = could not find container \"54b601bad35cd287900e33ed6624dae736231be9c172d1d70a0e5af1cb9a3571\": container with ID starting with 54b601bad35cd287900e33ed6624dae736231be9c172d1d70a0e5af1cb9a3571 not found: ID does not exist" Jan 26 13:09:27 crc kubenswrapper[4844]: I0126 13:09:27.322669 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6278c8a-eb85-4602-a53e-4bf48b69696f" path="/var/lib/kubelet/pods/a6278c8a-eb85-4602-a53e-4bf48b69696f/volumes" Jan 26 13:09:28 crc kubenswrapper[4844]: I0126 13:09:28.387124 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-vhvzj"] Jan 26 13:09:28 crc kubenswrapper[4844]: E0126 13:09:28.387645 4844 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="a6278c8a-eb85-4602-a53e-4bf48b69696f" containerName="extract-content" Jan 26 13:09:28 crc kubenswrapper[4844]: I0126 13:09:28.387663 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6278c8a-eb85-4602-a53e-4bf48b69696f" containerName="extract-content" Jan 26 13:09:28 crc kubenswrapper[4844]: E0126 13:09:28.387682 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6278c8a-eb85-4602-a53e-4bf48b69696f" containerName="registry-server" Jan 26 13:09:28 crc kubenswrapper[4844]: I0126 13:09:28.387692 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6278c8a-eb85-4602-a53e-4bf48b69696f" containerName="registry-server" Jan 26 13:09:28 crc kubenswrapper[4844]: E0126 13:09:28.387707 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6278c8a-eb85-4602-a53e-4bf48b69696f" containerName="extract-utilities" Jan 26 13:09:28 crc kubenswrapper[4844]: I0126 13:09:28.387719 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6278c8a-eb85-4602-a53e-4bf48b69696f" containerName="extract-utilities" Jan 26 13:09:28 crc kubenswrapper[4844]: I0126 13:09:28.387862 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6278c8a-eb85-4602-a53e-4bf48b69696f" containerName="registry-server" Jan 26 13:09:28 crc kubenswrapper[4844]: I0126 13:09:28.388289 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-vhvzj" Jan 26 13:09:28 crc kubenswrapper[4844]: I0126 13:09:28.393049 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-7xbzs"] Jan 26 13:09:28 crc kubenswrapper[4844]: I0126 13:09:28.394054 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-7xbzs" Jan 26 13:09:28 crc kubenswrapper[4844]: I0126 13:09:28.395268 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 26 13:09:28 crc kubenswrapper[4844]: I0126 13:09:28.395988 4844 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-qjztp" Jan 26 13:09:28 crc kubenswrapper[4844]: I0126 13:09:28.396370 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 26 13:09:28 crc kubenswrapper[4844]: I0126 13:09:28.396621 4844 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-z2vhg" Jan 26 13:09:28 crc kubenswrapper[4844]: I0126 13:09:28.401580 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-dv29d"] Jan 26 13:09:28 crc kubenswrapper[4844]: I0126 13:09:28.402407 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-dv29d" Jan 26 13:09:28 crc kubenswrapper[4844]: I0126 13:09:28.404339 4844 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-vq4lb" Jan 26 13:09:28 crc kubenswrapper[4844]: I0126 13:09:28.407169 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgwpq\" (UniqueName: \"kubernetes.io/projected/65d6aa35-f205-43c2-ad68-0bfa252093be-kube-api-access-bgwpq\") pod \"cert-manager-858654f9db-vhvzj\" (UID: \"65d6aa35-f205-43c2-ad68-0bfa252093be\") " pod="cert-manager/cert-manager-858654f9db-vhvzj" Jan 26 13:09:28 crc kubenswrapper[4844]: I0126 13:09:28.407273 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtqj4\" (UniqueName: \"kubernetes.io/projected/97f29a7d-977c-41c6-8756-d6e5d6a35875-kube-api-access-dtqj4\") pod \"cert-manager-webhook-687f57d79b-7xbzs\" (UID: \"97f29a7d-977c-41c6-8756-d6e5d6a35875\") " pod="cert-manager/cert-manager-webhook-687f57d79b-7xbzs" Jan 26 13:09:28 crc kubenswrapper[4844]: I0126 13:09:28.409947 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-vhvzj"] Jan 26 13:09:28 crc kubenswrapper[4844]: I0126 13:09:28.418755 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-dv29d"] Jan 26 13:09:28 crc kubenswrapper[4844]: I0126 13:09:28.426126 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-7xbzs"] Jan 26 13:09:28 crc kubenswrapper[4844]: I0126 13:09:28.508251 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtqj4\" (UniqueName: \"kubernetes.io/projected/97f29a7d-977c-41c6-8756-d6e5d6a35875-kube-api-access-dtqj4\") pod \"cert-manager-webhook-687f57d79b-7xbzs\" (UID: \"97f29a7d-977c-41c6-8756-d6e5d6a35875\") " pod="cert-manager/cert-manager-webhook-687f57d79b-7xbzs" Jan 26 13:09:28 crc kubenswrapper[4844]: I0126 13:09:28.508334 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lznz4\" (UniqueName: \"kubernetes.io/projected/a25263f7-0e4e-4253-abe6-20b223dc600e-kube-api-access-lznz4\") pod \"cert-manager-cainjector-cf98fcc89-dv29d\" (UID: \"a25263f7-0e4e-4253-abe6-20b223dc600e\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-dv29d" Jan 26 13:09:28 crc kubenswrapper[4844]: I0126 13:09:28.508360 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgwpq\" (UniqueName: \"kubernetes.io/projected/65d6aa35-f205-43c2-ad68-0bfa252093be-kube-api-access-bgwpq\") pod \"cert-manager-858654f9db-vhvzj\" (UID: \"65d6aa35-f205-43c2-ad68-0bfa252093be\") " pod="cert-manager/cert-manager-858654f9db-vhvzj" Jan 26 13:09:28 crc kubenswrapper[4844]: I0126 13:09:28.524833 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgwpq\" (UniqueName: \"kubernetes.io/projected/65d6aa35-f205-43c2-ad68-0bfa252093be-kube-api-access-bgwpq\") pod \"cert-manager-858654f9db-vhvzj\" (UID: \"65d6aa35-f205-43c2-ad68-0bfa252093be\") " pod="cert-manager/cert-manager-858654f9db-vhvzj" Jan 26 13:09:28 crc kubenswrapper[4844]: I0126 13:09:28.532110 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtqj4\" (UniqueName: 
\"kubernetes.io/projected/97f29a7d-977c-41c6-8756-d6e5d6a35875-kube-api-access-dtqj4\") pod \"cert-manager-webhook-687f57d79b-7xbzs\" (UID: \"97f29a7d-977c-41c6-8756-d6e5d6a35875\") " pod="cert-manager/cert-manager-webhook-687f57d79b-7xbzs" Jan 26 13:09:28 crc kubenswrapper[4844]: I0126 13:09:28.608935 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lznz4\" (UniqueName: \"kubernetes.io/projected/a25263f7-0e4e-4253-abe6-20b223dc600e-kube-api-access-lznz4\") pod \"cert-manager-cainjector-cf98fcc89-dv29d\" (UID: \"a25263f7-0e4e-4253-abe6-20b223dc600e\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-dv29d" Jan 26 13:09:28 crc kubenswrapper[4844]: I0126 13:09:28.637035 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lznz4\" (UniqueName: \"kubernetes.io/projected/a25263f7-0e4e-4253-abe6-20b223dc600e-kube-api-access-lznz4\") pod \"cert-manager-cainjector-cf98fcc89-dv29d\" (UID: \"a25263f7-0e4e-4253-abe6-20b223dc600e\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-dv29d" Jan 26 13:09:28 crc kubenswrapper[4844]: I0126 13:09:28.705863 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-vhvzj" Jan 26 13:09:28 crc kubenswrapper[4844]: I0126 13:09:28.712647 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-7xbzs" Jan 26 13:09:28 crc kubenswrapper[4844]: I0126 13:09:28.720420 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-dv29d" Jan 26 13:09:28 crc kubenswrapper[4844]: I0126 13:09:28.972777 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-dv29d"] Jan 26 13:09:29 crc kubenswrapper[4844]: I0126 13:09:29.234808 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-7xbzs"] Jan 26 13:09:29 crc kubenswrapper[4844]: I0126 13:09:29.242382 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-vhvzj"] Jan 26 13:09:29 crc kubenswrapper[4844]: I0126 13:09:29.252058 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-dv29d" event={"ID":"a25263f7-0e4e-4253-abe6-20b223dc600e","Type":"ContainerStarted","Data":"04be2d750881c1a731aa76583159c06bc5cf5c5900f98bf561473035e5076e1c"} Jan 26 13:09:29 crc kubenswrapper[4844]: W0126 13:09:29.252986 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod65d6aa35_f205_43c2_ad68_0bfa252093be.slice/crio-0826f2dcc61336ba73123d5482492b3655aa9a15bc6224eb07577165135cfb60 WatchSource:0}: Error finding container 0826f2dcc61336ba73123d5482492b3655aa9a15bc6224eb07577165135cfb60: Status 404 returned error can't find the container with id 0826f2dcc61336ba73123d5482492b3655aa9a15bc6224eb07577165135cfb60 Jan 26 13:09:29 crc kubenswrapper[4844]: I0126 13:09:29.253487 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-7xbzs" event={"ID":"97f29a7d-977c-41c6-8756-d6e5d6a35875","Type":"ContainerStarted","Data":"43a46d04d599425a942dc3d506501a404f59eca405cdc8303e033bf28733b7ca"} Jan 26 13:09:30 crc kubenswrapper[4844]: I0126 13:09:30.259854 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="cert-manager/cert-manager-858654f9db-vhvzj" event={"ID":"65d6aa35-f205-43c2-ad68-0bfa252093be","Type":"ContainerStarted","Data":"0826f2dcc61336ba73123d5482492b3655aa9a15bc6224eb07577165135cfb60"} Jan 26 13:09:33 crc kubenswrapper[4844]: I0126 13:09:33.275013 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-dv29d" event={"ID":"a25263f7-0e4e-4253-abe6-20b223dc600e","Type":"ContainerStarted","Data":"7a5a09eeb8d3d71352503b4b3b581f937027b6ef151358938cc05efced1e8551"} Jan 26 13:09:33 crc kubenswrapper[4844]: I0126 13:09:33.292177 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-dv29d" podStartSLOduration=1.5808180410000001 podStartE2EDuration="5.292159533s" podCreationTimestamp="2026-01-26 13:09:28 +0000 UTC" firstStartedPulling="2026-01-26 13:09:28.981422266 +0000 UTC m=+1545.914789888" lastFinishedPulling="2026-01-26 13:09:32.692763768 +0000 UTC m=+1549.626131380" observedRunningTime="2026-01-26 13:09:33.291671141 +0000 UTC m=+1550.225038773" watchObservedRunningTime="2026-01-26 13:09:33.292159533 +0000 UTC m=+1550.225527155" Jan 26 13:09:35 crc kubenswrapper[4844]: I0126 13:09:35.293946 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-7xbzs" event={"ID":"97f29a7d-977c-41c6-8756-d6e5d6a35875","Type":"ContainerStarted","Data":"c0ae4953858ea27d02e8b20d89e2b6c2e929c419de8a8163bd5493c09f856f60"} Jan 26 13:09:35 crc kubenswrapper[4844]: I0126 13:09:35.294413 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-7xbzs" Jan 26 13:09:35 crc kubenswrapper[4844]: I0126 13:09:35.297374 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-vhvzj" event={"ID":"65d6aa35-f205-43c2-ad68-0bfa252093be","Type":"ContainerStarted","Data":"f732c5f1b4026bb267306be959991f2fae015b3ebc5d4ceb617cb3c7544a9393"} Jan 26 13:09:35 crc kubenswrapper[4844]: I0126 13:09:35.331452 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-7xbzs" podStartSLOduration=2.231785792 podStartE2EDuration="7.331424456s" podCreationTimestamp="2026-01-26 13:09:28 +0000 UTC" firstStartedPulling="2026-01-26 13:09:29.246946502 +0000 UTC m=+1546.180314114" lastFinishedPulling="2026-01-26 13:09:34.346585116 +0000 UTC m=+1551.279952778" observedRunningTime="2026-01-26 13:09:35.32183974 +0000 UTC m=+1552.255207392" watchObservedRunningTime="2026-01-26 13:09:35.331424456 +0000 UTC m=+1552.264792108" Jan 26 13:09:35 crc kubenswrapper[4844]: I0126 13:09:35.344695 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-vhvzj" podStartSLOduration=2.25287359 podStartE2EDuration="7.34467465s" podCreationTimestamp="2026-01-26 13:09:28 +0000 UTC" firstStartedPulling="2026-01-26 13:09:29.255069323 +0000 UTC m=+1546.188436935" lastFinishedPulling="2026-01-26 13:09:34.346870343 +0000 UTC m=+1551.280237995" observedRunningTime="2026-01-26 13:09:35.341044634 +0000 UTC m=+1552.274412256" watchObservedRunningTime="2026-01-26 13:09:35.34467465 +0000 UTC m=+1552.278042272" Jan 26 13:09:36 crc kubenswrapper[4844]: I0126 13:09:36.365363 4844 patch_prober.go:28] interesting pod/machine-config-daemon-j7r9j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 13:09:36 crc kubenswrapper[4844]: I0126 13:09:36.365801 4844 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 13:09:36 crc kubenswrapper[4844]: I0126 13:09:36.365867 4844 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" Jan 26 13:09:36 crc kubenswrapper[4844]: I0126 13:09:36.366836 4844 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8bd0e29f4f3396a7e270924b961a9b78ffe005995a8558400b22ba50617d8a7e"} pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 13:09:36 crc kubenswrapper[4844]: I0126 13:09:36.366951 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" containerName="machine-config-daemon" containerID="cri-o://8bd0e29f4f3396a7e270924b961a9b78ffe005995a8558400b22ba50617d8a7e" gracePeriod=600 Jan 26 13:09:36 crc kubenswrapper[4844]: E0126 13:09:36.518389 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 13:09:37 crc kubenswrapper[4844]: I0126 13:09:37.317896 4844 generic.go:334] "Generic (PLEG): container finished" podID="e3602fc7-397b-4d73-ab0c-45acc047397b" containerID="8bd0e29f4f3396a7e270924b961a9b78ffe005995a8558400b22ba50617d8a7e" exitCode=0 Jan 26 13:09:37 crc kubenswrapper[4844]: I0126 13:09:37.326683 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" event={"ID":"e3602fc7-397b-4d73-ab0c-45acc047397b","Type":"ContainerDied","Data":"8bd0e29f4f3396a7e270924b961a9b78ffe005995a8558400b22ba50617d8a7e"} Jan 26 13:09:37 crc kubenswrapper[4844]: I0126 13:09:37.326755 4844 scope.go:117] "RemoveContainer" containerID="40240f8fe0586673c0ae6562509c601ebaa4bbf7529663c2ad7f19e2cc1a7109" Jan 26 13:09:37 crc kubenswrapper[4844]: I0126 13:09:37.327250 4844 scope.go:117] "RemoveContainer" containerID="8bd0e29f4f3396a7e270924b961a9b78ffe005995a8558400b22ba50617d8a7e" Jan 26 13:09:37 crc kubenswrapper[4844]: E0126 13:09:37.327433 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.237556 4844 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-rlvx4"] Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.238112 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" podUID="348a2956-fe61-43b9-858f-ab9c97a2985b" containerName="ovn-controller" containerID="cri-o://03f4ffe2ba632bdfb8c45133ef267a29fa660e3aa85dfd99934ded60d32772f7" gracePeriod=30 Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.238211 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" podUID="348a2956-fe61-43b9-858f-ab9c97a2985b" containerName="nbdb" containerID="cri-o://de371086349f06d5762291b0be4ba366d69376f96d55c2c781c414d9acaff745" gracePeriod=30 Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.238258 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" podUID="348a2956-fe61-43b9-858f-ab9c97a2985b" containerName="northd" containerID="cri-o://d2cfd5586a3073d55abd3f41fb92679c4f86ea8caeea2d9a8ca42515277750a9" gracePeriod=30 Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.238324 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" podUID="348a2956-fe61-43b9-858f-ab9c97a2985b" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://64a97ef8aa10641683ab96943f4c1a54452ee34b28c854dd7902fc075a70e7f2" gracePeriod=30 Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.238378 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" podUID="348a2956-fe61-43b9-858f-ab9c97a2985b" containerName="kube-rbac-proxy-node" containerID="cri-o://7d7af767e8f476993661bb886e421e7dd2fbd11c07865d6e56a8595425fe4265" gracePeriod=30 Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.238415 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" podUID="348a2956-fe61-43b9-858f-ab9c97a2985b" containerName="sbdb" containerID="cri-o://dbe92fbc9ce1d23484a9dca7eddb878b59da1fbe669098a82a798d41ee52e61d" gracePeriod=30 Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.238438 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" podUID="348a2956-fe61-43b9-858f-ab9c97a2985b" containerName="ovn-acl-logging" containerID="cri-o://9a16b2c8da5ad6d42bd5ce77ca31794858e04a05060c5bb0538dd099b1d5dd6a" gracePeriod=30 Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.305812 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" podUID="348a2956-fe61-43b9-858f-ab9c97a2985b" containerName="ovnkube-controller" containerID="cri-o://ac85315b0d74ee35d1c9a356648480e02f7d36838cdf0d10e276bb2e15c6205e" gracePeriod=30 Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.602162 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rlvx4_348a2956-fe61-43b9-858f-ab9c97a2985b/ovnkube-controller/3.log" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.604778 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rlvx4_348a2956-fe61-43b9-858f-ab9c97a2985b/ovn-acl-logging/0.log" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.605299 4844 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rlvx4_348a2956-fe61-43b9-858f-ab9c97a2985b/ovn-controller/0.log" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.605737 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.658070 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/348a2956-fe61-43b9-858f-ab9c97a2985b-systemd-units\") pod \"348a2956-fe61-43b9-858f-ab9c97a2985b\" (UID: \"348a2956-fe61-43b9-858f-ab9c97a2985b\") " Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.658455 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/348a2956-fe61-43b9-858f-ab9c97a2985b-ovn-node-metrics-cert\") pod \"348a2956-fe61-43b9-858f-ab9c97a2985b\" (UID: \"348a2956-fe61-43b9-858f-ab9c97a2985b\") " Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.659849 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/348a2956-fe61-43b9-858f-ab9c97a2985b-host-run-ovn-kubernetes\") pod \"348a2956-fe61-43b9-858f-ab9c97a2985b\" (UID: \"348a2956-fe61-43b9-858f-ab9c97a2985b\") " Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.660046 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/348a2956-fe61-43b9-858f-ab9c97a2985b-log-socket\") pod \"348a2956-fe61-43b9-858f-ab9c97a2985b\" (UID: \"348a2956-fe61-43b9-858f-ab9c97a2985b\") " Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.660360 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/348a2956-fe61-43b9-858f-ab9c97a2985b-node-log\") pod \"348a2956-fe61-43b9-858f-ab9c97a2985b\" (UID: \"348a2956-fe61-43b9-858f-ab9c97a2985b\") " Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.660515 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/348a2956-fe61-43b9-858f-ab9c97a2985b-env-overrides\") pod \"348a2956-fe61-43b9-858f-ab9c97a2985b\" (UID: \"348a2956-fe61-43b9-858f-ab9c97a2985b\") " Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.660766 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/348a2956-fe61-43b9-858f-ab9c97a2985b-ovnkube-config\") pod \"348a2956-fe61-43b9-858f-ab9c97a2985b\" (UID: \"348a2956-fe61-43b9-858f-ab9c97a2985b\") " Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.660921 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/348a2956-fe61-43b9-858f-ab9c97a2985b-host-cni-netd\") pod \"348a2956-fe61-43b9-858f-ab9c97a2985b\" (UID: \"348a2956-fe61-43b9-858f-ab9c97a2985b\") " Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.661056 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/348a2956-fe61-43b9-858f-ab9c97a2985b-var-lib-openvswitch\") pod \"348a2956-fe61-43b9-858f-ab9c97a2985b\" (UID: \"348a2956-fe61-43b9-858f-ab9c97a2985b\") 
" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.661232 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cvtf5\" (UniqueName: \"kubernetes.io/projected/348a2956-fe61-43b9-858f-ab9c97a2985b-kube-api-access-cvtf5\") pod \"348a2956-fe61-43b9-858f-ab9c97a2985b\" (UID: \"348a2956-fe61-43b9-858f-ab9c97a2985b\") " Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.661371 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/348a2956-fe61-43b9-858f-ab9c97a2985b-host-slash\") pod \"348a2956-fe61-43b9-858f-ab9c97a2985b\" (UID: \"348a2956-fe61-43b9-858f-ab9c97a2985b\") " Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.661505 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/348a2956-fe61-43b9-858f-ab9c97a2985b-run-systemd\") pod \"348a2956-fe61-43b9-858f-ab9c97a2985b\" (UID: \"348a2956-fe61-43b9-858f-ab9c97a2985b\") " Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.661679 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/348a2956-fe61-43b9-858f-ab9c97a2985b-etc-openvswitch\") pod \"348a2956-fe61-43b9-858f-ab9c97a2985b\" (UID: \"348a2956-fe61-43b9-858f-ab9c97a2985b\") " Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.658224 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/348a2956-fe61-43b9-858f-ab9c97a2985b-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "348a2956-fe61-43b9-858f-ab9c97a2985b" (UID: "348a2956-fe61-43b9-858f-ab9c97a2985b"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.660467 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/348a2956-fe61-43b9-858f-ab9c97a2985b-log-socket" (OuterVolumeSpecName: "log-socket") pod "348a2956-fe61-43b9-858f-ab9c97a2985b" (UID: "348a2956-fe61-43b9-858f-ab9c97a2985b"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.660521 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/348a2956-fe61-43b9-858f-ab9c97a2985b-node-log" (OuterVolumeSpecName: "node-log") pod "348a2956-fe61-43b9-858f-ab9c97a2985b" (UID: "348a2956-fe61-43b9-858f-ab9c97a2985b"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.661230 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/348a2956-fe61-43b9-858f-ab9c97a2985b-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "348a2956-fe61-43b9-858f-ab9c97a2985b" (UID: "348a2956-fe61-43b9-858f-ab9c97a2985b"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.661859 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/348a2956-fe61-43b9-858f-ab9c97a2985b-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "348a2956-fe61-43b9-858f-ab9c97a2985b" (UID: "348a2956-fe61-43b9-858f-ab9c97a2985b"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.661886 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/348a2956-fe61-43b9-858f-ab9c97a2985b-ovnkube-script-lib\") pod \"348a2956-fe61-43b9-858f-ab9c97a2985b\" (UID: \"348a2956-fe61-43b9-858f-ab9c97a2985b\") " Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.662394 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/348a2956-fe61-43b9-858f-ab9c97a2985b-run-openvswitch\") pod \"348a2956-fe61-43b9-858f-ab9c97a2985b\" (UID: \"348a2956-fe61-43b9-858f-ab9c97a2985b\") " Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.662537 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/348a2956-fe61-43b9-858f-ab9c97a2985b-host-cni-bin\") pod \"348a2956-fe61-43b9-858f-ab9c97a2985b\" (UID: \"348a2956-fe61-43b9-858f-ab9c97a2985b\") " Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.662786 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/348a2956-fe61-43b9-858f-ab9c97a2985b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"348a2956-fe61-43b9-858f-ab9c97a2985b\" (UID: \"348a2956-fe61-43b9-858f-ab9c97a2985b\") " Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.662960 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/348a2956-fe61-43b9-858f-ab9c97a2985b-run-ovn\") pod \"348a2956-fe61-43b9-858f-ab9c97a2985b\" (UID: \"348a2956-fe61-43b9-858f-ab9c97a2985b\") " Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.663159 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/348a2956-fe61-43b9-858f-ab9c97a2985b-host-run-netns\") pod \"348a2956-fe61-43b9-858f-ab9c97a2985b\" (UID: \"348a2956-fe61-43b9-858f-ab9c97a2985b\") " Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.663415 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/348a2956-fe61-43b9-858f-ab9c97a2985b-host-kubelet\") pod \"348a2956-fe61-43b9-858f-ab9c97a2985b\" (UID: \"348a2956-fe61-43b9-858f-ab9c97a2985b\") " Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.664194 4844 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/348a2956-fe61-43b9-858f-ab9c97a2985b-log-socket\") on node \"crc\" DevicePath \"\"" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.660024 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/348a2956-fe61-43b9-858f-ab9c97a2985b-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "348a2956-fe61-43b9-858f-ab9c97a2985b" (UID: "348a2956-fe61-43b9-858f-ab9c97a2985b"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.662771 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/348a2956-fe61-43b9-858f-ab9c97a2985b-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "348a2956-fe61-43b9-858f-ab9c97a2985b" (UID: "348a2956-fe61-43b9-858f-ab9c97a2985b"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.663466 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/348a2956-fe61-43b9-858f-ab9c97a2985b-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "348a2956-fe61-43b9-858f-ab9c97a2985b" (UID: "348a2956-fe61-43b9-858f-ab9c97a2985b"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.663503 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/348a2956-fe61-43b9-858f-ab9c97a2985b-host-slash" (OuterVolumeSpecName: "host-slash") pod "348a2956-fe61-43b9-858f-ab9c97a2985b" (UID: "348a2956-fe61-43b9-858f-ab9c97a2985b"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.664369 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/348a2956-fe61-43b9-858f-ab9c97a2985b-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "348a2956-fe61-43b9-858f-ab9c97a2985b" (UID: "348a2956-fe61-43b9-858f-ab9c97a2985b"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.664375 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/348a2956-fe61-43b9-858f-ab9c97a2985b-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "348a2956-fe61-43b9-858f-ab9c97a2985b" (UID: "348a2956-fe61-43b9-858f-ab9c97a2985b"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.664398 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/348a2956-fe61-43b9-858f-ab9c97a2985b-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "348a2956-fe61-43b9-858f-ab9c97a2985b" (UID: "348a2956-fe61-43b9-858f-ab9c97a2985b"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.664422 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/348a2956-fe61-43b9-858f-ab9c97a2985b-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "348a2956-fe61-43b9-858f-ab9c97a2985b" (UID: "348a2956-fe61-43b9-858f-ab9c97a2985b"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.664448 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/348a2956-fe61-43b9-858f-ab9c97a2985b-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "348a2956-fe61-43b9-858f-ab9c97a2985b" (UID: "348a2956-fe61-43b9-858f-ab9c97a2985b"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.664438 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/348a2956-fe61-43b9-858f-ab9c97a2985b-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "348a2956-fe61-43b9-858f-ab9c97a2985b" (UID: "348a2956-fe61-43b9-858f-ab9c97a2985b"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.664507 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/348a2956-fe61-43b9-858f-ab9c97a2985b-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "348a2956-fe61-43b9-858f-ab9c97a2985b" (UID: "348a2956-fe61-43b9-858f-ab9c97a2985b"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.664501 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/348a2956-fe61-43b9-858f-ab9c97a2985b-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "348a2956-fe61-43b9-858f-ab9c97a2985b" (UID: "348a2956-fe61-43b9-858f-ab9c97a2985b"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.665683 4844 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/348a2956-fe61-43b9-858f-ab9c97a2985b-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.666884 4844 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/348a2956-fe61-43b9-858f-ab9c97a2985b-node-log\") on node \"crc\" DevicePath \"\"" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.667024 4844 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/348a2956-fe61-43b9-858f-ab9c97a2985b-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.667222 4844 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/348a2956-fe61-43b9-858f-ab9c97a2985b-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.670820 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/348a2956-fe61-43b9-858f-ab9c97a2985b-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "348a2956-fe61-43b9-858f-ab9c97a2985b" (UID: "348a2956-fe61-43b9-858f-ab9c97a2985b"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.672557 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/348a2956-fe61-43b9-858f-ab9c97a2985b-kube-api-access-cvtf5" (OuterVolumeSpecName: "kube-api-access-cvtf5") pod "348a2956-fe61-43b9-858f-ab9c97a2985b" (UID: "348a2956-fe61-43b9-858f-ab9c97a2985b"). InnerVolumeSpecName "kube-api-access-cvtf5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.688557 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-m5knn"] Jan 26 13:09:38 crc kubenswrapper[4844]: E0126 13:09:38.688921 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="348a2956-fe61-43b9-858f-ab9c97a2985b" containerName="ovnkube-controller" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.688937 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="348a2956-fe61-43b9-858f-ab9c97a2985b" containerName="ovnkube-controller" Jan 26 13:09:38 crc kubenswrapper[4844]: E0126 13:09:38.688951 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="348a2956-fe61-43b9-858f-ab9c97a2985b" containerName="ovnkube-controller" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.688959 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="348a2956-fe61-43b9-858f-ab9c97a2985b" containerName="ovnkube-controller" Jan 26 13:09:38 crc kubenswrapper[4844]: E0126 13:09:38.688970 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="348a2956-fe61-43b9-858f-ab9c97a2985b" containerName="sbdb" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.688979 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="348a2956-fe61-43b9-858f-ab9c97a2985b" containerName="sbdb" Jan 26 13:09:38 crc kubenswrapper[4844]: E0126 13:09:38.688987 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="348a2956-fe61-43b9-858f-ab9c97a2985b" containerName="kube-rbac-proxy-ovn-metrics" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.688995 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="348a2956-fe61-43b9-858f-ab9c97a2985b" containerName="kube-rbac-proxy-ovn-metrics" Jan 26 13:09:38 crc kubenswrapper[4844]: E0126 13:09:38.689007 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="348a2956-fe61-43b9-858f-ab9c97a2985b" containerName="ovnkube-controller" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.689014 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="348a2956-fe61-43b9-858f-ab9c97a2985b" containerName="ovnkube-controller" Jan 26 13:09:38 crc kubenswrapper[4844]: E0126 13:09:38.689029 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="348a2956-fe61-43b9-858f-ab9c97a2985b" containerName="kubecfg-setup" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.689037 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="348a2956-fe61-43b9-858f-ab9c97a2985b" containerName="kubecfg-setup" Jan 26 13:09:38 crc kubenswrapper[4844]: E0126 13:09:38.689047 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="348a2956-fe61-43b9-858f-ab9c97a2985b" containerName="ovn-controller" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.689055 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="348a2956-fe61-43b9-858f-ab9c97a2985b" containerName="ovn-controller" Jan 26 13:09:38 crc kubenswrapper[4844]: E0126 13:09:38.689066 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="348a2956-fe61-43b9-858f-ab9c97a2985b" containerName="ovn-acl-logging" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.689074 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="348a2956-fe61-43b9-858f-ab9c97a2985b" containerName="ovn-acl-logging" Jan 26 13:09:38 crc kubenswrapper[4844]: E0126 13:09:38.689086 4844 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="348a2956-fe61-43b9-858f-ab9c97a2985b" containerName="nbdb" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.689094 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="348a2956-fe61-43b9-858f-ab9c97a2985b" containerName="nbdb" Jan 26 13:09:38 crc kubenswrapper[4844]: E0126 13:09:38.689107 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="348a2956-fe61-43b9-858f-ab9c97a2985b" containerName="kube-rbac-proxy-node" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.689115 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="348a2956-fe61-43b9-858f-ab9c97a2985b" containerName="kube-rbac-proxy-node" Jan 26 13:09:38 crc kubenswrapper[4844]: E0126 13:09:38.689126 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="348a2956-fe61-43b9-858f-ab9c97a2985b" containerName="northd" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.689133 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="348a2956-fe61-43b9-858f-ab9c97a2985b" containerName="northd" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.689232 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="348a2956-fe61-43b9-858f-ab9c97a2985b" containerName="northd" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.689247 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="348a2956-fe61-43b9-858f-ab9c97a2985b" containerName="ovnkube-controller" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.689257 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="348a2956-fe61-43b9-858f-ab9c97a2985b" containerName="ovn-acl-logging" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.689265 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="348a2956-fe61-43b9-858f-ab9c97a2985b" containerName="ovn-controller" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.689276 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="348a2956-fe61-43b9-858f-ab9c97a2985b" containerName="kube-rbac-proxy-ovn-metrics" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.689284 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="348a2956-fe61-43b9-858f-ab9c97a2985b" containerName="sbdb" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.689292 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="348a2956-fe61-43b9-858f-ab9c97a2985b" containerName="ovnkube-controller" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.689302 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="348a2956-fe61-43b9-858f-ab9c97a2985b" containerName="ovnkube-controller" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.689311 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="348a2956-fe61-43b9-858f-ab9c97a2985b" containerName="kube-rbac-proxy-node" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.689321 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="348a2956-fe61-43b9-858f-ab9c97a2985b" containerName="ovnkube-controller" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.689331 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="348a2956-fe61-43b9-858f-ab9c97a2985b" containerName="nbdb" Jan 26 13:09:38 crc kubenswrapper[4844]: E0126 13:09:38.689429 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="348a2956-fe61-43b9-858f-ab9c97a2985b" containerName="ovnkube-controller" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.689439 4844 
state_mem.go:107] "Deleted CPUSet assignment" podUID="348a2956-fe61-43b9-858f-ab9c97a2985b" containerName="ovnkube-controller" Jan 26 13:09:38 crc kubenswrapper[4844]: E0126 13:09:38.689450 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="348a2956-fe61-43b9-858f-ab9c97a2985b" containerName="ovnkube-controller" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.689458 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="348a2956-fe61-43b9-858f-ab9c97a2985b" containerName="ovnkube-controller" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.689572 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="348a2956-fe61-43b9-858f-ab9c97a2985b" containerName="ovnkube-controller" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.689989 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/348a2956-fe61-43b9-858f-ab9c97a2985b-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "348a2956-fe61-43b9-858f-ab9c97a2985b" (UID: "348a2956-fe61-43b9-858f-ab9c97a2985b"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.691399 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-m5knn" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.767999 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8f86e600-c569-494a-ab11-7b624bf75257-host-cni-bin\") pod \"ovnkube-node-m5knn\" (UID: \"8f86e600-c569-494a-ab11-7b624bf75257\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5knn" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.768123 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8f86e600-c569-494a-ab11-7b624bf75257-host-kubelet\") pod \"ovnkube-node-m5knn\" (UID: \"8f86e600-c569-494a-ab11-7b624bf75257\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5knn" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.768159 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8f86e600-c569-494a-ab11-7b624bf75257-run-openvswitch\") pod \"ovnkube-node-m5knn\" (UID: \"8f86e600-c569-494a-ab11-7b624bf75257\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5knn" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.768201 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8f86e600-c569-494a-ab11-7b624bf75257-host-slash\") pod \"ovnkube-node-m5knn\" (UID: \"8f86e600-c569-494a-ab11-7b624bf75257\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5knn" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.768232 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8f86e600-c569-494a-ab11-7b624bf75257-var-lib-openvswitch\") pod \"ovnkube-node-m5knn\" (UID: \"8f86e600-c569-494a-ab11-7b624bf75257\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5knn" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.768268 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" 
(UniqueName: \"kubernetes.io/host-path/8f86e600-c569-494a-ab11-7b624bf75257-log-socket\") pod \"ovnkube-node-m5knn\" (UID: \"8f86e600-c569-494a-ab11-7b624bf75257\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5knn" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.768298 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckjrk\" (UniqueName: \"kubernetes.io/projected/8f86e600-c569-494a-ab11-7b624bf75257-kube-api-access-ckjrk\") pod \"ovnkube-node-m5knn\" (UID: \"8f86e600-c569-494a-ab11-7b624bf75257\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5knn" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.768388 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8f86e600-c569-494a-ab11-7b624bf75257-host-run-netns\") pod \"ovnkube-node-m5knn\" (UID: \"8f86e600-c569-494a-ab11-7b624bf75257\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5knn" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.768444 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8f86e600-c569-494a-ab11-7b624bf75257-ovnkube-script-lib\") pod \"ovnkube-node-m5knn\" (UID: \"8f86e600-c569-494a-ab11-7b624bf75257\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5knn" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.768511 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8f86e600-c569-494a-ab11-7b624bf75257-systemd-units\") pod \"ovnkube-node-m5knn\" (UID: \"8f86e600-c569-494a-ab11-7b624bf75257\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5knn" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.768536 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8f86e600-c569-494a-ab11-7b624bf75257-host-run-ovn-kubernetes\") pod \"ovnkube-node-m5knn\" (UID: \"8f86e600-c569-494a-ab11-7b624bf75257\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5knn" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.768559 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8f86e600-c569-494a-ab11-7b624bf75257-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-m5knn\" (UID: \"8f86e600-c569-494a-ab11-7b624bf75257\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5knn" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.768586 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8f86e600-c569-494a-ab11-7b624bf75257-node-log\") pod \"ovnkube-node-m5knn\" (UID: \"8f86e600-c569-494a-ab11-7b624bf75257\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5knn" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.768954 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8f86e600-c569-494a-ab11-7b624bf75257-ovn-node-metrics-cert\") pod \"ovnkube-node-m5knn\" (UID: \"8f86e600-c569-494a-ab11-7b624bf75257\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5knn" Jan 26 13:09:38 crc 
kubenswrapper[4844]: I0126 13:09:38.769048 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8f86e600-c569-494a-ab11-7b624bf75257-etc-openvswitch\") pod \"ovnkube-node-m5knn\" (UID: \"8f86e600-c569-494a-ab11-7b624bf75257\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5knn" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.769106 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8f86e600-c569-494a-ab11-7b624bf75257-run-systemd\") pod \"ovnkube-node-m5knn\" (UID: \"8f86e600-c569-494a-ab11-7b624bf75257\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5knn" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.769149 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8f86e600-c569-494a-ab11-7b624bf75257-run-ovn\") pod \"ovnkube-node-m5knn\" (UID: \"8f86e600-c569-494a-ab11-7b624bf75257\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5knn" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.769180 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8f86e600-c569-494a-ab11-7b624bf75257-host-cni-netd\") pod \"ovnkube-node-m5knn\" (UID: \"8f86e600-c569-494a-ab11-7b624bf75257\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5knn" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.769205 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8f86e600-c569-494a-ab11-7b624bf75257-env-overrides\") pod \"ovnkube-node-m5knn\" (UID: \"8f86e600-c569-494a-ab11-7b624bf75257\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5knn" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.769275 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8f86e600-c569-494a-ab11-7b624bf75257-ovnkube-config\") pod \"ovnkube-node-m5knn\" (UID: \"8f86e600-c569-494a-ab11-7b624bf75257\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5knn" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.769447 4844 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/348a2956-fe61-43b9-858f-ab9c97a2985b-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.769484 4844 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/348a2956-fe61-43b9-858f-ab9c97a2985b-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.769508 4844 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/348a2956-fe61-43b9-858f-ab9c97a2985b-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.769527 4844 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/348a2956-fe61-43b9-858f-ab9c97a2985b-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.769546 4844 
reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/348a2956-fe61-43b9-858f-ab9c97a2985b-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.769568 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cvtf5\" (UniqueName: \"kubernetes.io/projected/348a2956-fe61-43b9-858f-ab9c97a2985b-kube-api-access-cvtf5\") on node \"crc\" DevicePath \"\"" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.769586 4844 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/348a2956-fe61-43b9-858f-ab9c97a2985b-host-slash\") on node \"crc\" DevicePath \"\"" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.769631 4844 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/348a2956-fe61-43b9-858f-ab9c97a2985b-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.769654 4844 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/348a2956-fe61-43b9-858f-ab9c97a2985b-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.769672 4844 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/348a2956-fe61-43b9-858f-ab9c97a2985b-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.769691 4844 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/348a2956-fe61-43b9-858f-ab9c97a2985b-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.769709 4844 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/348a2956-fe61-43b9-858f-ab9c97a2985b-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.769731 4844 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/348a2956-fe61-43b9-858f-ab9c97a2985b-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.769753 4844 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/348a2956-fe61-43b9-858f-ab9c97a2985b-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.769771 4844 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/348a2956-fe61-43b9-858f-ab9c97a2985b-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.870626 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8f86e600-c569-494a-ab11-7b624bf75257-host-run-ovn-kubernetes\") pod \"ovnkube-node-m5knn\" (UID: \"8f86e600-c569-494a-ab11-7b624bf75257\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5knn" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.870669 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/8f86e600-c569-494a-ab11-7b624bf75257-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-m5knn\" (UID: \"8f86e600-c569-494a-ab11-7b624bf75257\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5knn" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.870688 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8f86e600-c569-494a-ab11-7b624bf75257-node-log\") pod \"ovnkube-node-m5knn\" (UID: \"8f86e600-c569-494a-ab11-7b624bf75257\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5knn" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.870712 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8f86e600-c569-494a-ab11-7b624bf75257-ovn-node-metrics-cert\") pod \"ovnkube-node-m5knn\" (UID: \"8f86e600-c569-494a-ab11-7b624bf75257\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5knn" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.870737 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8f86e600-c569-494a-ab11-7b624bf75257-etc-openvswitch\") pod \"ovnkube-node-m5knn\" (UID: \"8f86e600-c569-494a-ab11-7b624bf75257\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5knn" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.870757 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8f86e600-c569-494a-ab11-7b624bf75257-run-systemd\") pod \"ovnkube-node-m5knn\" (UID: \"8f86e600-c569-494a-ab11-7b624bf75257\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5knn" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.870776 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8f86e600-c569-494a-ab11-7b624bf75257-run-ovn\") pod \"ovnkube-node-m5knn\" (UID: \"8f86e600-c569-494a-ab11-7b624bf75257\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5knn" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.870774 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8f86e600-c569-494a-ab11-7b624bf75257-host-run-ovn-kubernetes\") pod \"ovnkube-node-m5knn\" (UID: \"8f86e600-c569-494a-ab11-7b624bf75257\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5knn" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.870831 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8f86e600-c569-494a-ab11-7b624bf75257-host-cni-netd\") pod \"ovnkube-node-m5knn\" (UID: \"8f86e600-c569-494a-ab11-7b624bf75257\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5knn" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.870830 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8f86e600-c569-494a-ab11-7b624bf75257-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-m5knn\" (UID: \"8f86e600-c569-494a-ab11-7b624bf75257\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5knn" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.870867 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/8f86e600-c569-494a-ab11-7b624bf75257-run-ovn\") pod \"ovnkube-node-m5knn\" (UID: \"8f86e600-c569-494a-ab11-7b624bf75257\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5knn" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.870869 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8f86e600-c569-494a-ab11-7b624bf75257-etc-openvswitch\") pod \"ovnkube-node-m5knn\" (UID: \"8f86e600-c569-494a-ab11-7b624bf75257\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5knn" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.870868 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8f86e600-c569-494a-ab11-7b624bf75257-node-log\") pod \"ovnkube-node-m5knn\" (UID: \"8f86e600-c569-494a-ab11-7b624bf75257\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5knn" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.870839 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8f86e600-c569-494a-ab11-7b624bf75257-run-systemd\") pod \"ovnkube-node-m5knn\" (UID: \"8f86e600-c569-494a-ab11-7b624bf75257\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5knn" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.870795 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8f86e600-c569-494a-ab11-7b624bf75257-host-cni-netd\") pod \"ovnkube-node-m5knn\" (UID: \"8f86e600-c569-494a-ab11-7b624bf75257\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5knn" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.870972 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8f86e600-c569-494a-ab11-7b624bf75257-env-overrides\") pod \"ovnkube-node-m5knn\" (UID: \"8f86e600-c569-494a-ab11-7b624bf75257\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5knn" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.871058 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8f86e600-c569-494a-ab11-7b624bf75257-ovnkube-config\") pod \"ovnkube-node-m5knn\" (UID: \"8f86e600-c569-494a-ab11-7b624bf75257\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5knn" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.871091 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8f86e600-c569-494a-ab11-7b624bf75257-host-cni-bin\") pod \"ovnkube-node-m5knn\" (UID: \"8f86e600-c569-494a-ab11-7b624bf75257\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5knn" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.871126 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8f86e600-c569-494a-ab11-7b624bf75257-host-kubelet\") pod \"ovnkube-node-m5knn\" (UID: \"8f86e600-c569-494a-ab11-7b624bf75257\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5knn" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.871165 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8f86e600-c569-494a-ab11-7b624bf75257-host-cni-bin\") pod \"ovnkube-node-m5knn\" (UID: \"8f86e600-c569-494a-ab11-7b624bf75257\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-m5knn" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.871163 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8f86e600-c569-494a-ab11-7b624bf75257-run-openvswitch\") pod \"ovnkube-node-m5knn\" (UID: \"8f86e600-c569-494a-ab11-7b624bf75257\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5knn" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.871200 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8f86e600-c569-494a-ab11-7b624bf75257-run-openvswitch\") pod \"ovnkube-node-m5knn\" (UID: \"8f86e600-c569-494a-ab11-7b624bf75257\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5knn" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.871206 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8f86e600-c569-494a-ab11-7b624bf75257-host-slash\") pod \"ovnkube-node-m5knn\" (UID: \"8f86e600-c569-494a-ab11-7b624bf75257\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5knn" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.871237 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8f86e600-c569-494a-ab11-7b624bf75257-host-kubelet\") pod \"ovnkube-node-m5knn\" (UID: \"8f86e600-c569-494a-ab11-7b624bf75257\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5knn" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.871221 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8f86e600-c569-494a-ab11-7b624bf75257-host-slash\") pod \"ovnkube-node-m5knn\" (UID: \"8f86e600-c569-494a-ab11-7b624bf75257\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5knn" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.871252 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8f86e600-c569-494a-ab11-7b624bf75257-var-lib-openvswitch\") pod \"ovnkube-node-m5knn\" (UID: \"8f86e600-c569-494a-ab11-7b624bf75257\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5knn" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.871292 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8f86e600-c569-494a-ab11-7b624bf75257-log-socket\") pod \"ovnkube-node-m5knn\" (UID: \"8f86e600-c569-494a-ab11-7b624bf75257\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5knn" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.871329 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ckjrk\" (UniqueName: \"kubernetes.io/projected/8f86e600-c569-494a-ab11-7b624bf75257-kube-api-access-ckjrk\") pod \"ovnkube-node-m5knn\" (UID: \"8f86e600-c569-494a-ab11-7b624bf75257\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5knn" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.871347 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8f86e600-c569-494a-ab11-7b624bf75257-var-lib-openvswitch\") pod \"ovnkube-node-m5knn\" (UID: \"8f86e600-c569-494a-ab11-7b624bf75257\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5knn" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.871366 4844 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8f86e600-c569-494a-ab11-7b624bf75257-host-run-netns\") pod \"ovnkube-node-m5knn\" (UID: \"8f86e600-c569-494a-ab11-7b624bf75257\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5knn" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.871400 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8f86e600-c569-494a-ab11-7b624bf75257-ovnkube-script-lib\") pod \"ovnkube-node-m5knn\" (UID: \"8f86e600-c569-494a-ab11-7b624bf75257\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5knn" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.871459 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8f86e600-c569-494a-ab11-7b624bf75257-systemd-units\") pod \"ovnkube-node-m5knn\" (UID: \"8f86e600-c569-494a-ab11-7b624bf75257\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5knn" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.871494 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8f86e600-c569-494a-ab11-7b624bf75257-env-overrides\") pod \"ovnkube-node-m5knn\" (UID: \"8f86e600-c569-494a-ab11-7b624bf75257\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5knn" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.871373 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8f86e600-c569-494a-ab11-7b624bf75257-log-socket\") pod \"ovnkube-node-m5knn\" (UID: \"8f86e600-c569-494a-ab11-7b624bf75257\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5knn" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.871636 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8f86e600-c569-494a-ab11-7b624bf75257-host-run-netns\") pod \"ovnkube-node-m5knn\" (UID: \"8f86e600-c569-494a-ab11-7b624bf75257\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5knn" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.871743 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8f86e600-c569-494a-ab11-7b624bf75257-systemd-units\") pod \"ovnkube-node-m5knn\" (UID: \"8f86e600-c569-494a-ab11-7b624bf75257\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5knn" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.872308 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8f86e600-c569-494a-ab11-7b624bf75257-ovnkube-config\") pod \"ovnkube-node-m5knn\" (UID: \"8f86e600-c569-494a-ab11-7b624bf75257\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5knn" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.872922 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8f86e600-c569-494a-ab11-7b624bf75257-ovnkube-script-lib\") pod \"ovnkube-node-m5knn\" (UID: \"8f86e600-c569-494a-ab11-7b624bf75257\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5knn" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.876122 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/8f86e600-c569-494a-ab11-7b624bf75257-ovn-node-metrics-cert\") pod \"ovnkube-node-m5knn\" (UID: \"8f86e600-c569-494a-ab11-7b624bf75257\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5knn" Jan 26 13:09:38 crc kubenswrapper[4844]: I0126 13:09:38.888144 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ckjrk\" (UniqueName: \"kubernetes.io/projected/8f86e600-c569-494a-ab11-7b624bf75257-kube-api-access-ckjrk\") pod \"ovnkube-node-m5knn\" (UID: \"8f86e600-c569-494a-ab11-7b624bf75257\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5knn" Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.007741 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-m5knn" Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.338073 4844 generic.go:334] "Generic (PLEG): container finished" podID="8f86e600-c569-494a-ab11-7b624bf75257" containerID="5c4ad6ff58bb6fbd7eb1b22a98933c6c98dddb496aa6c380e70266781d95f734" exitCode=0 Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.338210 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m5knn" event={"ID":"8f86e600-c569-494a-ab11-7b624bf75257","Type":"ContainerDied","Data":"5c4ad6ff58bb6fbd7eb1b22a98933c6c98dddb496aa6c380e70266781d95f734"} Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.338514 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m5knn" event={"ID":"8f86e600-c569-494a-ab11-7b624bf75257","Type":"ContainerStarted","Data":"34572d93d26cbf18001ab9116fb2fc74c4ee29b57df564ad2d44fbeaaafe47f0"} Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.341388 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rlvx4_348a2956-fe61-43b9-858f-ab9c97a2985b/ovnkube-controller/3.log" Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.343996 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rlvx4_348a2956-fe61-43b9-858f-ab9c97a2985b/ovn-acl-logging/0.log" Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.344514 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rlvx4_348a2956-fe61-43b9-858f-ab9c97a2985b/ovn-controller/0.log" Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.344869 4844 generic.go:334] "Generic (PLEG): container finished" podID="348a2956-fe61-43b9-858f-ab9c97a2985b" containerID="ac85315b0d74ee35d1c9a356648480e02f7d36838cdf0d10e276bb2e15c6205e" exitCode=0 Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.344893 4844 generic.go:334] "Generic (PLEG): container finished" podID="348a2956-fe61-43b9-858f-ab9c97a2985b" containerID="dbe92fbc9ce1d23484a9dca7eddb878b59da1fbe669098a82a798d41ee52e61d" exitCode=0 Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.344900 4844 generic.go:334] "Generic (PLEG): container finished" podID="348a2956-fe61-43b9-858f-ab9c97a2985b" containerID="de371086349f06d5762291b0be4ba366d69376f96d55c2c781c414d9acaff745" exitCode=0 Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.344911 4844 generic.go:334] "Generic (PLEG): container finished" podID="348a2956-fe61-43b9-858f-ab9c97a2985b" containerID="d2cfd5586a3073d55abd3f41fb92679c4f86ea8caeea2d9a8ca42515277750a9" exitCode=0 Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.344918 4844 generic.go:334] "Generic (PLEG): container finished" 
podID="348a2956-fe61-43b9-858f-ab9c97a2985b" containerID="64a97ef8aa10641683ab96943f4c1a54452ee34b28c854dd7902fc075a70e7f2" exitCode=0 Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.344925 4844 generic.go:334] "Generic (PLEG): container finished" podID="348a2956-fe61-43b9-858f-ab9c97a2985b" containerID="7d7af767e8f476993661bb886e421e7dd2fbd11c07865d6e56a8595425fe4265" exitCode=0 Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.344931 4844 generic.go:334] "Generic (PLEG): container finished" podID="348a2956-fe61-43b9-858f-ab9c97a2985b" containerID="9a16b2c8da5ad6d42bd5ce77ca31794858e04a05060c5bb0538dd099b1d5dd6a" exitCode=143 Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.344938 4844 generic.go:334] "Generic (PLEG): container finished" podID="348a2956-fe61-43b9-858f-ab9c97a2985b" containerID="03f4ffe2ba632bdfb8c45133ef267a29fa660e3aa85dfd99934ded60d32772f7" exitCode=143 Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.344986 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" event={"ID":"348a2956-fe61-43b9-858f-ab9c97a2985b","Type":"ContainerDied","Data":"ac85315b0d74ee35d1c9a356648480e02f7d36838cdf0d10e276bb2e15c6205e"} Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.345015 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" event={"ID":"348a2956-fe61-43b9-858f-ab9c97a2985b","Type":"ContainerDied","Data":"dbe92fbc9ce1d23484a9dca7eddb878b59da1fbe669098a82a798d41ee52e61d"} Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.345028 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" event={"ID":"348a2956-fe61-43b9-858f-ab9c97a2985b","Type":"ContainerDied","Data":"de371086349f06d5762291b0be4ba366d69376f96d55c2c781c414d9acaff745"} Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.345031 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.345048 4844 scope.go:117] "RemoveContainer" containerID="ac85315b0d74ee35d1c9a356648480e02f7d36838cdf0d10e276bb2e15c6205e" Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.345038 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" event={"ID":"348a2956-fe61-43b9-858f-ab9c97a2985b","Type":"ContainerDied","Data":"d2cfd5586a3073d55abd3f41fb92679c4f86ea8caeea2d9a8ca42515277750a9"} Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.345181 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" event={"ID":"348a2956-fe61-43b9-858f-ab9c97a2985b","Type":"ContainerDied","Data":"64a97ef8aa10641683ab96943f4c1a54452ee34b28c854dd7902fc075a70e7f2"} Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.345193 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" event={"ID":"348a2956-fe61-43b9-858f-ab9c97a2985b","Type":"ContainerDied","Data":"7d7af767e8f476993661bb886e421e7dd2fbd11c07865d6e56a8595425fe4265"} Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.345205 4844 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"564c6d757ba4ae6a21621d0fcfa3e8b5b5ef8cafd31af1ddc27d1826da04e046"} Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.345216 4844 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dbe92fbc9ce1d23484a9dca7eddb878b59da1fbe669098a82a798d41ee52e61d"} Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.345222 4844 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"de371086349f06d5762291b0be4ba366d69376f96d55c2c781c414d9acaff745"} Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.345228 4844 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d2cfd5586a3073d55abd3f41fb92679c4f86ea8caeea2d9a8ca42515277750a9"} Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.345233 4844 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"64a97ef8aa10641683ab96943f4c1a54452ee34b28c854dd7902fc075a70e7f2"} Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.345239 4844 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7d7af767e8f476993661bb886e421e7dd2fbd11c07865d6e56a8595425fe4265"} Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.345245 4844 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9a16b2c8da5ad6d42bd5ce77ca31794858e04a05060c5bb0538dd099b1d5dd6a"} Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.345250 4844 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"03f4ffe2ba632bdfb8c45133ef267a29fa660e3aa85dfd99934ded60d32772f7"} Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.345256 4844 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"370a02984d12e09b879a5f3af8d9fe26b40d438d558e51c2d352676f29198a02"} Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.345263 4844 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" event={"ID":"348a2956-fe61-43b9-858f-ab9c97a2985b","Type":"ContainerDied","Data":"9a16b2c8da5ad6d42bd5ce77ca31794858e04a05060c5bb0538dd099b1d5dd6a"} Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.345272 4844 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ac85315b0d74ee35d1c9a356648480e02f7d36838cdf0d10e276bb2e15c6205e"} Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.345279 4844 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"564c6d757ba4ae6a21621d0fcfa3e8b5b5ef8cafd31af1ddc27d1826da04e046"} Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.345285 4844 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dbe92fbc9ce1d23484a9dca7eddb878b59da1fbe669098a82a798d41ee52e61d"} Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.345290 4844 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"de371086349f06d5762291b0be4ba366d69376f96d55c2c781c414d9acaff745"} Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.345296 4844 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d2cfd5586a3073d55abd3f41fb92679c4f86ea8caeea2d9a8ca42515277750a9"} Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.345301 4844 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"64a97ef8aa10641683ab96943f4c1a54452ee34b28c854dd7902fc075a70e7f2"} Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.345307 4844 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7d7af767e8f476993661bb886e421e7dd2fbd11c07865d6e56a8595425fe4265"} Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.345312 4844 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9a16b2c8da5ad6d42bd5ce77ca31794858e04a05060c5bb0538dd099b1d5dd6a"} Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.345318 4844 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"03f4ffe2ba632bdfb8c45133ef267a29fa660e3aa85dfd99934ded60d32772f7"} Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.345323 4844 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"370a02984d12e09b879a5f3af8d9fe26b40d438d558e51c2d352676f29198a02"} Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.345331 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" event={"ID":"348a2956-fe61-43b9-858f-ab9c97a2985b","Type":"ContainerDied","Data":"03f4ffe2ba632bdfb8c45133ef267a29fa660e3aa85dfd99934ded60d32772f7"} Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.345338 4844 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ac85315b0d74ee35d1c9a356648480e02f7d36838cdf0d10e276bb2e15c6205e"} Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.345346 4844 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"564c6d757ba4ae6a21621d0fcfa3e8b5b5ef8cafd31af1ddc27d1826da04e046"} Jan 26 13:09:39 crc kubenswrapper[4844]: 
I0126 13:09:39.345351 4844 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dbe92fbc9ce1d23484a9dca7eddb878b59da1fbe669098a82a798d41ee52e61d"} Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.345357 4844 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"de371086349f06d5762291b0be4ba366d69376f96d55c2c781c414d9acaff745"} Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.345362 4844 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d2cfd5586a3073d55abd3f41fb92679c4f86ea8caeea2d9a8ca42515277750a9"} Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.345368 4844 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"64a97ef8aa10641683ab96943f4c1a54452ee34b28c854dd7902fc075a70e7f2"} Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.345374 4844 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7d7af767e8f476993661bb886e421e7dd2fbd11c07865d6e56a8595425fe4265"} Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.345379 4844 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9a16b2c8da5ad6d42bd5ce77ca31794858e04a05060c5bb0538dd099b1d5dd6a"} Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.345386 4844 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"03f4ffe2ba632bdfb8c45133ef267a29fa660e3aa85dfd99934ded60d32772f7"} Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.345391 4844 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"370a02984d12e09b879a5f3af8d9fe26b40d438d558e51c2d352676f29198a02"} Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.345398 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rlvx4" event={"ID":"348a2956-fe61-43b9-858f-ab9c97a2985b","Type":"ContainerDied","Data":"7674f5ff5bb5075f6bac48046c452ef62f888002046f90d70e9c4ac945d744a2"} Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.345406 4844 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ac85315b0d74ee35d1c9a356648480e02f7d36838cdf0d10e276bb2e15c6205e"} Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.345413 4844 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"564c6d757ba4ae6a21621d0fcfa3e8b5b5ef8cafd31af1ddc27d1826da04e046"} Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.345419 4844 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dbe92fbc9ce1d23484a9dca7eddb878b59da1fbe669098a82a798d41ee52e61d"} Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.345426 4844 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"de371086349f06d5762291b0be4ba366d69376f96d55c2c781c414d9acaff745"} Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.345431 4844 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d2cfd5586a3073d55abd3f41fb92679c4f86ea8caeea2d9a8ca42515277750a9"} Jan 26 13:09:39 crc kubenswrapper[4844]: 
I0126 13:09:39.345437 4844 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"64a97ef8aa10641683ab96943f4c1a54452ee34b28c854dd7902fc075a70e7f2"} Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.345444 4844 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7d7af767e8f476993661bb886e421e7dd2fbd11c07865d6e56a8595425fe4265"} Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.345450 4844 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9a16b2c8da5ad6d42bd5ce77ca31794858e04a05060c5bb0538dd099b1d5dd6a"} Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.345456 4844 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"03f4ffe2ba632bdfb8c45133ef267a29fa660e3aa85dfd99934ded60d32772f7"} Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.345462 4844 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"370a02984d12e09b879a5f3af8d9fe26b40d438d558e51c2d352676f29198a02"} Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.347039 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-zb9kx_467433a4-64be-4a14-beb2-657370e9865f/kube-multus/2.log" Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.348118 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-zb9kx_467433a4-64be-4a14-beb2-657370e9865f/kube-multus/1.log" Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.348140 4844 generic.go:334] "Generic (PLEG): container finished" podID="467433a4-64be-4a14-beb2-657370e9865f" containerID="a9f5cfdf855b56723649119ff96f5158a782982b241f924bcc11eb87f705cc68" exitCode=2 Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.348156 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-zb9kx" event={"ID":"467433a4-64be-4a14-beb2-657370e9865f","Type":"ContainerDied","Data":"a9f5cfdf855b56723649119ff96f5158a782982b241f924bcc11eb87f705cc68"} Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.348169 4844 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9be6a90cf1d7f75bb43391968d164c8726b7626d7dc649cd85f10c4d13424ab9"} Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.348491 4844 scope.go:117] "RemoveContainer" containerID="a9f5cfdf855b56723649119ff96f5158a782982b241f924bcc11eb87f705cc68" Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.371027 4844 scope.go:117] "RemoveContainer" containerID="564c6d757ba4ae6a21621d0fcfa3e8b5b5ef8cafd31af1ddc27d1826da04e046" Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.417633 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-rlvx4"] Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.422388 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-rlvx4"] Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.441458 4844 scope.go:117] "RemoveContainer" containerID="dbe92fbc9ce1d23484a9dca7eddb878b59da1fbe669098a82a798d41ee52e61d" Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.472707 4844 scope.go:117] "RemoveContainer" containerID="de371086349f06d5762291b0be4ba366d69376f96d55c2c781c414d9acaff745" Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.506031 
4844 scope.go:117] "RemoveContainer" containerID="d2cfd5586a3073d55abd3f41fb92679c4f86ea8caeea2d9a8ca42515277750a9" Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.523654 4844 scope.go:117] "RemoveContainer" containerID="64a97ef8aa10641683ab96943f4c1a54452ee34b28c854dd7902fc075a70e7f2" Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.552268 4844 scope.go:117] "RemoveContainer" containerID="7d7af767e8f476993661bb886e421e7dd2fbd11c07865d6e56a8595425fe4265" Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.598895 4844 scope.go:117] "RemoveContainer" containerID="9a16b2c8da5ad6d42bd5ce77ca31794858e04a05060c5bb0538dd099b1d5dd6a" Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.616430 4844 scope.go:117] "RemoveContainer" containerID="03f4ffe2ba632bdfb8c45133ef267a29fa660e3aa85dfd99934ded60d32772f7" Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.640719 4844 scope.go:117] "RemoveContainer" containerID="370a02984d12e09b879a5f3af8d9fe26b40d438d558e51c2d352676f29198a02" Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.656870 4844 scope.go:117] "RemoveContainer" containerID="ac85315b0d74ee35d1c9a356648480e02f7d36838cdf0d10e276bb2e15c6205e" Jan 26 13:09:39 crc kubenswrapper[4844]: E0126 13:09:39.657281 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac85315b0d74ee35d1c9a356648480e02f7d36838cdf0d10e276bb2e15c6205e\": container with ID starting with ac85315b0d74ee35d1c9a356648480e02f7d36838cdf0d10e276bb2e15c6205e not found: ID does not exist" containerID="ac85315b0d74ee35d1c9a356648480e02f7d36838cdf0d10e276bb2e15c6205e" Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.657319 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac85315b0d74ee35d1c9a356648480e02f7d36838cdf0d10e276bb2e15c6205e"} err="failed to get container status \"ac85315b0d74ee35d1c9a356648480e02f7d36838cdf0d10e276bb2e15c6205e\": rpc error: code = NotFound desc = could not find container \"ac85315b0d74ee35d1c9a356648480e02f7d36838cdf0d10e276bb2e15c6205e\": container with ID starting with ac85315b0d74ee35d1c9a356648480e02f7d36838cdf0d10e276bb2e15c6205e not found: ID does not exist" Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.657344 4844 scope.go:117] "RemoveContainer" containerID="564c6d757ba4ae6a21621d0fcfa3e8b5b5ef8cafd31af1ddc27d1826da04e046" Jan 26 13:09:39 crc kubenswrapper[4844]: E0126 13:09:39.657547 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"564c6d757ba4ae6a21621d0fcfa3e8b5b5ef8cafd31af1ddc27d1826da04e046\": container with ID starting with 564c6d757ba4ae6a21621d0fcfa3e8b5b5ef8cafd31af1ddc27d1826da04e046 not found: ID does not exist" containerID="564c6d757ba4ae6a21621d0fcfa3e8b5b5ef8cafd31af1ddc27d1826da04e046" Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.657574 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"564c6d757ba4ae6a21621d0fcfa3e8b5b5ef8cafd31af1ddc27d1826da04e046"} err="failed to get container status \"564c6d757ba4ae6a21621d0fcfa3e8b5b5ef8cafd31af1ddc27d1826da04e046\": rpc error: code = NotFound desc = could not find container \"564c6d757ba4ae6a21621d0fcfa3e8b5b5ef8cafd31af1ddc27d1826da04e046\": container with ID starting with 564c6d757ba4ae6a21621d0fcfa3e8b5b5ef8cafd31af1ddc27d1826da04e046 not found: ID does not exist" Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.657608 4844 
scope.go:117] "RemoveContainer" containerID="dbe92fbc9ce1d23484a9dca7eddb878b59da1fbe669098a82a798d41ee52e61d" Jan 26 13:09:39 crc kubenswrapper[4844]: E0126 13:09:39.657807 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dbe92fbc9ce1d23484a9dca7eddb878b59da1fbe669098a82a798d41ee52e61d\": container with ID starting with dbe92fbc9ce1d23484a9dca7eddb878b59da1fbe669098a82a798d41ee52e61d not found: ID does not exist" containerID="dbe92fbc9ce1d23484a9dca7eddb878b59da1fbe669098a82a798d41ee52e61d" Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.657837 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dbe92fbc9ce1d23484a9dca7eddb878b59da1fbe669098a82a798d41ee52e61d"} err="failed to get container status \"dbe92fbc9ce1d23484a9dca7eddb878b59da1fbe669098a82a798d41ee52e61d\": rpc error: code = NotFound desc = could not find container \"dbe92fbc9ce1d23484a9dca7eddb878b59da1fbe669098a82a798d41ee52e61d\": container with ID starting with dbe92fbc9ce1d23484a9dca7eddb878b59da1fbe669098a82a798d41ee52e61d not found: ID does not exist" Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.657850 4844 scope.go:117] "RemoveContainer" containerID="de371086349f06d5762291b0be4ba366d69376f96d55c2c781c414d9acaff745" Jan 26 13:09:39 crc kubenswrapper[4844]: E0126 13:09:39.658037 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de371086349f06d5762291b0be4ba366d69376f96d55c2c781c414d9acaff745\": container with ID starting with de371086349f06d5762291b0be4ba366d69376f96d55c2c781c414d9acaff745 not found: ID does not exist" containerID="de371086349f06d5762291b0be4ba366d69376f96d55c2c781c414d9acaff745" Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.658067 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de371086349f06d5762291b0be4ba366d69376f96d55c2c781c414d9acaff745"} err="failed to get container status \"de371086349f06d5762291b0be4ba366d69376f96d55c2c781c414d9acaff745\": rpc error: code = NotFound desc = could not find container \"de371086349f06d5762291b0be4ba366d69376f96d55c2c781c414d9acaff745\": container with ID starting with de371086349f06d5762291b0be4ba366d69376f96d55c2c781c414d9acaff745 not found: ID does not exist" Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.658086 4844 scope.go:117] "RemoveContainer" containerID="d2cfd5586a3073d55abd3f41fb92679c4f86ea8caeea2d9a8ca42515277750a9" Jan 26 13:09:39 crc kubenswrapper[4844]: E0126 13:09:39.658257 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d2cfd5586a3073d55abd3f41fb92679c4f86ea8caeea2d9a8ca42515277750a9\": container with ID starting with d2cfd5586a3073d55abd3f41fb92679c4f86ea8caeea2d9a8ca42515277750a9 not found: ID does not exist" containerID="d2cfd5586a3073d55abd3f41fb92679c4f86ea8caeea2d9a8ca42515277750a9" Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.658275 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2cfd5586a3073d55abd3f41fb92679c4f86ea8caeea2d9a8ca42515277750a9"} err="failed to get container status \"d2cfd5586a3073d55abd3f41fb92679c4f86ea8caeea2d9a8ca42515277750a9\": rpc error: code = NotFound desc = could not find container \"d2cfd5586a3073d55abd3f41fb92679c4f86ea8caeea2d9a8ca42515277750a9\": container with ID starting with 
d2cfd5586a3073d55abd3f41fb92679c4f86ea8caeea2d9a8ca42515277750a9 not found: ID does not exist" Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.658287 4844 scope.go:117] "RemoveContainer" containerID="64a97ef8aa10641683ab96943f4c1a54452ee34b28c854dd7902fc075a70e7f2" Jan 26 13:09:39 crc kubenswrapper[4844]: E0126 13:09:39.658454 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64a97ef8aa10641683ab96943f4c1a54452ee34b28c854dd7902fc075a70e7f2\": container with ID starting with 64a97ef8aa10641683ab96943f4c1a54452ee34b28c854dd7902fc075a70e7f2 not found: ID does not exist" containerID="64a97ef8aa10641683ab96943f4c1a54452ee34b28c854dd7902fc075a70e7f2" Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.658480 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64a97ef8aa10641683ab96943f4c1a54452ee34b28c854dd7902fc075a70e7f2"} err="failed to get container status \"64a97ef8aa10641683ab96943f4c1a54452ee34b28c854dd7902fc075a70e7f2\": rpc error: code = NotFound desc = could not find container \"64a97ef8aa10641683ab96943f4c1a54452ee34b28c854dd7902fc075a70e7f2\": container with ID starting with 64a97ef8aa10641683ab96943f4c1a54452ee34b28c854dd7902fc075a70e7f2 not found: ID does not exist" Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.658497 4844 scope.go:117] "RemoveContainer" containerID="7d7af767e8f476993661bb886e421e7dd2fbd11c07865d6e56a8595425fe4265" Jan 26 13:09:39 crc kubenswrapper[4844]: E0126 13:09:39.658678 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d7af767e8f476993661bb886e421e7dd2fbd11c07865d6e56a8595425fe4265\": container with ID starting with 7d7af767e8f476993661bb886e421e7dd2fbd11c07865d6e56a8595425fe4265 not found: ID does not exist" containerID="7d7af767e8f476993661bb886e421e7dd2fbd11c07865d6e56a8595425fe4265" Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.658716 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d7af767e8f476993661bb886e421e7dd2fbd11c07865d6e56a8595425fe4265"} err="failed to get container status \"7d7af767e8f476993661bb886e421e7dd2fbd11c07865d6e56a8595425fe4265\": rpc error: code = NotFound desc = could not find container \"7d7af767e8f476993661bb886e421e7dd2fbd11c07865d6e56a8595425fe4265\": container with ID starting with 7d7af767e8f476993661bb886e421e7dd2fbd11c07865d6e56a8595425fe4265 not found: ID does not exist" Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.658727 4844 scope.go:117] "RemoveContainer" containerID="9a16b2c8da5ad6d42bd5ce77ca31794858e04a05060c5bb0538dd099b1d5dd6a" Jan 26 13:09:39 crc kubenswrapper[4844]: E0126 13:09:39.658880 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a16b2c8da5ad6d42bd5ce77ca31794858e04a05060c5bb0538dd099b1d5dd6a\": container with ID starting with 9a16b2c8da5ad6d42bd5ce77ca31794858e04a05060c5bb0538dd099b1d5dd6a not found: ID does not exist" containerID="9a16b2c8da5ad6d42bd5ce77ca31794858e04a05060c5bb0538dd099b1d5dd6a" Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.658897 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a16b2c8da5ad6d42bd5ce77ca31794858e04a05060c5bb0538dd099b1d5dd6a"} err="failed to get container status \"9a16b2c8da5ad6d42bd5ce77ca31794858e04a05060c5bb0538dd099b1d5dd6a\": rpc 
error: code = NotFound desc = could not find container \"9a16b2c8da5ad6d42bd5ce77ca31794858e04a05060c5bb0538dd099b1d5dd6a\": container with ID starting with 9a16b2c8da5ad6d42bd5ce77ca31794858e04a05060c5bb0538dd099b1d5dd6a not found: ID does not exist" Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.658908 4844 scope.go:117] "RemoveContainer" containerID="03f4ffe2ba632bdfb8c45133ef267a29fa660e3aa85dfd99934ded60d32772f7" Jan 26 13:09:39 crc kubenswrapper[4844]: E0126 13:09:39.659057 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"03f4ffe2ba632bdfb8c45133ef267a29fa660e3aa85dfd99934ded60d32772f7\": container with ID starting with 03f4ffe2ba632bdfb8c45133ef267a29fa660e3aa85dfd99934ded60d32772f7 not found: ID does not exist" containerID="03f4ffe2ba632bdfb8c45133ef267a29fa660e3aa85dfd99934ded60d32772f7" Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.659076 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03f4ffe2ba632bdfb8c45133ef267a29fa660e3aa85dfd99934ded60d32772f7"} err="failed to get container status \"03f4ffe2ba632bdfb8c45133ef267a29fa660e3aa85dfd99934ded60d32772f7\": rpc error: code = NotFound desc = could not find container \"03f4ffe2ba632bdfb8c45133ef267a29fa660e3aa85dfd99934ded60d32772f7\": container with ID starting with 03f4ffe2ba632bdfb8c45133ef267a29fa660e3aa85dfd99934ded60d32772f7 not found: ID does not exist" Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.659086 4844 scope.go:117] "RemoveContainer" containerID="370a02984d12e09b879a5f3af8d9fe26b40d438d558e51c2d352676f29198a02" Jan 26 13:09:39 crc kubenswrapper[4844]: E0126 13:09:39.659227 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"370a02984d12e09b879a5f3af8d9fe26b40d438d558e51c2d352676f29198a02\": container with ID starting with 370a02984d12e09b879a5f3af8d9fe26b40d438d558e51c2d352676f29198a02 not found: ID does not exist" containerID="370a02984d12e09b879a5f3af8d9fe26b40d438d558e51c2d352676f29198a02" Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.659246 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"370a02984d12e09b879a5f3af8d9fe26b40d438d558e51c2d352676f29198a02"} err="failed to get container status \"370a02984d12e09b879a5f3af8d9fe26b40d438d558e51c2d352676f29198a02\": rpc error: code = NotFound desc = could not find container \"370a02984d12e09b879a5f3af8d9fe26b40d438d558e51c2d352676f29198a02\": container with ID starting with 370a02984d12e09b879a5f3af8d9fe26b40d438d558e51c2d352676f29198a02 not found: ID does not exist" Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.659257 4844 scope.go:117] "RemoveContainer" containerID="ac85315b0d74ee35d1c9a356648480e02f7d36838cdf0d10e276bb2e15c6205e" Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.659411 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac85315b0d74ee35d1c9a356648480e02f7d36838cdf0d10e276bb2e15c6205e"} err="failed to get container status \"ac85315b0d74ee35d1c9a356648480e02f7d36838cdf0d10e276bb2e15c6205e\": rpc error: code = NotFound desc = could not find container \"ac85315b0d74ee35d1c9a356648480e02f7d36838cdf0d10e276bb2e15c6205e\": container with ID starting with ac85315b0d74ee35d1c9a356648480e02f7d36838cdf0d10e276bb2e15c6205e not found: ID does not exist" Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 
13:09:39.659438 4844 scope.go:117] "RemoveContainer" containerID="564c6d757ba4ae6a21621d0fcfa3e8b5b5ef8cafd31af1ddc27d1826da04e046" Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.659616 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"564c6d757ba4ae6a21621d0fcfa3e8b5b5ef8cafd31af1ddc27d1826da04e046"} err="failed to get container status \"564c6d757ba4ae6a21621d0fcfa3e8b5b5ef8cafd31af1ddc27d1826da04e046\": rpc error: code = NotFound desc = could not find container \"564c6d757ba4ae6a21621d0fcfa3e8b5b5ef8cafd31af1ddc27d1826da04e046\": container with ID starting with 564c6d757ba4ae6a21621d0fcfa3e8b5b5ef8cafd31af1ddc27d1826da04e046 not found: ID does not exist" Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.659634 4844 scope.go:117] "RemoveContainer" containerID="dbe92fbc9ce1d23484a9dca7eddb878b59da1fbe669098a82a798d41ee52e61d" Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.659824 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dbe92fbc9ce1d23484a9dca7eddb878b59da1fbe669098a82a798d41ee52e61d"} err="failed to get container status \"dbe92fbc9ce1d23484a9dca7eddb878b59da1fbe669098a82a798d41ee52e61d\": rpc error: code = NotFound desc = could not find container \"dbe92fbc9ce1d23484a9dca7eddb878b59da1fbe669098a82a798d41ee52e61d\": container with ID starting with dbe92fbc9ce1d23484a9dca7eddb878b59da1fbe669098a82a798d41ee52e61d not found: ID does not exist" Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.659851 4844 scope.go:117] "RemoveContainer" containerID="de371086349f06d5762291b0be4ba366d69376f96d55c2c781c414d9acaff745" Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.660033 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de371086349f06d5762291b0be4ba366d69376f96d55c2c781c414d9acaff745"} err="failed to get container status \"de371086349f06d5762291b0be4ba366d69376f96d55c2c781c414d9acaff745\": rpc error: code = NotFound desc = could not find container \"de371086349f06d5762291b0be4ba366d69376f96d55c2c781c414d9acaff745\": container with ID starting with de371086349f06d5762291b0be4ba366d69376f96d55c2c781c414d9acaff745 not found: ID does not exist" Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.660059 4844 scope.go:117] "RemoveContainer" containerID="d2cfd5586a3073d55abd3f41fb92679c4f86ea8caeea2d9a8ca42515277750a9" Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.660244 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2cfd5586a3073d55abd3f41fb92679c4f86ea8caeea2d9a8ca42515277750a9"} err="failed to get container status \"d2cfd5586a3073d55abd3f41fb92679c4f86ea8caeea2d9a8ca42515277750a9\": rpc error: code = NotFound desc = could not find container \"d2cfd5586a3073d55abd3f41fb92679c4f86ea8caeea2d9a8ca42515277750a9\": container with ID starting with d2cfd5586a3073d55abd3f41fb92679c4f86ea8caeea2d9a8ca42515277750a9 not found: ID does not exist" Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.660282 4844 scope.go:117] "RemoveContainer" containerID="64a97ef8aa10641683ab96943f4c1a54452ee34b28c854dd7902fc075a70e7f2" Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.660467 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64a97ef8aa10641683ab96943f4c1a54452ee34b28c854dd7902fc075a70e7f2"} err="failed to get container status 
\"64a97ef8aa10641683ab96943f4c1a54452ee34b28c854dd7902fc075a70e7f2\": rpc error: code = NotFound desc = could not find container \"64a97ef8aa10641683ab96943f4c1a54452ee34b28c854dd7902fc075a70e7f2\": container with ID starting with 64a97ef8aa10641683ab96943f4c1a54452ee34b28c854dd7902fc075a70e7f2 not found: ID does not exist" Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.660503 4844 scope.go:117] "RemoveContainer" containerID="7d7af767e8f476993661bb886e421e7dd2fbd11c07865d6e56a8595425fe4265" Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.660714 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d7af767e8f476993661bb886e421e7dd2fbd11c07865d6e56a8595425fe4265"} err="failed to get container status \"7d7af767e8f476993661bb886e421e7dd2fbd11c07865d6e56a8595425fe4265\": rpc error: code = NotFound desc = could not find container \"7d7af767e8f476993661bb886e421e7dd2fbd11c07865d6e56a8595425fe4265\": container with ID starting with 7d7af767e8f476993661bb886e421e7dd2fbd11c07865d6e56a8595425fe4265 not found: ID does not exist" Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.660738 4844 scope.go:117] "RemoveContainer" containerID="9a16b2c8da5ad6d42bd5ce77ca31794858e04a05060c5bb0538dd099b1d5dd6a" Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.660900 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a16b2c8da5ad6d42bd5ce77ca31794858e04a05060c5bb0538dd099b1d5dd6a"} err="failed to get container status \"9a16b2c8da5ad6d42bd5ce77ca31794858e04a05060c5bb0538dd099b1d5dd6a\": rpc error: code = NotFound desc = could not find container \"9a16b2c8da5ad6d42bd5ce77ca31794858e04a05060c5bb0538dd099b1d5dd6a\": container with ID starting with 9a16b2c8da5ad6d42bd5ce77ca31794858e04a05060c5bb0538dd099b1d5dd6a not found: ID does not exist" Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.660915 4844 scope.go:117] "RemoveContainer" containerID="03f4ffe2ba632bdfb8c45133ef267a29fa660e3aa85dfd99934ded60d32772f7" Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.661093 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03f4ffe2ba632bdfb8c45133ef267a29fa660e3aa85dfd99934ded60d32772f7"} err="failed to get container status \"03f4ffe2ba632bdfb8c45133ef267a29fa660e3aa85dfd99934ded60d32772f7\": rpc error: code = NotFound desc = could not find container \"03f4ffe2ba632bdfb8c45133ef267a29fa660e3aa85dfd99934ded60d32772f7\": container with ID starting with 03f4ffe2ba632bdfb8c45133ef267a29fa660e3aa85dfd99934ded60d32772f7 not found: ID does not exist" Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.661118 4844 scope.go:117] "RemoveContainer" containerID="370a02984d12e09b879a5f3af8d9fe26b40d438d558e51c2d352676f29198a02" Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.661419 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"370a02984d12e09b879a5f3af8d9fe26b40d438d558e51c2d352676f29198a02"} err="failed to get container status \"370a02984d12e09b879a5f3af8d9fe26b40d438d558e51c2d352676f29198a02\": rpc error: code = NotFound desc = could not find container \"370a02984d12e09b879a5f3af8d9fe26b40d438d558e51c2d352676f29198a02\": container with ID starting with 370a02984d12e09b879a5f3af8d9fe26b40d438d558e51c2d352676f29198a02 not found: ID does not exist" Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.661461 4844 scope.go:117] "RemoveContainer" 
containerID="ac85315b0d74ee35d1c9a356648480e02f7d36838cdf0d10e276bb2e15c6205e" Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.661868 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac85315b0d74ee35d1c9a356648480e02f7d36838cdf0d10e276bb2e15c6205e"} err="failed to get container status \"ac85315b0d74ee35d1c9a356648480e02f7d36838cdf0d10e276bb2e15c6205e\": rpc error: code = NotFound desc = could not find container \"ac85315b0d74ee35d1c9a356648480e02f7d36838cdf0d10e276bb2e15c6205e\": container with ID starting with ac85315b0d74ee35d1c9a356648480e02f7d36838cdf0d10e276bb2e15c6205e not found: ID does not exist" Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.661898 4844 scope.go:117] "RemoveContainer" containerID="564c6d757ba4ae6a21621d0fcfa3e8b5b5ef8cafd31af1ddc27d1826da04e046" Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.662818 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"564c6d757ba4ae6a21621d0fcfa3e8b5b5ef8cafd31af1ddc27d1826da04e046"} err="failed to get container status \"564c6d757ba4ae6a21621d0fcfa3e8b5b5ef8cafd31af1ddc27d1826da04e046\": rpc error: code = NotFound desc = could not find container \"564c6d757ba4ae6a21621d0fcfa3e8b5b5ef8cafd31af1ddc27d1826da04e046\": container with ID starting with 564c6d757ba4ae6a21621d0fcfa3e8b5b5ef8cafd31af1ddc27d1826da04e046 not found: ID does not exist" Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.662844 4844 scope.go:117] "RemoveContainer" containerID="dbe92fbc9ce1d23484a9dca7eddb878b59da1fbe669098a82a798d41ee52e61d" Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.663079 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dbe92fbc9ce1d23484a9dca7eddb878b59da1fbe669098a82a798d41ee52e61d"} err="failed to get container status \"dbe92fbc9ce1d23484a9dca7eddb878b59da1fbe669098a82a798d41ee52e61d\": rpc error: code = NotFound desc = could not find container \"dbe92fbc9ce1d23484a9dca7eddb878b59da1fbe669098a82a798d41ee52e61d\": container with ID starting with dbe92fbc9ce1d23484a9dca7eddb878b59da1fbe669098a82a798d41ee52e61d not found: ID does not exist" Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.663104 4844 scope.go:117] "RemoveContainer" containerID="de371086349f06d5762291b0be4ba366d69376f96d55c2c781c414d9acaff745" Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.663422 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de371086349f06d5762291b0be4ba366d69376f96d55c2c781c414d9acaff745"} err="failed to get container status \"de371086349f06d5762291b0be4ba366d69376f96d55c2c781c414d9acaff745\": rpc error: code = NotFound desc = could not find container \"de371086349f06d5762291b0be4ba366d69376f96d55c2c781c414d9acaff745\": container with ID starting with de371086349f06d5762291b0be4ba366d69376f96d55c2c781c414d9acaff745 not found: ID does not exist" Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.663451 4844 scope.go:117] "RemoveContainer" containerID="d2cfd5586a3073d55abd3f41fb92679c4f86ea8caeea2d9a8ca42515277750a9" Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.663777 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2cfd5586a3073d55abd3f41fb92679c4f86ea8caeea2d9a8ca42515277750a9"} err="failed to get container status \"d2cfd5586a3073d55abd3f41fb92679c4f86ea8caeea2d9a8ca42515277750a9\": rpc error: code = NotFound desc = could not find 
container \"d2cfd5586a3073d55abd3f41fb92679c4f86ea8caeea2d9a8ca42515277750a9\": container with ID starting with d2cfd5586a3073d55abd3f41fb92679c4f86ea8caeea2d9a8ca42515277750a9 not found: ID does not exist" Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.663801 4844 scope.go:117] "RemoveContainer" containerID="64a97ef8aa10641683ab96943f4c1a54452ee34b28c854dd7902fc075a70e7f2" Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.664176 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64a97ef8aa10641683ab96943f4c1a54452ee34b28c854dd7902fc075a70e7f2"} err="failed to get container status \"64a97ef8aa10641683ab96943f4c1a54452ee34b28c854dd7902fc075a70e7f2\": rpc error: code = NotFound desc = could not find container \"64a97ef8aa10641683ab96943f4c1a54452ee34b28c854dd7902fc075a70e7f2\": container with ID starting with 64a97ef8aa10641683ab96943f4c1a54452ee34b28c854dd7902fc075a70e7f2 not found: ID does not exist" Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.664225 4844 scope.go:117] "RemoveContainer" containerID="7d7af767e8f476993661bb886e421e7dd2fbd11c07865d6e56a8595425fe4265" Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.664459 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d7af767e8f476993661bb886e421e7dd2fbd11c07865d6e56a8595425fe4265"} err="failed to get container status \"7d7af767e8f476993661bb886e421e7dd2fbd11c07865d6e56a8595425fe4265\": rpc error: code = NotFound desc = could not find container \"7d7af767e8f476993661bb886e421e7dd2fbd11c07865d6e56a8595425fe4265\": container with ID starting with 7d7af767e8f476993661bb886e421e7dd2fbd11c07865d6e56a8595425fe4265 not found: ID does not exist" Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.664486 4844 scope.go:117] "RemoveContainer" containerID="9a16b2c8da5ad6d42bd5ce77ca31794858e04a05060c5bb0538dd099b1d5dd6a" Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.665178 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a16b2c8da5ad6d42bd5ce77ca31794858e04a05060c5bb0538dd099b1d5dd6a"} err="failed to get container status \"9a16b2c8da5ad6d42bd5ce77ca31794858e04a05060c5bb0538dd099b1d5dd6a\": rpc error: code = NotFound desc = could not find container \"9a16b2c8da5ad6d42bd5ce77ca31794858e04a05060c5bb0538dd099b1d5dd6a\": container with ID starting with 9a16b2c8da5ad6d42bd5ce77ca31794858e04a05060c5bb0538dd099b1d5dd6a not found: ID does not exist" Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.665202 4844 scope.go:117] "RemoveContainer" containerID="03f4ffe2ba632bdfb8c45133ef267a29fa660e3aa85dfd99934ded60d32772f7" Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.665391 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03f4ffe2ba632bdfb8c45133ef267a29fa660e3aa85dfd99934ded60d32772f7"} err="failed to get container status \"03f4ffe2ba632bdfb8c45133ef267a29fa660e3aa85dfd99934ded60d32772f7\": rpc error: code = NotFound desc = could not find container \"03f4ffe2ba632bdfb8c45133ef267a29fa660e3aa85dfd99934ded60d32772f7\": container with ID starting with 03f4ffe2ba632bdfb8c45133ef267a29fa660e3aa85dfd99934ded60d32772f7 not found: ID does not exist" Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.665416 4844 scope.go:117] "RemoveContainer" containerID="370a02984d12e09b879a5f3af8d9fe26b40d438d558e51c2d352676f29198a02" Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.665842 4844 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"370a02984d12e09b879a5f3af8d9fe26b40d438d558e51c2d352676f29198a02"} err="failed to get container status \"370a02984d12e09b879a5f3af8d9fe26b40d438d558e51c2d352676f29198a02\": rpc error: code = NotFound desc = could not find container \"370a02984d12e09b879a5f3af8d9fe26b40d438d558e51c2d352676f29198a02\": container with ID starting with 370a02984d12e09b879a5f3af8d9fe26b40d438d558e51c2d352676f29198a02 not found: ID does not exist" Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.665864 4844 scope.go:117] "RemoveContainer" containerID="ac85315b0d74ee35d1c9a356648480e02f7d36838cdf0d10e276bb2e15c6205e" Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.666139 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac85315b0d74ee35d1c9a356648480e02f7d36838cdf0d10e276bb2e15c6205e"} err="failed to get container status \"ac85315b0d74ee35d1c9a356648480e02f7d36838cdf0d10e276bb2e15c6205e\": rpc error: code = NotFound desc = could not find container \"ac85315b0d74ee35d1c9a356648480e02f7d36838cdf0d10e276bb2e15c6205e\": container with ID starting with ac85315b0d74ee35d1c9a356648480e02f7d36838cdf0d10e276bb2e15c6205e not found: ID does not exist" Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.666354 4844 scope.go:117] "RemoveContainer" containerID="564c6d757ba4ae6a21621d0fcfa3e8b5b5ef8cafd31af1ddc27d1826da04e046" Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.667036 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"564c6d757ba4ae6a21621d0fcfa3e8b5b5ef8cafd31af1ddc27d1826da04e046"} err="failed to get container status \"564c6d757ba4ae6a21621d0fcfa3e8b5b5ef8cafd31af1ddc27d1826da04e046\": rpc error: code = NotFound desc = could not find container \"564c6d757ba4ae6a21621d0fcfa3e8b5b5ef8cafd31af1ddc27d1826da04e046\": container with ID starting with 564c6d757ba4ae6a21621d0fcfa3e8b5b5ef8cafd31af1ddc27d1826da04e046 not found: ID does not exist" Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.667063 4844 scope.go:117] "RemoveContainer" containerID="dbe92fbc9ce1d23484a9dca7eddb878b59da1fbe669098a82a798d41ee52e61d" Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.667351 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dbe92fbc9ce1d23484a9dca7eddb878b59da1fbe669098a82a798d41ee52e61d"} err="failed to get container status \"dbe92fbc9ce1d23484a9dca7eddb878b59da1fbe669098a82a798d41ee52e61d\": rpc error: code = NotFound desc = could not find container \"dbe92fbc9ce1d23484a9dca7eddb878b59da1fbe669098a82a798d41ee52e61d\": container with ID starting with dbe92fbc9ce1d23484a9dca7eddb878b59da1fbe669098a82a798d41ee52e61d not found: ID does not exist" Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.667418 4844 scope.go:117] "RemoveContainer" containerID="de371086349f06d5762291b0be4ba366d69376f96d55c2c781c414d9acaff745" Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.667775 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de371086349f06d5762291b0be4ba366d69376f96d55c2c781c414d9acaff745"} err="failed to get container status \"de371086349f06d5762291b0be4ba366d69376f96d55c2c781c414d9acaff745\": rpc error: code = NotFound desc = could not find container \"de371086349f06d5762291b0be4ba366d69376f96d55c2c781c414d9acaff745\": container with ID starting with 
de371086349f06d5762291b0be4ba366d69376f96d55c2c781c414d9acaff745 not found: ID does not exist" Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.667804 4844 scope.go:117] "RemoveContainer" containerID="d2cfd5586a3073d55abd3f41fb92679c4f86ea8caeea2d9a8ca42515277750a9" Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.668216 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2cfd5586a3073d55abd3f41fb92679c4f86ea8caeea2d9a8ca42515277750a9"} err="failed to get container status \"d2cfd5586a3073d55abd3f41fb92679c4f86ea8caeea2d9a8ca42515277750a9\": rpc error: code = NotFound desc = could not find container \"d2cfd5586a3073d55abd3f41fb92679c4f86ea8caeea2d9a8ca42515277750a9\": container with ID starting with d2cfd5586a3073d55abd3f41fb92679c4f86ea8caeea2d9a8ca42515277750a9 not found: ID does not exist" Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.668242 4844 scope.go:117] "RemoveContainer" containerID="64a97ef8aa10641683ab96943f4c1a54452ee34b28c854dd7902fc075a70e7f2" Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.668686 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64a97ef8aa10641683ab96943f4c1a54452ee34b28c854dd7902fc075a70e7f2"} err="failed to get container status \"64a97ef8aa10641683ab96943f4c1a54452ee34b28c854dd7902fc075a70e7f2\": rpc error: code = NotFound desc = could not find container \"64a97ef8aa10641683ab96943f4c1a54452ee34b28c854dd7902fc075a70e7f2\": container with ID starting with 64a97ef8aa10641683ab96943f4c1a54452ee34b28c854dd7902fc075a70e7f2 not found: ID does not exist" Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.668717 4844 scope.go:117] "RemoveContainer" containerID="7d7af767e8f476993661bb886e421e7dd2fbd11c07865d6e56a8595425fe4265" Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.669062 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d7af767e8f476993661bb886e421e7dd2fbd11c07865d6e56a8595425fe4265"} err="failed to get container status \"7d7af767e8f476993661bb886e421e7dd2fbd11c07865d6e56a8595425fe4265\": rpc error: code = NotFound desc = could not find container \"7d7af767e8f476993661bb886e421e7dd2fbd11c07865d6e56a8595425fe4265\": container with ID starting with 7d7af767e8f476993661bb886e421e7dd2fbd11c07865d6e56a8595425fe4265 not found: ID does not exist" Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.669087 4844 scope.go:117] "RemoveContainer" containerID="9a16b2c8da5ad6d42bd5ce77ca31794858e04a05060c5bb0538dd099b1d5dd6a" Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.669359 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a16b2c8da5ad6d42bd5ce77ca31794858e04a05060c5bb0538dd099b1d5dd6a"} err="failed to get container status \"9a16b2c8da5ad6d42bd5ce77ca31794858e04a05060c5bb0538dd099b1d5dd6a\": rpc error: code = NotFound desc = could not find container \"9a16b2c8da5ad6d42bd5ce77ca31794858e04a05060c5bb0538dd099b1d5dd6a\": container with ID starting with 9a16b2c8da5ad6d42bd5ce77ca31794858e04a05060c5bb0538dd099b1d5dd6a not found: ID does not exist" Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.669387 4844 scope.go:117] "RemoveContainer" containerID="03f4ffe2ba632bdfb8c45133ef267a29fa660e3aa85dfd99934ded60d32772f7" Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.669704 4844 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"03f4ffe2ba632bdfb8c45133ef267a29fa660e3aa85dfd99934ded60d32772f7"} err="failed to get container status \"03f4ffe2ba632bdfb8c45133ef267a29fa660e3aa85dfd99934ded60d32772f7\": rpc error: code = NotFound desc = could not find container \"03f4ffe2ba632bdfb8c45133ef267a29fa660e3aa85dfd99934ded60d32772f7\": container with ID starting with 03f4ffe2ba632bdfb8c45133ef267a29fa660e3aa85dfd99934ded60d32772f7 not found: ID does not exist" Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.669730 4844 scope.go:117] "RemoveContainer" containerID="370a02984d12e09b879a5f3af8d9fe26b40d438d558e51c2d352676f29198a02" Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.669958 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"370a02984d12e09b879a5f3af8d9fe26b40d438d558e51c2d352676f29198a02"} err="failed to get container status \"370a02984d12e09b879a5f3af8d9fe26b40d438d558e51c2d352676f29198a02\": rpc error: code = NotFound desc = could not find container \"370a02984d12e09b879a5f3af8d9fe26b40d438d558e51c2d352676f29198a02\": container with ID starting with 370a02984d12e09b879a5f3af8d9fe26b40d438d558e51c2d352676f29198a02 not found: ID does not exist" Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.669984 4844 scope.go:117] "RemoveContainer" containerID="ac85315b0d74ee35d1c9a356648480e02f7d36838cdf0d10e276bb2e15c6205e" Jan 26 13:09:39 crc kubenswrapper[4844]: I0126 13:09:39.670220 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac85315b0d74ee35d1c9a356648480e02f7d36838cdf0d10e276bb2e15c6205e"} err="failed to get container status \"ac85315b0d74ee35d1c9a356648480e02f7d36838cdf0d10e276bb2e15c6205e\": rpc error: code = NotFound desc = could not find container \"ac85315b0d74ee35d1c9a356648480e02f7d36838cdf0d10e276bb2e15c6205e\": container with ID starting with ac85315b0d74ee35d1c9a356648480e02f7d36838cdf0d10e276bb2e15c6205e not found: ID does not exist" Jan 26 13:09:40 crc kubenswrapper[4844]: I0126 13:09:40.368234 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-zb9kx_467433a4-64be-4a14-beb2-657370e9865f/kube-multus/2.log" Jan 26 13:09:40 crc kubenswrapper[4844]: I0126 13:09:40.370758 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-zb9kx_467433a4-64be-4a14-beb2-657370e9865f/kube-multus/1.log" Jan 26 13:09:40 crc kubenswrapper[4844]: I0126 13:09:40.370859 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-zb9kx" event={"ID":"467433a4-64be-4a14-beb2-657370e9865f","Type":"ContainerStarted","Data":"bd5c689a4907ed2071088a3132f901c816fbd751d3be6c4a1318706470a5d339"} Jan 26 13:09:40 crc kubenswrapper[4844]: I0126 13:09:40.377362 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m5knn" event={"ID":"8f86e600-c569-494a-ab11-7b624bf75257","Type":"ContainerStarted","Data":"5a3552f77a7c9e5c713d6abb5102518fadc246f2aed879c2075bce7bece954c6"} Jan 26 13:09:40 crc kubenswrapper[4844]: I0126 13:09:40.377420 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m5knn" event={"ID":"8f86e600-c569-494a-ab11-7b624bf75257","Type":"ContainerStarted","Data":"e66620ce9375c7e6739cc074057c34e575e7a6e8b7d6d76e1450c26546087b07"} Jan 26 13:09:40 crc kubenswrapper[4844]: I0126 13:09:40.377441 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m5knn" 
event={"ID":"8f86e600-c569-494a-ab11-7b624bf75257","Type":"ContainerStarted","Data":"0b3f5373fadf68f20abce86264ccb1fa357dbe9b772b888478665a6ac03a909b"} Jan 26 13:09:40 crc kubenswrapper[4844]: I0126 13:09:40.377458 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m5knn" event={"ID":"8f86e600-c569-494a-ab11-7b624bf75257","Type":"ContainerStarted","Data":"a94cc8d87619449b2f594e5f949670fe85f9174044a33c169d1d59c47a6def59"} Jan 26 13:09:40 crc kubenswrapper[4844]: I0126 13:09:40.377474 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m5knn" event={"ID":"8f86e600-c569-494a-ab11-7b624bf75257","Type":"ContainerStarted","Data":"a8aa0a5e2420ae055c2d44213e9a0c8bfbd9b313f31d44ca235060a152a98203"} Jan 26 13:09:40 crc kubenswrapper[4844]: I0126 13:09:40.377490 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m5knn" event={"ID":"8f86e600-c569-494a-ab11-7b624bf75257","Type":"ContainerStarted","Data":"efc0c98d344bdca0f39f60616da6a4d70c6d56e0ff7b9cae3fa38a4827ab0632"} Jan 26 13:09:41 crc kubenswrapper[4844]: I0126 13:09:41.321075 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="348a2956-fe61-43b9-858f-ab9c97a2985b" path="/var/lib/kubelet/pods/348a2956-fe61-43b9-858f-ab9c97a2985b/volumes" Jan 26 13:09:42 crc kubenswrapper[4844]: I0126 13:09:42.417683 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m5knn" event={"ID":"8f86e600-c569-494a-ab11-7b624bf75257","Type":"ContainerStarted","Data":"6cf2e759174f45dbe7c69a12822f99a66afb5458d997e783afa7ef47f1213d39"} Jan 26 13:09:43 crc kubenswrapper[4844]: I0126 13:09:43.716242 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-7xbzs" Jan 26 13:09:44 crc kubenswrapper[4844]: I0126 13:09:44.038291 4844 scope.go:117] "RemoveContainer" containerID="9be6a90cf1d7f75bb43391968d164c8726b7626d7dc649cd85f10c4d13424ab9" Jan 26 13:09:44 crc kubenswrapper[4844]: I0126 13:09:44.436390 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-zb9kx_467433a4-64be-4a14-beb2-657370e9865f/kube-multus/2.log" Jan 26 13:09:45 crc kubenswrapper[4844]: I0126 13:09:45.445710 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m5knn" event={"ID":"8f86e600-c569-494a-ab11-7b624bf75257","Type":"ContainerStarted","Data":"9e3a66830719ffab1e43024bc3de90f174214ac1c4ad921723d891f78b38baa4"} Jan 26 13:09:45 crc kubenswrapper[4844]: I0126 13:09:45.446154 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-m5knn" Jan 26 13:09:45 crc kubenswrapper[4844]: I0126 13:09:45.446192 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-m5knn" Jan 26 13:09:45 crc kubenswrapper[4844]: I0126 13:09:45.446205 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-m5knn" Jan 26 13:09:45 crc kubenswrapper[4844]: I0126 13:09:45.477152 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-m5knn" podStartSLOduration=7.477133765 podStartE2EDuration="7.477133765s" podCreationTimestamp="2026-01-26 13:09:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-26 13:09:45.472681547 +0000 UTC m=+1562.406049179" watchObservedRunningTime="2026-01-26 13:09:45.477133765 +0000 UTC m=+1562.410501387" Jan 26 13:09:45 crc kubenswrapper[4844]: I0126 13:09:45.487050 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-m5knn" Jan 26 13:09:45 crc kubenswrapper[4844]: I0126 13:09:45.487120 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-m5knn" Jan 26 13:09:51 crc kubenswrapper[4844]: I0126 13:09:51.313774 4844 scope.go:117] "RemoveContainer" containerID="8bd0e29f4f3396a7e270924b961a9b78ffe005995a8558400b22ba50617d8a7e" Jan 26 13:09:51 crc kubenswrapper[4844]: E0126 13:09:51.314814 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 13:10:05 crc kubenswrapper[4844]: I0126 13:10:05.313914 4844 scope.go:117] "RemoveContainer" containerID="8bd0e29f4f3396a7e270924b961a9b78ffe005995a8558400b22ba50617d8a7e" Jan 26 13:10:05 crc kubenswrapper[4844]: E0126 13:10:05.314914 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 13:10:09 crc kubenswrapper[4844]: I0126 13:10:09.042899 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-m5knn" Jan 26 13:10:17 crc kubenswrapper[4844]: I0126 13:10:17.313174 4844 scope.go:117] "RemoveContainer" containerID="8bd0e29f4f3396a7e270924b961a9b78ffe005995a8558400b22ba50617d8a7e" Jan 26 13:10:17 crc kubenswrapper[4844]: E0126 13:10:17.313711 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 13:10:20 crc kubenswrapper[4844]: I0126 13:10:20.078693 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08vhrwh"] Jan 26 13:10:20 crc kubenswrapper[4844]: I0126 13:10:20.079972 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08vhrwh" Jan 26 13:10:20 crc kubenswrapper[4844]: I0126 13:10:20.085274 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 26 13:10:20 crc kubenswrapper[4844]: I0126 13:10:20.091049 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08vhrwh"] Jan 26 13:10:20 crc kubenswrapper[4844]: I0126 13:10:20.216588 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fht4k\" (UniqueName: \"kubernetes.io/projected/bc9484bc-f8ef-463e-8d9e-c7d6e7f02cdc-kube-api-access-fht4k\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08vhrwh\" (UID: \"bc9484bc-f8ef-463e-8d9e-c7d6e7f02cdc\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08vhrwh" Jan 26 13:10:20 crc kubenswrapper[4844]: I0126 13:10:20.216704 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bc9484bc-f8ef-463e-8d9e-c7d6e7f02cdc-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08vhrwh\" (UID: \"bc9484bc-f8ef-463e-8d9e-c7d6e7f02cdc\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08vhrwh" Jan 26 13:10:20 crc kubenswrapper[4844]: I0126 13:10:20.216765 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bc9484bc-f8ef-463e-8d9e-c7d6e7f02cdc-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08vhrwh\" (UID: \"bc9484bc-f8ef-463e-8d9e-c7d6e7f02cdc\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08vhrwh" Jan 26 13:10:20 crc kubenswrapper[4844]: I0126 13:10:20.318559 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fht4k\" (UniqueName: \"kubernetes.io/projected/bc9484bc-f8ef-463e-8d9e-c7d6e7f02cdc-kube-api-access-fht4k\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08vhrwh\" (UID: \"bc9484bc-f8ef-463e-8d9e-c7d6e7f02cdc\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08vhrwh" Jan 26 13:10:20 crc kubenswrapper[4844]: I0126 13:10:20.318632 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bc9484bc-f8ef-463e-8d9e-c7d6e7f02cdc-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08vhrwh\" (UID: \"bc9484bc-f8ef-463e-8d9e-c7d6e7f02cdc\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08vhrwh" Jan 26 13:10:20 crc kubenswrapper[4844]: I0126 13:10:20.318653 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bc9484bc-f8ef-463e-8d9e-c7d6e7f02cdc-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08vhrwh\" (UID: \"bc9484bc-f8ef-463e-8d9e-c7d6e7f02cdc\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08vhrwh" Jan 26 13:10:20 crc kubenswrapper[4844]: I0126 13:10:20.319061 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/bc9484bc-f8ef-463e-8d9e-c7d6e7f02cdc-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08vhrwh\" (UID: \"bc9484bc-f8ef-463e-8d9e-c7d6e7f02cdc\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08vhrwh" Jan 26 13:10:20 crc kubenswrapper[4844]: I0126 13:10:20.319419 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bc9484bc-f8ef-463e-8d9e-c7d6e7f02cdc-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08vhrwh\" (UID: \"bc9484bc-f8ef-463e-8d9e-c7d6e7f02cdc\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08vhrwh" Jan 26 13:10:20 crc kubenswrapper[4844]: I0126 13:10:20.338550 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fht4k\" (UniqueName: \"kubernetes.io/projected/bc9484bc-f8ef-463e-8d9e-c7d6e7f02cdc-kube-api-access-fht4k\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08vhrwh\" (UID: \"bc9484bc-f8ef-463e-8d9e-c7d6e7f02cdc\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08vhrwh" Jan 26 13:10:20 crc kubenswrapper[4844]: I0126 13:10:20.395524 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08vhrwh" Jan 26 13:10:20 crc kubenswrapper[4844]: I0126 13:10:20.668419 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08vhrwh"] Jan 26 13:10:21 crc kubenswrapper[4844]: I0126 13:10:21.671051 4844 generic.go:334] "Generic (PLEG): container finished" podID="bc9484bc-f8ef-463e-8d9e-c7d6e7f02cdc" containerID="6092632926c7c05161bab6ee3fcd9818e92589ea6db14e149526da956476e846" exitCode=0 Jan 26 13:10:21 crc kubenswrapper[4844]: I0126 13:10:21.671162 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08vhrwh" event={"ID":"bc9484bc-f8ef-463e-8d9e-c7d6e7f02cdc","Type":"ContainerDied","Data":"6092632926c7c05161bab6ee3fcd9818e92589ea6db14e149526da956476e846"} Jan 26 13:10:21 crc kubenswrapper[4844]: I0126 13:10:21.671532 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08vhrwh" event={"ID":"bc9484bc-f8ef-463e-8d9e-c7d6e7f02cdc","Type":"ContainerStarted","Data":"29deb0ad3adbcbe57c892aaade4da3c26c1d2eb28e84713513663c4d7325d5ba"} Jan 26 13:10:22 crc kubenswrapper[4844]: I0126 13:10:22.432158 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-l729g"] Jan 26 13:10:22 crc kubenswrapper[4844]: I0126 13:10:22.434546 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-l729g" Jan 26 13:10:22 crc kubenswrapper[4844]: I0126 13:10:22.452136 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-l729g"] Jan 26 13:10:22 crc kubenswrapper[4844]: I0126 13:10:22.546558 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cace70b8-0b61-447f-a677-8fd4f9fa5fd2-utilities\") pod \"redhat-operators-l729g\" (UID: \"cace70b8-0b61-447f-a677-8fd4f9fa5fd2\") " pod="openshift-marketplace/redhat-operators-l729g" Jan 26 13:10:22 crc kubenswrapper[4844]: I0126 13:10:22.546826 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zrch\" (UniqueName: \"kubernetes.io/projected/cace70b8-0b61-447f-a677-8fd4f9fa5fd2-kube-api-access-5zrch\") pod \"redhat-operators-l729g\" (UID: \"cace70b8-0b61-447f-a677-8fd4f9fa5fd2\") " pod="openshift-marketplace/redhat-operators-l729g" Jan 26 13:10:22 crc kubenswrapper[4844]: I0126 13:10:22.546898 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cace70b8-0b61-447f-a677-8fd4f9fa5fd2-catalog-content\") pod \"redhat-operators-l729g\" (UID: \"cace70b8-0b61-447f-a677-8fd4f9fa5fd2\") " pod="openshift-marketplace/redhat-operators-l729g" Jan 26 13:10:22 crc kubenswrapper[4844]: I0126 13:10:22.648577 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zrch\" (UniqueName: \"kubernetes.io/projected/cace70b8-0b61-447f-a677-8fd4f9fa5fd2-kube-api-access-5zrch\") pod \"redhat-operators-l729g\" (UID: \"cace70b8-0b61-447f-a677-8fd4f9fa5fd2\") " pod="openshift-marketplace/redhat-operators-l729g" Jan 26 13:10:22 crc kubenswrapper[4844]: I0126 13:10:22.648676 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cace70b8-0b61-447f-a677-8fd4f9fa5fd2-catalog-content\") pod \"redhat-operators-l729g\" (UID: \"cace70b8-0b61-447f-a677-8fd4f9fa5fd2\") " pod="openshift-marketplace/redhat-operators-l729g" Jan 26 13:10:22 crc kubenswrapper[4844]: I0126 13:10:22.648736 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cace70b8-0b61-447f-a677-8fd4f9fa5fd2-utilities\") pod \"redhat-operators-l729g\" (UID: \"cace70b8-0b61-447f-a677-8fd4f9fa5fd2\") " pod="openshift-marketplace/redhat-operators-l729g" Jan 26 13:10:22 crc kubenswrapper[4844]: I0126 13:10:22.649428 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cace70b8-0b61-447f-a677-8fd4f9fa5fd2-utilities\") pod \"redhat-operators-l729g\" (UID: \"cace70b8-0b61-447f-a677-8fd4f9fa5fd2\") " pod="openshift-marketplace/redhat-operators-l729g" Jan 26 13:10:22 crc kubenswrapper[4844]: I0126 13:10:22.650230 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cace70b8-0b61-447f-a677-8fd4f9fa5fd2-catalog-content\") pod \"redhat-operators-l729g\" (UID: \"cace70b8-0b61-447f-a677-8fd4f9fa5fd2\") " pod="openshift-marketplace/redhat-operators-l729g" Jan 26 13:10:22 crc kubenswrapper[4844]: I0126 13:10:22.668800 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-5zrch\" (UniqueName: \"kubernetes.io/projected/cace70b8-0b61-447f-a677-8fd4f9fa5fd2-kube-api-access-5zrch\") pod \"redhat-operators-l729g\" (UID: \"cace70b8-0b61-447f-a677-8fd4f9fa5fd2\") " pod="openshift-marketplace/redhat-operators-l729g" Jan 26 13:10:22 crc kubenswrapper[4844]: I0126 13:10:22.770463 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l729g" Jan 26 13:10:23 crc kubenswrapper[4844]: I0126 13:10:23.015891 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-l729g"] Jan 26 13:10:23 crc kubenswrapper[4844]: I0126 13:10:23.683471 4844 generic.go:334] "Generic (PLEG): container finished" podID="cace70b8-0b61-447f-a677-8fd4f9fa5fd2" containerID="b3bd6ddf068e6bd3c82138266ea0e91b6ef9a533c83de7695b1be0dac7f7d471" exitCode=0 Jan 26 13:10:23 crc kubenswrapper[4844]: I0126 13:10:23.683663 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l729g" event={"ID":"cace70b8-0b61-447f-a677-8fd4f9fa5fd2","Type":"ContainerDied","Data":"b3bd6ddf068e6bd3c82138266ea0e91b6ef9a533c83de7695b1be0dac7f7d471"} Jan 26 13:10:23 crc kubenswrapper[4844]: I0126 13:10:23.683770 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l729g" event={"ID":"cace70b8-0b61-447f-a677-8fd4f9fa5fd2","Type":"ContainerStarted","Data":"86f96e5bd72584871853b959ca6091576483fa1cdab3a4cdf1d11b6c1dad47d2"} Jan 26 13:10:25 crc kubenswrapper[4844]: I0126 13:10:25.700377 4844 generic.go:334] "Generic (PLEG): container finished" podID="bc9484bc-f8ef-463e-8d9e-c7d6e7f02cdc" containerID="d39419553e021d96cdf30220d0d60d4669da4f72551bd0346bd475115705c9a5" exitCode=0 Jan 26 13:10:25 crc kubenswrapper[4844]: I0126 13:10:25.700463 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08vhrwh" event={"ID":"bc9484bc-f8ef-463e-8d9e-c7d6e7f02cdc","Type":"ContainerDied","Data":"d39419553e021d96cdf30220d0d60d4669da4f72551bd0346bd475115705c9a5"} Jan 26 13:10:26 crc kubenswrapper[4844]: I0126 13:10:26.719497 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08vhrwh" event={"ID":"bc9484bc-f8ef-463e-8d9e-c7d6e7f02cdc","Type":"ContainerStarted","Data":"f7f1a05b9e766de563c253f1a76d120a1943f07d492d74e735ff999138bcd4f4"} Jan 26 13:10:26 crc kubenswrapper[4844]: I0126 13:10:26.722255 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l729g" event={"ID":"cace70b8-0b61-447f-a677-8fd4f9fa5fd2","Type":"ContainerStarted","Data":"e5005e8e72a069c3498e47926872f6c331a1f23f86dff3d1cf967cadf5e91728"} Jan 26 13:10:26 crc kubenswrapper[4844]: I0126 13:10:26.740546 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08vhrwh" podStartSLOduration=2.942473352 podStartE2EDuration="6.740527432s" podCreationTimestamp="2026-01-26 13:10:20 +0000 UTC" firstStartedPulling="2026-01-26 13:10:21.675041505 +0000 UTC m=+1598.608409157" lastFinishedPulling="2026-01-26 13:10:25.473095625 +0000 UTC m=+1602.406463237" observedRunningTime="2026-01-26 13:10:26.7395832 +0000 UTC m=+1603.672950852" watchObservedRunningTime="2026-01-26 13:10:26.740527432 +0000 UTC m=+1603.673895054" Jan 26 13:10:27 crc kubenswrapper[4844]: I0126 13:10:27.729901 4844 
generic.go:334] "Generic (PLEG): container finished" podID="bc9484bc-f8ef-463e-8d9e-c7d6e7f02cdc" containerID="f7f1a05b9e766de563c253f1a76d120a1943f07d492d74e735ff999138bcd4f4" exitCode=0 Jan 26 13:10:27 crc kubenswrapper[4844]: I0126 13:10:27.729965 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08vhrwh" event={"ID":"bc9484bc-f8ef-463e-8d9e-c7d6e7f02cdc","Type":"ContainerDied","Data":"f7f1a05b9e766de563c253f1a76d120a1943f07d492d74e735ff999138bcd4f4"} Jan 26 13:10:27 crc kubenswrapper[4844]: I0126 13:10:27.732064 4844 generic.go:334] "Generic (PLEG): container finished" podID="cace70b8-0b61-447f-a677-8fd4f9fa5fd2" containerID="e5005e8e72a069c3498e47926872f6c331a1f23f86dff3d1cf967cadf5e91728" exitCode=0 Jan 26 13:10:27 crc kubenswrapper[4844]: I0126 13:10:27.732095 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l729g" event={"ID":"cace70b8-0b61-447f-a677-8fd4f9fa5fd2","Type":"ContainerDied","Data":"e5005e8e72a069c3498e47926872f6c331a1f23f86dff3d1cf967cadf5e91728"} Jan 26 13:10:28 crc kubenswrapper[4844]: I0126 13:10:28.742882 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l729g" event={"ID":"cace70b8-0b61-447f-a677-8fd4f9fa5fd2","Type":"ContainerStarted","Data":"f6aea8b4d5cc97fe56ae7bfa2ca4623ce823a8fc6e42f195df62ee127ac7ba19"} Jan 26 13:10:28 crc kubenswrapper[4844]: I0126 13:10:28.766766 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-l729g" podStartSLOduration=3.405759691 podStartE2EDuration="6.766744007s" podCreationTimestamp="2026-01-26 13:10:22 +0000 UTC" firstStartedPulling="2026-01-26 13:10:24.964878286 +0000 UTC m=+1601.898245928" lastFinishedPulling="2026-01-26 13:10:28.325862602 +0000 UTC m=+1605.259230244" observedRunningTime="2026-01-26 13:10:28.765088587 +0000 UTC m=+1605.698456249" watchObservedRunningTime="2026-01-26 13:10:28.766744007 +0000 UTC m=+1605.700111639" Jan 26 13:10:29 crc kubenswrapper[4844]: I0126 13:10:29.003590 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08vhrwh" Jan 26 13:10:29 crc kubenswrapper[4844]: I0126 13:10:29.036131 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fht4k\" (UniqueName: \"kubernetes.io/projected/bc9484bc-f8ef-463e-8d9e-c7d6e7f02cdc-kube-api-access-fht4k\") pod \"bc9484bc-f8ef-463e-8d9e-c7d6e7f02cdc\" (UID: \"bc9484bc-f8ef-463e-8d9e-c7d6e7f02cdc\") " Jan 26 13:10:29 crc kubenswrapper[4844]: I0126 13:10:29.036230 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bc9484bc-f8ef-463e-8d9e-c7d6e7f02cdc-bundle\") pod \"bc9484bc-f8ef-463e-8d9e-c7d6e7f02cdc\" (UID: \"bc9484bc-f8ef-463e-8d9e-c7d6e7f02cdc\") " Jan 26 13:10:29 crc kubenswrapper[4844]: I0126 13:10:29.036283 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bc9484bc-f8ef-463e-8d9e-c7d6e7f02cdc-util\") pod \"bc9484bc-f8ef-463e-8d9e-c7d6e7f02cdc\" (UID: \"bc9484bc-f8ef-463e-8d9e-c7d6e7f02cdc\") " Jan 26 13:10:29 crc kubenswrapper[4844]: I0126 13:10:29.042483 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc9484bc-f8ef-463e-8d9e-c7d6e7f02cdc-kube-api-access-fht4k" (OuterVolumeSpecName: "kube-api-access-fht4k") pod "bc9484bc-f8ef-463e-8d9e-c7d6e7f02cdc" (UID: "bc9484bc-f8ef-463e-8d9e-c7d6e7f02cdc"). InnerVolumeSpecName "kube-api-access-fht4k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:10:29 crc kubenswrapper[4844]: I0126 13:10:29.042831 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc9484bc-f8ef-463e-8d9e-c7d6e7f02cdc-bundle" (OuterVolumeSpecName: "bundle") pod "bc9484bc-f8ef-463e-8d9e-c7d6e7f02cdc" (UID: "bc9484bc-f8ef-463e-8d9e-c7d6e7f02cdc"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:10:29 crc kubenswrapper[4844]: I0126 13:10:29.047961 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc9484bc-f8ef-463e-8d9e-c7d6e7f02cdc-util" (OuterVolumeSpecName: "util") pod "bc9484bc-f8ef-463e-8d9e-c7d6e7f02cdc" (UID: "bc9484bc-f8ef-463e-8d9e-c7d6e7f02cdc"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:10:29 crc kubenswrapper[4844]: I0126 13:10:29.138145 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fht4k\" (UniqueName: \"kubernetes.io/projected/bc9484bc-f8ef-463e-8d9e-c7d6e7f02cdc-kube-api-access-fht4k\") on node \"crc\" DevicePath \"\"" Jan 26 13:10:29 crc kubenswrapper[4844]: I0126 13:10:29.138459 4844 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bc9484bc-f8ef-463e-8d9e-c7d6e7f02cdc-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 13:10:29 crc kubenswrapper[4844]: I0126 13:10:29.138468 4844 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bc9484bc-f8ef-463e-8d9e-c7d6e7f02cdc-util\") on node \"crc\" DevicePath \"\"" Jan 26 13:10:29 crc kubenswrapper[4844]: I0126 13:10:29.751915 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08vhrwh" Jan 26 13:10:29 crc kubenswrapper[4844]: I0126 13:10:29.751903 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08vhrwh" event={"ID":"bc9484bc-f8ef-463e-8d9e-c7d6e7f02cdc","Type":"ContainerDied","Data":"29deb0ad3adbcbe57c892aaade4da3c26c1d2eb28e84713513663c4d7325d5ba"} Jan 26 13:10:29 crc kubenswrapper[4844]: I0126 13:10:29.752003 4844 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="29deb0ad3adbcbe57c892aaade4da3c26c1d2eb28e84713513663c4d7325d5ba" Jan 26 13:10:32 crc kubenswrapper[4844]: I0126 13:10:32.312857 4844 scope.go:117] "RemoveContainer" containerID="8bd0e29f4f3396a7e270924b961a9b78ffe005995a8558400b22ba50617d8a7e" Jan 26 13:10:32 crc kubenswrapper[4844]: E0126 13:10:32.313831 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 13:10:32 crc kubenswrapper[4844]: I0126 13:10:32.770898 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-l729g" Jan 26 13:10:32 crc kubenswrapper[4844]: I0126 13:10:32.772794 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-l729g" Jan 26 13:10:33 crc kubenswrapper[4844]: I0126 13:10:33.846247 4844 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-l729g" podUID="cace70b8-0b61-447f-a677-8fd4f9fa5fd2" containerName="registry-server" probeResult="failure" output=< Jan 26 13:10:33 crc kubenswrapper[4844]: timeout: failed to connect service ":50051" within 1s Jan 26 13:10:33 crc kubenswrapper[4844]: > Jan 26 13:10:37 crc kubenswrapper[4844]: I0126 13:10:37.812212 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-dg7zb"] Jan 26 13:10:37 crc kubenswrapper[4844]: E0126 13:10:37.812817 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc9484bc-f8ef-463e-8d9e-c7d6e7f02cdc" containerName="extract" Jan 26 13:10:37 crc kubenswrapper[4844]: I0126 13:10:37.812833 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc9484bc-f8ef-463e-8d9e-c7d6e7f02cdc" containerName="extract" Jan 26 13:10:37 crc kubenswrapper[4844]: E0126 13:10:37.812855 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc9484bc-f8ef-463e-8d9e-c7d6e7f02cdc" containerName="util" Jan 26 13:10:37 crc kubenswrapper[4844]: I0126 13:10:37.812863 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc9484bc-f8ef-463e-8d9e-c7d6e7f02cdc" containerName="util" Jan 26 13:10:37 crc kubenswrapper[4844]: E0126 13:10:37.812873 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc9484bc-f8ef-463e-8d9e-c7d6e7f02cdc" containerName="pull" Jan 26 13:10:37 crc kubenswrapper[4844]: I0126 13:10:37.812880 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc9484bc-f8ef-463e-8d9e-c7d6e7f02cdc" containerName="pull" Jan 26 13:10:37 crc kubenswrapper[4844]: I0126 13:10:37.812993 4844 
memory_manager.go:354] "RemoveStaleState removing state" podUID="bc9484bc-f8ef-463e-8d9e-c7d6e7f02cdc" containerName="extract" Jan 26 13:10:37 crc kubenswrapper[4844]: I0126 13:10:37.813445 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-dg7zb" Jan 26 13:10:37 crc kubenswrapper[4844]: I0126 13:10:37.815758 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-cs4rg" Jan 26 13:10:37 crc kubenswrapper[4844]: I0126 13:10:37.816000 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Jan 26 13:10:37 crc kubenswrapper[4844]: I0126 13:10:37.815786 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Jan 26 13:10:37 crc kubenswrapper[4844]: I0126 13:10:37.838721 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-dg7zb"] Jan 26 13:10:37 crc kubenswrapper[4844]: I0126 13:10:37.943356 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6b87948799-mvsq5"] Jan 26 13:10:37 crc kubenswrapper[4844]: I0126 13:10:37.944243 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6b87948799-mvsq5" Jan 26 13:10:37 crc kubenswrapper[4844]: W0126 13:10:37.946293 4844 reflector.go:561] object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert": failed to list *v1.Secret: secrets "obo-prometheus-operator-admission-webhook-service-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-operators": no relationship found between node 'crc' and this object Jan 26 13:10:37 crc kubenswrapper[4844]: E0126 13:10:37.946332 4844 reflector.go:158] "Unhandled Error" err="object-\"openshift-operators\"/\"obo-prometheus-operator-admission-webhook-service-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"obo-prometheus-operator-admission-webhook-service-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-operators\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 26 13:10:37 crc kubenswrapper[4844]: I0126 13:10:37.947510 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-jqcc5" Jan 26 13:10:37 crc kubenswrapper[4844]: I0126 13:10:37.957375 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6b87948799-mvsq5"] Jan 26 13:10:37 crc kubenswrapper[4844]: I0126 13:10:37.961172 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gv9lt\" (UniqueName: \"kubernetes.io/projected/1dec1dad-33cd-4ea8-9f69-9e69e0f56e73-kube-api-access-gv9lt\") pod \"obo-prometheus-operator-68bc856cb9-dg7zb\" (UID: \"1dec1dad-33cd-4ea8-9f69-9e69e0f56e73\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-dg7zb" Jan 26 13:10:37 crc kubenswrapper[4844]: I0126 13:10:37.961882 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6b87948799-68hvv"] Jan 
26 13:10:37 crc kubenswrapper[4844]: I0126 13:10:37.962482 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6b87948799-68hvv" Jan 26 13:10:37 crc kubenswrapper[4844]: I0126 13:10:37.988919 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6b87948799-68hvv"] Jan 26 13:10:38 crc kubenswrapper[4844]: I0126 13:10:38.062502 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b2533187-bdf5-44b9-a05d-ceb2e2ea467b-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6b87948799-mvsq5\" (UID: \"b2533187-bdf5-44b9-a05d-ceb2e2ea467b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6b87948799-mvsq5" Jan 26 13:10:38 crc kubenswrapper[4844]: I0126 13:10:38.062789 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gv9lt\" (UniqueName: \"kubernetes.io/projected/1dec1dad-33cd-4ea8-9f69-9e69e0f56e73-kube-api-access-gv9lt\") pod \"obo-prometheus-operator-68bc856cb9-dg7zb\" (UID: \"1dec1dad-33cd-4ea8-9f69-9e69e0f56e73\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-dg7zb" Jan 26 13:10:38 crc kubenswrapper[4844]: I0126 13:10:38.062931 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b2533187-bdf5-44b9-a05d-ceb2e2ea467b-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6b87948799-mvsq5\" (UID: \"b2533187-bdf5-44b9-a05d-ceb2e2ea467b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6b87948799-mvsq5" Jan 26 13:10:38 crc kubenswrapper[4844]: I0126 13:10:38.063023 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/321b4c21-0d4a-49d5-a14a-9f49e2ea5600-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6b87948799-68hvv\" (UID: \"321b4c21-0d4a-49d5-a14a-9f49e2ea5600\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6b87948799-68hvv" Jan 26 13:10:38 crc kubenswrapper[4844]: I0126 13:10:38.063098 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/321b4c21-0d4a-49d5-a14a-9f49e2ea5600-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6b87948799-68hvv\" (UID: \"321b4c21-0d4a-49d5-a14a-9f49e2ea5600\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6b87948799-68hvv" Jan 26 13:10:38 crc kubenswrapper[4844]: I0126 13:10:38.085219 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gv9lt\" (UniqueName: \"kubernetes.io/projected/1dec1dad-33cd-4ea8-9f69-9e69e0f56e73-kube-api-access-gv9lt\") pod \"obo-prometheus-operator-68bc856cb9-dg7zb\" (UID: \"1dec1dad-33cd-4ea8-9f69-9e69e0f56e73\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-dg7zb" Jan 26 13:10:38 crc kubenswrapper[4844]: I0126 13:10:38.133760 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-dg7zb" Jan 26 13:10:38 crc kubenswrapper[4844]: I0126 13:10:38.148698 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-clgj9"] Jan 26 13:10:38 crc kubenswrapper[4844]: I0126 13:10:38.149387 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-clgj9" Jan 26 13:10:38 crc kubenswrapper[4844]: I0126 13:10:38.152351 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-tx89b" Jan 26 13:10:38 crc kubenswrapper[4844]: I0126 13:10:38.153557 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Jan 26 13:10:38 crc kubenswrapper[4844]: I0126 13:10:38.164176 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/321b4c21-0d4a-49d5-a14a-9f49e2ea5600-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6b87948799-68hvv\" (UID: \"321b4c21-0d4a-49d5-a14a-9f49e2ea5600\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6b87948799-68hvv" Jan 26 13:10:38 crc kubenswrapper[4844]: I0126 13:10:38.164631 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/321b4c21-0d4a-49d5-a14a-9f49e2ea5600-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6b87948799-68hvv\" (UID: \"321b4c21-0d4a-49d5-a14a-9f49e2ea5600\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6b87948799-68hvv" Jan 26 13:10:38 crc kubenswrapper[4844]: I0126 13:10:38.164736 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b2533187-bdf5-44b9-a05d-ceb2e2ea467b-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6b87948799-mvsq5\" (UID: \"b2533187-bdf5-44b9-a05d-ceb2e2ea467b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6b87948799-mvsq5" Jan 26 13:10:38 crc kubenswrapper[4844]: I0126 13:10:38.164979 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b2533187-bdf5-44b9-a05d-ceb2e2ea467b-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6b87948799-mvsq5\" (UID: \"b2533187-bdf5-44b9-a05d-ceb2e2ea467b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6b87948799-mvsq5" Jan 26 13:10:38 crc kubenswrapper[4844]: I0126 13:10:38.177842 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-clgj9"] Jan 26 13:10:38 crc kubenswrapper[4844]: I0126 13:10:38.266239 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svmmx\" (UniqueName: \"kubernetes.io/projected/50efd8fd-16d6-4d82-a9f0-ea82c4d50c4c-kube-api-access-svmmx\") pod \"observability-operator-59bdc8b94-clgj9\" (UID: \"50efd8fd-16d6-4d82-a9f0-ea82c4d50c4c\") " pod="openshift-operators/observability-operator-59bdc8b94-clgj9" Jan 26 13:10:38 crc kubenswrapper[4844]: I0126 13:10:38.266290 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/50efd8fd-16d6-4d82-a9f0-ea82c4d50c4c-observability-operator-tls\") pod \"observability-operator-59bdc8b94-clgj9\" (UID: \"50efd8fd-16d6-4d82-a9f0-ea82c4d50c4c\") " pod="openshift-operators/observability-operator-59bdc8b94-clgj9" Jan 26 13:10:38 crc kubenswrapper[4844]: I0126 13:10:38.344374 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-sjw9j"] Jan 26 13:10:38 crc kubenswrapper[4844]: I0126 13:10:38.345424 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-sjw9j" Jan 26 13:10:38 crc kubenswrapper[4844]: I0126 13:10:38.350114 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-js2td" Jan 26 13:10:38 crc kubenswrapper[4844]: I0126 13:10:38.367019 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-sjw9j"] Jan 26 13:10:38 crc kubenswrapper[4844]: I0126 13:10:38.367061 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svmmx\" (UniqueName: \"kubernetes.io/projected/50efd8fd-16d6-4d82-a9f0-ea82c4d50c4c-kube-api-access-svmmx\") pod \"observability-operator-59bdc8b94-clgj9\" (UID: \"50efd8fd-16d6-4d82-a9f0-ea82c4d50c4c\") " pod="openshift-operators/observability-operator-59bdc8b94-clgj9" Jan 26 13:10:38 crc kubenswrapper[4844]: I0126 13:10:38.367090 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/50efd8fd-16d6-4d82-a9f0-ea82c4d50c4c-observability-operator-tls\") pod \"observability-operator-59bdc8b94-clgj9\" (UID: \"50efd8fd-16d6-4d82-a9f0-ea82c4d50c4c\") " pod="openshift-operators/observability-operator-59bdc8b94-clgj9" Jan 26 13:10:38 crc kubenswrapper[4844]: I0126 13:10:38.372059 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/50efd8fd-16d6-4d82-a9f0-ea82c4d50c4c-observability-operator-tls\") pod \"observability-operator-59bdc8b94-clgj9\" (UID: \"50efd8fd-16d6-4d82-a9f0-ea82c4d50c4c\") " pod="openshift-operators/observability-operator-59bdc8b94-clgj9" Jan 26 13:10:38 crc kubenswrapper[4844]: I0126 13:10:38.384458 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svmmx\" (UniqueName: \"kubernetes.io/projected/50efd8fd-16d6-4d82-a9f0-ea82c4d50c4c-kube-api-access-svmmx\") pod \"observability-operator-59bdc8b94-clgj9\" (UID: \"50efd8fd-16d6-4d82-a9f0-ea82c4d50c4c\") " pod="openshift-operators/observability-operator-59bdc8b94-clgj9" Jan 26 13:10:38 crc kubenswrapper[4844]: I0126 13:10:38.468487 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pfmc\" (UniqueName: \"kubernetes.io/projected/a9734a40-f918-40da-9931-7d55904a646a-kube-api-access-9pfmc\") pod \"perses-operator-5bf474d74f-sjw9j\" (UID: \"a9734a40-f918-40da-9931-7d55904a646a\") " pod="openshift-operators/perses-operator-5bf474d74f-sjw9j" Jan 26 13:10:38 crc kubenswrapper[4844]: I0126 13:10:38.468852 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/a9734a40-f918-40da-9931-7d55904a646a-openshift-service-ca\") pod \"perses-operator-5bf474d74f-sjw9j\" (UID: \"a9734a40-f918-40da-9931-7d55904a646a\") " 
pod="openshift-operators/perses-operator-5bf474d74f-sjw9j" Jan 26 13:10:38 crc kubenswrapper[4844]: I0126 13:10:38.500203 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-clgj9" Jan 26 13:10:38 crc kubenswrapper[4844]: I0126 13:10:38.570490 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/a9734a40-f918-40da-9931-7d55904a646a-openshift-service-ca\") pod \"perses-operator-5bf474d74f-sjw9j\" (UID: \"a9734a40-f918-40da-9931-7d55904a646a\") " pod="openshift-operators/perses-operator-5bf474d74f-sjw9j" Jan 26 13:10:38 crc kubenswrapper[4844]: I0126 13:10:38.570559 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9pfmc\" (UniqueName: \"kubernetes.io/projected/a9734a40-f918-40da-9931-7d55904a646a-kube-api-access-9pfmc\") pod \"perses-operator-5bf474d74f-sjw9j\" (UID: \"a9734a40-f918-40da-9931-7d55904a646a\") " pod="openshift-operators/perses-operator-5bf474d74f-sjw9j" Jan 26 13:10:38 crc kubenswrapper[4844]: I0126 13:10:38.571710 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/a9734a40-f918-40da-9931-7d55904a646a-openshift-service-ca\") pod \"perses-operator-5bf474d74f-sjw9j\" (UID: \"a9734a40-f918-40da-9931-7d55904a646a\") " pod="openshift-operators/perses-operator-5bf474d74f-sjw9j" Jan 26 13:10:38 crc kubenswrapper[4844]: I0126 13:10:38.601325 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9pfmc\" (UniqueName: \"kubernetes.io/projected/a9734a40-f918-40da-9931-7d55904a646a-kube-api-access-9pfmc\") pod \"perses-operator-5bf474d74f-sjw9j\" (UID: \"a9734a40-f918-40da-9931-7d55904a646a\") " pod="openshift-operators/perses-operator-5bf474d74f-sjw9j" Jan 26 13:10:38 crc kubenswrapper[4844]: I0126 13:10:38.616258 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-dg7zb"] Jan 26 13:10:38 crc kubenswrapper[4844]: W0126 13:10:38.626674 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1dec1dad_33cd_4ea8_9f69_9e69e0f56e73.slice/crio-4f1c9a094d8db79044373e169c06999f811411dba999c373be91bfe5029a90a1 WatchSource:0}: Error finding container 4f1c9a094d8db79044373e169c06999f811411dba999c373be91bfe5029a90a1: Status 404 returned error can't find the container with id 4f1c9a094d8db79044373e169c06999f811411dba999c373be91bfe5029a90a1 Jan 26 13:10:38 crc kubenswrapper[4844]: I0126 13:10:38.667131 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-sjw9j" Jan 26 13:10:38 crc kubenswrapper[4844]: I0126 13:10:38.706394 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-clgj9"] Jan 26 13:10:38 crc kubenswrapper[4844]: W0126 13:10:38.713834 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod50efd8fd_16d6_4d82_a9f0_ea82c4d50c4c.slice/crio-2ed3e0ab7e617d741d3fb96252d852852e0a74c2061c4cdf88823ebfd4c4e5f7 WatchSource:0}: Error finding container 2ed3e0ab7e617d741d3fb96252d852852e0a74c2061c4cdf88823ebfd4c4e5f7: Status 404 returned error can't find the container with id 2ed3e0ab7e617d741d3fb96252d852852e0a74c2061c4cdf88823ebfd4c4e5f7 Jan 26 13:10:38 crc kubenswrapper[4844]: I0126 13:10:38.798620 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-dg7zb" event={"ID":"1dec1dad-33cd-4ea8-9f69-9e69e0f56e73","Type":"ContainerStarted","Data":"4f1c9a094d8db79044373e169c06999f811411dba999c373be91bfe5029a90a1"} Jan 26 13:10:38 crc kubenswrapper[4844]: I0126 13:10:38.801453 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-clgj9" event={"ID":"50efd8fd-16d6-4d82-a9f0-ea82c4d50c4c","Type":"ContainerStarted","Data":"2ed3e0ab7e617d741d3fb96252d852852e0a74c2061c4cdf88823ebfd4c4e5f7"} Jan 26 13:10:39 crc kubenswrapper[4844]: I0126 13:10:39.076324 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-sjw9j"] Jan 26 13:10:39 crc kubenswrapper[4844]: W0126 13:10:39.081111 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda9734a40_f918_40da_9931_7d55904a646a.slice/crio-dba4e17efaeabf5105a7bd6ec820afeee2f153e8155da9e449307a142ea58466 WatchSource:0}: Error finding container dba4e17efaeabf5105a7bd6ec820afeee2f153e8155da9e449307a142ea58466: Status 404 returned error can't find the container with id dba4e17efaeabf5105a7bd6ec820afeee2f153e8155da9e449307a142ea58466 Jan 26 13:10:39 crc kubenswrapper[4844]: I0126 13:10:39.150739 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Jan 26 13:10:39 crc kubenswrapper[4844]: I0126 13:10:39.162442 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/321b4c21-0d4a-49d5-a14a-9f49e2ea5600-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6b87948799-68hvv\" (UID: \"321b4c21-0d4a-49d5-a14a-9f49e2ea5600\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6b87948799-68hvv" Jan 26 13:10:39 crc kubenswrapper[4844]: I0126 13:10:39.163333 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b2533187-bdf5-44b9-a05d-ceb2e2ea467b-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6b87948799-mvsq5\" (UID: \"b2533187-bdf5-44b9-a05d-ceb2e2ea467b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6b87948799-mvsq5" Jan 26 13:10:39 crc kubenswrapper[4844]: I0126 13:10:39.164155 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b2533187-bdf5-44b9-a05d-ceb2e2ea467b-webhook-cert\") pod 
\"obo-prometheus-operator-admission-webhook-6b87948799-mvsq5\" (UID: \"b2533187-bdf5-44b9-a05d-ceb2e2ea467b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6b87948799-mvsq5" Jan 26 13:10:39 crc kubenswrapper[4844]: I0126 13:10:39.170308 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/321b4c21-0d4a-49d5-a14a-9f49e2ea5600-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6b87948799-68hvv\" (UID: \"321b4c21-0d4a-49d5-a14a-9f49e2ea5600\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6b87948799-68hvv" Jan 26 13:10:39 crc kubenswrapper[4844]: I0126 13:10:39.192057 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6b87948799-68hvv" Jan 26 13:10:39 crc kubenswrapper[4844]: I0126 13:10:39.463003 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6b87948799-mvsq5" Jan 26 13:10:39 crc kubenswrapper[4844]: I0126 13:10:39.651155 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6b87948799-68hvv"] Jan 26 13:10:39 crc kubenswrapper[4844]: W0126 13:10:39.656227 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod321b4c21_0d4a_49d5_a14a_9f49e2ea5600.slice/crio-2717ab512fa42ec5e93249b8f6f3bc34dd01a9c4f00e5ac768887b0e2b5f2e1a WatchSource:0}: Error finding container 2717ab512fa42ec5e93249b8f6f3bc34dd01a9c4f00e5ac768887b0e2b5f2e1a: Status 404 returned error can't find the container with id 2717ab512fa42ec5e93249b8f6f3bc34dd01a9c4f00e5ac768887b0e2b5f2e1a Jan 26 13:10:39 crc kubenswrapper[4844]: I0126 13:10:39.688270 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6b87948799-mvsq5"] Jan 26 13:10:39 crc kubenswrapper[4844]: W0126 13:10:39.692542 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb2533187_bdf5_44b9_a05d_ceb2e2ea467b.slice/crio-af8d2bdc59cd5874c11611e5bc5e64dea3917dd58b7a2c7f2219969c8fff7689 WatchSource:0}: Error finding container af8d2bdc59cd5874c11611e5bc5e64dea3917dd58b7a2c7f2219969c8fff7689: Status 404 returned error can't find the container with id af8d2bdc59cd5874c11611e5bc5e64dea3917dd58b7a2c7f2219969c8fff7689 Jan 26 13:10:39 crc kubenswrapper[4844]: I0126 13:10:39.807865 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-sjw9j" event={"ID":"a9734a40-f918-40da-9931-7d55904a646a","Type":"ContainerStarted","Data":"dba4e17efaeabf5105a7bd6ec820afeee2f153e8155da9e449307a142ea58466"} Jan 26 13:10:39 crc kubenswrapper[4844]: I0126 13:10:39.808916 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6b87948799-68hvv" event={"ID":"321b4c21-0d4a-49d5-a14a-9f49e2ea5600","Type":"ContainerStarted","Data":"2717ab512fa42ec5e93249b8f6f3bc34dd01a9c4f00e5ac768887b0e2b5f2e1a"} Jan 26 13:10:39 crc kubenswrapper[4844]: I0126 13:10:39.809878 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6b87948799-mvsq5" 
event={"ID":"b2533187-bdf5-44b9-a05d-ceb2e2ea467b","Type":"ContainerStarted","Data":"af8d2bdc59cd5874c11611e5bc5e64dea3917dd58b7a2c7f2219969c8fff7689"} Jan 26 13:10:42 crc kubenswrapper[4844]: I0126 13:10:42.855760 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-l729g" Jan 26 13:10:42 crc kubenswrapper[4844]: I0126 13:10:42.917278 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-l729g" Jan 26 13:10:43 crc kubenswrapper[4844]: I0126 13:10:43.338928 4844 scope.go:117] "RemoveContainer" containerID="8bd0e29f4f3396a7e270924b961a9b78ffe005995a8558400b22ba50617d8a7e" Jan 26 13:10:43 crc kubenswrapper[4844]: E0126 13:10:43.339132 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 13:10:45 crc kubenswrapper[4844]: I0126 13:10:45.210786 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-l729g"] Jan 26 13:10:45 crc kubenswrapper[4844]: I0126 13:10:45.211353 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-l729g" podUID="cace70b8-0b61-447f-a677-8fd4f9fa5fd2" containerName="registry-server" containerID="cri-o://f6aea8b4d5cc97fe56ae7bfa2ca4623ce823a8fc6e42f195df62ee127ac7ba19" gracePeriod=2 Jan 26 13:10:45 crc kubenswrapper[4844]: I0126 13:10:45.877787 4844 generic.go:334] "Generic (PLEG): container finished" podID="cace70b8-0b61-447f-a677-8fd4f9fa5fd2" containerID="f6aea8b4d5cc97fe56ae7bfa2ca4623ce823a8fc6e42f195df62ee127ac7ba19" exitCode=0 Jan 26 13:10:45 crc kubenswrapper[4844]: I0126 13:10:45.877994 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l729g" event={"ID":"cace70b8-0b61-447f-a677-8fd4f9fa5fd2","Type":"ContainerDied","Data":"f6aea8b4d5cc97fe56ae7bfa2ca4623ce823a8fc6e42f195df62ee127ac7ba19"} Jan 26 13:10:47 crc kubenswrapper[4844]: I0126 13:10:47.100799 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-l729g" Jan 26 13:10:47 crc kubenswrapper[4844]: I0126 13:10:47.214322 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5zrch\" (UniqueName: \"kubernetes.io/projected/cace70b8-0b61-447f-a677-8fd4f9fa5fd2-kube-api-access-5zrch\") pod \"cace70b8-0b61-447f-a677-8fd4f9fa5fd2\" (UID: \"cace70b8-0b61-447f-a677-8fd4f9fa5fd2\") " Jan 26 13:10:47 crc kubenswrapper[4844]: I0126 13:10:47.214387 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cace70b8-0b61-447f-a677-8fd4f9fa5fd2-utilities\") pod \"cace70b8-0b61-447f-a677-8fd4f9fa5fd2\" (UID: \"cace70b8-0b61-447f-a677-8fd4f9fa5fd2\") " Jan 26 13:10:47 crc kubenswrapper[4844]: I0126 13:10:47.214454 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cace70b8-0b61-447f-a677-8fd4f9fa5fd2-catalog-content\") pod \"cace70b8-0b61-447f-a677-8fd4f9fa5fd2\" (UID: \"cace70b8-0b61-447f-a677-8fd4f9fa5fd2\") " Jan 26 13:10:47 crc kubenswrapper[4844]: I0126 13:10:47.217025 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cace70b8-0b61-447f-a677-8fd4f9fa5fd2-utilities" (OuterVolumeSpecName: "utilities") pod "cace70b8-0b61-447f-a677-8fd4f9fa5fd2" (UID: "cace70b8-0b61-447f-a677-8fd4f9fa5fd2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:10:47 crc kubenswrapper[4844]: I0126 13:10:47.217538 4844 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cace70b8-0b61-447f-a677-8fd4f9fa5fd2-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 13:10:47 crc kubenswrapper[4844]: I0126 13:10:47.230792 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cace70b8-0b61-447f-a677-8fd4f9fa5fd2-kube-api-access-5zrch" (OuterVolumeSpecName: "kube-api-access-5zrch") pod "cace70b8-0b61-447f-a677-8fd4f9fa5fd2" (UID: "cace70b8-0b61-447f-a677-8fd4f9fa5fd2"). InnerVolumeSpecName "kube-api-access-5zrch". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:10:47 crc kubenswrapper[4844]: I0126 13:10:47.319423 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5zrch\" (UniqueName: \"kubernetes.io/projected/cace70b8-0b61-447f-a677-8fd4f9fa5fd2-kube-api-access-5zrch\") on node \"crc\" DevicePath \"\"" Jan 26 13:10:47 crc kubenswrapper[4844]: I0126 13:10:47.344974 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cace70b8-0b61-447f-a677-8fd4f9fa5fd2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cace70b8-0b61-447f-a677-8fd4f9fa5fd2" (UID: "cace70b8-0b61-447f-a677-8fd4f9fa5fd2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:10:47 crc kubenswrapper[4844]: I0126 13:10:47.420179 4844 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cace70b8-0b61-447f-a677-8fd4f9fa5fd2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 13:10:47 crc kubenswrapper[4844]: I0126 13:10:47.891219 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l729g" event={"ID":"cace70b8-0b61-447f-a677-8fd4f9fa5fd2","Type":"ContainerDied","Data":"86f96e5bd72584871853b959ca6091576483fa1cdab3a4cdf1d11b6c1dad47d2"} Jan 26 13:10:47 crc kubenswrapper[4844]: I0126 13:10:47.891269 4844 scope.go:117] "RemoveContainer" containerID="f6aea8b4d5cc97fe56ae7bfa2ca4623ce823a8fc6e42f195df62ee127ac7ba19" Jan 26 13:10:47 crc kubenswrapper[4844]: I0126 13:10:47.891356 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l729g" Jan 26 13:10:47 crc kubenswrapper[4844]: I0126 13:10:47.936553 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-l729g"] Jan 26 13:10:47 crc kubenswrapper[4844]: I0126 13:10:47.941011 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-l729g"] Jan 26 13:10:49 crc kubenswrapper[4844]: I0126 13:10:49.322126 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cace70b8-0b61-447f-a677-8fd4f9fa5fd2" path="/var/lib/kubelet/pods/cace70b8-0b61-447f-a677-8fd4f9fa5fd2/volumes" Jan 26 13:10:52 crc kubenswrapper[4844]: I0126 13:10:52.039469 4844 scope.go:117] "RemoveContainer" containerID="e5005e8e72a069c3498e47926872f6c331a1f23f86dff3d1cf967cadf5e91728" Jan 26 13:10:52 crc kubenswrapper[4844]: I0126 13:10:52.077839 4844 scope.go:117] "RemoveContainer" containerID="b3bd6ddf068e6bd3c82138266ea0e91b6ef9a533c83de7695b1be0dac7f7d471" Jan 26 13:10:52 crc kubenswrapper[4844]: I0126 13:10:52.919320 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-sjw9j" event={"ID":"a9734a40-f918-40da-9931-7d55904a646a","Type":"ContainerStarted","Data":"1973cdf2d189c754fa032b34fb4f30692b8810648e7046774e36c6d646305c4f"} Jan 26 13:10:52 crc kubenswrapper[4844]: I0126 13:10:52.919514 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-sjw9j" Jan 26 13:10:52 crc kubenswrapper[4844]: I0126 13:10:52.921021 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6b87948799-68hvv" event={"ID":"321b4c21-0d4a-49d5-a14a-9f49e2ea5600","Type":"ContainerStarted","Data":"62037b068088cdd4ef565c395f629c57d5ebdb875ae7e139fce308434b8be27a"} Jan 26 13:10:52 crc kubenswrapper[4844]: I0126 13:10:52.922895 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-dg7zb" event={"ID":"1dec1dad-33cd-4ea8-9f69-9e69e0f56e73","Type":"ContainerStarted","Data":"16225dcec453db38660ee988e11e3dd43dc3f3ed4da519c28158d6e8f1a71300"} Jan 26 13:10:52 crc kubenswrapper[4844]: I0126 13:10:52.924816 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-clgj9" event={"ID":"50efd8fd-16d6-4d82-a9f0-ea82c4d50c4c","Type":"ContainerStarted","Data":"1e10006ce4f50f018a86d61226fa5664d1efac5106f7a7e7c1051d8bf72c7825"} Jan 26 13:10:52 crc kubenswrapper[4844]: 
I0126 13:10:52.924994 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-clgj9" Jan 26 13:10:52 crc kubenswrapper[4844]: I0126 13:10:52.927557 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6b87948799-mvsq5" event={"ID":"b2533187-bdf5-44b9-a05d-ceb2e2ea467b","Type":"ContainerStarted","Data":"a660b316e6b32dcbdd5371319d37e52bf966215b66f1897836ad8b8bc68043ca"} Jan 26 13:10:52 crc kubenswrapper[4844]: I0126 13:10:52.942880 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-sjw9j" podStartSLOduration=1.9367033999999999 podStartE2EDuration="14.94286405s" podCreationTimestamp="2026-01-26 13:10:38 +0000 UTC" firstStartedPulling="2026-01-26 13:10:39.083133788 +0000 UTC m=+1616.016501400" lastFinishedPulling="2026-01-26 13:10:52.089294438 +0000 UTC m=+1629.022662050" observedRunningTime="2026-01-26 13:10:52.941145009 +0000 UTC m=+1629.874512661" watchObservedRunningTime="2026-01-26 13:10:52.94286405 +0000 UTC m=+1629.876231662" Jan 26 13:10:52 crc kubenswrapper[4844]: I0126 13:10:52.956199 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-clgj9" Jan 26 13:10:52 crc kubenswrapper[4844]: I0126 13:10:52.969733 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6b87948799-mvsq5" podStartSLOduration=3.575672481 podStartE2EDuration="15.969709888s" podCreationTimestamp="2026-01-26 13:10:37 +0000 UTC" firstStartedPulling="2026-01-26 13:10:39.695003905 +0000 UTC m=+1616.628371517" lastFinishedPulling="2026-01-26 13:10:52.089041312 +0000 UTC m=+1629.022408924" observedRunningTime="2026-01-26 13:10:52.968415077 +0000 UTC m=+1629.901782699" watchObservedRunningTime="2026-01-26 13:10:52.969709888 +0000 UTC m=+1629.903077510" Jan 26 13:10:52 crc kubenswrapper[4844]: I0126 13:10:52.996899 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6b87948799-68hvv" podStartSLOduration=3.532664204 podStartE2EDuration="15.996870672s" podCreationTimestamp="2026-01-26 13:10:37 +0000 UTC" firstStartedPulling="2026-01-26 13:10:39.657748467 +0000 UTC m=+1616.591116079" lastFinishedPulling="2026-01-26 13:10:52.121954935 +0000 UTC m=+1629.055322547" observedRunningTime="2026-01-26 13:10:52.992002474 +0000 UTC m=+1629.925370096" watchObservedRunningTime="2026-01-26 13:10:52.996870672 +0000 UTC m=+1629.930238294" Jan 26 13:10:53 crc kubenswrapper[4844]: I0126 13:10:53.026675 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-clgj9" podStartSLOduration=1.603171561 podStartE2EDuration="15.02665673s" podCreationTimestamp="2026-01-26 13:10:38 +0000 UTC" firstStartedPulling="2026-01-26 13:10:38.71602416 +0000 UTC m=+1615.649391772" lastFinishedPulling="2026-01-26 13:10:52.139509329 +0000 UTC m=+1629.072876941" observedRunningTime="2026-01-26 13:10:53.022036279 +0000 UTC m=+1629.955403891" watchObservedRunningTime="2026-01-26 13:10:53.02665673 +0000 UTC m=+1629.960024342" Jan 26 13:10:53 crc kubenswrapper[4844]: I0126 13:10:53.048723 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-dg7zb" 
podStartSLOduration=2.587521235 podStartE2EDuration="16.048707422s" podCreationTimestamp="2026-01-26 13:10:37 +0000 UTC" firstStartedPulling="2026-01-26 13:10:38.628857229 +0000 UTC m=+1615.562224841" lastFinishedPulling="2026-01-26 13:10:52.090043416 +0000 UTC m=+1629.023411028" observedRunningTime="2026-01-26 13:10:53.041634001 +0000 UTC m=+1629.975001633" watchObservedRunningTime="2026-01-26 13:10:53.048707422 +0000 UTC m=+1629.982075024" Jan 26 13:10:56 crc kubenswrapper[4844]: I0126 13:10:56.312787 4844 scope.go:117] "RemoveContainer" containerID="8bd0e29f4f3396a7e270924b961a9b78ffe005995a8558400b22ba50617d8a7e" Jan 26 13:10:56 crc kubenswrapper[4844]: E0126 13:10:56.313953 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 13:10:58 crc kubenswrapper[4844]: I0126 13:10:58.669521 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-sjw9j" Jan 26 13:11:08 crc kubenswrapper[4844]: I0126 13:11:08.313822 4844 scope.go:117] "RemoveContainer" containerID="8bd0e29f4f3396a7e270924b961a9b78ffe005995a8558400b22ba50617d8a7e" Jan 26 13:11:08 crc kubenswrapper[4844]: E0126 13:11:08.314915 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 13:11:18 crc kubenswrapper[4844]: I0126 13:11:18.078504 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713d2kj2"] Jan 26 13:11:18 crc kubenswrapper[4844]: E0126 13:11:18.079733 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cace70b8-0b61-447f-a677-8fd4f9fa5fd2" containerName="extract-content" Jan 26 13:11:18 crc kubenswrapper[4844]: I0126 13:11:18.079751 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="cace70b8-0b61-447f-a677-8fd4f9fa5fd2" containerName="extract-content" Jan 26 13:11:18 crc kubenswrapper[4844]: E0126 13:11:18.079772 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cace70b8-0b61-447f-a677-8fd4f9fa5fd2" containerName="extract-utilities" Jan 26 13:11:18 crc kubenswrapper[4844]: I0126 13:11:18.079778 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="cace70b8-0b61-447f-a677-8fd4f9fa5fd2" containerName="extract-utilities" Jan 26 13:11:18 crc kubenswrapper[4844]: E0126 13:11:18.079790 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cace70b8-0b61-447f-a677-8fd4f9fa5fd2" containerName="registry-server" Jan 26 13:11:18 crc kubenswrapper[4844]: I0126 13:11:18.079796 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="cace70b8-0b61-447f-a677-8fd4f9fa5fd2" containerName="registry-server" Jan 26 13:11:18 crc kubenswrapper[4844]: I0126 13:11:18.079905 4844 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="cace70b8-0b61-447f-a677-8fd4f9fa5fd2" containerName="registry-server" Jan 26 13:11:18 crc kubenswrapper[4844]: I0126 13:11:18.080893 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713d2kj2" Jan 26 13:11:18 crc kubenswrapper[4844]: I0126 13:11:18.083541 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 26 13:11:18 crc kubenswrapper[4844]: I0126 13:11:18.096040 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713d2kj2"] Jan 26 13:11:18 crc kubenswrapper[4844]: I0126 13:11:18.144336 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b2b5f908-45d0-4977-93ce-6e5842a166cc-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713d2kj2\" (UID: \"b2b5f908-45d0-4977-93ce-6e5842a166cc\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713d2kj2" Jan 26 13:11:18 crc kubenswrapper[4844]: I0126 13:11:18.144401 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b2b5f908-45d0-4977-93ce-6e5842a166cc-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713d2kj2\" (UID: \"b2b5f908-45d0-4977-93ce-6e5842a166cc\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713d2kj2" Jan 26 13:11:18 crc kubenswrapper[4844]: I0126 13:11:18.144465 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lsh5\" (UniqueName: \"kubernetes.io/projected/b2b5f908-45d0-4977-93ce-6e5842a166cc-kube-api-access-9lsh5\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713d2kj2\" (UID: \"b2b5f908-45d0-4977-93ce-6e5842a166cc\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713d2kj2" Jan 26 13:11:18 crc kubenswrapper[4844]: I0126 13:11:18.245944 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9lsh5\" (UniqueName: \"kubernetes.io/projected/b2b5f908-45d0-4977-93ce-6e5842a166cc-kube-api-access-9lsh5\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713d2kj2\" (UID: \"b2b5f908-45d0-4977-93ce-6e5842a166cc\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713d2kj2" Jan 26 13:11:18 crc kubenswrapper[4844]: I0126 13:11:18.246057 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b2b5f908-45d0-4977-93ce-6e5842a166cc-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713d2kj2\" (UID: \"b2b5f908-45d0-4977-93ce-6e5842a166cc\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713d2kj2" Jan 26 13:11:18 crc kubenswrapper[4844]: I0126 13:11:18.246083 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b2b5f908-45d0-4977-93ce-6e5842a166cc-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713d2kj2\" (UID: \"b2b5f908-45d0-4977-93ce-6e5842a166cc\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713d2kj2" Jan 26 13:11:18 
crc kubenswrapper[4844]: I0126 13:11:18.246501 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b2b5f908-45d0-4977-93ce-6e5842a166cc-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713d2kj2\" (UID: \"b2b5f908-45d0-4977-93ce-6e5842a166cc\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713d2kj2" Jan 26 13:11:18 crc kubenswrapper[4844]: I0126 13:11:18.246735 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b2b5f908-45d0-4977-93ce-6e5842a166cc-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713d2kj2\" (UID: \"b2b5f908-45d0-4977-93ce-6e5842a166cc\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713d2kj2" Jan 26 13:11:18 crc kubenswrapper[4844]: I0126 13:11:18.274368 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lsh5\" (UniqueName: \"kubernetes.io/projected/b2b5f908-45d0-4977-93ce-6e5842a166cc-kube-api-access-9lsh5\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713d2kj2\" (UID: \"b2b5f908-45d0-4977-93ce-6e5842a166cc\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713d2kj2" Jan 26 13:11:18 crc kubenswrapper[4844]: I0126 13:11:18.402392 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713d2kj2" Jan 26 13:11:18 crc kubenswrapper[4844]: I0126 13:11:18.746874 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713d2kj2"] Jan 26 13:11:19 crc kubenswrapper[4844]: I0126 13:11:19.083191 4844 generic.go:334] "Generic (PLEG): container finished" podID="b2b5f908-45d0-4977-93ce-6e5842a166cc" containerID="72f1a6b7d63733bc000c4c38f4799dd75c42b56f927ec1d3844446147345113d" exitCode=0 Jan 26 13:11:19 crc kubenswrapper[4844]: I0126 13:11:19.083231 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713d2kj2" event={"ID":"b2b5f908-45d0-4977-93ce-6e5842a166cc","Type":"ContainerDied","Data":"72f1a6b7d63733bc000c4c38f4799dd75c42b56f927ec1d3844446147345113d"} Jan 26 13:11:19 crc kubenswrapper[4844]: I0126 13:11:19.083287 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713d2kj2" event={"ID":"b2b5f908-45d0-4977-93ce-6e5842a166cc","Type":"ContainerStarted","Data":"9c4b88098e69041a91806c2081479b46436cced75cc85c59fbb2509c0816ccaa"} Jan 26 13:11:23 crc kubenswrapper[4844]: I0126 13:11:23.316323 4844 scope.go:117] "RemoveContainer" containerID="8bd0e29f4f3396a7e270924b961a9b78ffe005995a8558400b22ba50617d8a7e" Jan 26 13:11:23 crc kubenswrapper[4844]: E0126 13:11:23.317929 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 13:11:24 crc kubenswrapper[4844]: I0126 13:11:24.141001 4844 generic.go:334] "Generic (PLEG): container 
finished" podID="b2b5f908-45d0-4977-93ce-6e5842a166cc" containerID="8204b26499bebf05fefdc2c50f1d60291f96910473abec0500ca411000b2d28e" exitCode=0 Jan 26 13:11:24 crc kubenswrapper[4844]: I0126 13:11:24.141110 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713d2kj2" event={"ID":"b2b5f908-45d0-4977-93ce-6e5842a166cc","Type":"ContainerDied","Data":"8204b26499bebf05fefdc2c50f1d60291f96910473abec0500ca411000b2d28e"} Jan 26 13:11:25 crc kubenswrapper[4844]: I0126 13:11:25.156283 4844 generic.go:334] "Generic (PLEG): container finished" podID="b2b5f908-45d0-4977-93ce-6e5842a166cc" containerID="f230d32acc19fc0ea92700dc0368e7282f69b3ba976733dfc355c8c74c98cd41" exitCode=0 Jan 26 13:11:25 crc kubenswrapper[4844]: I0126 13:11:25.156319 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713d2kj2" event={"ID":"b2b5f908-45d0-4977-93ce-6e5842a166cc","Type":"ContainerDied","Data":"f230d32acc19fc0ea92700dc0368e7282f69b3ba976733dfc355c8c74c98cd41"} Jan 26 13:11:26 crc kubenswrapper[4844]: I0126 13:11:26.417475 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713d2kj2" Jan 26 13:11:26 crc kubenswrapper[4844]: I0126 13:11:26.460842 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b2b5f908-45d0-4977-93ce-6e5842a166cc-util\") pod \"b2b5f908-45d0-4977-93ce-6e5842a166cc\" (UID: \"b2b5f908-45d0-4977-93ce-6e5842a166cc\") " Jan 26 13:11:26 crc kubenswrapper[4844]: I0126 13:11:26.460910 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9lsh5\" (UniqueName: \"kubernetes.io/projected/b2b5f908-45d0-4977-93ce-6e5842a166cc-kube-api-access-9lsh5\") pod \"b2b5f908-45d0-4977-93ce-6e5842a166cc\" (UID: \"b2b5f908-45d0-4977-93ce-6e5842a166cc\") " Jan 26 13:11:26 crc kubenswrapper[4844]: I0126 13:11:26.460956 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b2b5f908-45d0-4977-93ce-6e5842a166cc-bundle\") pod \"b2b5f908-45d0-4977-93ce-6e5842a166cc\" (UID: \"b2b5f908-45d0-4977-93ce-6e5842a166cc\") " Jan 26 13:11:26 crc kubenswrapper[4844]: I0126 13:11:26.462090 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2b5f908-45d0-4977-93ce-6e5842a166cc-bundle" (OuterVolumeSpecName: "bundle") pod "b2b5f908-45d0-4977-93ce-6e5842a166cc" (UID: "b2b5f908-45d0-4977-93ce-6e5842a166cc"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:11:26 crc kubenswrapper[4844]: I0126 13:11:26.468161 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2b5f908-45d0-4977-93ce-6e5842a166cc-kube-api-access-9lsh5" (OuterVolumeSpecName: "kube-api-access-9lsh5") pod "b2b5f908-45d0-4977-93ce-6e5842a166cc" (UID: "b2b5f908-45d0-4977-93ce-6e5842a166cc"). InnerVolumeSpecName "kube-api-access-9lsh5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:11:26 crc kubenswrapper[4844]: I0126 13:11:26.482073 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2b5f908-45d0-4977-93ce-6e5842a166cc-util" (OuterVolumeSpecName: "util") pod "b2b5f908-45d0-4977-93ce-6e5842a166cc" (UID: "b2b5f908-45d0-4977-93ce-6e5842a166cc"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:11:26 crc kubenswrapper[4844]: I0126 13:11:26.562029 4844 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b2b5f908-45d0-4977-93ce-6e5842a166cc-util\") on node \"crc\" DevicePath \"\"" Jan 26 13:11:26 crc kubenswrapper[4844]: I0126 13:11:26.562075 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9lsh5\" (UniqueName: \"kubernetes.io/projected/b2b5f908-45d0-4977-93ce-6e5842a166cc-kube-api-access-9lsh5\") on node \"crc\" DevicePath \"\"" Jan 26 13:11:26 crc kubenswrapper[4844]: I0126 13:11:26.562101 4844 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b2b5f908-45d0-4977-93ce-6e5842a166cc-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 13:11:27 crc kubenswrapper[4844]: I0126 13:11:27.174792 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713d2kj2" event={"ID":"b2b5f908-45d0-4977-93ce-6e5842a166cc","Type":"ContainerDied","Data":"9c4b88098e69041a91806c2081479b46436cced75cc85c59fbb2509c0816ccaa"} Jan 26 13:11:27 crc kubenswrapper[4844]: I0126 13:11:27.174852 4844 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9c4b88098e69041a91806c2081479b46436cced75cc85c59fbb2509c0816ccaa" Jan 26 13:11:27 crc kubenswrapper[4844]: I0126 13:11:27.174890 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713d2kj2" Jan 26 13:11:29 crc kubenswrapper[4844]: I0126 13:11:29.686436 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-9djrz"] Jan 26 13:11:29 crc kubenswrapper[4844]: E0126 13:11:29.686957 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2b5f908-45d0-4977-93ce-6e5842a166cc" containerName="extract" Jan 26 13:11:29 crc kubenswrapper[4844]: I0126 13:11:29.686968 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2b5f908-45d0-4977-93ce-6e5842a166cc" containerName="extract" Jan 26 13:11:29 crc kubenswrapper[4844]: E0126 13:11:29.686978 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2b5f908-45d0-4977-93ce-6e5842a166cc" containerName="util" Jan 26 13:11:29 crc kubenswrapper[4844]: I0126 13:11:29.686985 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2b5f908-45d0-4977-93ce-6e5842a166cc" containerName="util" Jan 26 13:11:29 crc kubenswrapper[4844]: E0126 13:11:29.687005 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2b5f908-45d0-4977-93ce-6e5842a166cc" containerName="pull" Jan 26 13:11:29 crc kubenswrapper[4844]: I0126 13:11:29.687012 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2b5f908-45d0-4977-93ce-6e5842a166cc" containerName="pull" Jan 26 13:11:29 crc kubenswrapper[4844]: I0126 13:11:29.687141 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2b5f908-45d0-4977-93ce-6e5842a166cc" containerName="extract" Jan 26 13:11:29 crc kubenswrapper[4844]: I0126 13:11:29.687544 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-9djrz" Jan 26 13:11:29 crc kubenswrapper[4844]: I0126 13:11:29.690101 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-rhzwk" Jan 26 13:11:29 crc kubenswrapper[4844]: I0126 13:11:29.690115 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 26 13:11:29 crc kubenswrapper[4844]: I0126 13:11:29.694360 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 26 13:11:29 crc kubenswrapper[4844]: I0126 13:11:29.695452 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-9djrz"] Jan 26 13:11:29 crc kubenswrapper[4844]: I0126 13:11:29.705562 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xs7q2\" (UniqueName: \"kubernetes.io/projected/0c0a3ca8-870a-4c95-a1a0-002e4cdb3bb8-kube-api-access-xs7q2\") pod \"nmstate-operator-646758c888-9djrz\" (UID: \"0c0a3ca8-870a-4c95-a1a0-002e4cdb3bb8\") " pod="openshift-nmstate/nmstate-operator-646758c888-9djrz" Jan 26 13:11:29 crc kubenswrapper[4844]: I0126 13:11:29.806451 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xs7q2\" (UniqueName: \"kubernetes.io/projected/0c0a3ca8-870a-4c95-a1a0-002e4cdb3bb8-kube-api-access-xs7q2\") pod \"nmstate-operator-646758c888-9djrz\" (UID: \"0c0a3ca8-870a-4c95-a1a0-002e4cdb3bb8\") " pod="openshift-nmstate/nmstate-operator-646758c888-9djrz" Jan 26 13:11:29 crc kubenswrapper[4844]: I0126 13:11:29.828840 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xs7q2\" 
(UniqueName: \"kubernetes.io/projected/0c0a3ca8-870a-4c95-a1a0-002e4cdb3bb8-kube-api-access-xs7q2\") pod \"nmstate-operator-646758c888-9djrz\" (UID: \"0c0a3ca8-870a-4c95-a1a0-002e4cdb3bb8\") " pod="openshift-nmstate/nmstate-operator-646758c888-9djrz" Jan 26 13:11:30 crc kubenswrapper[4844]: I0126 13:11:30.006439 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-9djrz" Jan 26 13:11:30 crc kubenswrapper[4844]: I0126 13:11:30.264366 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-9djrz"] Jan 26 13:11:31 crc kubenswrapper[4844]: I0126 13:11:31.207092 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-9djrz" event={"ID":"0c0a3ca8-870a-4c95-a1a0-002e4cdb3bb8","Type":"ContainerStarted","Data":"d41a696a610ea5abf9942443e6088f6395355ee2d65f9335d91d9edbc54d9959"} Jan 26 13:11:33 crc kubenswrapper[4844]: I0126 13:11:33.221405 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-9djrz" event={"ID":"0c0a3ca8-870a-4c95-a1a0-002e4cdb3bb8","Type":"ContainerStarted","Data":"3622a34e823f390cb936738fcd2b4bbc0f8e2948923c53fc6d0bf9d197e2b29b"} Jan 26 13:11:33 crc kubenswrapper[4844]: I0126 13:11:33.242124 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-9djrz" podStartSLOduration=2.216119226 podStartE2EDuration="4.242105336s" podCreationTimestamp="2026-01-26 13:11:29 +0000 UTC" firstStartedPulling="2026-01-26 13:11:30.282950545 +0000 UTC m=+1667.216318157" lastFinishedPulling="2026-01-26 13:11:32.308936665 +0000 UTC m=+1669.242304267" observedRunningTime="2026-01-26 13:11:33.237451493 +0000 UTC m=+1670.170819145" watchObservedRunningTime="2026-01-26 13:11:33.242105336 +0000 UTC m=+1670.175472948" Jan 26 13:11:34 crc kubenswrapper[4844]: I0126 13:11:34.313947 4844 scope.go:117] "RemoveContainer" containerID="8bd0e29f4f3396a7e270924b961a9b78ffe005995a8558400b22ba50617d8a7e" Jan 26 13:11:34 crc kubenswrapper[4844]: E0126 13:11:34.314265 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 13:11:39 crc kubenswrapper[4844]: I0126 13:11:39.352779 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-vgnf8"] Jan 26 13:11:39 crc kubenswrapper[4844]: I0126 13:11:39.363267 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-blwvj"] Jan 26 13:11:39 crc kubenswrapper[4844]: I0126 13:11:39.363404 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-vgnf8" Jan 26 13:11:39 crc kubenswrapper[4844]: I0126 13:11:39.363980 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-blwvj" Jan 26 13:11:39 crc kubenswrapper[4844]: I0126 13:11:39.369475 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-gjdnw" Jan 26 13:11:39 crc kubenswrapper[4844]: I0126 13:11:39.371312 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 26 13:11:39 crc kubenswrapper[4844]: I0126 13:11:39.371899 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-vgnf8"] Jan 26 13:11:39 crc kubenswrapper[4844]: I0126 13:11:39.388995 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-blwvj"] Jan 26 13:11:39 crc kubenswrapper[4844]: I0126 13:11:39.398286 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-2d462"] Jan 26 13:11:39 crc kubenswrapper[4844]: I0126 13:11:39.399081 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-2d462" Jan 26 13:11:39 crc kubenswrapper[4844]: I0126 13:11:39.512358 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-qdxvv"] Jan 26 13:11:39 crc kubenswrapper[4844]: I0126 13:11:39.513719 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-qdxvv" Jan 26 13:11:39 crc kubenswrapper[4844]: I0126 13:11:39.527784 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/68790915-1674-4d77-8d03-d21698da101e-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-blwvj\" (UID: \"68790915-1674-4d77-8d03-d21698da101e\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-blwvj" Jan 26 13:11:39 crc kubenswrapper[4844]: I0126 13:11:39.527869 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kst8t\" (UniqueName: \"kubernetes.io/projected/9baf25b3-6096-4215-9455-b9126c02ffcf-kube-api-access-kst8t\") pod \"nmstate-handler-2d462\" (UID: \"9baf25b3-6096-4215-9455-b9126c02ffcf\") " pod="openshift-nmstate/nmstate-handler-2d462" Jan 26 13:11:39 crc kubenswrapper[4844]: I0126 13:11:39.527919 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfbdl\" (UniqueName: \"kubernetes.io/projected/68790915-1674-4d77-8d03-d21698da101e-kube-api-access-xfbdl\") pod \"nmstate-webhook-8474b5b9d8-blwvj\" (UID: \"68790915-1674-4d77-8d03-d21698da101e\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-blwvj" Jan 26 13:11:39 crc kubenswrapper[4844]: I0126 13:11:39.528537 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqvpx\" (UniqueName: \"kubernetes.io/projected/bcef572e-5718-4586-b0e3-907551cdf0ff-kube-api-access-hqvpx\") pod \"nmstate-metrics-54757c584b-vgnf8\" (UID: \"bcef572e-5718-4586-b0e3-907551cdf0ff\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-vgnf8" Jan 26 13:11:39 crc kubenswrapper[4844]: I0126 13:11:39.528734 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/9baf25b3-6096-4215-9455-b9126c02ffcf-nmstate-lock\") pod \"nmstate-handler-2d462\" (UID: 
\"9baf25b3-6096-4215-9455-b9126c02ffcf\") " pod="openshift-nmstate/nmstate-handler-2d462" Jan 26 13:11:39 crc kubenswrapper[4844]: I0126 13:11:39.528799 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/9baf25b3-6096-4215-9455-b9126c02ffcf-dbus-socket\") pod \"nmstate-handler-2d462\" (UID: \"9baf25b3-6096-4215-9455-b9126c02ffcf\") " pod="openshift-nmstate/nmstate-handler-2d462" Jan 26 13:11:39 crc kubenswrapper[4844]: I0126 13:11:39.528860 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/9baf25b3-6096-4215-9455-b9126c02ffcf-ovs-socket\") pod \"nmstate-handler-2d462\" (UID: \"9baf25b3-6096-4215-9455-b9126c02ffcf\") " pod="openshift-nmstate/nmstate-handler-2d462" Jan 26 13:11:39 crc kubenswrapper[4844]: I0126 13:11:39.529293 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 26 13:11:39 crc kubenswrapper[4844]: I0126 13:11:39.529584 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 26 13:11:39 crc kubenswrapper[4844]: I0126 13:11:39.529656 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-zwm2s" Jan 26 13:11:39 crc kubenswrapper[4844]: I0126 13:11:39.552800 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-qdxvv"] Jan 26 13:11:39 crc kubenswrapper[4844]: I0126 13:11:39.630331 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgpts\" (UniqueName: \"kubernetes.io/projected/213e48c5-2b34-4d8a-af54-773da9caddb5-kube-api-access-bgpts\") pod \"nmstate-console-plugin-7754f76f8b-qdxvv\" (UID: \"213e48c5-2b34-4d8a-af54-773da9caddb5\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-qdxvv" Jan 26 13:11:39 crc kubenswrapper[4844]: I0126 13:11:39.630391 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kst8t\" (UniqueName: \"kubernetes.io/projected/9baf25b3-6096-4215-9455-b9126c02ffcf-kube-api-access-kst8t\") pod \"nmstate-handler-2d462\" (UID: \"9baf25b3-6096-4215-9455-b9126c02ffcf\") " pod="openshift-nmstate/nmstate-handler-2d462" Jan 26 13:11:39 crc kubenswrapper[4844]: I0126 13:11:39.630429 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfbdl\" (UniqueName: \"kubernetes.io/projected/68790915-1674-4d77-8d03-d21698da101e-kube-api-access-xfbdl\") pod \"nmstate-webhook-8474b5b9d8-blwvj\" (UID: \"68790915-1674-4d77-8d03-d21698da101e\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-blwvj" Jan 26 13:11:39 crc kubenswrapper[4844]: I0126 13:11:39.630457 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqvpx\" (UniqueName: \"kubernetes.io/projected/bcef572e-5718-4586-b0e3-907551cdf0ff-kube-api-access-hqvpx\") pod \"nmstate-metrics-54757c584b-vgnf8\" (UID: \"bcef572e-5718-4586-b0e3-907551cdf0ff\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-vgnf8" Jan 26 13:11:39 crc kubenswrapper[4844]: I0126 13:11:39.630485 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/9baf25b3-6096-4215-9455-b9126c02ffcf-nmstate-lock\") pod \"nmstate-handler-2d462\" (UID: 
\"9baf25b3-6096-4215-9455-b9126c02ffcf\") " pod="openshift-nmstate/nmstate-handler-2d462" Jan 26 13:11:39 crc kubenswrapper[4844]: I0126 13:11:39.630506 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/9baf25b3-6096-4215-9455-b9126c02ffcf-dbus-socket\") pod \"nmstate-handler-2d462\" (UID: \"9baf25b3-6096-4215-9455-b9126c02ffcf\") " pod="openshift-nmstate/nmstate-handler-2d462" Jan 26 13:11:39 crc kubenswrapper[4844]: I0126 13:11:39.630535 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/213e48c5-2b34-4d8a-af54-773da9caddb5-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-qdxvv\" (UID: \"213e48c5-2b34-4d8a-af54-773da9caddb5\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-qdxvv" Jan 26 13:11:39 crc kubenswrapper[4844]: I0126 13:11:39.630560 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/9baf25b3-6096-4215-9455-b9126c02ffcf-ovs-socket\") pod \"nmstate-handler-2d462\" (UID: \"9baf25b3-6096-4215-9455-b9126c02ffcf\") " pod="openshift-nmstate/nmstate-handler-2d462" Jan 26 13:11:39 crc kubenswrapper[4844]: I0126 13:11:39.630620 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/68790915-1674-4d77-8d03-d21698da101e-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-blwvj\" (UID: \"68790915-1674-4d77-8d03-d21698da101e\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-blwvj" Jan 26 13:11:39 crc kubenswrapper[4844]: I0126 13:11:39.630656 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/213e48c5-2b34-4d8a-af54-773da9caddb5-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-qdxvv\" (UID: \"213e48c5-2b34-4d8a-af54-773da9caddb5\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-qdxvv" Jan 26 13:11:39 crc kubenswrapper[4844]: I0126 13:11:39.630909 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/9baf25b3-6096-4215-9455-b9126c02ffcf-nmstate-lock\") pod \"nmstate-handler-2d462\" (UID: \"9baf25b3-6096-4215-9455-b9126c02ffcf\") " pod="openshift-nmstate/nmstate-handler-2d462" Jan 26 13:11:39 crc kubenswrapper[4844]: I0126 13:11:39.630958 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/9baf25b3-6096-4215-9455-b9126c02ffcf-ovs-socket\") pod \"nmstate-handler-2d462\" (UID: \"9baf25b3-6096-4215-9455-b9126c02ffcf\") " pod="openshift-nmstate/nmstate-handler-2d462" Jan 26 13:11:39 crc kubenswrapper[4844]: I0126 13:11:39.631235 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/9baf25b3-6096-4215-9455-b9126c02ffcf-dbus-socket\") pod \"nmstate-handler-2d462\" (UID: \"9baf25b3-6096-4215-9455-b9126c02ffcf\") " pod="openshift-nmstate/nmstate-handler-2d462" Jan 26 13:11:39 crc kubenswrapper[4844]: I0126 13:11:39.647330 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/68790915-1674-4d77-8d03-d21698da101e-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-blwvj\" (UID: \"68790915-1674-4d77-8d03-d21698da101e\") " 
pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-blwvj" Jan 26 13:11:39 crc kubenswrapper[4844]: I0126 13:11:39.649194 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqvpx\" (UniqueName: \"kubernetes.io/projected/bcef572e-5718-4586-b0e3-907551cdf0ff-kube-api-access-hqvpx\") pod \"nmstate-metrics-54757c584b-vgnf8\" (UID: \"bcef572e-5718-4586-b0e3-907551cdf0ff\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-vgnf8" Jan 26 13:11:39 crc kubenswrapper[4844]: I0126 13:11:39.654103 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kst8t\" (UniqueName: \"kubernetes.io/projected/9baf25b3-6096-4215-9455-b9126c02ffcf-kube-api-access-kst8t\") pod \"nmstate-handler-2d462\" (UID: \"9baf25b3-6096-4215-9455-b9126c02ffcf\") " pod="openshift-nmstate/nmstate-handler-2d462" Jan 26 13:11:39 crc kubenswrapper[4844]: I0126 13:11:39.654509 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfbdl\" (UniqueName: \"kubernetes.io/projected/68790915-1674-4d77-8d03-d21698da101e-kube-api-access-xfbdl\") pod \"nmstate-webhook-8474b5b9d8-blwvj\" (UID: \"68790915-1674-4d77-8d03-d21698da101e\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-blwvj" Jan 26 13:11:39 crc kubenswrapper[4844]: I0126 13:11:39.682393 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-vgnf8" Jan 26 13:11:39 crc kubenswrapper[4844]: I0126 13:11:39.692657 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-65cc8f54b6-kkzh6"] Jan 26 13:11:39 crc kubenswrapper[4844]: I0126 13:11:39.693462 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-65cc8f54b6-kkzh6" Jan 26 13:11:39 crc kubenswrapper[4844]: I0126 13:11:39.696360 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-blwvj" Jan 26 13:11:39 crc kubenswrapper[4844]: I0126 13:11:39.710026 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-65cc8f54b6-kkzh6"] Jan 26 13:11:39 crc kubenswrapper[4844]: I0126 13:11:39.715184 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-2d462" Jan 26 13:11:39 crc kubenswrapper[4844]: I0126 13:11:39.731766 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/213e48c5-2b34-4d8a-af54-773da9caddb5-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-qdxvv\" (UID: \"213e48c5-2b34-4d8a-af54-773da9caddb5\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-qdxvv" Jan 26 13:11:39 crc kubenswrapper[4844]: I0126 13:11:39.731813 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/49d06406-2beb-403f-8865-335fd73b5835-console-config\") pod \"console-65cc8f54b6-kkzh6\" (UID: \"49d06406-2beb-403f-8865-335fd73b5835\") " pod="openshift-console/console-65cc8f54b6-kkzh6" Jan 26 13:11:39 crc kubenswrapper[4844]: I0126 13:11:39.731840 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49d06406-2beb-403f-8865-335fd73b5835-trusted-ca-bundle\") pod \"console-65cc8f54b6-kkzh6\" (UID: \"49d06406-2beb-403f-8865-335fd73b5835\") " pod="openshift-console/console-65cc8f54b6-kkzh6" Jan 26 13:11:39 crc kubenswrapper[4844]: I0126 13:11:39.731863 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fv9h\" (UniqueName: \"kubernetes.io/projected/49d06406-2beb-403f-8865-335fd73b5835-kube-api-access-2fv9h\") pod \"console-65cc8f54b6-kkzh6\" (UID: \"49d06406-2beb-403f-8865-335fd73b5835\") " pod="openshift-console/console-65cc8f54b6-kkzh6" Jan 26 13:11:39 crc kubenswrapper[4844]: I0126 13:11:39.731898 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/213e48c5-2b34-4d8a-af54-773da9caddb5-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-qdxvv\" (UID: \"213e48c5-2b34-4d8a-af54-773da9caddb5\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-qdxvv" Jan 26 13:11:39 crc kubenswrapper[4844]: I0126 13:11:39.731945 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/49d06406-2beb-403f-8865-335fd73b5835-service-ca\") pod \"console-65cc8f54b6-kkzh6\" (UID: \"49d06406-2beb-403f-8865-335fd73b5835\") " pod="openshift-console/console-65cc8f54b6-kkzh6" Jan 26 13:11:39 crc kubenswrapper[4844]: E0126 13:11:39.732026 4844 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Jan 26 13:11:39 crc kubenswrapper[4844]: E0126 13:11:39.732079 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/213e48c5-2b34-4d8a-af54-773da9caddb5-plugin-serving-cert podName:213e48c5-2b34-4d8a-af54-773da9caddb5 nodeName:}" failed. No retries permitted until 2026-01-26 13:11:40.232059545 +0000 UTC m=+1677.165427157 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/213e48c5-2b34-4d8a-af54-773da9caddb5-plugin-serving-cert") pod "nmstate-console-plugin-7754f76f8b-qdxvv" (UID: "213e48c5-2b34-4d8a-af54-773da9caddb5") : secret "plugin-serving-cert" not found Jan 26 13:11:39 crc kubenswrapper[4844]: I0126 13:11:39.732347 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/49d06406-2beb-403f-8865-335fd73b5835-console-serving-cert\") pod \"console-65cc8f54b6-kkzh6\" (UID: \"49d06406-2beb-403f-8865-335fd73b5835\") " pod="openshift-console/console-65cc8f54b6-kkzh6" Jan 26 13:11:39 crc kubenswrapper[4844]: I0126 13:11:39.732386 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgpts\" (UniqueName: \"kubernetes.io/projected/213e48c5-2b34-4d8a-af54-773da9caddb5-kube-api-access-bgpts\") pod \"nmstate-console-plugin-7754f76f8b-qdxvv\" (UID: \"213e48c5-2b34-4d8a-af54-773da9caddb5\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-qdxvv" Jan 26 13:11:39 crc kubenswrapper[4844]: I0126 13:11:39.732425 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/49d06406-2beb-403f-8865-335fd73b5835-console-oauth-config\") pod \"console-65cc8f54b6-kkzh6\" (UID: \"49d06406-2beb-403f-8865-335fd73b5835\") " pod="openshift-console/console-65cc8f54b6-kkzh6" Jan 26 13:11:39 crc kubenswrapper[4844]: I0126 13:11:39.732555 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/49d06406-2beb-403f-8865-335fd73b5835-oauth-serving-cert\") pod \"console-65cc8f54b6-kkzh6\" (UID: \"49d06406-2beb-403f-8865-335fd73b5835\") " pod="openshift-console/console-65cc8f54b6-kkzh6" Jan 26 13:11:39 crc kubenswrapper[4844]: I0126 13:11:39.732760 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/213e48c5-2b34-4d8a-af54-773da9caddb5-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-qdxvv\" (UID: \"213e48c5-2b34-4d8a-af54-773da9caddb5\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-qdxvv" Jan 26 13:11:39 crc kubenswrapper[4844]: I0126 13:11:39.749978 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgpts\" (UniqueName: \"kubernetes.io/projected/213e48c5-2b34-4d8a-af54-773da9caddb5-kube-api-access-bgpts\") pod \"nmstate-console-plugin-7754f76f8b-qdxvv\" (UID: \"213e48c5-2b34-4d8a-af54-773da9caddb5\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-qdxvv" Jan 26 13:11:39 crc kubenswrapper[4844]: I0126 13:11:39.835229 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49d06406-2beb-403f-8865-335fd73b5835-trusted-ca-bundle\") pod \"console-65cc8f54b6-kkzh6\" (UID: \"49d06406-2beb-403f-8865-335fd73b5835\") " pod="openshift-console/console-65cc8f54b6-kkzh6" Jan 26 13:11:39 crc kubenswrapper[4844]: I0126 13:11:39.835279 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fv9h\" (UniqueName: \"kubernetes.io/projected/49d06406-2beb-403f-8865-335fd73b5835-kube-api-access-2fv9h\") pod \"console-65cc8f54b6-kkzh6\" (UID: \"49d06406-2beb-403f-8865-335fd73b5835\") " 
pod="openshift-console/console-65cc8f54b6-kkzh6" Jan 26 13:11:39 crc kubenswrapper[4844]: I0126 13:11:39.835307 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/49d06406-2beb-403f-8865-335fd73b5835-service-ca\") pod \"console-65cc8f54b6-kkzh6\" (UID: \"49d06406-2beb-403f-8865-335fd73b5835\") " pod="openshift-console/console-65cc8f54b6-kkzh6" Jan 26 13:11:39 crc kubenswrapper[4844]: I0126 13:11:39.835327 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/49d06406-2beb-403f-8865-335fd73b5835-console-serving-cert\") pod \"console-65cc8f54b6-kkzh6\" (UID: \"49d06406-2beb-403f-8865-335fd73b5835\") " pod="openshift-console/console-65cc8f54b6-kkzh6" Jan 26 13:11:39 crc kubenswrapper[4844]: I0126 13:11:39.835357 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/49d06406-2beb-403f-8865-335fd73b5835-console-oauth-config\") pod \"console-65cc8f54b6-kkzh6\" (UID: \"49d06406-2beb-403f-8865-335fd73b5835\") " pod="openshift-console/console-65cc8f54b6-kkzh6" Jan 26 13:11:39 crc kubenswrapper[4844]: I0126 13:11:39.835393 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/49d06406-2beb-403f-8865-335fd73b5835-oauth-serving-cert\") pod \"console-65cc8f54b6-kkzh6\" (UID: \"49d06406-2beb-403f-8865-335fd73b5835\") " pod="openshift-console/console-65cc8f54b6-kkzh6" Jan 26 13:11:39 crc kubenswrapper[4844]: I0126 13:11:39.835425 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/49d06406-2beb-403f-8865-335fd73b5835-console-config\") pod \"console-65cc8f54b6-kkzh6\" (UID: \"49d06406-2beb-403f-8865-335fd73b5835\") " pod="openshift-console/console-65cc8f54b6-kkzh6" Jan 26 13:11:39 crc kubenswrapper[4844]: I0126 13:11:39.836493 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/49d06406-2beb-403f-8865-335fd73b5835-console-config\") pod \"console-65cc8f54b6-kkzh6\" (UID: \"49d06406-2beb-403f-8865-335fd73b5835\") " pod="openshift-console/console-65cc8f54b6-kkzh6" Jan 26 13:11:39 crc kubenswrapper[4844]: I0126 13:11:39.840636 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/49d06406-2beb-403f-8865-335fd73b5835-service-ca\") pod \"console-65cc8f54b6-kkzh6\" (UID: \"49d06406-2beb-403f-8865-335fd73b5835\") " pod="openshift-console/console-65cc8f54b6-kkzh6" Jan 26 13:11:39 crc kubenswrapper[4844]: I0126 13:11:39.841465 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/49d06406-2beb-403f-8865-335fd73b5835-console-serving-cert\") pod \"console-65cc8f54b6-kkzh6\" (UID: \"49d06406-2beb-403f-8865-335fd73b5835\") " pod="openshift-console/console-65cc8f54b6-kkzh6" Jan 26 13:11:39 crc kubenswrapper[4844]: I0126 13:11:39.842498 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/49d06406-2beb-403f-8865-335fd73b5835-oauth-serving-cert\") pod \"console-65cc8f54b6-kkzh6\" (UID: \"49d06406-2beb-403f-8865-335fd73b5835\") " pod="openshift-console/console-65cc8f54b6-kkzh6" Jan 26 
13:11:39 crc kubenswrapper[4844]: I0126 13:11:39.843366 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49d06406-2beb-403f-8865-335fd73b5835-trusted-ca-bundle\") pod \"console-65cc8f54b6-kkzh6\" (UID: \"49d06406-2beb-403f-8865-335fd73b5835\") " pod="openshift-console/console-65cc8f54b6-kkzh6" Jan 26 13:11:39 crc kubenswrapper[4844]: I0126 13:11:39.844293 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/49d06406-2beb-403f-8865-335fd73b5835-console-oauth-config\") pod \"console-65cc8f54b6-kkzh6\" (UID: \"49d06406-2beb-403f-8865-335fd73b5835\") " pod="openshift-console/console-65cc8f54b6-kkzh6" Jan 26 13:11:39 crc kubenswrapper[4844]: I0126 13:11:39.869361 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fv9h\" (UniqueName: \"kubernetes.io/projected/49d06406-2beb-403f-8865-335fd73b5835-kube-api-access-2fv9h\") pod \"console-65cc8f54b6-kkzh6\" (UID: \"49d06406-2beb-403f-8865-335fd73b5835\") " pod="openshift-console/console-65cc8f54b6-kkzh6" Jan 26 13:11:39 crc kubenswrapper[4844]: I0126 13:11:39.937087 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-vgnf8"] Jan 26 13:11:39 crc kubenswrapper[4844]: I0126 13:11:39.996257 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-blwvj"] Jan 26 13:11:40 crc kubenswrapper[4844]: I0126 13:11:40.061655 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-65cc8f54b6-kkzh6" Jan 26 13:11:40 crc kubenswrapper[4844]: I0126 13:11:40.241751 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/213e48c5-2b34-4d8a-af54-773da9caddb5-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-qdxvv\" (UID: \"213e48c5-2b34-4d8a-af54-773da9caddb5\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-qdxvv" Jan 26 13:11:40 crc kubenswrapper[4844]: I0126 13:11:40.248062 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/213e48c5-2b34-4d8a-af54-773da9caddb5-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-qdxvv\" (UID: \"213e48c5-2b34-4d8a-af54-773da9caddb5\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-qdxvv" Jan 26 13:11:40 crc kubenswrapper[4844]: I0126 13:11:40.284432 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-2d462" event={"ID":"9baf25b3-6096-4215-9455-b9126c02ffcf","Type":"ContainerStarted","Data":"3361b37c172cdb4e9b3cf5dc380ac1b04674335cff73494a6f4b2648fbcd8b19"} Jan 26 13:11:40 crc kubenswrapper[4844]: I0126 13:11:40.285879 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-blwvj" event={"ID":"68790915-1674-4d77-8d03-d21698da101e","Type":"ContainerStarted","Data":"8f7154971b818878a432c35b055367d58af64ba88b0a97625539763a4912ff2e"} Jan 26 13:11:40 crc kubenswrapper[4844]: I0126 13:11:40.286786 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-vgnf8" event={"ID":"bcef572e-5718-4586-b0e3-907551cdf0ff","Type":"ContainerStarted","Data":"6b6896b7ff41e93d402c71b62ce9df17c53356c9dafb1a1f9cda4f454a936fd0"} Jan 26 13:11:40 crc 
kubenswrapper[4844]: I0126 13:11:40.458400 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-qdxvv" Jan 26 13:11:40 crc kubenswrapper[4844]: I0126 13:11:40.557694 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-65cc8f54b6-kkzh6"] Jan 26 13:11:40 crc kubenswrapper[4844]: W0126 13:11:40.575273 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod49d06406_2beb_403f_8865_335fd73b5835.slice/crio-e0eb83944a201b53aec87577e8d717840a6d65b9b5c0764340f60587e03dce6b WatchSource:0}: Error finding container e0eb83944a201b53aec87577e8d717840a6d65b9b5c0764340f60587e03dce6b: Status 404 returned error can't find the container with id e0eb83944a201b53aec87577e8d717840a6d65b9b5c0764340f60587e03dce6b Jan 26 13:11:40 crc kubenswrapper[4844]: I0126 13:11:40.720918 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-qdxvv"] Jan 26 13:11:41 crc kubenswrapper[4844]: I0126 13:11:41.292397 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-qdxvv" event={"ID":"213e48c5-2b34-4d8a-af54-773da9caddb5","Type":"ContainerStarted","Data":"2d0d922db68f6c192611496addd4bcb1c7006efb682fca9f594ad68091a9eb1d"} Jan 26 13:11:41 crc kubenswrapper[4844]: I0126 13:11:41.293807 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-65cc8f54b6-kkzh6" event={"ID":"49d06406-2beb-403f-8865-335fd73b5835","Type":"ContainerStarted","Data":"d7cd6733ff40cf02f2ed8d0942a5d0c04b7bc919aeb17c293563aca68d66829d"} Jan 26 13:11:41 crc kubenswrapper[4844]: I0126 13:11:41.293832 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-65cc8f54b6-kkzh6" event={"ID":"49d06406-2beb-403f-8865-335fd73b5835","Type":"ContainerStarted","Data":"e0eb83944a201b53aec87577e8d717840a6d65b9b5c0764340f60587e03dce6b"} Jan 26 13:11:41 crc kubenswrapper[4844]: I0126 13:11:41.311211 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-65cc8f54b6-kkzh6" podStartSLOduration=2.311195624 podStartE2EDuration="2.311195624s" podCreationTimestamp="2026-01-26 13:11:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 13:11:41.310574679 +0000 UTC m=+1678.243942301" watchObservedRunningTime="2026-01-26 13:11:41.311195624 +0000 UTC m=+1678.244563236" Jan 26 13:11:43 crc kubenswrapper[4844]: I0126 13:11:43.330992 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-blwvj" Jan 26 13:11:43 crc kubenswrapper[4844]: I0126 13:11:43.331570 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-2d462" Jan 26 13:11:43 crc kubenswrapper[4844]: I0126 13:11:43.331585 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-blwvj" event={"ID":"68790915-1674-4d77-8d03-d21698da101e","Type":"ContainerStarted","Data":"80925f80f9c7e557b19c94cc4b2f966f050f5fcf4d58704158e1625ec79c42fa"} Jan 26 13:11:43 crc kubenswrapper[4844]: I0126 13:11:43.331623 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-vgnf8" 
event={"ID":"bcef572e-5718-4586-b0e3-907551cdf0ff","Type":"ContainerStarted","Data":"93112f14f7da70d42989e8eb6dcef1beb29ee7f3eb17afe5ec2528953560f741"} Jan 26 13:11:43 crc kubenswrapper[4844]: I0126 13:11:43.331636 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-2d462" event={"ID":"9baf25b3-6096-4215-9455-b9126c02ffcf","Type":"ContainerStarted","Data":"60c2318b217f5d58650630c567f327bbcbb574b01f314690fc2f5af7991e04a1"} Jan 26 13:11:43 crc kubenswrapper[4844]: I0126 13:11:43.401183 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-2d462" podStartSLOduration=1.673021463 podStartE2EDuration="4.401163685s" podCreationTimestamp="2026-01-26 13:11:39 +0000 UTC" firstStartedPulling="2026-01-26 13:11:39.765740906 +0000 UTC m=+1676.699108518" lastFinishedPulling="2026-01-26 13:11:42.493883128 +0000 UTC m=+1679.427250740" observedRunningTime="2026-01-26 13:11:43.391887722 +0000 UTC m=+1680.325255344" watchObservedRunningTime="2026-01-26 13:11:43.401163685 +0000 UTC m=+1680.334531307" Jan 26 13:11:43 crc kubenswrapper[4844]: I0126 13:11:43.406386 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-blwvj" podStartSLOduration=1.907848032 podStartE2EDuration="4.40634884s" podCreationTimestamp="2026-01-26 13:11:39 +0000 UTC" firstStartedPulling="2026-01-26 13:11:40.001321524 +0000 UTC m=+1676.934689146" lastFinishedPulling="2026-01-26 13:11:42.499822342 +0000 UTC m=+1679.433189954" observedRunningTime="2026-01-26 13:11:43.404962177 +0000 UTC m=+1680.338329789" watchObservedRunningTime="2026-01-26 13:11:43.40634884 +0000 UTC m=+1680.339716452" Jan 26 13:11:44 crc kubenswrapper[4844]: I0126 13:11:44.343674 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-qdxvv" event={"ID":"213e48c5-2b34-4d8a-af54-773da9caddb5","Type":"ContainerStarted","Data":"323570668b167a31a3398b2dbde4c2f7c81d1d46860d747936442282b049d538"} Jan 26 13:11:44 crc kubenswrapper[4844]: I0126 13:11:44.361461 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-qdxvv" podStartSLOduration=2.237095768 podStartE2EDuration="5.361442069s" podCreationTimestamp="2026-01-26 13:11:39 +0000 UTC" firstStartedPulling="2026-01-26 13:11:40.727306862 +0000 UTC m=+1677.660674474" lastFinishedPulling="2026-01-26 13:11:43.851653163 +0000 UTC m=+1680.785020775" observedRunningTime="2026-01-26 13:11:44.360939138 +0000 UTC m=+1681.294306890" watchObservedRunningTime="2026-01-26 13:11:44.361442069 +0000 UTC m=+1681.294809681" Jan 26 13:11:45 crc kubenswrapper[4844]: I0126 13:11:45.356649 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-vgnf8" event={"ID":"bcef572e-5718-4586-b0e3-907551cdf0ff","Type":"ContainerStarted","Data":"94f1a77a4e57fe00cac632c3c382c5cc9eef83dc4822d805e46d7a16bd8dba9c"} Jan 26 13:11:45 crc kubenswrapper[4844]: I0126 13:11:45.376200 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-vgnf8" podStartSLOduration=1.154953506 podStartE2EDuration="6.376178256s" podCreationTimestamp="2026-01-26 13:11:39 +0000 UTC" firstStartedPulling="2026-01-26 13:11:39.949917305 +0000 UTC m=+1676.883284917" lastFinishedPulling="2026-01-26 13:11:45.171142045 +0000 UTC m=+1682.104509667" observedRunningTime="2026-01-26 
13:11:45.371064603 +0000 UTC m=+1682.304432255" watchObservedRunningTime="2026-01-26 13:11:45.376178256 +0000 UTC m=+1682.309545868" Jan 26 13:11:46 crc kubenswrapper[4844]: I0126 13:11:46.314139 4844 scope.go:117] "RemoveContainer" containerID="8bd0e29f4f3396a7e270924b961a9b78ffe005995a8558400b22ba50617d8a7e" Jan 26 13:11:46 crc kubenswrapper[4844]: E0126 13:11:46.314420 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 13:11:49 crc kubenswrapper[4844]: I0126 13:11:49.751508 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-2d462" Jan 26 13:11:50 crc kubenswrapper[4844]: I0126 13:11:50.062750 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-65cc8f54b6-kkzh6" Jan 26 13:11:50 crc kubenswrapper[4844]: I0126 13:11:50.062826 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-65cc8f54b6-kkzh6" Jan 26 13:11:50 crc kubenswrapper[4844]: I0126 13:11:50.067898 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-65cc8f54b6-kkzh6" Jan 26 13:11:50 crc kubenswrapper[4844]: I0126 13:11:50.398192 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-65cc8f54b6-kkzh6" Jan 26 13:11:50 crc kubenswrapper[4844]: I0126 13:11:50.462405 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-vhsn2"] Jan 26 13:11:59 crc kubenswrapper[4844]: I0126 13:11:59.703266 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-blwvj" Jan 26 13:12:00 crc kubenswrapper[4844]: I0126 13:12:00.315347 4844 scope.go:117] "RemoveContainer" containerID="8bd0e29f4f3396a7e270924b961a9b78ffe005995a8558400b22ba50617d8a7e" Jan 26 13:12:00 crc kubenswrapper[4844]: E0126 13:12:00.315857 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 13:12:13 crc kubenswrapper[4844]: I0126 13:12:13.316760 4844 scope.go:117] "RemoveContainer" containerID="8bd0e29f4f3396a7e270924b961a9b78ffe005995a8558400b22ba50617d8a7e" Jan 26 13:12:13 crc kubenswrapper[4844]: E0126 13:12:13.317888 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 13:12:15 crc kubenswrapper[4844]: I0126 13:12:15.508293 4844 kuberuntime_container.go:808] "Killing container 
with a grace period" pod="openshift-console/console-f9d7485db-vhsn2" podUID="8269d7d3-678d-44d5-885e-c5716e8024d8" containerName="console" containerID="cri-o://8f48e391126a27fc17f87108e0926de0cadaeafebd85ae862b34a557400870de" gracePeriod=15 Jan 26 13:12:15 crc kubenswrapper[4844]: I0126 13:12:15.682865 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqgt7j"] Jan 26 13:12:15 crc kubenswrapper[4844]: I0126 13:12:15.684203 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqgt7j" Jan 26 13:12:15 crc kubenswrapper[4844]: I0126 13:12:15.686276 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 26 13:12:15 crc kubenswrapper[4844]: I0126 13:12:15.691243 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqgt7j"] Jan 26 13:12:15 crc kubenswrapper[4844]: I0126 13:12:15.765889 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a04410f5-0ebb-4519-9806-a0210b9fdfdc-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqgt7j\" (UID: \"a04410f5-0ebb-4519-9806-a0210b9fdfdc\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqgt7j" Jan 26 13:12:15 crc kubenswrapper[4844]: I0126 13:12:15.766230 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shwm8\" (UniqueName: \"kubernetes.io/projected/a04410f5-0ebb-4519-9806-a0210b9fdfdc-kube-api-access-shwm8\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqgt7j\" (UID: \"a04410f5-0ebb-4519-9806-a0210b9fdfdc\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqgt7j" Jan 26 13:12:15 crc kubenswrapper[4844]: I0126 13:12:15.766271 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a04410f5-0ebb-4519-9806-a0210b9fdfdc-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqgt7j\" (UID: \"a04410f5-0ebb-4519-9806-a0210b9fdfdc\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqgt7j" Jan 26 13:12:15 crc kubenswrapper[4844]: I0126 13:12:15.867661 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a04410f5-0ebb-4519-9806-a0210b9fdfdc-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqgt7j\" (UID: \"a04410f5-0ebb-4519-9806-a0210b9fdfdc\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqgt7j" Jan 26 13:12:15 crc kubenswrapper[4844]: I0126 13:12:15.867735 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shwm8\" (UniqueName: \"kubernetes.io/projected/a04410f5-0ebb-4519-9806-a0210b9fdfdc-kube-api-access-shwm8\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqgt7j\" (UID: \"a04410f5-0ebb-4519-9806-a0210b9fdfdc\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqgt7j" Jan 26 13:12:15 crc kubenswrapper[4844]: I0126 13:12:15.867772 4844 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a04410f5-0ebb-4519-9806-a0210b9fdfdc-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqgt7j\" (UID: \"a04410f5-0ebb-4519-9806-a0210b9fdfdc\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqgt7j" Jan 26 13:12:15 crc kubenswrapper[4844]: I0126 13:12:15.868326 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a04410f5-0ebb-4519-9806-a0210b9fdfdc-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqgt7j\" (UID: \"a04410f5-0ebb-4519-9806-a0210b9fdfdc\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqgt7j" Jan 26 13:12:15 crc kubenswrapper[4844]: I0126 13:12:15.868378 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a04410f5-0ebb-4519-9806-a0210b9fdfdc-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqgt7j\" (UID: \"a04410f5-0ebb-4519-9806-a0210b9fdfdc\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqgt7j" Jan 26 13:12:15 crc kubenswrapper[4844]: I0126 13:12:15.886640 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shwm8\" (UniqueName: \"kubernetes.io/projected/a04410f5-0ebb-4519-9806-a0210b9fdfdc-kube-api-access-shwm8\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqgt7j\" (UID: \"a04410f5-0ebb-4519-9806-a0210b9fdfdc\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqgt7j" Jan 26 13:12:15 crc kubenswrapper[4844]: I0126 13:12:15.919269 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-vhsn2_8269d7d3-678d-44d5-885e-c5716e8024d8/console/0.log" Jan 26 13:12:15 crc kubenswrapper[4844]: I0126 13:12:15.919325 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-vhsn2" Jan 26 13:12:15 crc kubenswrapper[4844]: I0126 13:12:15.968342 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8269d7d3-678d-44d5-885e-c5716e8024d8-console-serving-cert\") pod \"8269d7d3-678d-44d5-885e-c5716e8024d8\" (UID: \"8269d7d3-678d-44d5-885e-c5716e8024d8\") " Jan 26 13:12:15 crc kubenswrapper[4844]: I0126 13:12:15.968421 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p2dgs\" (UniqueName: \"kubernetes.io/projected/8269d7d3-678d-44d5-885e-c5716e8024d8-kube-api-access-p2dgs\") pod \"8269d7d3-678d-44d5-885e-c5716e8024d8\" (UID: \"8269d7d3-678d-44d5-885e-c5716e8024d8\") " Jan 26 13:12:15 crc kubenswrapper[4844]: I0126 13:12:15.968443 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8269d7d3-678d-44d5-885e-c5716e8024d8-console-config\") pod \"8269d7d3-678d-44d5-885e-c5716e8024d8\" (UID: \"8269d7d3-678d-44d5-885e-c5716e8024d8\") " Jan 26 13:12:15 crc kubenswrapper[4844]: I0126 13:12:15.968461 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8269d7d3-678d-44d5-885e-c5716e8024d8-trusted-ca-bundle\") pod \"8269d7d3-678d-44d5-885e-c5716e8024d8\" (UID: \"8269d7d3-678d-44d5-885e-c5716e8024d8\") " Jan 26 13:12:15 crc kubenswrapper[4844]: I0126 13:12:15.968478 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8269d7d3-678d-44d5-885e-c5716e8024d8-console-oauth-config\") pod \"8269d7d3-678d-44d5-885e-c5716e8024d8\" (UID: \"8269d7d3-678d-44d5-885e-c5716e8024d8\") " Jan 26 13:12:15 crc kubenswrapper[4844]: I0126 13:12:15.968502 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8269d7d3-678d-44d5-885e-c5716e8024d8-oauth-serving-cert\") pod \"8269d7d3-678d-44d5-885e-c5716e8024d8\" (UID: \"8269d7d3-678d-44d5-885e-c5716e8024d8\") " Jan 26 13:12:15 crc kubenswrapper[4844]: I0126 13:12:15.968520 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8269d7d3-678d-44d5-885e-c5716e8024d8-service-ca\") pod \"8269d7d3-678d-44d5-885e-c5716e8024d8\" (UID: \"8269d7d3-678d-44d5-885e-c5716e8024d8\") " Jan 26 13:12:15 crc kubenswrapper[4844]: I0126 13:12:15.969338 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8269d7d3-678d-44d5-885e-c5716e8024d8-service-ca" (OuterVolumeSpecName: "service-ca") pod "8269d7d3-678d-44d5-885e-c5716e8024d8" (UID: "8269d7d3-678d-44d5-885e-c5716e8024d8"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:12:15 crc kubenswrapper[4844]: I0126 13:12:15.969450 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8269d7d3-678d-44d5-885e-c5716e8024d8-console-config" (OuterVolumeSpecName: "console-config") pod "8269d7d3-678d-44d5-885e-c5716e8024d8" (UID: "8269d7d3-678d-44d5-885e-c5716e8024d8"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:12:15 crc kubenswrapper[4844]: I0126 13:12:15.969747 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8269d7d3-678d-44d5-885e-c5716e8024d8-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "8269d7d3-678d-44d5-885e-c5716e8024d8" (UID: "8269d7d3-678d-44d5-885e-c5716e8024d8"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:12:15 crc kubenswrapper[4844]: I0126 13:12:15.970700 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8269d7d3-678d-44d5-885e-c5716e8024d8-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "8269d7d3-678d-44d5-885e-c5716e8024d8" (UID: "8269d7d3-678d-44d5-885e-c5716e8024d8"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:12:15 crc kubenswrapper[4844]: I0126 13:12:15.972413 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8269d7d3-678d-44d5-885e-c5716e8024d8-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "8269d7d3-678d-44d5-885e-c5716e8024d8" (UID: "8269d7d3-678d-44d5-885e-c5716e8024d8"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:12:15 crc kubenswrapper[4844]: I0126 13:12:15.973344 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8269d7d3-678d-44d5-885e-c5716e8024d8-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "8269d7d3-678d-44d5-885e-c5716e8024d8" (UID: "8269d7d3-678d-44d5-885e-c5716e8024d8"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:12:15 crc kubenswrapper[4844]: I0126 13:12:15.977143 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8269d7d3-678d-44d5-885e-c5716e8024d8-kube-api-access-p2dgs" (OuterVolumeSpecName: "kube-api-access-p2dgs") pod "8269d7d3-678d-44d5-885e-c5716e8024d8" (UID: "8269d7d3-678d-44d5-885e-c5716e8024d8"). InnerVolumeSpecName "kube-api-access-p2dgs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:12:16 crc kubenswrapper[4844]: I0126 13:12:16.004171 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqgt7j" Jan 26 13:12:16 crc kubenswrapper[4844]: I0126 13:12:16.069948 4844 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8269d7d3-678d-44d5-885e-c5716e8024d8-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 13:12:16 crc kubenswrapper[4844]: I0126 13:12:16.069978 4844 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8269d7d3-678d-44d5-885e-c5716e8024d8-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 13:12:16 crc kubenswrapper[4844]: I0126 13:12:16.069991 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p2dgs\" (UniqueName: \"kubernetes.io/projected/8269d7d3-678d-44d5-885e-c5716e8024d8-kube-api-access-p2dgs\") on node \"crc\" DevicePath \"\"" Jan 26 13:12:16 crc kubenswrapper[4844]: I0126 13:12:16.070000 4844 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8269d7d3-678d-44d5-885e-c5716e8024d8-console-config\") on node \"crc\" DevicePath \"\"" Jan 26 13:12:16 crc kubenswrapper[4844]: I0126 13:12:16.070010 4844 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8269d7d3-678d-44d5-885e-c5716e8024d8-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 13:12:16 crc kubenswrapper[4844]: I0126 13:12:16.070019 4844 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8269d7d3-678d-44d5-885e-c5716e8024d8-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 26 13:12:16 crc kubenswrapper[4844]: I0126 13:12:16.070027 4844 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8269d7d3-678d-44d5-885e-c5716e8024d8-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 13:12:16 crc kubenswrapper[4844]: I0126 13:12:16.416291 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqgt7j"] Jan 26 13:12:16 crc kubenswrapper[4844]: I0126 13:12:16.580894 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqgt7j" event={"ID":"a04410f5-0ebb-4519-9806-a0210b9fdfdc","Type":"ContainerStarted","Data":"99d221acd2c8ffc9d86a6c520b8e2c7e13eb2f9574f0579eb1aaf41c9ad9f561"} Jan 26 13:12:16 crc kubenswrapper[4844]: I0126 13:12:16.582529 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-vhsn2_8269d7d3-678d-44d5-885e-c5716e8024d8/console/0.log" Jan 26 13:12:16 crc kubenswrapper[4844]: I0126 13:12:16.582575 4844 generic.go:334] "Generic (PLEG): container finished" podID="8269d7d3-678d-44d5-885e-c5716e8024d8" containerID="8f48e391126a27fc17f87108e0926de0cadaeafebd85ae862b34a557400870de" exitCode=2 Jan 26 13:12:16 crc kubenswrapper[4844]: I0126 13:12:16.582638 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-vhsn2" event={"ID":"8269d7d3-678d-44d5-885e-c5716e8024d8","Type":"ContainerDied","Data":"8f48e391126a27fc17f87108e0926de0cadaeafebd85ae862b34a557400870de"} Jan 26 13:12:16 crc kubenswrapper[4844]: I0126 13:12:16.582681 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-vhsn2" Jan 26 13:12:16 crc kubenswrapper[4844]: I0126 13:12:16.582714 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-vhsn2" event={"ID":"8269d7d3-678d-44d5-885e-c5716e8024d8","Type":"ContainerDied","Data":"518e032a28b7b5814efefee927465d7d479a8f18c62442e6d011f64c8a321648"} Jan 26 13:12:16 crc kubenswrapper[4844]: I0126 13:12:16.582734 4844 scope.go:117] "RemoveContainer" containerID="8f48e391126a27fc17f87108e0926de0cadaeafebd85ae862b34a557400870de" Jan 26 13:12:16 crc kubenswrapper[4844]: I0126 13:12:16.601778 4844 scope.go:117] "RemoveContainer" containerID="8f48e391126a27fc17f87108e0926de0cadaeafebd85ae862b34a557400870de" Jan 26 13:12:16 crc kubenswrapper[4844]: E0126 13:12:16.605223 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f48e391126a27fc17f87108e0926de0cadaeafebd85ae862b34a557400870de\": container with ID starting with 8f48e391126a27fc17f87108e0926de0cadaeafebd85ae862b34a557400870de not found: ID does not exist" containerID="8f48e391126a27fc17f87108e0926de0cadaeafebd85ae862b34a557400870de" Jan 26 13:12:16 crc kubenswrapper[4844]: I0126 13:12:16.605255 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f48e391126a27fc17f87108e0926de0cadaeafebd85ae862b34a557400870de"} err="failed to get container status \"8f48e391126a27fc17f87108e0926de0cadaeafebd85ae862b34a557400870de\": rpc error: code = NotFound desc = could not find container \"8f48e391126a27fc17f87108e0926de0cadaeafebd85ae862b34a557400870de\": container with ID starting with 8f48e391126a27fc17f87108e0926de0cadaeafebd85ae862b34a557400870de not found: ID does not exist" Jan 26 13:12:16 crc kubenswrapper[4844]: I0126 13:12:16.610163 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-vhsn2"] Jan 26 13:12:16 crc kubenswrapper[4844]: I0126 13:12:16.621358 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-vhsn2"] Jan 26 13:12:17 crc kubenswrapper[4844]: I0126 13:12:17.320470 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8269d7d3-678d-44d5-885e-c5716e8024d8" path="/var/lib/kubelet/pods/8269d7d3-678d-44d5-885e-c5716e8024d8/volumes" Jan 26 13:12:19 crc kubenswrapper[4844]: I0126 13:12:19.602831 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqgt7j" event={"ID":"a04410f5-0ebb-4519-9806-a0210b9fdfdc","Type":"ContainerStarted","Data":"3c906c9c5f98d4ac056025229d9e34d4a60dc422724dd3bf1181e41ba75fea26"} Jan 26 13:12:20 crc kubenswrapper[4844]: I0126 13:12:20.610873 4844 generic.go:334] "Generic (PLEG): container finished" podID="a04410f5-0ebb-4519-9806-a0210b9fdfdc" containerID="3c906c9c5f98d4ac056025229d9e34d4a60dc422724dd3bf1181e41ba75fea26" exitCode=0 Jan 26 13:12:20 crc kubenswrapper[4844]: I0126 13:12:20.610913 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqgt7j" event={"ID":"a04410f5-0ebb-4519-9806-a0210b9fdfdc","Type":"ContainerDied","Data":"3c906c9c5f98d4ac056025229d9e34d4a60dc422724dd3bf1181e41ba75fea26"} Jan 26 13:12:22 crc kubenswrapper[4844]: I0126 13:12:22.627501 4844 generic.go:334] "Generic (PLEG): container finished" podID="a04410f5-0ebb-4519-9806-a0210b9fdfdc" 
containerID="1e934d556e83bb6bf8d081fb016f47ad36b26d95936ac0c902f30a4b06d7da4c" exitCode=0 Jan 26 13:12:22 crc kubenswrapper[4844]: I0126 13:12:22.627578 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqgt7j" event={"ID":"a04410f5-0ebb-4519-9806-a0210b9fdfdc","Type":"ContainerDied","Data":"1e934d556e83bb6bf8d081fb016f47ad36b26d95936ac0c902f30a4b06d7da4c"} Jan 26 13:12:23 crc kubenswrapper[4844]: I0126 13:12:23.638916 4844 generic.go:334] "Generic (PLEG): container finished" podID="a04410f5-0ebb-4519-9806-a0210b9fdfdc" containerID="1c3b6e6609c11c914ad7537ceb28dbdc1de6bc14971da73621f6f130e5f19dc4" exitCode=0 Jan 26 13:12:23 crc kubenswrapper[4844]: I0126 13:12:23.639005 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqgt7j" event={"ID":"a04410f5-0ebb-4519-9806-a0210b9fdfdc","Type":"ContainerDied","Data":"1c3b6e6609c11c914ad7537ceb28dbdc1de6bc14971da73621f6f130e5f19dc4"} Jan 26 13:12:24 crc kubenswrapper[4844]: I0126 13:12:24.314160 4844 scope.go:117] "RemoveContainer" containerID="8bd0e29f4f3396a7e270924b961a9b78ffe005995a8558400b22ba50617d8a7e" Jan 26 13:12:24 crc kubenswrapper[4844]: E0126 13:12:24.315177 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 13:12:24 crc kubenswrapper[4844]: I0126 13:12:24.947987 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqgt7j" Jan 26 13:12:25 crc kubenswrapper[4844]: I0126 13:12:25.048659 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a04410f5-0ebb-4519-9806-a0210b9fdfdc-util\") pod \"a04410f5-0ebb-4519-9806-a0210b9fdfdc\" (UID: \"a04410f5-0ebb-4519-9806-a0210b9fdfdc\") " Jan 26 13:12:25 crc kubenswrapper[4844]: I0126 13:12:25.048785 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a04410f5-0ebb-4519-9806-a0210b9fdfdc-bundle\") pod \"a04410f5-0ebb-4519-9806-a0210b9fdfdc\" (UID: \"a04410f5-0ebb-4519-9806-a0210b9fdfdc\") " Jan 26 13:12:25 crc kubenswrapper[4844]: I0126 13:12:25.048935 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-shwm8\" (UniqueName: \"kubernetes.io/projected/a04410f5-0ebb-4519-9806-a0210b9fdfdc-kube-api-access-shwm8\") pod \"a04410f5-0ebb-4519-9806-a0210b9fdfdc\" (UID: \"a04410f5-0ebb-4519-9806-a0210b9fdfdc\") " Jan 26 13:12:25 crc kubenswrapper[4844]: I0126 13:12:25.051140 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a04410f5-0ebb-4519-9806-a0210b9fdfdc-bundle" (OuterVolumeSpecName: "bundle") pod "a04410f5-0ebb-4519-9806-a0210b9fdfdc" (UID: "a04410f5-0ebb-4519-9806-a0210b9fdfdc"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:12:25 crc kubenswrapper[4844]: I0126 13:12:25.058458 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a04410f5-0ebb-4519-9806-a0210b9fdfdc-kube-api-access-shwm8" (OuterVolumeSpecName: "kube-api-access-shwm8") pod "a04410f5-0ebb-4519-9806-a0210b9fdfdc" (UID: "a04410f5-0ebb-4519-9806-a0210b9fdfdc"). InnerVolumeSpecName "kube-api-access-shwm8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:12:25 crc kubenswrapper[4844]: I0126 13:12:25.152397 4844 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a04410f5-0ebb-4519-9806-a0210b9fdfdc-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 13:12:25 crc kubenswrapper[4844]: I0126 13:12:25.152536 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-shwm8\" (UniqueName: \"kubernetes.io/projected/a04410f5-0ebb-4519-9806-a0210b9fdfdc-kube-api-access-shwm8\") on node \"crc\" DevicePath \"\"" Jan 26 13:12:25 crc kubenswrapper[4844]: I0126 13:12:25.195989 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a04410f5-0ebb-4519-9806-a0210b9fdfdc-util" (OuterVolumeSpecName: "util") pod "a04410f5-0ebb-4519-9806-a0210b9fdfdc" (UID: "a04410f5-0ebb-4519-9806-a0210b9fdfdc"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:12:25 crc kubenswrapper[4844]: I0126 13:12:25.254046 4844 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a04410f5-0ebb-4519-9806-a0210b9fdfdc-util\") on node \"crc\" DevicePath \"\"" Jan 26 13:12:25 crc kubenswrapper[4844]: I0126 13:12:25.657353 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqgt7j" event={"ID":"a04410f5-0ebb-4519-9806-a0210b9fdfdc","Type":"ContainerDied","Data":"99d221acd2c8ffc9d86a6c520b8e2c7e13eb2f9574f0579eb1aaf41c9ad9f561"} Jan 26 13:12:25 crc kubenswrapper[4844]: I0126 13:12:25.657406 4844 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="99d221acd2c8ffc9d86a6c520b8e2c7e13eb2f9574f0579eb1aaf41c9ad9f561" Jan 26 13:12:25 crc kubenswrapper[4844]: I0126 13:12:25.657472 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqgt7j" Jan 26 13:12:35 crc kubenswrapper[4844]: I0126 13:12:35.313716 4844 scope.go:117] "RemoveContainer" containerID="8bd0e29f4f3396a7e270924b961a9b78ffe005995a8558400b22ba50617d8a7e" Jan 26 13:12:35 crc kubenswrapper[4844]: E0126 13:12:35.314510 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 13:12:36 crc kubenswrapper[4844]: I0126 13:12:36.656407 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-59ccf49fff-tmmnh"] Jan 26 13:12:36 crc kubenswrapper[4844]: E0126 13:12:36.657089 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8269d7d3-678d-44d5-885e-c5716e8024d8" containerName="console" Jan 26 13:12:36 crc kubenswrapper[4844]: I0126 13:12:36.657166 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="8269d7d3-678d-44d5-885e-c5716e8024d8" containerName="console" Jan 26 13:12:36 crc kubenswrapper[4844]: E0126 13:12:36.657225 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a04410f5-0ebb-4519-9806-a0210b9fdfdc" containerName="extract" Jan 26 13:12:36 crc kubenswrapper[4844]: I0126 13:12:36.657278 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="a04410f5-0ebb-4519-9806-a0210b9fdfdc" containerName="extract" Jan 26 13:12:36 crc kubenswrapper[4844]: E0126 13:12:36.657354 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a04410f5-0ebb-4519-9806-a0210b9fdfdc" containerName="pull" Jan 26 13:12:36 crc kubenswrapper[4844]: I0126 13:12:36.657426 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="a04410f5-0ebb-4519-9806-a0210b9fdfdc" containerName="pull" Jan 26 13:12:36 crc kubenswrapper[4844]: E0126 13:12:36.657485 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a04410f5-0ebb-4519-9806-a0210b9fdfdc" containerName="util" Jan 26 13:12:36 crc kubenswrapper[4844]: I0126 13:12:36.657548 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="a04410f5-0ebb-4519-9806-a0210b9fdfdc" containerName="util" Jan 26 13:12:36 crc kubenswrapper[4844]: I0126 13:12:36.657729 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="a04410f5-0ebb-4519-9806-a0210b9fdfdc" containerName="extract" Jan 26 13:12:36 crc kubenswrapper[4844]: I0126 13:12:36.657799 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="8269d7d3-678d-44d5-885e-c5716e8024d8" containerName="console" Jan 26 13:12:36 crc kubenswrapper[4844]: I0126 13:12:36.658232 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-59ccf49fff-tmmnh" Jan 26 13:12:36 crc kubenswrapper[4844]: I0126 13:12:36.660624 4844 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 26 13:12:36 crc kubenswrapper[4844]: I0126 13:12:36.660667 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 26 13:12:36 crc kubenswrapper[4844]: I0126 13:12:36.662822 4844 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 26 13:12:36 crc kubenswrapper[4844]: I0126 13:12:36.663787 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 26 13:12:36 crc kubenswrapper[4844]: I0126 13:12:36.666888 4844 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-hgx78" Jan 26 13:12:36 crc kubenswrapper[4844]: I0126 13:12:36.668900 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-59ccf49fff-tmmnh"] Jan 26 13:12:36 crc kubenswrapper[4844]: I0126 13:12:36.701493 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/03a2059f-ed6b-49f5-9476-bf21d424567f-apiservice-cert\") pod \"metallb-operator-controller-manager-59ccf49fff-tmmnh\" (UID: \"03a2059f-ed6b-49f5-9476-bf21d424567f\") " pod="metallb-system/metallb-operator-controller-manager-59ccf49fff-tmmnh" Jan 26 13:12:36 crc kubenswrapper[4844]: I0126 13:12:36.701548 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/03a2059f-ed6b-49f5-9476-bf21d424567f-webhook-cert\") pod \"metallb-operator-controller-manager-59ccf49fff-tmmnh\" (UID: \"03a2059f-ed6b-49f5-9476-bf21d424567f\") " pod="metallb-system/metallb-operator-controller-manager-59ccf49fff-tmmnh" Jan 26 13:12:36 crc kubenswrapper[4844]: I0126 13:12:36.701574 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5559t\" (UniqueName: \"kubernetes.io/projected/03a2059f-ed6b-49f5-9476-bf21d424567f-kube-api-access-5559t\") pod \"metallb-operator-controller-manager-59ccf49fff-tmmnh\" (UID: \"03a2059f-ed6b-49f5-9476-bf21d424567f\") " pod="metallb-system/metallb-operator-controller-manager-59ccf49fff-tmmnh" Jan 26 13:12:36 crc kubenswrapper[4844]: I0126 13:12:36.802584 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5559t\" (UniqueName: \"kubernetes.io/projected/03a2059f-ed6b-49f5-9476-bf21d424567f-kube-api-access-5559t\") pod \"metallb-operator-controller-manager-59ccf49fff-tmmnh\" (UID: \"03a2059f-ed6b-49f5-9476-bf21d424567f\") " pod="metallb-system/metallb-operator-controller-manager-59ccf49fff-tmmnh" Jan 26 13:12:36 crc kubenswrapper[4844]: I0126 13:12:36.802709 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/03a2059f-ed6b-49f5-9476-bf21d424567f-apiservice-cert\") pod \"metallb-operator-controller-manager-59ccf49fff-tmmnh\" (UID: \"03a2059f-ed6b-49f5-9476-bf21d424567f\") " pod="metallb-system/metallb-operator-controller-manager-59ccf49fff-tmmnh" Jan 26 13:12:36 crc kubenswrapper[4844]: I0126 13:12:36.802733 
4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/03a2059f-ed6b-49f5-9476-bf21d424567f-webhook-cert\") pod \"metallb-operator-controller-manager-59ccf49fff-tmmnh\" (UID: \"03a2059f-ed6b-49f5-9476-bf21d424567f\") " pod="metallb-system/metallb-operator-controller-manager-59ccf49fff-tmmnh" Jan 26 13:12:36 crc kubenswrapper[4844]: I0126 13:12:36.808030 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/03a2059f-ed6b-49f5-9476-bf21d424567f-webhook-cert\") pod \"metallb-operator-controller-manager-59ccf49fff-tmmnh\" (UID: \"03a2059f-ed6b-49f5-9476-bf21d424567f\") " pod="metallb-system/metallb-operator-controller-manager-59ccf49fff-tmmnh" Jan 26 13:12:36 crc kubenswrapper[4844]: I0126 13:12:36.808645 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/03a2059f-ed6b-49f5-9476-bf21d424567f-apiservice-cert\") pod \"metallb-operator-controller-manager-59ccf49fff-tmmnh\" (UID: \"03a2059f-ed6b-49f5-9476-bf21d424567f\") " pod="metallb-system/metallb-operator-controller-manager-59ccf49fff-tmmnh" Jan 26 13:12:36 crc kubenswrapper[4844]: I0126 13:12:36.829972 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5559t\" (UniqueName: \"kubernetes.io/projected/03a2059f-ed6b-49f5-9476-bf21d424567f-kube-api-access-5559t\") pod \"metallb-operator-controller-manager-59ccf49fff-tmmnh\" (UID: \"03a2059f-ed6b-49f5-9476-bf21d424567f\") " pod="metallb-system/metallb-operator-controller-manager-59ccf49fff-tmmnh" Jan 26 13:12:36 crc kubenswrapper[4844]: I0126 13:12:36.975944 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-59ccf49fff-tmmnh" Jan 26 13:12:37 crc kubenswrapper[4844]: I0126 13:12:37.022022 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-56567ff486-jdjng"] Jan 26 13:12:37 crc kubenswrapper[4844]: I0126 13:12:37.022835 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-56567ff486-jdjng" Jan 26 13:12:37 crc kubenswrapper[4844]: I0126 13:12:37.025854 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-56567ff486-jdjng"] Jan 26 13:12:37 crc kubenswrapper[4844]: I0126 13:12:37.026078 4844 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 26 13:12:37 crc kubenswrapper[4844]: I0126 13:12:37.026186 4844 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 26 13:12:37 crc kubenswrapper[4844]: I0126 13:12:37.026235 4844 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-lftl2" Jan 26 13:12:37 crc kubenswrapper[4844]: I0126 13:12:37.105678 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2d1458da-4eb4-4e5a-ae05-399cb9e40dda-apiservice-cert\") pod \"metallb-operator-webhook-server-56567ff486-jdjng\" (UID: \"2d1458da-4eb4-4e5a-ae05-399cb9e40dda\") " pod="metallb-system/metallb-operator-webhook-server-56567ff486-jdjng" Jan 26 13:12:37 crc kubenswrapper[4844]: I0126 13:12:37.105776 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2d1458da-4eb4-4e5a-ae05-399cb9e40dda-webhook-cert\") pod \"metallb-operator-webhook-server-56567ff486-jdjng\" (UID: \"2d1458da-4eb4-4e5a-ae05-399cb9e40dda\") " pod="metallb-system/metallb-operator-webhook-server-56567ff486-jdjng" Jan 26 13:12:37 crc kubenswrapper[4844]: I0126 13:12:37.105807 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92nwp\" (UniqueName: \"kubernetes.io/projected/2d1458da-4eb4-4e5a-ae05-399cb9e40dda-kube-api-access-92nwp\") pod \"metallb-operator-webhook-server-56567ff486-jdjng\" (UID: \"2d1458da-4eb4-4e5a-ae05-399cb9e40dda\") " pod="metallb-system/metallb-operator-webhook-server-56567ff486-jdjng" Jan 26 13:12:37 crc kubenswrapper[4844]: I0126 13:12:37.206708 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2d1458da-4eb4-4e5a-ae05-399cb9e40dda-webhook-cert\") pod \"metallb-operator-webhook-server-56567ff486-jdjng\" (UID: \"2d1458da-4eb4-4e5a-ae05-399cb9e40dda\") " pod="metallb-system/metallb-operator-webhook-server-56567ff486-jdjng" Jan 26 13:12:37 crc kubenswrapper[4844]: I0126 13:12:37.206768 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92nwp\" (UniqueName: \"kubernetes.io/projected/2d1458da-4eb4-4e5a-ae05-399cb9e40dda-kube-api-access-92nwp\") pod \"metallb-operator-webhook-server-56567ff486-jdjng\" (UID: \"2d1458da-4eb4-4e5a-ae05-399cb9e40dda\") " pod="metallb-system/metallb-operator-webhook-server-56567ff486-jdjng" Jan 26 13:12:37 crc kubenswrapper[4844]: I0126 13:12:37.206848 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2d1458da-4eb4-4e5a-ae05-399cb9e40dda-apiservice-cert\") pod \"metallb-operator-webhook-server-56567ff486-jdjng\" (UID: \"2d1458da-4eb4-4e5a-ae05-399cb9e40dda\") " pod="metallb-system/metallb-operator-webhook-server-56567ff486-jdjng" Jan 26 13:12:37 crc kubenswrapper[4844]: I0126 
13:12:37.211522 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2d1458da-4eb4-4e5a-ae05-399cb9e40dda-webhook-cert\") pod \"metallb-operator-webhook-server-56567ff486-jdjng\" (UID: \"2d1458da-4eb4-4e5a-ae05-399cb9e40dda\") " pod="metallb-system/metallb-operator-webhook-server-56567ff486-jdjng" Jan 26 13:12:37 crc kubenswrapper[4844]: I0126 13:12:37.212844 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2d1458da-4eb4-4e5a-ae05-399cb9e40dda-apiservice-cert\") pod \"metallb-operator-webhook-server-56567ff486-jdjng\" (UID: \"2d1458da-4eb4-4e5a-ae05-399cb9e40dda\") " pod="metallb-system/metallb-operator-webhook-server-56567ff486-jdjng" Jan 26 13:12:37 crc kubenswrapper[4844]: I0126 13:12:37.222243 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92nwp\" (UniqueName: \"kubernetes.io/projected/2d1458da-4eb4-4e5a-ae05-399cb9e40dda-kube-api-access-92nwp\") pod \"metallb-operator-webhook-server-56567ff486-jdjng\" (UID: \"2d1458da-4eb4-4e5a-ae05-399cb9e40dda\") " pod="metallb-system/metallb-operator-webhook-server-56567ff486-jdjng" Jan 26 13:12:37 crc kubenswrapper[4844]: I0126 13:12:37.388533 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-56567ff486-jdjng" Jan 26 13:12:37 crc kubenswrapper[4844]: I0126 13:12:37.525188 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-59ccf49fff-tmmnh"] Jan 26 13:12:37 crc kubenswrapper[4844]: W0126 13:12:37.530518 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod03a2059f_ed6b_49f5_9476_bf21d424567f.slice/crio-437414dc8be0fe3c1818f17bbf4dede5008c1bd17833918ee7b5d005650da8d4 WatchSource:0}: Error finding container 437414dc8be0fe3c1818f17bbf4dede5008c1bd17833918ee7b5d005650da8d4: Status 404 returned error can't find the container with id 437414dc8be0fe3c1818f17bbf4dede5008c1bd17833918ee7b5d005650da8d4 Jan 26 13:12:37 crc kubenswrapper[4844]: I0126 13:12:37.728203 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-59ccf49fff-tmmnh" event={"ID":"03a2059f-ed6b-49f5-9476-bf21d424567f","Type":"ContainerStarted","Data":"437414dc8be0fe3c1818f17bbf4dede5008c1bd17833918ee7b5d005650da8d4"} Jan 26 13:12:37 crc kubenswrapper[4844]: I0126 13:12:37.848307 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-56567ff486-jdjng"] Jan 26 13:12:38 crc kubenswrapper[4844]: I0126 13:12:38.738538 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-56567ff486-jdjng" event={"ID":"2d1458da-4eb4-4e5a-ae05-399cb9e40dda","Type":"ContainerStarted","Data":"1dfa7f9e0944e937df9123e1d2e9919ca9d3f8b76cf64ebf134739997388d787"} Jan 26 13:12:43 crc kubenswrapper[4844]: I0126 13:12:43.774966 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-56567ff486-jdjng" event={"ID":"2d1458da-4eb4-4e5a-ae05-399cb9e40dda","Type":"ContainerStarted","Data":"5cf2907148bc9327c9b5d96ab519c5135166f12031d88149810bfd3bbadff719"} Jan 26 13:12:43 crc kubenswrapper[4844]: I0126 13:12:43.775520 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="metallb-system/metallb-operator-webhook-server-56567ff486-jdjng" Jan 26 13:12:43 crc kubenswrapper[4844]: I0126 13:12:43.777427 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-59ccf49fff-tmmnh" event={"ID":"03a2059f-ed6b-49f5-9476-bf21d424567f","Type":"ContainerStarted","Data":"91296fe71787041adaad4e8d86304cd91d37add7c6173e049e920a36789426e9"} Jan 26 13:12:43 crc kubenswrapper[4844]: I0126 13:12:43.777656 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-59ccf49fff-tmmnh" Jan 26 13:12:43 crc kubenswrapper[4844]: I0126 13:12:43.798409 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-56567ff486-jdjng" podStartSLOduration=2.341631759 podStartE2EDuration="7.798394555s" podCreationTimestamp="2026-01-26 13:12:36 +0000 UTC" firstStartedPulling="2026-01-26 13:12:37.857722695 +0000 UTC m=+1734.791090307" lastFinishedPulling="2026-01-26 13:12:43.314485491 +0000 UTC m=+1740.247853103" observedRunningTime="2026-01-26 13:12:43.793362873 +0000 UTC m=+1740.726730475" watchObservedRunningTime="2026-01-26 13:12:43.798394555 +0000 UTC m=+1740.731762167" Jan 26 13:12:48 crc kubenswrapper[4844]: I0126 13:12:48.313445 4844 scope.go:117] "RemoveContainer" containerID="8bd0e29f4f3396a7e270924b961a9b78ffe005995a8558400b22ba50617d8a7e" Jan 26 13:12:48 crc kubenswrapper[4844]: E0126 13:12:48.314230 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 13:12:57 crc kubenswrapper[4844]: I0126 13:12:57.395379 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-56567ff486-jdjng" Jan 26 13:12:57 crc kubenswrapper[4844]: I0126 13:12:57.420061 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-59ccf49fff-tmmnh" podStartSLOduration=15.669055832 podStartE2EDuration="21.420036709s" podCreationTimestamp="2026-01-26 13:12:36 +0000 UTC" firstStartedPulling="2026-01-26 13:12:37.542221072 +0000 UTC m=+1734.475588684" lastFinishedPulling="2026-01-26 13:12:43.293201949 +0000 UTC m=+1740.226569561" observedRunningTime="2026-01-26 13:12:43.824179906 +0000 UTC m=+1740.757547538" watchObservedRunningTime="2026-01-26 13:12:57.420036709 +0000 UTC m=+1754.353404331" Jan 26 13:13:00 crc kubenswrapper[4844]: I0126 13:13:00.313097 4844 scope.go:117] "RemoveContainer" containerID="8bd0e29f4f3396a7e270924b961a9b78ffe005995a8558400b22ba50617d8a7e" Jan 26 13:13:00 crc kubenswrapper[4844]: E0126 13:13:00.313558 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 13:13:13 crc kubenswrapper[4844]: I0126 13:13:13.319887 4844 
scope.go:117] "RemoveContainer" containerID="8bd0e29f4f3396a7e270924b961a9b78ffe005995a8558400b22ba50617d8a7e" Jan 26 13:13:13 crc kubenswrapper[4844]: E0126 13:13:13.321034 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 13:13:16 crc kubenswrapper[4844]: I0126 13:13:16.983411 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-59ccf49fff-tmmnh" Jan 26 13:13:17 crc kubenswrapper[4844]: I0126 13:13:17.714650 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-5tzp4"] Jan 26 13:13:17 crc kubenswrapper[4844]: I0126 13:13:17.715629 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-5tzp4" Jan 26 13:13:17 crc kubenswrapper[4844]: I0126 13:13:17.720981 4844 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-hlxtf" Jan 26 13:13:17 crc kubenswrapper[4844]: I0126 13:13:17.721120 4844 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 26 13:13:17 crc kubenswrapper[4844]: I0126 13:13:17.722467 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-9wgh7"] Jan 26 13:13:17 crc kubenswrapper[4844]: I0126 13:13:17.725635 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-9wgh7" Jan 26 13:13:17 crc kubenswrapper[4844]: I0126 13:13:17.727661 4844 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 26 13:13:17 crc kubenswrapper[4844]: I0126 13:13:17.729345 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 26 13:13:17 crc kubenswrapper[4844]: I0126 13:13:17.733126 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-5tzp4"] Jan 26 13:13:17 crc kubenswrapper[4844]: I0126 13:13:17.812839 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-qtw5d"] Jan 26 13:13:17 crc kubenswrapper[4844]: I0126 13:13:17.813989 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-qtw5d" Jan 26 13:13:17 crc kubenswrapper[4844]: I0126 13:13:17.816491 4844 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-2hqw9" Jan 26 13:13:17 crc kubenswrapper[4844]: I0126 13:13:17.816648 4844 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 26 13:13:17 crc kubenswrapper[4844]: I0126 13:13:17.817136 4844 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 26 13:13:17 crc kubenswrapper[4844]: I0126 13:13:17.817232 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 26 13:13:17 crc kubenswrapper[4844]: I0126 13:13:17.843926 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-6qx7f"] Jan 26 13:13:17 crc kubenswrapper[4844]: I0126 13:13:17.845091 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-6qx7f" Jan 26 13:13:17 crc kubenswrapper[4844]: I0126 13:13:17.849694 4844 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 26 13:13:17 crc kubenswrapper[4844]: I0126 13:13:17.865582 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-6qx7f"] Jan 26 13:13:17 crc kubenswrapper[4844]: I0126 13:13:17.879840 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/a82f578e-e9b6-4a4d-aade-25ba70bac11f-frr-sockets\") pod \"frr-k8s-9wgh7\" (UID: \"a82f578e-e9b6-4a4d-aade-25ba70bac11f\") " pod="metallb-system/frr-k8s-9wgh7" Jan 26 13:13:17 crc kubenswrapper[4844]: I0126 13:13:17.879884 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsnnd\" (UniqueName: \"kubernetes.io/projected/08638bb5-906c-4f51-9437-8667d323feae-kube-api-access-lsnnd\") pod \"frr-k8s-webhook-server-7df86c4f6c-5tzp4\" (UID: \"08638bb5-906c-4f51-9437-8667d323feae\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-5tzp4" Jan 26 13:13:17 crc kubenswrapper[4844]: I0126 13:13:17.879907 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/a82f578e-e9b6-4a4d-aade-25ba70bac11f-frr-conf\") pod \"frr-k8s-9wgh7\" (UID: \"a82f578e-e9b6-4a4d-aade-25ba70bac11f\") " pod="metallb-system/frr-k8s-9wgh7" Jan 26 13:13:17 crc kubenswrapper[4844]: I0126 13:13:17.879931 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a82f578e-e9b6-4a4d-aade-25ba70bac11f-metrics-certs\") pod \"frr-k8s-9wgh7\" (UID: \"a82f578e-e9b6-4a4d-aade-25ba70bac11f\") " pod="metallb-system/frr-k8s-9wgh7" Jan 26 13:13:17 crc kubenswrapper[4844]: I0126 13:13:17.879949 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/a82f578e-e9b6-4a4d-aade-25ba70bac11f-reloader\") pod \"frr-k8s-9wgh7\" (UID: \"a82f578e-e9b6-4a4d-aade-25ba70bac11f\") " pod="metallb-system/frr-k8s-9wgh7" Jan 26 13:13:17 crc kubenswrapper[4844]: I0126 13:13:17.880048 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"frr-startup\" (UniqueName: \"kubernetes.io/configmap/a82f578e-e9b6-4a4d-aade-25ba70bac11f-frr-startup\") pod \"frr-k8s-9wgh7\" (UID: \"a82f578e-e9b6-4a4d-aade-25ba70bac11f\") " pod="metallb-system/frr-k8s-9wgh7" Jan 26 13:13:17 crc kubenswrapper[4844]: I0126 13:13:17.880101 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/08638bb5-906c-4f51-9437-8667d323feae-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-5tzp4\" (UID: \"08638bb5-906c-4f51-9437-8667d323feae\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-5tzp4" Jan 26 13:13:17 crc kubenswrapper[4844]: I0126 13:13:17.880129 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gd7f\" (UniqueName: \"kubernetes.io/projected/a82f578e-e9b6-4a4d-aade-25ba70bac11f-kube-api-access-8gd7f\") pod \"frr-k8s-9wgh7\" (UID: \"a82f578e-e9b6-4a4d-aade-25ba70bac11f\") " pod="metallb-system/frr-k8s-9wgh7" Jan 26 13:13:17 crc kubenswrapper[4844]: I0126 13:13:17.880221 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/a82f578e-e9b6-4a4d-aade-25ba70bac11f-metrics\") pod \"frr-k8s-9wgh7\" (UID: \"a82f578e-e9b6-4a4d-aade-25ba70bac11f\") " pod="metallb-system/frr-k8s-9wgh7" Jan 26 13:13:17 crc kubenswrapper[4844]: I0126 13:13:17.981474 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a5381cf1-7e94-4ac0-9054-ed80ebf76624-cert\") pod \"controller-6968d8fdc4-6qx7f\" (UID: \"a5381cf1-7e94-4ac0-9054-ed80ebf76624\") " pod="metallb-system/controller-6968d8fdc4-6qx7f" Jan 26 13:13:17 crc kubenswrapper[4844]: I0126 13:13:17.981852 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/a82f578e-e9b6-4a4d-aade-25ba70bac11f-metrics\") pod \"frr-k8s-9wgh7\" (UID: \"a82f578e-e9b6-4a4d-aade-25ba70bac11f\") " pod="metallb-system/frr-k8s-9wgh7" Jan 26 13:13:17 crc kubenswrapper[4844]: I0126 13:13:17.981979 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/eadfd892-6882-4514-abcd-e68612f9eecf-metrics-certs\") pod \"speaker-qtw5d\" (UID: \"eadfd892-6882-4514-abcd-e68612f9eecf\") " pod="metallb-system/speaker-qtw5d" Jan 26 13:13:17 crc kubenswrapper[4844]: I0126 13:13:17.982162 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/a82f578e-e9b6-4a4d-aade-25ba70bac11f-frr-sockets\") pod \"frr-k8s-9wgh7\" (UID: \"a82f578e-e9b6-4a4d-aade-25ba70bac11f\") " pod="metallb-system/frr-k8s-9wgh7" Jan 26 13:13:17 crc kubenswrapper[4844]: I0126 13:13:17.983010 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lsnnd\" (UniqueName: \"kubernetes.io/projected/08638bb5-906c-4f51-9437-8667d323feae-kube-api-access-lsnnd\") pod \"frr-k8s-webhook-server-7df86c4f6c-5tzp4\" (UID: \"08638bb5-906c-4f51-9437-8667d323feae\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-5tzp4" Jan 26 13:13:17 crc kubenswrapper[4844]: I0126 13:13:17.983180 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/a82f578e-e9b6-4a4d-aade-25ba70bac11f-frr-conf\") pod 
\"frr-k8s-9wgh7\" (UID: \"a82f578e-e9b6-4a4d-aade-25ba70bac11f\") " pod="metallb-system/frr-k8s-9wgh7" Jan 26 13:13:17 crc kubenswrapper[4844]: I0126 13:13:17.983413 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/a82f578e-e9b6-4a4d-aade-25ba70bac11f-frr-conf\") pod \"frr-k8s-9wgh7\" (UID: \"a82f578e-e9b6-4a4d-aade-25ba70bac11f\") " pod="metallb-system/frr-k8s-9wgh7" Jan 26 13:13:17 crc kubenswrapper[4844]: I0126 13:13:17.983075 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/a82f578e-e9b6-4a4d-aade-25ba70bac11f-frr-sockets\") pod \"frr-k8s-9wgh7\" (UID: \"a82f578e-e9b6-4a4d-aade-25ba70bac11f\") " pod="metallb-system/frr-k8s-9wgh7" Jan 26 13:13:17 crc kubenswrapper[4844]: I0126 13:13:17.983047 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/a82f578e-e9b6-4a4d-aade-25ba70bac11f-metrics\") pod \"frr-k8s-9wgh7\" (UID: \"a82f578e-e9b6-4a4d-aade-25ba70bac11f\") " pod="metallb-system/frr-k8s-9wgh7" Jan 26 13:13:17 crc kubenswrapper[4844]: I0126 13:13:17.984651 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/eadfd892-6882-4514-abcd-e68612f9eecf-memberlist\") pod \"speaker-qtw5d\" (UID: \"eadfd892-6882-4514-abcd-e68612f9eecf\") " pod="metallb-system/speaker-qtw5d" Jan 26 13:13:17 crc kubenswrapper[4844]: I0126 13:13:17.984752 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a82f578e-e9b6-4a4d-aade-25ba70bac11f-metrics-certs\") pod \"frr-k8s-9wgh7\" (UID: \"a82f578e-e9b6-4a4d-aade-25ba70bac11f\") " pod="metallb-system/frr-k8s-9wgh7" Jan 26 13:13:17 crc kubenswrapper[4844]: I0126 13:13:17.984812 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/a82f578e-e9b6-4a4d-aade-25ba70bac11f-reloader\") pod \"frr-k8s-9wgh7\" (UID: \"a82f578e-e9b6-4a4d-aade-25ba70bac11f\") " pod="metallb-system/frr-k8s-9wgh7" Jan 26 13:13:17 crc kubenswrapper[4844]: I0126 13:13:17.984878 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/eadfd892-6882-4514-abcd-e68612f9eecf-metallb-excludel2\") pod \"speaker-qtw5d\" (UID: \"eadfd892-6882-4514-abcd-e68612f9eecf\") " pod="metallb-system/speaker-qtw5d" Jan 26 13:13:17 crc kubenswrapper[4844]: I0126 13:13:17.984907 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/a82f578e-e9b6-4a4d-aade-25ba70bac11f-frr-startup\") pod \"frr-k8s-9wgh7\" (UID: \"a82f578e-e9b6-4a4d-aade-25ba70bac11f\") " pod="metallb-system/frr-k8s-9wgh7" Jan 26 13:13:17 crc kubenswrapper[4844]: I0126 13:13:17.984941 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a5381cf1-7e94-4ac0-9054-ed80ebf76624-metrics-certs\") pod \"controller-6968d8fdc4-6qx7f\" (UID: \"a5381cf1-7e94-4ac0-9054-ed80ebf76624\") " pod="metallb-system/controller-6968d8fdc4-6qx7f" Jan 26 13:13:17 crc kubenswrapper[4844]: I0126 13:13:17.984979 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/08638bb5-906c-4f51-9437-8667d323feae-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-5tzp4\" (UID: \"08638bb5-906c-4f51-9437-8667d323feae\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-5tzp4" Jan 26 13:13:17 crc kubenswrapper[4844]: I0126 13:13:17.985017 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gd7f\" (UniqueName: \"kubernetes.io/projected/a82f578e-e9b6-4a4d-aade-25ba70bac11f-kube-api-access-8gd7f\") pod \"frr-k8s-9wgh7\" (UID: \"a82f578e-e9b6-4a4d-aade-25ba70bac11f\") " pod="metallb-system/frr-k8s-9wgh7" Jan 26 13:13:17 crc kubenswrapper[4844]: I0126 13:13:17.985058 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jq8wr\" (UniqueName: \"kubernetes.io/projected/eadfd892-6882-4514-abcd-e68612f9eecf-kube-api-access-jq8wr\") pod \"speaker-qtw5d\" (UID: \"eadfd892-6882-4514-abcd-e68612f9eecf\") " pod="metallb-system/speaker-qtw5d" Jan 26 13:13:17 crc kubenswrapper[4844]: I0126 13:13:17.985109 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dm8pd\" (UniqueName: \"kubernetes.io/projected/a5381cf1-7e94-4ac0-9054-ed80ebf76624-kube-api-access-dm8pd\") pod \"controller-6968d8fdc4-6qx7f\" (UID: \"a5381cf1-7e94-4ac0-9054-ed80ebf76624\") " pod="metallb-system/controller-6968d8fdc4-6qx7f" Jan 26 13:13:17 crc kubenswrapper[4844]: I0126 13:13:17.985514 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/a82f578e-e9b6-4a4d-aade-25ba70bac11f-reloader\") pod \"frr-k8s-9wgh7\" (UID: \"a82f578e-e9b6-4a4d-aade-25ba70bac11f\") " pod="metallb-system/frr-k8s-9wgh7" Jan 26 13:13:17 crc kubenswrapper[4844]: I0126 13:13:17.986788 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/a82f578e-e9b6-4a4d-aade-25ba70bac11f-frr-startup\") pod \"frr-k8s-9wgh7\" (UID: \"a82f578e-e9b6-4a4d-aade-25ba70bac11f\") " pod="metallb-system/frr-k8s-9wgh7" Jan 26 13:13:17 crc kubenswrapper[4844]: I0126 13:13:17.991585 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a82f578e-e9b6-4a4d-aade-25ba70bac11f-metrics-certs\") pod \"frr-k8s-9wgh7\" (UID: \"a82f578e-e9b6-4a4d-aade-25ba70bac11f\") " pod="metallb-system/frr-k8s-9wgh7" Jan 26 13:13:17 crc kubenswrapper[4844]: I0126 13:13:17.991885 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/08638bb5-906c-4f51-9437-8667d323feae-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-5tzp4\" (UID: \"08638bb5-906c-4f51-9437-8667d323feae\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-5tzp4" Jan 26 13:13:18 crc kubenswrapper[4844]: I0126 13:13:18.005033 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lsnnd\" (UniqueName: \"kubernetes.io/projected/08638bb5-906c-4f51-9437-8667d323feae-kube-api-access-lsnnd\") pod \"frr-k8s-webhook-server-7df86c4f6c-5tzp4\" (UID: \"08638bb5-906c-4f51-9437-8667d323feae\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-5tzp4" Jan 26 13:13:18 crc kubenswrapper[4844]: I0126 13:13:18.021452 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gd7f\" (UniqueName: \"kubernetes.io/projected/a82f578e-e9b6-4a4d-aade-25ba70bac11f-kube-api-access-8gd7f\") pod 
\"frr-k8s-9wgh7\" (UID: \"a82f578e-e9b6-4a4d-aade-25ba70bac11f\") " pod="metallb-system/frr-k8s-9wgh7" Jan 26 13:13:18 crc kubenswrapper[4844]: I0126 13:13:18.037350 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-5tzp4" Jan 26 13:13:18 crc kubenswrapper[4844]: I0126 13:13:18.044681 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-9wgh7" Jan 26 13:13:18 crc kubenswrapper[4844]: I0126 13:13:18.086838 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a5381cf1-7e94-4ac0-9054-ed80ebf76624-metrics-certs\") pod \"controller-6968d8fdc4-6qx7f\" (UID: \"a5381cf1-7e94-4ac0-9054-ed80ebf76624\") " pod="metallb-system/controller-6968d8fdc4-6qx7f" Jan 26 13:13:18 crc kubenswrapper[4844]: I0126 13:13:18.086923 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jq8wr\" (UniqueName: \"kubernetes.io/projected/eadfd892-6882-4514-abcd-e68612f9eecf-kube-api-access-jq8wr\") pod \"speaker-qtw5d\" (UID: \"eadfd892-6882-4514-abcd-e68612f9eecf\") " pod="metallb-system/speaker-qtw5d" Jan 26 13:13:18 crc kubenswrapper[4844]: I0126 13:13:18.086974 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dm8pd\" (UniqueName: \"kubernetes.io/projected/a5381cf1-7e94-4ac0-9054-ed80ebf76624-kube-api-access-dm8pd\") pod \"controller-6968d8fdc4-6qx7f\" (UID: \"a5381cf1-7e94-4ac0-9054-ed80ebf76624\") " pod="metallb-system/controller-6968d8fdc4-6qx7f" Jan 26 13:13:18 crc kubenswrapper[4844]: I0126 13:13:18.087024 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a5381cf1-7e94-4ac0-9054-ed80ebf76624-cert\") pod \"controller-6968d8fdc4-6qx7f\" (UID: \"a5381cf1-7e94-4ac0-9054-ed80ebf76624\") " pod="metallb-system/controller-6968d8fdc4-6qx7f" Jan 26 13:13:18 crc kubenswrapper[4844]: I0126 13:13:18.087051 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/eadfd892-6882-4514-abcd-e68612f9eecf-metrics-certs\") pod \"speaker-qtw5d\" (UID: \"eadfd892-6882-4514-abcd-e68612f9eecf\") " pod="metallb-system/speaker-qtw5d" Jan 26 13:13:18 crc kubenswrapper[4844]: I0126 13:13:18.087088 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/eadfd892-6882-4514-abcd-e68612f9eecf-memberlist\") pod \"speaker-qtw5d\" (UID: \"eadfd892-6882-4514-abcd-e68612f9eecf\") " pod="metallb-system/speaker-qtw5d" Jan 26 13:13:18 crc kubenswrapper[4844]: I0126 13:13:18.087144 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/eadfd892-6882-4514-abcd-e68612f9eecf-metallb-excludel2\") pod \"speaker-qtw5d\" (UID: \"eadfd892-6882-4514-abcd-e68612f9eecf\") " pod="metallb-system/speaker-qtw5d" Jan 26 13:13:18 crc kubenswrapper[4844]: E0126 13:13:18.087229 4844 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 26 13:13:18 crc kubenswrapper[4844]: E0126 13:13:18.087290 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eadfd892-6882-4514-abcd-e68612f9eecf-memberlist podName:eadfd892-6882-4514-abcd-e68612f9eecf nodeName:}" failed. 
No retries permitted until 2026-01-26 13:13:18.587272073 +0000 UTC m=+1775.520639785 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/eadfd892-6882-4514-abcd-e68612f9eecf-memberlist") pod "speaker-qtw5d" (UID: "eadfd892-6882-4514-abcd-e68612f9eecf") : secret "metallb-memberlist" not found Jan 26 13:13:18 crc kubenswrapper[4844]: I0126 13:13:18.088245 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/eadfd892-6882-4514-abcd-e68612f9eecf-metallb-excludel2\") pod \"speaker-qtw5d\" (UID: \"eadfd892-6882-4514-abcd-e68612f9eecf\") " pod="metallb-system/speaker-qtw5d" Jan 26 13:13:18 crc kubenswrapper[4844]: I0126 13:13:18.090373 4844 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 26 13:13:18 crc kubenswrapper[4844]: I0126 13:13:18.097906 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/eadfd892-6882-4514-abcd-e68612f9eecf-metrics-certs\") pod \"speaker-qtw5d\" (UID: \"eadfd892-6882-4514-abcd-e68612f9eecf\") " pod="metallb-system/speaker-qtw5d" Jan 26 13:13:18 crc kubenswrapper[4844]: I0126 13:13:18.100941 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a5381cf1-7e94-4ac0-9054-ed80ebf76624-metrics-certs\") pod \"controller-6968d8fdc4-6qx7f\" (UID: \"a5381cf1-7e94-4ac0-9054-ed80ebf76624\") " pod="metallb-system/controller-6968d8fdc4-6qx7f" Jan 26 13:13:18 crc kubenswrapper[4844]: I0126 13:13:18.101551 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a5381cf1-7e94-4ac0-9054-ed80ebf76624-cert\") pod \"controller-6968d8fdc4-6qx7f\" (UID: \"a5381cf1-7e94-4ac0-9054-ed80ebf76624\") " pod="metallb-system/controller-6968d8fdc4-6qx7f" Jan 26 13:13:18 crc kubenswrapper[4844]: I0126 13:13:18.110699 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dm8pd\" (UniqueName: \"kubernetes.io/projected/a5381cf1-7e94-4ac0-9054-ed80ebf76624-kube-api-access-dm8pd\") pod \"controller-6968d8fdc4-6qx7f\" (UID: \"a5381cf1-7e94-4ac0-9054-ed80ebf76624\") " pod="metallb-system/controller-6968d8fdc4-6qx7f" Jan 26 13:13:18 crc kubenswrapper[4844]: I0126 13:13:18.113349 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jq8wr\" (UniqueName: \"kubernetes.io/projected/eadfd892-6882-4514-abcd-e68612f9eecf-kube-api-access-jq8wr\") pod \"speaker-qtw5d\" (UID: \"eadfd892-6882-4514-abcd-e68612f9eecf\") " pod="metallb-system/speaker-qtw5d" Jan 26 13:13:18 crc kubenswrapper[4844]: I0126 13:13:18.161075 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-6qx7f" Jan 26 13:13:18 crc kubenswrapper[4844]: I0126 13:13:18.300444 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-5tzp4"] Jan 26 13:13:18 crc kubenswrapper[4844]: W0126 13:13:18.304262 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod08638bb5_906c_4f51_9437_8667d323feae.slice/crio-24933b5a5fd50e8fae20983fd78b2604f4245380f2a72146fc94bd8930b110a0 WatchSource:0}: Error finding container 24933b5a5fd50e8fae20983fd78b2604f4245380f2a72146fc94bd8930b110a0: Status 404 returned error can't find the container with id 24933b5a5fd50e8fae20983fd78b2604f4245380f2a72146fc94bd8930b110a0 Jan 26 13:13:18 crc kubenswrapper[4844]: I0126 13:13:18.597752 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/eadfd892-6882-4514-abcd-e68612f9eecf-memberlist\") pod \"speaker-qtw5d\" (UID: \"eadfd892-6882-4514-abcd-e68612f9eecf\") " pod="metallb-system/speaker-qtw5d" Jan 26 13:13:18 crc kubenswrapper[4844]: E0126 13:13:18.598278 4844 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 26 13:13:18 crc kubenswrapper[4844]: E0126 13:13:18.598370 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eadfd892-6882-4514-abcd-e68612f9eecf-memberlist podName:eadfd892-6882-4514-abcd-e68612f9eecf nodeName:}" failed. No retries permitted until 2026-01-26 13:13:19.598352781 +0000 UTC m=+1776.531720393 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/eadfd892-6882-4514-abcd-e68612f9eecf-memberlist") pod "speaker-qtw5d" (UID: "eadfd892-6882-4514-abcd-e68612f9eecf") : secret "metallb-memberlist" not found Jan 26 13:13:18 crc kubenswrapper[4844]: I0126 13:13:18.632750 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-6qx7f"] Jan 26 13:13:18 crc kubenswrapper[4844]: W0126 13:13:18.633568 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda5381cf1_7e94_4ac0_9054_ed80ebf76624.slice/crio-55b8c420b8b37b0b0d9379bad7346275cc65af92fb22fc87d9af99b829bee1f9 WatchSource:0}: Error finding container 55b8c420b8b37b0b0d9379bad7346275cc65af92fb22fc87d9af99b829bee1f9: Status 404 returned error can't find the container with id 55b8c420b8b37b0b0d9379bad7346275cc65af92fb22fc87d9af99b829bee1f9 Jan 26 13:13:19 crc kubenswrapper[4844]: I0126 13:13:19.022161 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-5tzp4" event={"ID":"08638bb5-906c-4f51-9437-8667d323feae","Type":"ContainerStarted","Data":"24933b5a5fd50e8fae20983fd78b2604f4245380f2a72146fc94bd8930b110a0"} Jan 26 13:13:19 crc kubenswrapper[4844]: I0126 13:13:19.025215 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-6qx7f" event={"ID":"a5381cf1-7e94-4ac0-9054-ed80ebf76624","Type":"ContainerStarted","Data":"dab8c1d2b3b4c9031dfeb296897716863c91bbf95c5e4b6e8f69d89215d048ba"} Jan 26 13:13:19 crc kubenswrapper[4844]: I0126 13:13:19.025275 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-6qx7f" 
event={"ID":"a5381cf1-7e94-4ac0-9054-ed80ebf76624","Type":"ContainerStarted","Data":"7dae350d759130e4ed777cc25cc338f353fd26ed4d2d77dc9ae59c4122709d51"} Jan 26 13:13:19 crc kubenswrapper[4844]: I0126 13:13:19.025289 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-6qx7f" event={"ID":"a5381cf1-7e94-4ac0-9054-ed80ebf76624","Type":"ContainerStarted","Data":"55b8c420b8b37b0b0d9379bad7346275cc65af92fb22fc87d9af99b829bee1f9"} Jan 26 13:13:19 crc kubenswrapper[4844]: I0126 13:13:19.025354 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-6qx7f" Jan 26 13:13:19 crc kubenswrapper[4844]: I0126 13:13:19.028830 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9wgh7" event={"ID":"a82f578e-e9b6-4a4d-aade-25ba70bac11f","Type":"ContainerStarted","Data":"01fd1c3180b5d7275a86f72823b9367d92adbb92bfc5f72c41f24bcc912e4700"} Jan 26 13:13:19 crc kubenswrapper[4844]: I0126 13:13:19.610835 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/eadfd892-6882-4514-abcd-e68612f9eecf-memberlist\") pod \"speaker-qtw5d\" (UID: \"eadfd892-6882-4514-abcd-e68612f9eecf\") " pod="metallb-system/speaker-qtw5d" Jan 26 13:13:19 crc kubenswrapper[4844]: I0126 13:13:19.617208 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/eadfd892-6882-4514-abcd-e68612f9eecf-memberlist\") pod \"speaker-qtw5d\" (UID: \"eadfd892-6882-4514-abcd-e68612f9eecf\") " pod="metallb-system/speaker-qtw5d" Jan 26 13:13:19 crc kubenswrapper[4844]: I0126 13:13:19.635323 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-qtw5d" Jan 26 13:13:19 crc kubenswrapper[4844]: W0126 13:13:19.658299 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeadfd892_6882_4514_abcd_e68612f9eecf.slice/crio-4ec8c5610a1e2be29c321d9a9efaabba7dbedd79fd0fc9847cbd12ec0631cfcd WatchSource:0}: Error finding container 4ec8c5610a1e2be29c321d9a9efaabba7dbedd79fd0fc9847cbd12ec0631cfcd: Status 404 returned error can't find the container with id 4ec8c5610a1e2be29c321d9a9efaabba7dbedd79fd0fc9847cbd12ec0631cfcd Jan 26 13:13:20 crc kubenswrapper[4844]: I0126 13:13:20.037872 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-qtw5d" event={"ID":"eadfd892-6882-4514-abcd-e68612f9eecf","Type":"ContainerStarted","Data":"29e4632cf2153fa564c9d37ec28db024d99258edf622950d5ac8168cd0c964b1"} Jan 26 13:13:20 crc kubenswrapper[4844]: I0126 13:13:20.037915 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-qtw5d" event={"ID":"eadfd892-6882-4514-abcd-e68612f9eecf","Type":"ContainerStarted","Data":"4ec8c5610a1e2be29c321d9a9efaabba7dbedd79fd0fc9847cbd12ec0631cfcd"} Jan 26 13:13:21 crc kubenswrapper[4844]: I0126 13:13:21.045063 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-qtw5d" event={"ID":"eadfd892-6882-4514-abcd-e68612f9eecf","Type":"ContainerStarted","Data":"97435f79b6bee6cb8dc6bf68b4fc41a7226946065838dd918c885af4f2bbd4a2"} Jan 26 13:13:21 crc kubenswrapper[4844]: I0126 13:13:21.045674 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-qtw5d" Jan 26 13:13:21 crc kubenswrapper[4844]: I0126 13:13:21.064414 4844 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-6qx7f" podStartSLOduration=4.064398096 podStartE2EDuration="4.064398096s" podCreationTimestamp="2026-01-26 13:13:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 13:13:19.04393253 +0000 UTC m=+1775.977300142" watchObservedRunningTime="2026-01-26 13:13:21.064398096 +0000 UTC m=+1777.997765698" Jan 26 13:13:21 crc kubenswrapper[4844]: I0126 13:13:21.065185 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-qtw5d" podStartSLOduration=4.065180834 podStartE2EDuration="4.065180834s" podCreationTimestamp="2026-01-26 13:13:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 13:13:21.063115605 +0000 UTC m=+1777.996483217" watchObservedRunningTime="2026-01-26 13:13:21.065180834 +0000 UTC m=+1777.998548446" Jan 26 13:13:27 crc kubenswrapper[4844]: I0126 13:13:27.098983 4844 generic.go:334] "Generic (PLEG): container finished" podID="a82f578e-e9b6-4a4d-aade-25ba70bac11f" containerID="83bd65bad5214e9bf52d3ccb660498407cc02b2d4e93b685234aa7c2ddc1e03b" exitCode=0 Jan 26 13:13:27 crc kubenswrapper[4844]: I0126 13:13:27.099074 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9wgh7" event={"ID":"a82f578e-e9b6-4a4d-aade-25ba70bac11f","Type":"ContainerDied","Data":"83bd65bad5214e9bf52d3ccb660498407cc02b2d4e93b685234aa7c2ddc1e03b"} Jan 26 13:13:27 crc kubenswrapper[4844]: I0126 13:13:27.102920 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-5tzp4" event={"ID":"08638bb5-906c-4f51-9437-8667d323feae","Type":"ContainerStarted","Data":"d2347a7e745f3fab5e300e75b82f6b7c6ad0ffc923bc44a2ad77a42496fb76eb"} Jan 26 13:13:27 crc kubenswrapper[4844]: I0126 13:13:27.103125 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-5tzp4" Jan 26 13:13:27 crc kubenswrapper[4844]: I0126 13:13:27.155429 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-5tzp4" podStartSLOduration=2.197263414 podStartE2EDuration="10.155402578s" podCreationTimestamp="2026-01-26 13:13:17 +0000 UTC" firstStartedPulling="2026-01-26 13:13:18.306427394 +0000 UTC m=+1775.239795016" lastFinishedPulling="2026-01-26 13:13:26.264566558 +0000 UTC m=+1783.197934180" observedRunningTime="2026-01-26 13:13:27.150541972 +0000 UTC m=+1784.083909624" watchObservedRunningTime="2026-01-26 13:13:27.155402578 +0000 UTC m=+1784.088770220" Jan 26 13:13:27 crc kubenswrapper[4844]: I0126 13:13:27.313303 4844 scope.go:117] "RemoveContainer" containerID="8bd0e29f4f3396a7e270924b961a9b78ffe005995a8558400b22ba50617d8a7e" Jan 26 13:13:27 crc kubenswrapper[4844]: E0126 13:13:27.313822 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 13:13:28 crc kubenswrapper[4844]: I0126 13:13:28.111210 4844 generic.go:334] "Generic (PLEG): container finished" 
podID="a82f578e-e9b6-4a4d-aade-25ba70bac11f" containerID="dc29fb38fa6c53c75ea69e2ce99bb97200ff5ab9634756701a0a94b7055b4ef7" exitCode=0 Jan 26 13:13:28 crc kubenswrapper[4844]: I0126 13:13:28.111338 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9wgh7" event={"ID":"a82f578e-e9b6-4a4d-aade-25ba70bac11f","Type":"ContainerDied","Data":"dc29fb38fa6c53c75ea69e2ce99bb97200ff5ab9634756701a0a94b7055b4ef7"} Jan 26 13:13:28 crc kubenswrapper[4844]: I0126 13:13:28.165412 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-6qx7f" Jan 26 13:13:29 crc kubenswrapper[4844]: I0126 13:13:29.121461 4844 generic.go:334] "Generic (PLEG): container finished" podID="a82f578e-e9b6-4a4d-aade-25ba70bac11f" containerID="6969c33391790c1412c1c9bbea10bf46ee8a3f5f1f40d3fe46e92774c4ca2a06" exitCode=0 Jan 26 13:13:29 crc kubenswrapper[4844]: I0126 13:13:29.121575 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9wgh7" event={"ID":"a82f578e-e9b6-4a4d-aade-25ba70bac11f","Type":"ContainerDied","Data":"6969c33391790c1412c1c9bbea10bf46ee8a3f5f1f40d3fe46e92774c4ca2a06"} Jan 26 13:13:29 crc kubenswrapper[4844]: I0126 13:13:29.639173 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-qtw5d" Jan 26 13:13:30 crc kubenswrapper[4844]: I0126 13:13:30.132917 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9wgh7" event={"ID":"a82f578e-e9b6-4a4d-aade-25ba70bac11f","Type":"ContainerStarted","Data":"a394d5c562f3d1a949350d50a0783bd4851e71f6e869093e3ec3c40a41325556"} Jan 26 13:13:30 crc kubenswrapper[4844]: I0126 13:13:30.132974 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9wgh7" event={"ID":"a82f578e-e9b6-4a4d-aade-25ba70bac11f","Type":"ContainerStarted","Data":"66cf2cf2c4c2a8d2c9b4a2492bb9bc1ee1976b3dd5fa81eba4133d544d925514"} Jan 26 13:13:30 crc kubenswrapper[4844]: I0126 13:13:30.132997 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9wgh7" event={"ID":"a82f578e-e9b6-4a4d-aade-25ba70bac11f","Type":"ContainerStarted","Data":"379d6cd81ae039187f882e8a6923adeadf733edf7de959b822cab2aefc2cc7e8"} Jan 26 13:13:30 crc kubenswrapper[4844]: I0126 13:13:30.133016 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9wgh7" event={"ID":"a82f578e-e9b6-4a4d-aade-25ba70bac11f","Type":"ContainerStarted","Data":"6a994f7b1109629befa17ef633b3d255d2d35e12d8df33f51d802c052f412f1e"} Jan 26 13:13:30 crc kubenswrapper[4844]: I0126 13:13:30.133032 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9wgh7" event={"ID":"a82f578e-e9b6-4a4d-aade-25ba70bac11f","Type":"ContainerStarted","Data":"1b379a76f17b6efe956e80ffaa6f5fdc4afe96ac4469340e8fa3db98d8700e7b"} Jan 26 13:13:31 crc kubenswrapper[4844]: I0126 13:13:31.142929 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9wgh7" event={"ID":"a82f578e-e9b6-4a4d-aade-25ba70bac11f","Type":"ContainerStarted","Data":"33f3008153c08f3b968b5d903b6dee8853126027572e30b2823464968812f075"} Jan 26 13:13:31 crc kubenswrapper[4844]: I0126 13:13:31.143296 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-9wgh7" Jan 26 13:13:31 crc kubenswrapper[4844]: I0126 13:13:31.173259 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-9wgh7" podStartSLOduration=6.451049888 
podStartE2EDuration="14.173240075s" podCreationTimestamp="2026-01-26 13:13:17 +0000 UTC" firstStartedPulling="2026-01-26 13:13:18.53693034 +0000 UTC m=+1775.470297952" lastFinishedPulling="2026-01-26 13:13:26.259120527 +0000 UTC m=+1783.192488139" observedRunningTime="2026-01-26 13:13:31.169249379 +0000 UTC m=+1788.102617001" watchObservedRunningTime="2026-01-26 13:13:31.173240075 +0000 UTC m=+1788.106607687" Jan 26 13:13:32 crc kubenswrapper[4844]: I0126 13:13:32.655404 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-ffk8p"] Jan 26 13:13:32 crc kubenswrapper[4844]: I0126 13:13:32.656311 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-ffk8p" Jan 26 13:13:32 crc kubenswrapper[4844]: I0126 13:13:32.658353 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 26 13:13:32 crc kubenswrapper[4844]: I0126 13:13:32.658910 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-g7fw2" Jan 26 13:13:32 crc kubenswrapper[4844]: I0126 13:13:32.658949 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 26 13:13:32 crc kubenswrapper[4844]: I0126 13:13:32.670810 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-ffk8p"] Jan 26 13:13:32 crc kubenswrapper[4844]: I0126 13:13:32.822964 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9k8xn\" (UniqueName: \"kubernetes.io/projected/71511e76-e0ea-457c-801c-f78551e505f5-kube-api-access-9k8xn\") pod \"openstack-operator-index-ffk8p\" (UID: \"71511e76-e0ea-457c-801c-f78551e505f5\") " pod="openstack-operators/openstack-operator-index-ffk8p" Jan 26 13:13:32 crc kubenswrapper[4844]: I0126 13:13:32.923888 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9k8xn\" (UniqueName: \"kubernetes.io/projected/71511e76-e0ea-457c-801c-f78551e505f5-kube-api-access-9k8xn\") pod \"openstack-operator-index-ffk8p\" (UID: \"71511e76-e0ea-457c-801c-f78551e505f5\") " pod="openstack-operators/openstack-operator-index-ffk8p" Jan 26 13:13:32 crc kubenswrapper[4844]: I0126 13:13:32.941973 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9k8xn\" (UniqueName: \"kubernetes.io/projected/71511e76-e0ea-457c-801c-f78551e505f5-kube-api-access-9k8xn\") pod \"openstack-operator-index-ffk8p\" (UID: \"71511e76-e0ea-457c-801c-f78551e505f5\") " pod="openstack-operators/openstack-operator-index-ffk8p" Jan 26 13:13:32 crc kubenswrapper[4844]: I0126 13:13:32.972401 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-ffk8p" Jan 26 13:13:33 crc kubenswrapper[4844]: I0126 13:13:33.045731 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-9wgh7" Jan 26 13:13:33 crc kubenswrapper[4844]: I0126 13:13:33.109268 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-9wgh7" Jan 26 13:13:33 crc kubenswrapper[4844]: I0126 13:13:33.423813 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-ffk8p"] Jan 26 13:13:33 crc kubenswrapper[4844]: W0126 13:13:33.425959 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71511e76_e0ea_457c_801c_f78551e505f5.slice/crio-701b23b0c8dcc442ae475255847a4b69a03063b41f1cc0bc6a67a26df36f36f6 WatchSource:0}: Error finding container 701b23b0c8dcc442ae475255847a4b69a03063b41f1cc0bc6a67a26df36f36f6: Status 404 returned error can't find the container with id 701b23b0c8dcc442ae475255847a4b69a03063b41f1cc0bc6a67a26df36f36f6 Jan 26 13:13:34 crc kubenswrapper[4844]: I0126 13:13:34.165096 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-ffk8p" event={"ID":"71511e76-e0ea-457c-801c-f78551e505f5","Type":"ContainerStarted","Data":"701b23b0c8dcc442ae475255847a4b69a03063b41f1cc0bc6a67a26df36f36f6"} Jan 26 13:13:35 crc kubenswrapper[4844]: I0126 13:13:35.842625 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-ffk8p"] Jan 26 13:13:36 crc kubenswrapper[4844]: I0126 13:13:36.182417 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-ffk8p" event={"ID":"71511e76-e0ea-457c-801c-f78551e505f5","Type":"ContainerStarted","Data":"1f4f3693d7f5429f1c9879af0f32b520406eaab0c911b10326120e3be413c06a"} Jan 26 13:13:36 crc kubenswrapper[4844]: I0126 13:13:36.215142 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-ffk8p" podStartSLOduration=2.448423324 podStartE2EDuration="4.215118004s" podCreationTimestamp="2026-01-26 13:13:32 +0000 UTC" firstStartedPulling="2026-01-26 13:13:33.431670447 +0000 UTC m=+1790.365038069" lastFinishedPulling="2026-01-26 13:13:35.198365117 +0000 UTC m=+1792.131732749" observedRunningTime="2026-01-26 13:13:36.210709408 +0000 UTC m=+1793.144077100" watchObservedRunningTime="2026-01-26 13:13:36.215118004 +0000 UTC m=+1793.148485626" Jan 26 13:13:36 crc kubenswrapper[4844]: I0126 13:13:36.443782 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-nql7g"] Jan 26 13:13:36 crc kubenswrapper[4844]: I0126 13:13:36.444552 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-nql7g" Jan 26 13:13:36 crc kubenswrapper[4844]: I0126 13:13:36.459723 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-nql7g"] Jan 26 13:13:36 crc kubenswrapper[4844]: I0126 13:13:36.584828 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vr86b\" (UniqueName: \"kubernetes.io/projected/bfb7276b-b13e-43c2-ae22-0165b6e3a68f-kube-api-access-vr86b\") pod \"openstack-operator-index-nql7g\" (UID: \"bfb7276b-b13e-43c2-ae22-0165b6e3a68f\") " pod="openstack-operators/openstack-operator-index-nql7g" Jan 26 13:13:36 crc kubenswrapper[4844]: I0126 13:13:36.686266 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vr86b\" (UniqueName: \"kubernetes.io/projected/bfb7276b-b13e-43c2-ae22-0165b6e3a68f-kube-api-access-vr86b\") pod \"openstack-operator-index-nql7g\" (UID: \"bfb7276b-b13e-43c2-ae22-0165b6e3a68f\") " pod="openstack-operators/openstack-operator-index-nql7g" Jan 26 13:13:36 crc kubenswrapper[4844]: I0126 13:13:36.703841 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vr86b\" (UniqueName: \"kubernetes.io/projected/bfb7276b-b13e-43c2-ae22-0165b6e3a68f-kube-api-access-vr86b\") pod \"openstack-operator-index-nql7g\" (UID: \"bfb7276b-b13e-43c2-ae22-0165b6e3a68f\") " pod="openstack-operators/openstack-operator-index-nql7g" Jan 26 13:13:36 crc kubenswrapper[4844]: I0126 13:13:36.772933 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-nql7g" Jan 26 13:13:36 crc kubenswrapper[4844]: I0126 13:13:36.992749 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-nql7g"] Jan 26 13:13:36 crc kubenswrapper[4844]: W0126 13:13:36.996864 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbfb7276b_b13e_43c2_ae22_0165b6e3a68f.slice/crio-813994137c6f242b663603cebd6dcbef0ac8e579f0be6d4326f91f2d9f660713 WatchSource:0}: Error finding container 813994137c6f242b663603cebd6dcbef0ac8e579f0be6d4326f91f2d9f660713: Status 404 returned error can't find the container with id 813994137c6f242b663603cebd6dcbef0ac8e579f0be6d4326f91f2d9f660713 Jan 26 13:13:37 crc kubenswrapper[4844]: I0126 13:13:37.190351 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-nql7g" event={"ID":"bfb7276b-b13e-43c2-ae22-0165b6e3a68f","Type":"ContainerStarted","Data":"813994137c6f242b663603cebd6dcbef0ac8e579f0be6d4326f91f2d9f660713"} Jan 26 13:13:37 crc kubenswrapper[4844]: I0126 13:13:37.190492 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-ffk8p" podUID="71511e76-e0ea-457c-801c-f78551e505f5" containerName="registry-server" containerID="cri-o://1f4f3693d7f5429f1c9879af0f32b520406eaab0c911b10326120e3be413c06a" gracePeriod=2 Jan 26 13:13:37 crc kubenswrapper[4844]: I0126 13:13:37.517907 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-ffk8p" Jan 26 13:13:37 crc kubenswrapper[4844]: I0126 13:13:37.699675 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9k8xn\" (UniqueName: \"kubernetes.io/projected/71511e76-e0ea-457c-801c-f78551e505f5-kube-api-access-9k8xn\") pod \"71511e76-e0ea-457c-801c-f78551e505f5\" (UID: \"71511e76-e0ea-457c-801c-f78551e505f5\") " Jan 26 13:13:37 crc kubenswrapper[4844]: I0126 13:13:37.706484 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71511e76-e0ea-457c-801c-f78551e505f5-kube-api-access-9k8xn" (OuterVolumeSpecName: "kube-api-access-9k8xn") pod "71511e76-e0ea-457c-801c-f78551e505f5" (UID: "71511e76-e0ea-457c-801c-f78551e505f5"). InnerVolumeSpecName "kube-api-access-9k8xn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:13:37 crc kubenswrapper[4844]: I0126 13:13:37.801979 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9k8xn\" (UniqueName: \"kubernetes.io/projected/71511e76-e0ea-457c-801c-f78551e505f5-kube-api-access-9k8xn\") on node \"crc\" DevicePath \"\"" Jan 26 13:13:38 crc kubenswrapper[4844]: I0126 13:13:38.043028 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-5tzp4" Jan 26 13:13:38 crc kubenswrapper[4844]: I0126 13:13:38.197092 4844 generic.go:334] "Generic (PLEG): container finished" podID="71511e76-e0ea-457c-801c-f78551e505f5" containerID="1f4f3693d7f5429f1c9879af0f32b520406eaab0c911b10326120e3be413c06a" exitCode=0 Jan 26 13:13:38 crc kubenswrapper[4844]: I0126 13:13:38.197150 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-ffk8p" Jan 26 13:13:38 crc kubenswrapper[4844]: I0126 13:13:38.197154 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-ffk8p" event={"ID":"71511e76-e0ea-457c-801c-f78551e505f5","Type":"ContainerDied","Data":"1f4f3693d7f5429f1c9879af0f32b520406eaab0c911b10326120e3be413c06a"} Jan 26 13:13:38 crc kubenswrapper[4844]: I0126 13:13:38.197285 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-ffk8p" event={"ID":"71511e76-e0ea-457c-801c-f78551e505f5","Type":"ContainerDied","Data":"701b23b0c8dcc442ae475255847a4b69a03063b41f1cc0bc6a67a26df36f36f6"} Jan 26 13:13:38 crc kubenswrapper[4844]: I0126 13:13:38.197310 4844 scope.go:117] "RemoveContainer" containerID="1f4f3693d7f5429f1c9879af0f32b520406eaab0c911b10326120e3be413c06a" Jan 26 13:13:38 crc kubenswrapper[4844]: I0126 13:13:38.198828 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-nql7g" event={"ID":"bfb7276b-b13e-43c2-ae22-0165b6e3a68f","Type":"ContainerStarted","Data":"6678e3eb68c9b2d5d6603eabe7771e3cf7aa52b1981d2f12e64408e9f1f9828c"} Jan 26 13:13:38 crc kubenswrapper[4844]: I0126 13:13:38.215878 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-nql7g" podStartSLOduration=2.139611648 podStartE2EDuration="2.215853776s" podCreationTimestamp="2026-01-26 13:13:36 +0000 UTC" firstStartedPulling="2026-01-26 13:13:37.003024384 +0000 UTC m=+1793.936391996" lastFinishedPulling="2026-01-26 13:13:37.079266512 +0000 UTC m=+1794.012634124" observedRunningTime="2026-01-26 13:13:38.213641253 +0000 UTC 
m=+1795.147008895" watchObservedRunningTime="2026-01-26 13:13:38.215853776 +0000 UTC m=+1795.149221398" Jan 26 13:13:38 crc kubenswrapper[4844]: I0126 13:13:38.217111 4844 scope.go:117] "RemoveContainer" containerID="1f4f3693d7f5429f1c9879af0f32b520406eaab0c911b10326120e3be413c06a" Jan 26 13:13:38 crc kubenswrapper[4844]: E0126 13:13:38.217864 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f4f3693d7f5429f1c9879af0f32b520406eaab0c911b10326120e3be413c06a\": container with ID starting with 1f4f3693d7f5429f1c9879af0f32b520406eaab0c911b10326120e3be413c06a not found: ID does not exist" containerID="1f4f3693d7f5429f1c9879af0f32b520406eaab0c911b10326120e3be413c06a" Jan 26 13:13:38 crc kubenswrapper[4844]: I0126 13:13:38.217901 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f4f3693d7f5429f1c9879af0f32b520406eaab0c911b10326120e3be413c06a"} err="failed to get container status \"1f4f3693d7f5429f1c9879af0f32b520406eaab0c911b10326120e3be413c06a\": rpc error: code = NotFound desc = could not find container \"1f4f3693d7f5429f1c9879af0f32b520406eaab0c911b10326120e3be413c06a\": container with ID starting with 1f4f3693d7f5429f1c9879af0f32b520406eaab0c911b10326120e3be413c06a not found: ID does not exist" Jan 26 13:13:38 crc kubenswrapper[4844]: I0126 13:13:38.234700 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-ffk8p"] Jan 26 13:13:38 crc kubenswrapper[4844]: I0126 13:13:38.239648 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-ffk8p"] Jan 26 13:13:38 crc kubenswrapper[4844]: I0126 13:13:38.312558 4844 scope.go:117] "RemoveContainer" containerID="8bd0e29f4f3396a7e270924b961a9b78ffe005995a8558400b22ba50617d8a7e" Jan 26 13:13:38 crc kubenswrapper[4844]: E0126 13:13:38.312875 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 13:13:39 crc kubenswrapper[4844]: I0126 13:13:39.324476 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71511e76-e0ea-457c-801c-f78551e505f5" path="/var/lib/kubelet/pods/71511e76-e0ea-457c-801c-f78551e505f5/volumes" Jan 26 13:13:46 crc kubenswrapper[4844]: I0126 13:13:46.773863 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-nql7g" Jan 26 13:13:46 crc kubenswrapper[4844]: I0126 13:13:46.774366 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-nql7g" Jan 26 13:13:46 crc kubenswrapper[4844]: I0126 13:13:46.799019 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-nql7g" Jan 26 13:13:47 crc kubenswrapper[4844]: I0126 13:13:47.300013 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-nql7g" Jan 26 13:13:48 crc kubenswrapper[4844]: I0126 13:13:48.049765 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="metallb-system/frr-k8s-9wgh7" Jan 26 13:13:49 crc kubenswrapper[4844]: I0126 13:13:49.083777 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/5cc6200ab3f1714125386de3d0e34486afaa28bf51d7ef9eb7880e967ef6vsq"] Jan 26 13:13:49 crc kubenswrapper[4844]: E0126 13:13:49.084428 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71511e76-e0ea-457c-801c-f78551e505f5" containerName="registry-server" Jan 26 13:13:49 crc kubenswrapper[4844]: I0126 13:13:49.084449 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="71511e76-e0ea-457c-801c-f78551e505f5" containerName="registry-server" Jan 26 13:13:49 crc kubenswrapper[4844]: I0126 13:13:49.084656 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="71511e76-e0ea-457c-801c-f78551e505f5" containerName="registry-server" Jan 26 13:13:49 crc kubenswrapper[4844]: I0126 13:13:49.085844 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/5cc6200ab3f1714125386de3d0e34486afaa28bf51d7ef9eb7880e967ef6vsq" Jan 26 13:13:49 crc kubenswrapper[4844]: I0126 13:13:49.088508 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-c7rpw" Jan 26 13:13:49 crc kubenswrapper[4844]: I0126 13:13:49.090578 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/5cc6200ab3f1714125386de3d0e34486afaa28bf51d7ef9eb7880e967ef6vsq"] Jan 26 13:13:49 crc kubenswrapper[4844]: I0126 13:13:49.185969 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zm2zd\" (UniqueName: \"kubernetes.io/projected/22fcada7-92af-4edd-903e-8706cffecc6c-kube-api-access-zm2zd\") pod \"5cc6200ab3f1714125386de3d0e34486afaa28bf51d7ef9eb7880e967ef6vsq\" (UID: \"22fcada7-92af-4edd-903e-8706cffecc6c\") " pod="openstack-operators/5cc6200ab3f1714125386de3d0e34486afaa28bf51d7ef9eb7880e967ef6vsq" Jan 26 13:13:49 crc kubenswrapper[4844]: I0126 13:13:49.186074 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/22fcada7-92af-4edd-903e-8706cffecc6c-util\") pod \"5cc6200ab3f1714125386de3d0e34486afaa28bf51d7ef9eb7880e967ef6vsq\" (UID: \"22fcada7-92af-4edd-903e-8706cffecc6c\") " pod="openstack-operators/5cc6200ab3f1714125386de3d0e34486afaa28bf51d7ef9eb7880e967ef6vsq" Jan 26 13:13:49 crc kubenswrapper[4844]: I0126 13:13:49.186351 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/22fcada7-92af-4edd-903e-8706cffecc6c-bundle\") pod \"5cc6200ab3f1714125386de3d0e34486afaa28bf51d7ef9eb7880e967ef6vsq\" (UID: \"22fcada7-92af-4edd-903e-8706cffecc6c\") " pod="openstack-operators/5cc6200ab3f1714125386de3d0e34486afaa28bf51d7ef9eb7880e967ef6vsq" Jan 26 13:13:49 crc kubenswrapper[4844]: I0126 13:13:49.287571 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zm2zd\" (UniqueName: \"kubernetes.io/projected/22fcada7-92af-4edd-903e-8706cffecc6c-kube-api-access-zm2zd\") pod \"5cc6200ab3f1714125386de3d0e34486afaa28bf51d7ef9eb7880e967ef6vsq\" (UID: \"22fcada7-92af-4edd-903e-8706cffecc6c\") " pod="openstack-operators/5cc6200ab3f1714125386de3d0e34486afaa28bf51d7ef9eb7880e967ef6vsq" Jan 26 13:13:49 crc kubenswrapper[4844]: I0126 13:13:49.287691 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/22fcada7-92af-4edd-903e-8706cffecc6c-util\") pod \"5cc6200ab3f1714125386de3d0e34486afaa28bf51d7ef9eb7880e967ef6vsq\" (UID: \"22fcada7-92af-4edd-903e-8706cffecc6c\") " pod="openstack-operators/5cc6200ab3f1714125386de3d0e34486afaa28bf51d7ef9eb7880e967ef6vsq" Jan 26 13:13:49 crc kubenswrapper[4844]: I0126 13:13:49.287814 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/22fcada7-92af-4edd-903e-8706cffecc6c-bundle\") pod \"5cc6200ab3f1714125386de3d0e34486afaa28bf51d7ef9eb7880e967ef6vsq\" (UID: \"22fcada7-92af-4edd-903e-8706cffecc6c\") " pod="openstack-operators/5cc6200ab3f1714125386de3d0e34486afaa28bf51d7ef9eb7880e967ef6vsq" Jan 26 13:13:49 crc kubenswrapper[4844]: I0126 13:13:49.288796 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/22fcada7-92af-4edd-903e-8706cffecc6c-util\") pod \"5cc6200ab3f1714125386de3d0e34486afaa28bf51d7ef9eb7880e967ef6vsq\" (UID: \"22fcada7-92af-4edd-903e-8706cffecc6c\") " pod="openstack-operators/5cc6200ab3f1714125386de3d0e34486afaa28bf51d7ef9eb7880e967ef6vsq" Jan 26 13:13:49 crc kubenswrapper[4844]: I0126 13:13:49.288989 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/22fcada7-92af-4edd-903e-8706cffecc6c-bundle\") pod \"5cc6200ab3f1714125386de3d0e34486afaa28bf51d7ef9eb7880e967ef6vsq\" (UID: \"22fcada7-92af-4edd-903e-8706cffecc6c\") " pod="openstack-operators/5cc6200ab3f1714125386de3d0e34486afaa28bf51d7ef9eb7880e967ef6vsq" Jan 26 13:13:49 crc kubenswrapper[4844]: I0126 13:13:49.307331 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zm2zd\" (UniqueName: \"kubernetes.io/projected/22fcada7-92af-4edd-903e-8706cffecc6c-kube-api-access-zm2zd\") pod \"5cc6200ab3f1714125386de3d0e34486afaa28bf51d7ef9eb7880e967ef6vsq\" (UID: \"22fcada7-92af-4edd-903e-8706cffecc6c\") " pod="openstack-operators/5cc6200ab3f1714125386de3d0e34486afaa28bf51d7ef9eb7880e967ef6vsq" Jan 26 13:13:49 crc kubenswrapper[4844]: I0126 13:13:49.401309 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/5cc6200ab3f1714125386de3d0e34486afaa28bf51d7ef9eb7880e967ef6vsq" Jan 26 13:13:49 crc kubenswrapper[4844]: I0126 13:13:49.605619 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/5cc6200ab3f1714125386de3d0e34486afaa28bf51d7ef9eb7880e967ef6vsq"] Jan 26 13:13:49 crc kubenswrapper[4844]: W0126 13:13:49.611677 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod22fcada7_92af_4edd_903e_8706cffecc6c.slice/crio-4c3806a3a3e3f13aaa7e2455009f79a59f1acf59ba5d7d129edb9d5de7685712 WatchSource:0}: Error finding container 4c3806a3a3e3f13aaa7e2455009f79a59f1acf59ba5d7d129edb9d5de7685712: Status 404 returned error can't find the container with id 4c3806a3a3e3f13aaa7e2455009f79a59f1acf59ba5d7d129edb9d5de7685712 Jan 26 13:13:50 crc kubenswrapper[4844]: I0126 13:13:50.301674 4844 generic.go:334] "Generic (PLEG): container finished" podID="22fcada7-92af-4edd-903e-8706cffecc6c" containerID="96d9c3f7b36777b7370ae17376b44ee5822650ef482696e056c4af8304e0c3a7" exitCode=0 Jan 26 13:13:50 crc kubenswrapper[4844]: I0126 13:13:50.301794 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5cc6200ab3f1714125386de3d0e34486afaa28bf51d7ef9eb7880e967ef6vsq" event={"ID":"22fcada7-92af-4edd-903e-8706cffecc6c","Type":"ContainerDied","Data":"96d9c3f7b36777b7370ae17376b44ee5822650ef482696e056c4af8304e0c3a7"} Jan 26 13:13:50 crc kubenswrapper[4844]: I0126 13:13:50.302153 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5cc6200ab3f1714125386de3d0e34486afaa28bf51d7ef9eb7880e967ef6vsq" event={"ID":"22fcada7-92af-4edd-903e-8706cffecc6c","Type":"ContainerStarted","Data":"4c3806a3a3e3f13aaa7e2455009f79a59f1acf59ba5d7d129edb9d5de7685712"} Jan 26 13:13:51 crc kubenswrapper[4844]: I0126 13:13:51.311854 4844 generic.go:334] "Generic (PLEG): container finished" podID="22fcada7-92af-4edd-903e-8706cffecc6c" containerID="76b4ce6432d1fc9b9a5cb3a63e48a47a02591bcf2b2a94124affb9794594b83f" exitCode=0 Jan 26 13:13:51 crc kubenswrapper[4844]: I0126 13:13:51.312226 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5cc6200ab3f1714125386de3d0e34486afaa28bf51d7ef9eb7880e967ef6vsq" event={"ID":"22fcada7-92af-4edd-903e-8706cffecc6c","Type":"ContainerDied","Data":"76b4ce6432d1fc9b9a5cb3a63e48a47a02591bcf2b2a94124affb9794594b83f"} Jan 26 13:13:51 crc kubenswrapper[4844]: I0126 13:13:51.313188 4844 scope.go:117] "RemoveContainer" containerID="8bd0e29f4f3396a7e270924b961a9b78ffe005995a8558400b22ba50617d8a7e" Jan 26 13:13:51 crc kubenswrapper[4844]: E0126 13:13:51.313352 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 13:13:52 crc kubenswrapper[4844]: I0126 13:13:52.335632 4844 generic.go:334] "Generic (PLEG): container finished" podID="22fcada7-92af-4edd-903e-8706cffecc6c" containerID="77d1e9dfa7c50361aaf9bff108db99e84e4a24de2441be419d51c58bcff5b6cc" exitCode=0 Jan 26 13:13:52 crc kubenswrapper[4844]: I0126 13:13:52.335735 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/5cc6200ab3f1714125386de3d0e34486afaa28bf51d7ef9eb7880e967ef6vsq" event={"ID":"22fcada7-92af-4edd-903e-8706cffecc6c","Type":"ContainerDied","Data":"77d1e9dfa7c50361aaf9bff108db99e84e4a24de2441be419d51c58bcff5b6cc"} Jan 26 13:13:53 crc kubenswrapper[4844]: I0126 13:13:53.628044 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/5cc6200ab3f1714125386de3d0e34486afaa28bf51d7ef9eb7880e967ef6vsq" Jan 26 13:13:53 crc kubenswrapper[4844]: I0126 13:13:53.765034 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/22fcada7-92af-4edd-903e-8706cffecc6c-util\") pod \"22fcada7-92af-4edd-903e-8706cffecc6c\" (UID: \"22fcada7-92af-4edd-903e-8706cffecc6c\") " Jan 26 13:13:53 crc kubenswrapper[4844]: I0126 13:13:53.765143 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zm2zd\" (UniqueName: \"kubernetes.io/projected/22fcada7-92af-4edd-903e-8706cffecc6c-kube-api-access-zm2zd\") pod \"22fcada7-92af-4edd-903e-8706cffecc6c\" (UID: \"22fcada7-92af-4edd-903e-8706cffecc6c\") " Jan 26 13:13:53 crc kubenswrapper[4844]: I0126 13:13:53.765215 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/22fcada7-92af-4edd-903e-8706cffecc6c-bundle\") pod \"22fcada7-92af-4edd-903e-8706cffecc6c\" (UID: \"22fcada7-92af-4edd-903e-8706cffecc6c\") " Jan 26 13:13:53 crc kubenswrapper[4844]: I0126 13:13:53.766476 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/22fcada7-92af-4edd-903e-8706cffecc6c-bundle" (OuterVolumeSpecName: "bundle") pod "22fcada7-92af-4edd-903e-8706cffecc6c" (UID: "22fcada7-92af-4edd-903e-8706cffecc6c"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:13:53 crc kubenswrapper[4844]: I0126 13:13:53.771262 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22fcada7-92af-4edd-903e-8706cffecc6c-kube-api-access-zm2zd" (OuterVolumeSpecName: "kube-api-access-zm2zd") pod "22fcada7-92af-4edd-903e-8706cffecc6c" (UID: "22fcada7-92af-4edd-903e-8706cffecc6c"). InnerVolumeSpecName "kube-api-access-zm2zd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:13:53 crc kubenswrapper[4844]: I0126 13:13:53.787537 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/22fcada7-92af-4edd-903e-8706cffecc6c-util" (OuterVolumeSpecName: "util") pod "22fcada7-92af-4edd-903e-8706cffecc6c" (UID: "22fcada7-92af-4edd-903e-8706cffecc6c"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:13:53 crc kubenswrapper[4844]: I0126 13:13:53.867323 4844 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/22fcada7-92af-4edd-903e-8706cffecc6c-util\") on node \"crc\" DevicePath \"\"" Jan 26 13:13:53 crc kubenswrapper[4844]: I0126 13:13:53.867362 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zm2zd\" (UniqueName: \"kubernetes.io/projected/22fcada7-92af-4edd-903e-8706cffecc6c-kube-api-access-zm2zd\") on node \"crc\" DevicePath \"\"" Jan 26 13:13:53 crc kubenswrapper[4844]: I0126 13:13:53.867372 4844 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/22fcada7-92af-4edd-903e-8706cffecc6c-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 13:13:54 crc kubenswrapper[4844]: I0126 13:13:54.351859 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5cc6200ab3f1714125386de3d0e34486afaa28bf51d7ef9eb7880e967ef6vsq" event={"ID":"22fcada7-92af-4edd-903e-8706cffecc6c","Type":"ContainerDied","Data":"4c3806a3a3e3f13aaa7e2455009f79a59f1acf59ba5d7d129edb9d5de7685712"} Jan 26 13:13:54 crc kubenswrapper[4844]: I0126 13:13:54.351930 4844 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c3806a3a3e3f13aaa7e2455009f79a59f1acf59ba5d7d129edb9d5de7685712" Jan 26 13:13:54 crc kubenswrapper[4844]: I0126 13:13:54.352437 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/5cc6200ab3f1714125386de3d0e34486afaa28bf51d7ef9eb7880e967ef6vsq" Jan 26 13:14:01 crc kubenswrapper[4844]: I0126 13:14:01.188522 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-54d8cfbbfb-9bfgj"] Jan 26 13:14:01 crc kubenswrapper[4844]: E0126 13:14:01.189452 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22fcada7-92af-4edd-903e-8706cffecc6c" containerName="pull" Jan 26 13:14:01 crc kubenswrapper[4844]: I0126 13:14:01.189467 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="22fcada7-92af-4edd-903e-8706cffecc6c" containerName="pull" Jan 26 13:14:01 crc kubenswrapper[4844]: E0126 13:14:01.189482 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22fcada7-92af-4edd-903e-8706cffecc6c" containerName="util" Jan 26 13:14:01 crc kubenswrapper[4844]: I0126 13:14:01.189490 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="22fcada7-92af-4edd-903e-8706cffecc6c" containerName="util" Jan 26 13:14:01 crc kubenswrapper[4844]: E0126 13:14:01.189536 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22fcada7-92af-4edd-903e-8706cffecc6c" containerName="extract" Jan 26 13:14:01 crc kubenswrapper[4844]: I0126 13:14:01.189547 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="22fcada7-92af-4edd-903e-8706cffecc6c" containerName="extract" Jan 26 13:14:01 crc kubenswrapper[4844]: I0126 13:14:01.189823 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="22fcada7-92af-4edd-903e-8706cffecc6c" containerName="extract" Jan 26 13:14:01 crc kubenswrapper[4844]: I0126 13:14:01.190489 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-54d8cfbbfb-9bfgj" Jan 26 13:14:01 crc kubenswrapper[4844]: I0126 13:14:01.192410 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-bp448" Jan 26 13:14:01 crc kubenswrapper[4844]: I0126 13:14:01.219842 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-54d8cfbbfb-9bfgj"] Jan 26 13:14:01 crc kubenswrapper[4844]: I0126 13:14:01.279085 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pblvh\" (UniqueName: \"kubernetes.io/projected/d2118529-9df3-486e-9f15-3a54c55d9eb1-kube-api-access-pblvh\") pod \"openstack-operator-controller-init-54d8cfbbfb-9bfgj\" (UID: \"d2118529-9df3-486e-9f15-3a54c55d9eb1\") " pod="openstack-operators/openstack-operator-controller-init-54d8cfbbfb-9bfgj" Jan 26 13:14:01 crc kubenswrapper[4844]: I0126 13:14:01.381062 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pblvh\" (UniqueName: \"kubernetes.io/projected/d2118529-9df3-486e-9f15-3a54c55d9eb1-kube-api-access-pblvh\") pod \"openstack-operator-controller-init-54d8cfbbfb-9bfgj\" (UID: \"d2118529-9df3-486e-9f15-3a54c55d9eb1\") " pod="openstack-operators/openstack-operator-controller-init-54d8cfbbfb-9bfgj" Jan 26 13:14:01 crc kubenswrapper[4844]: I0126 13:14:01.415652 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pblvh\" (UniqueName: \"kubernetes.io/projected/d2118529-9df3-486e-9f15-3a54c55d9eb1-kube-api-access-pblvh\") pod \"openstack-operator-controller-init-54d8cfbbfb-9bfgj\" (UID: \"d2118529-9df3-486e-9f15-3a54c55d9eb1\") " pod="openstack-operators/openstack-operator-controller-init-54d8cfbbfb-9bfgj" Jan 26 13:14:01 crc kubenswrapper[4844]: I0126 13:14:01.508699 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-54d8cfbbfb-9bfgj" Jan 26 13:14:01 crc kubenswrapper[4844]: I0126 13:14:01.772270 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-54d8cfbbfb-9bfgj"] Jan 26 13:14:01 crc kubenswrapper[4844]: W0126 13:14:01.787162 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd2118529_9df3_486e_9f15_3a54c55d9eb1.slice/crio-9832532cca18e6506f8c18d5c0c9a6fc1aa971b3a3df905afc215437f7294f22 WatchSource:0}: Error finding container 9832532cca18e6506f8c18d5c0c9a6fc1aa971b3a3df905afc215437f7294f22: Status 404 returned error can't find the container with id 9832532cca18e6506f8c18d5c0c9a6fc1aa971b3a3df905afc215437f7294f22 Jan 26 13:14:02 crc kubenswrapper[4844]: I0126 13:14:02.421279 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-54d8cfbbfb-9bfgj" event={"ID":"d2118529-9df3-486e-9f15-3a54c55d9eb1","Type":"ContainerStarted","Data":"9832532cca18e6506f8c18d5c0c9a6fc1aa971b3a3df905afc215437f7294f22"} Jan 26 13:14:04 crc kubenswrapper[4844]: I0126 13:14:04.313930 4844 scope.go:117] "RemoveContainer" containerID="8bd0e29f4f3396a7e270924b961a9b78ffe005995a8558400b22ba50617d8a7e" Jan 26 13:14:04 crc kubenswrapper[4844]: E0126 13:14:04.314647 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 13:14:08 crc kubenswrapper[4844]: I0126 13:14:08.486802 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-54d8cfbbfb-9bfgj" event={"ID":"d2118529-9df3-486e-9f15-3a54c55d9eb1","Type":"ContainerStarted","Data":"1fe0a099040eaa15fa7b6261d6beec8bed3cb385ec756e2e86afd0f79fcea1b6"} Jan 26 13:14:08 crc kubenswrapper[4844]: I0126 13:14:08.487347 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-54d8cfbbfb-9bfgj" Jan 26 13:14:15 crc kubenswrapper[4844]: I0126 13:14:15.313798 4844 scope.go:117] "RemoveContainer" containerID="8bd0e29f4f3396a7e270924b961a9b78ffe005995a8558400b22ba50617d8a7e" Jan 26 13:14:15 crc kubenswrapper[4844]: E0126 13:14:15.315445 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 13:14:21 crc kubenswrapper[4844]: I0126 13:14:21.512635 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-54d8cfbbfb-9bfgj" Jan 26 13:14:21 crc kubenswrapper[4844]: I0126 13:14:21.565769 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-54d8cfbbfb-9bfgj" podStartSLOduration=14.76207829 
podStartE2EDuration="20.56574447s" podCreationTimestamp="2026-01-26 13:14:01 +0000 UTC" firstStartedPulling="2026-01-26 13:14:01.789698859 +0000 UTC m=+1818.723066471" lastFinishedPulling="2026-01-26 13:14:07.593365029 +0000 UTC m=+1824.526732651" observedRunningTime="2026-01-26 13:14:08.512743856 +0000 UTC m=+1825.446111468" watchObservedRunningTime="2026-01-26 13:14:21.56574447 +0000 UTC m=+1838.499112122" Jan 26 13:14:27 crc kubenswrapper[4844]: I0126 13:14:27.313387 4844 scope.go:117] "RemoveContainer" containerID="8bd0e29f4f3396a7e270924b961a9b78ffe005995a8558400b22ba50617d8a7e" Jan 26 13:14:27 crc kubenswrapper[4844]: E0126 13:14:27.314135 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 13:14:39 crc kubenswrapper[4844]: I0126 13:14:39.313806 4844 scope.go:117] "RemoveContainer" containerID="8bd0e29f4f3396a7e270924b961a9b78ffe005995a8558400b22ba50617d8a7e" Jan 26 13:14:39 crc kubenswrapper[4844]: I0126 13:14:39.725163 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" event={"ID":"e3602fc7-397b-4d73-ab0c-45acc047397b","Type":"ContainerStarted","Data":"f8d2dd6bfcc6d48828fccc89734d561f1977038b1d62b9cafb05ed3131eb3a4b"} Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.062008 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-7478f7dbf9-sm4lj"] Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.063184 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-sm4lj" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.069738 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-8jp5z" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.077749 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-5tq86"] Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.078608 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-5tq86" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.081937 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-7478f7dbf9-sm4lj"] Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.087369 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-v5cvq" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.093730 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-gmfsm"] Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.094550 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-gmfsm" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.098703 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-gjrwv" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.102900 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-5tq86"] Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.119210 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-mwszm"] Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.120017 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-mwszm" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.122002 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-9wx5p" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.140841 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-gmfsm"] Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.142694 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxpwd\" (UniqueName: \"kubernetes.io/projected/aa463929-97db-4af2-8308-840d51ae717a-kube-api-access-cxpwd\") pod \"cinder-operator-controller-manager-7478f7dbf9-sm4lj\" (UID: \"aa463929-97db-4af2-8308-840d51ae717a\") " pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-sm4lj" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.142756 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zgnr\" (UniqueName: \"kubernetes.io/projected/f8b1471a-3483-4c9e-b662-02906d9b18c0-kube-api-access-8zgnr\") pod \"glance-operator-controller-manager-78fdd796fd-mwszm\" (UID: \"f8b1471a-3483-4c9e-b662-02906d9b18c0\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-mwszm" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.142826 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-986bv\" (UniqueName: \"kubernetes.io/projected/c39cee42-2147-463f-90f5-62b0ad31ec96-kube-api-access-986bv\") pod \"designate-operator-controller-manager-b45d7bf98-gmfsm\" (UID: \"c39cee42-2147-463f-90f5-62b0ad31ec96\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-gmfsm" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.142858 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxjzj\" (UniqueName: \"kubernetes.io/projected/a29e2eac-c303-4ae6-9c3b-439a258ce420-kube-api-access-fxjzj\") pod \"barbican-operator-controller-manager-7f86f8796f-5tq86\" (UID: \"a29e2eac-c303-4ae6-9c3b-439a258ce420\") " pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-5tq86" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.156651 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-k8f6n"] Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.157468 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-k8f6n" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.165611 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-q569f" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.171643 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-mwszm"] Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.182658 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-k8f6n"] Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.192696 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-rk7rt"] Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.193502 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-rk7rt" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.196468 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-xhscc" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.222408 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-rk7rt"] Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.231231 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-694cf4f878-vzncj"] Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.232218 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-vzncj" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.234281 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-4bzjc" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.234512 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.243712 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zgnr\" (UniqueName: \"kubernetes.io/projected/f8b1471a-3483-4c9e-b662-02906d9b18c0-kube-api-access-8zgnr\") pod \"glance-operator-controller-manager-78fdd796fd-mwszm\" (UID: \"f8b1471a-3483-4c9e-b662-02906d9b18c0\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-mwszm" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.243769 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rc276\" (UniqueName: \"kubernetes.io/projected/981956b6-e5c7-4908-a72d-458026f29e4d-kube-api-access-rc276\") pod \"horizon-operator-controller-manager-77d5c5b54f-rk7rt\" (UID: \"981956b6-e5c7-4908-a72d-458026f29e4d\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-rk7rt" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.243795 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vq2c2\" (UniqueName: \"kubernetes.io/projected/9de97e7e-c381-4f7d-9380-9aadf848b3a6-kube-api-access-vq2c2\") pod \"heat-operator-controller-manager-594c8c9d5d-k8f6n\" (UID: \"9de97e7e-c381-4f7d-9380-9aadf848b3a6\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-k8f6n" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.243838 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-986bv\" (UniqueName: \"kubernetes.io/projected/c39cee42-2147-463f-90f5-62b0ad31ec96-kube-api-access-986bv\") pod \"designate-operator-controller-manager-b45d7bf98-gmfsm\" (UID: \"c39cee42-2147-463f-90f5-62b0ad31ec96\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-gmfsm" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.243865 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxjzj\" (UniqueName: \"kubernetes.io/projected/a29e2eac-c303-4ae6-9c3b-439a258ce420-kube-api-access-fxjzj\") pod \"barbican-operator-controller-manager-7f86f8796f-5tq86\" (UID: \"a29e2eac-c303-4ae6-9c3b-439a258ce420\") " pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-5tq86" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.243899 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxpwd\" (UniqueName: \"kubernetes.io/projected/aa463929-97db-4af2-8308-840d51ae717a-kube-api-access-cxpwd\") pod \"cinder-operator-controller-manager-7478f7dbf9-sm4lj\" (UID: \"aa463929-97db-4af2-8308-840d51ae717a\") " pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-sm4lj" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.247900 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-krn66"] Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.248806 4844 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-krn66" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.252939 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-694cf4f878-vzncj"] Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.262456 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-zrt66" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.268716 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-ht7r9"] Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.269650 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-ht7r9" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.271926 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zgnr\" (UniqueName: \"kubernetes.io/projected/f8b1471a-3483-4c9e-b662-02906d9b18c0-kube-api-access-8zgnr\") pod \"glance-operator-controller-manager-78fdd796fd-mwszm\" (UID: \"f8b1471a-3483-4c9e-b662-02906d9b18c0\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-mwszm" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.272304 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-mv786" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.282352 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxpwd\" (UniqueName: \"kubernetes.io/projected/aa463929-97db-4af2-8308-840d51ae717a-kube-api-access-cxpwd\") pod \"cinder-operator-controller-manager-7478f7dbf9-sm4lj\" (UID: \"aa463929-97db-4af2-8308-840d51ae717a\") " pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-sm4lj" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.285084 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxjzj\" (UniqueName: \"kubernetes.io/projected/a29e2eac-c303-4ae6-9c3b-439a258ce420-kube-api-access-fxjzj\") pod \"barbican-operator-controller-manager-7f86f8796f-5tq86\" (UID: \"a29e2eac-c303-4ae6-9c3b-439a258ce420\") " pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-5tq86" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.285590 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-986bv\" (UniqueName: \"kubernetes.io/projected/c39cee42-2147-463f-90f5-62b0ad31ec96-kube-api-access-986bv\") pod \"designate-operator-controller-manager-b45d7bf98-gmfsm\" (UID: \"c39cee42-2147-463f-90f5-62b0ad31ec96\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-gmfsm" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.307101 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-krn66"] Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.327615 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-wtp6f"] Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.328347 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-wtp6f" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.333208 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-ht7r9"] Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.340375 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-wtp6f"] Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.346036 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rc276\" (UniqueName: \"kubernetes.io/projected/981956b6-e5c7-4908-a72d-458026f29e4d-kube-api-access-rc276\") pod \"horizon-operator-controller-manager-77d5c5b54f-rk7rt\" (UID: \"981956b6-e5c7-4908-a72d-458026f29e4d\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-rk7rt" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.346076 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vq2c2\" (UniqueName: \"kubernetes.io/projected/9de97e7e-c381-4f7d-9380-9aadf848b3a6-kube-api-access-vq2c2\") pod \"heat-operator-controller-manager-594c8c9d5d-k8f6n\" (UID: \"9de97e7e-c381-4f7d-9380-9aadf848b3a6\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-k8f6n" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.348246 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-wwdv7" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.360960 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-bcdf4"] Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.361755 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-bcdf4" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.367954 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-bcdf4"] Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.371060 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-fmb78" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.375193 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rc276\" (UniqueName: \"kubernetes.io/projected/981956b6-e5c7-4908-a72d-458026f29e4d-kube-api-access-rc276\") pod \"horizon-operator-controller-manager-77d5c5b54f-rk7rt\" (UID: \"981956b6-e5c7-4908-a72d-458026f29e4d\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-rk7rt" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.376174 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-pffmq"] Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.376947 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-pffmq" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.377235 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vq2c2\" (UniqueName: \"kubernetes.io/projected/9de97e7e-c381-4f7d-9380-9aadf848b3a6-kube-api-access-vq2c2\") pod \"heat-operator-controller-manager-594c8c9d5d-k8f6n\" (UID: \"9de97e7e-c381-4f7d-9380-9aadf848b3a6\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-k8f6n" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.380325 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-sm4lj" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.381248 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-hlk8x" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.395808 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-5tq86" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.420411 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-7bdb645866-x5shx"] Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.420917 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-gmfsm" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.443661 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-mwszm" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.454699 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-pffmq"] Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.458311 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbs8d\" (UniqueName: \"kubernetes.io/projected/8b9f2639-4aaa-463a-b950-fc39fca31805-kube-api-access-lbs8d\") pod \"infra-operator-controller-manager-694cf4f878-vzncj\" (UID: \"8b9f2639-4aaa-463a-b950-fc39fca31805\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-vzncj" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.458793 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kddd9\" (UniqueName: \"kubernetes.io/projected/a60ef848-810d-4c2c-8c23-341d8168e7e7-kube-api-access-kddd9\") pod \"keystone-operator-controller-manager-b8b6d4659-ht7r9\" (UID: \"a60ef848-810d-4c2c-8c23-341d8168e7e7\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-ht7r9" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.458992 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssdxq\" (UniqueName: \"kubernetes.io/projected/2a343b60-ecc4-4634-9a54-7814555dd3bc-kube-api-access-ssdxq\") pod \"manila-operator-controller-manager-78c6999f6f-wtp6f\" (UID: \"2a343b60-ecc4-4634-9a54-7814555dd3bc\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-wtp6f" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.459257 4844 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vsf4\" (UniqueName: \"kubernetes.io/projected/1eca115f-b8cd-4a50-8adc-2d31e297657f-kube-api-access-7vsf4\") pod \"ironic-operator-controller-manager-598f7747c9-krn66\" (UID: \"1eca115f-b8cd-4a50-8adc-2d31e297657f\") " pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-krn66" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.460244 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8b9f2639-4aaa-463a-b950-fc39fca31805-cert\") pod \"infra-operator-controller-manager-694cf4f878-vzncj\" (UID: \"8b9f2639-4aaa-463a-b950-fc39fca31805\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-vzncj" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.459977 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-x5shx" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.462318 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-4vq5t" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.462508 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-7bdb645866-x5shx"] Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.466467 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4cd88d46-566vm"] Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.471701 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-566vm" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.474384 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-ck2bp" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.480382 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4cd88d46-566vm"] Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.483756 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-k8f6n" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.486946 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-l7w8f"] Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.487862 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-l7w8f" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.489441 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-m7rg6" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.503803 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b85478v8f"] Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.513159 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-l7w8f"] Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.513477 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b85478v8f" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.514851 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-rk7rt" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.518290 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-scdj6" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.518477 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.518589 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-mkcr9"] Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.519555 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-mkcr9" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.521335 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-vpc9v" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.524733 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b85478v8f"] Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.530314 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-mkcr9"] Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.542897 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-88kvh"] Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.543897 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-88kvh" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.547175 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-s4578" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.548369 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-88kvh"] Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.561893 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29d75\" (UniqueName: \"kubernetes.io/projected/154eb771-ca89-43f9-b002-e6f11d943cbe-kube-api-access-29d75\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-bcdf4\" (UID: \"154eb771-ca89-43f9-b002-e6f11d943cbe\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-bcdf4" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.561939 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6t82q\" (UniqueName: \"kubernetes.io/projected/4bf529eb-b7b9-4ca7-a55a-73fd7d58ac81-kube-api-access-6t82q\") pod \"octavia-operator-controller-manager-5f4cd88d46-566vm\" (UID: \"4bf529eb-b7b9-4ca7-a55a-73fd7d58ac81\") " pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-566vm" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.561964 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ssdxq\" (UniqueName: \"kubernetes.io/projected/2a343b60-ecc4-4634-9a54-7814555dd3bc-kube-api-access-ssdxq\") pod \"manila-operator-controller-manager-78c6999f6f-wtp6f\" (UID: \"2a343b60-ecc4-4634-9a54-7814555dd3bc\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-wtp6f" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.561999 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prjm2\" (UniqueName: \"kubernetes.io/projected/00b0af83-1dea-44ab-b074-fa7b5c9cf46d-kube-api-access-prjm2\") pod \"swift-operator-controller-manager-547cbdb99f-88kvh\" (UID: \"00b0af83-1dea-44ab-b074-fa7b5c9cf46d\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-88kvh" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.562018 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vsf4\" (UniqueName: \"kubernetes.io/projected/1eca115f-b8cd-4a50-8adc-2d31e297657f-kube-api-access-7vsf4\") pod \"ironic-operator-controller-manager-598f7747c9-krn66\" (UID: \"1eca115f-b8cd-4a50-8adc-2d31e297657f\") " pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-krn66" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.562040 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swjm6\" (UniqueName: \"kubernetes.io/projected/73721700-0f73-468c-9c69-2d3f078a7516-kube-api-access-swjm6\") pod \"nova-operator-controller-manager-7bdb645866-x5shx\" (UID: \"73721700-0f73-468c-9c69-2d3f078a7516\") " pod="openstack-operators/nova-operator-controller-manager-7bdb645866-x5shx" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.562063 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/8b9f2639-4aaa-463a-b950-fc39fca31805-cert\") pod \"infra-operator-controller-manager-694cf4f878-vzncj\" (UID: \"8b9f2639-4aaa-463a-b950-fc39fca31805\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-vzncj" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.562089 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndkmp\" (UniqueName: \"kubernetes.io/projected/89ab862c-0d6a-4a44-9f28-9195e0213328-kube-api-access-ndkmp\") pod \"ovn-operator-controller-manager-6f75f45d54-l7w8f\" (UID: \"89ab862c-0d6a-4a44-9f28-9195e0213328\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-l7w8f" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.562117 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7z44\" (UniqueName: \"kubernetes.io/projected/8ac12453-5418-4c50-8b2a-61dfad6bf1e1-kube-api-access-n7z44\") pod \"neutron-operator-controller-manager-78d58447c5-pffmq\" (UID: \"8ac12453-5418-4c50-8b2a-61dfad6bf1e1\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-pffmq" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.562149 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbs8d\" (UniqueName: \"kubernetes.io/projected/8b9f2639-4aaa-463a-b950-fc39fca31805-kube-api-access-lbs8d\") pod \"infra-operator-controller-manager-694cf4f878-vzncj\" (UID: \"8b9f2639-4aaa-463a-b950-fc39fca31805\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-vzncj" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.562167 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wc92h\" (UniqueName: \"kubernetes.io/projected/12e4b3b0-81a4-4752-8cea-e1a3178d38ba-kube-api-access-wc92h\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b85478v8f\" (UID: \"12e4b3b0-81a4-4752-8cea-e1a3178d38ba\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b85478v8f" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.562185 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kddd9\" (UniqueName: \"kubernetes.io/projected/a60ef848-810d-4c2c-8c23-341d8168e7e7-kube-api-access-kddd9\") pod \"keystone-operator-controller-manager-b8b6d4659-ht7r9\" (UID: \"a60ef848-810d-4c2c-8c23-341d8168e7e7\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-ht7r9" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.562202 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/12e4b3b0-81a4-4752-8cea-e1a3178d38ba-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b85478v8f\" (UID: \"12e4b3b0-81a4-4752-8cea-e1a3178d38ba\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b85478v8f" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.562218 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfpm7\" (UniqueName: \"kubernetes.io/projected/3a13e1fa-35b1-4adc-a21d-a09aa4ec91a7-kube-api-access-qfpm7\") pod \"placement-operator-controller-manager-79d5ccc684-mkcr9\" (UID: \"3a13e1fa-35b1-4adc-a21d-a09aa4ec91a7\") " 
pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-mkcr9" Jan 26 13:14:41 crc kubenswrapper[4844]: E0126 13:14:41.562681 4844 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 26 13:14:41 crc kubenswrapper[4844]: E0126 13:14:41.562744 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8b9f2639-4aaa-463a-b950-fc39fca31805-cert podName:8b9f2639-4aaa-463a-b950-fc39fca31805 nodeName:}" failed. No retries permitted until 2026-01-26 13:14:42.06272872 +0000 UTC m=+1858.996096332 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8b9f2639-4aaa-463a-b950-fc39fca31805-cert") pod "infra-operator-controller-manager-694cf4f878-vzncj" (UID: "8b9f2639-4aaa-463a-b950-fc39fca31805") : secret "infra-operator-webhook-server-cert" not found Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.575705 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-fj29j"] Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.576819 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-fj29j" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.578437 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-hclmk" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.582235 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-fj29j"] Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.583160 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ssdxq\" (UniqueName: \"kubernetes.io/projected/2a343b60-ecc4-4634-9a54-7814555dd3bc-kube-api-access-ssdxq\") pod \"manila-operator-controller-manager-78c6999f6f-wtp6f\" (UID: \"2a343b60-ecc4-4634-9a54-7814555dd3bc\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-wtp6f" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.584539 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbs8d\" (UniqueName: \"kubernetes.io/projected/8b9f2639-4aaa-463a-b950-fc39fca31805-kube-api-access-lbs8d\") pod \"infra-operator-controller-manager-694cf4f878-vzncj\" (UID: \"8b9f2639-4aaa-463a-b950-fc39fca31805\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-vzncj" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.587304 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kddd9\" (UniqueName: \"kubernetes.io/projected/a60ef848-810d-4c2c-8c23-341d8168e7e7-kube-api-access-kddd9\") pod \"keystone-operator-controller-manager-b8b6d4659-ht7r9\" (UID: \"a60ef848-810d-4c2c-8c23-341d8168e7e7\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-ht7r9" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.587832 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vsf4\" (UniqueName: \"kubernetes.io/projected/1eca115f-b8cd-4a50-8adc-2d31e297657f-kube-api-access-7vsf4\") pod \"ironic-operator-controller-manager-598f7747c9-krn66\" (UID: \"1eca115f-b8cd-4a50-8adc-2d31e297657f\") " 
pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-krn66" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.613294 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-dgglg"] Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.614519 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-dgglg" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.616265 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-cps5l" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.627759 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-dgglg"] Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.652225 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-ht7r9" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.664031 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7z44\" (UniqueName: \"kubernetes.io/projected/8ac12453-5418-4c50-8b2a-61dfad6bf1e1-kube-api-access-n7z44\") pod \"neutron-operator-controller-manager-78d58447c5-pffmq\" (UID: \"8ac12453-5418-4c50-8b2a-61dfad6bf1e1\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-pffmq" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.664107 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wc92h\" (UniqueName: \"kubernetes.io/projected/12e4b3b0-81a4-4752-8cea-e1a3178d38ba-kube-api-access-wc92h\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b85478v8f\" (UID: \"12e4b3b0-81a4-4752-8cea-e1a3178d38ba\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b85478v8f" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.664156 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfpm7\" (UniqueName: \"kubernetes.io/projected/3a13e1fa-35b1-4adc-a21d-a09aa4ec91a7-kube-api-access-qfpm7\") pod \"placement-operator-controller-manager-79d5ccc684-mkcr9\" (UID: \"3a13e1fa-35b1-4adc-a21d-a09aa4ec91a7\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-mkcr9" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.664174 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/12e4b3b0-81a4-4752-8cea-e1a3178d38ba-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b85478v8f\" (UID: \"12e4b3b0-81a4-4752-8cea-e1a3178d38ba\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b85478v8f" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.664223 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29d75\" (UniqueName: \"kubernetes.io/projected/154eb771-ca89-43f9-b002-e6f11d943cbe-kube-api-access-29d75\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-bcdf4\" (UID: \"154eb771-ca89-43f9-b002-e6f11d943cbe\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-bcdf4" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.664245 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-6t82q\" (UniqueName: \"kubernetes.io/projected/4bf529eb-b7b9-4ca7-a55a-73fd7d58ac81-kube-api-access-6t82q\") pod \"octavia-operator-controller-manager-5f4cd88d46-566vm\" (UID: \"4bf529eb-b7b9-4ca7-a55a-73fd7d58ac81\") " pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-566vm" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.664279 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9mr7\" (UniqueName: \"kubernetes.io/projected/915eea77-c5eb-4e5c-b9f2-404ba732dac8-kube-api-access-d9mr7\") pod \"test-operator-controller-manager-69797bbcbd-dgglg\" (UID: \"915eea77-c5eb-4e5c-b9f2-404ba732dac8\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-dgglg" Jan 26 13:14:41 crc kubenswrapper[4844]: E0126 13:14:41.666313 4844 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 13:14:41 crc kubenswrapper[4844]: E0126 13:14:41.666363 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/12e4b3b0-81a4-4752-8cea-e1a3178d38ba-cert podName:12e4b3b0-81a4-4752-8cea-e1a3178d38ba nodeName:}" failed. No retries permitted until 2026-01-26 13:14:42.166348366 +0000 UTC m=+1859.099715978 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/12e4b3b0-81a4-4752-8cea-e1a3178d38ba-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b85478v8f" (UID: "12e4b3b0-81a4-4752-8cea-e1a3178d38ba") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.667039 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prjm2\" (UniqueName: \"kubernetes.io/projected/00b0af83-1dea-44ab-b074-fa7b5c9cf46d-kube-api-access-prjm2\") pod \"swift-operator-controller-manager-547cbdb99f-88kvh\" (UID: \"00b0af83-1dea-44ab-b074-fa7b5c9cf46d\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-88kvh" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.667101 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swjm6\" (UniqueName: \"kubernetes.io/projected/73721700-0f73-468c-9c69-2d3f078a7516-kube-api-access-swjm6\") pod \"nova-operator-controller-manager-7bdb645866-x5shx\" (UID: \"73721700-0f73-468c-9c69-2d3f078a7516\") " pod="openstack-operators/nova-operator-controller-manager-7bdb645866-x5shx" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.667164 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndkmp\" (UniqueName: \"kubernetes.io/projected/89ab862c-0d6a-4a44-9f28-9195e0213328-kube-api-access-ndkmp\") pod \"ovn-operator-controller-manager-6f75f45d54-l7w8f\" (UID: \"89ab862c-0d6a-4a44-9f28-9195e0213328\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-l7w8f" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.667199 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pkf2\" (UniqueName: \"kubernetes.io/projected/9fb0454b-90d4-48f3-b069-86aada20e9f9-kube-api-access-7pkf2\") pod \"telemetry-operator-controller-manager-85cd9769bb-fj29j\" (UID: \"9fb0454b-90d4-48f3-b069-86aada20e9f9\") " 
pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-fj29j" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.674027 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-wtp6f" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.700732 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prjm2\" (UniqueName: \"kubernetes.io/projected/00b0af83-1dea-44ab-b074-fa7b5c9cf46d-kube-api-access-prjm2\") pod \"swift-operator-controller-manager-547cbdb99f-88kvh\" (UID: \"00b0af83-1dea-44ab-b074-fa7b5c9cf46d\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-88kvh" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.700838 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7z44\" (UniqueName: \"kubernetes.io/projected/8ac12453-5418-4c50-8b2a-61dfad6bf1e1-kube-api-access-n7z44\") pod \"neutron-operator-controller-manager-78d58447c5-pffmq\" (UID: \"8ac12453-5418-4c50-8b2a-61dfad6bf1e1\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-pffmq" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.707651 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6t82q\" (UniqueName: \"kubernetes.io/projected/4bf529eb-b7b9-4ca7-a55a-73fd7d58ac81-kube-api-access-6t82q\") pod \"octavia-operator-controller-manager-5f4cd88d46-566vm\" (UID: \"4bf529eb-b7b9-4ca7-a55a-73fd7d58ac81\") " pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-566vm" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.707885 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wc92h\" (UniqueName: \"kubernetes.io/projected/12e4b3b0-81a4-4752-8cea-e1a3178d38ba-kube-api-access-wc92h\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b85478v8f\" (UID: \"12e4b3b0-81a4-4752-8cea-e1a3178d38ba\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b85478v8f" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.708843 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndkmp\" (UniqueName: \"kubernetes.io/projected/89ab862c-0d6a-4a44-9f28-9195e0213328-kube-api-access-ndkmp\") pod \"ovn-operator-controller-manager-6f75f45d54-l7w8f\" (UID: \"89ab862c-0d6a-4a44-9f28-9195e0213328\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-l7w8f" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.710583 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfpm7\" (UniqueName: \"kubernetes.io/projected/3a13e1fa-35b1-4adc-a21d-a09aa4ec91a7-kube-api-access-qfpm7\") pod \"placement-operator-controller-manager-79d5ccc684-mkcr9\" (UID: \"3a13e1fa-35b1-4adc-a21d-a09aa4ec91a7\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-mkcr9" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.719695 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swjm6\" (UniqueName: \"kubernetes.io/projected/73721700-0f73-468c-9c69-2d3f078a7516-kube-api-access-swjm6\") pod \"nova-operator-controller-manager-7bdb645866-x5shx\" (UID: \"73721700-0f73-468c-9c69-2d3f078a7516\") " pod="openstack-operators/nova-operator-controller-manager-7bdb645866-x5shx" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.719715 4844 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29d75\" (UniqueName: \"kubernetes.io/projected/154eb771-ca89-43f9-b002-e6f11d943cbe-kube-api-access-29d75\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-bcdf4\" (UID: \"154eb771-ca89-43f9-b002-e6f11d943cbe\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-bcdf4" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.730754 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5fc5788b68-9qjpz"] Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.731720 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5fc5788b68-9qjpz" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.736379 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-mrpht" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.744283 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5fc5788b68-9qjpz"] Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.768568 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-bcdf4" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.769430 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcv4t\" (UniqueName: \"kubernetes.io/projected/c74ba998-8b13-4a63-a4b3-d027f70ff41d-kube-api-access-xcv4t\") pod \"watcher-operator-controller-manager-5fc5788b68-9qjpz\" (UID: \"c74ba998-8b13-4a63-a4b3-d027f70ff41d\") " pod="openstack-operators/watcher-operator-controller-manager-5fc5788b68-9qjpz" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.769509 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9mr7\" (UniqueName: \"kubernetes.io/projected/915eea77-c5eb-4e5c-b9f2-404ba732dac8-kube-api-access-d9mr7\") pod \"test-operator-controller-manager-69797bbcbd-dgglg\" (UID: \"915eea77-c5eb-4e5c-b9f2-404ba732dac8\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-dgglg" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.769574 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7pkf2\" (UniqueName: \"kubernetes.io/projected/9fb0454b-90d4-48f3-b069-86aada20e9f9-kube-api-access-7pkf2\") pod \"telemetry-operator-controller-manager-85cd9769bb-fj29j\" (UID: \"9fb0454b-90d4-48f3-b069-86aada20e9f9\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-fj29j" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.785569 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-pffmq" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.795620 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-5tq86"] Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.796315 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-x5shx" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.802225 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9mr7\" (UniqueName: \"kubernetes.io/projected/915eea77-c5eb-4e5c-b9f2-404ba732dac8-kube-api-access-d9mr7\") pod \"test-operator-controller-manager-69797bbcbd-dgglg\" (UID: \"915eea77-c5eb-4e5c-b9f2-404ba732dac8\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-dgglg" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.802931 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7pkf2\" (UniqueName: \"kubernetes.io/projected/9fb0454b-90d4-48f3-b069-86aada20e9f9-kube-api-access-7pkf2\") pod \"telemetry-operator-controller-manager-85cd9769bb-fj29j\" (UID: \"9fb0454b-90d4-48f3-b069-86aada20e9f9\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-fj29j" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.805047 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6b75585dc8-tzrcv"] Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.809430 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-6b75585dc8-tzrcv" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.809560 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-566vm" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.816077 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.816153 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.816996 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-8t9gl" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.823984 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6b75585dc8-tzrcv"] Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.825761 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-l7w8f" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.873739 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xcv4t\" (UniqueName: \"kubernetes.io/projected/c74ba998-8b13-4a63-a4b3-d027f70ff41d-kube-api-access-xcv4t\") pod \"watcher-operator-controller-manager-5fc5788b68-9qjpz\" (UID: \"c74ba998-8b13-4a63-a4b3-d027f70ff41d\") " pod="openstack-operators/watcher-operator-controller-manager-5fc5788b68-9qjpz" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.874628 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-krn66" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.880882 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-8s4vt"] Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.882699 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-8s4vt" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.890892 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-mlr4f" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.891995 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-8s4vt"] Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.895815 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xcv4t\" (UniqueName: \"kubernetes.io/projected/c74ba998-8b13-4a63-a4b3-d027f70ff41d-kube-api-access-xcv4t\") pod \"watcher-operator-controller-manager-5fc5788b68-9qjpz\" (UID: \"c74ba998-8b13-4a63-a4b3-d027f70ff41d\") " pod="openstack-operators/watcher-operator-controller-manager-5fc5788b68-9qjpz" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.913983 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-mkcr9" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.929153 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-88kvh" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.933139 4844 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.955515 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-fj29j" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.964556 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-dgglg" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.975225 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/dd52b1ad-222e-4b57-91e0-869bd8094adc-webhook-certs\") pod \"openstack-operator-controller-manager-6b75585dc8-tzrcv\" (UID: \"dd52b1ad-222e-4b57-91e0-869bd8094adc\") " pod="openstack-operators/openstack-operator-controller-manager-6b75585dc8-tzrcv" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.975286 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dd52b1ad-222e-4b57-91e0-869bd8094adc-metrics-certs\") pod \"openstack-operator-controller-manager-6b75585dc8-tzrcv\" (UID: \"dd52b1ad-222e-4b57-91e0-869bd8094adc\") " pod="openstack-operators/openstack-operator-controller-manager-6b75585dc8-tzrcv" Jan 26 13:14:41 crc kubenswrapper[4844]: I0126 13:14:41.975362 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7hp6\" (UniqueName: \"kubernetes.io/projected/dd52b1ad-222e-4b57-91e0-869bd8094adc-kube-api-access-g7hp6\") pod \"openstack-operator-controller-manager-6b75585dc8-tzrcv\" (UID: \"dd52b1ad-222e-4b57-91e0-869bd8094adc\") " pod="openstack-operators/openstack-operator-controller-manager-6b75585dc8-tzrcv" Jan 26 13:14:42 crc kubenswrapper[4844]: I0126 13:14:42.068993 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5fc5788b68-9qjpz" Jan 26 13:14:42 crc kubenswrapper[4844]: I0126 13:14:42.076493 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45xxs\" (UniqueName: \"kubernetes.io/projected/e99dde4f-0ab1-45ad-b6c0-e5225fbfc77d-kube-api-access-45xxs\") pod \"rabbitmq-cluster-operator-manager-668c99d594-8s4vt\" (UID: \"e99dde4f-0ab1-45ad-b6c0-e5225fbfc77d\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-8s4vt" Jan 26 13:14:42 crc kubenswrapper[4844]: I0126 13:14:42.076732 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7hp6\" (UniqueName: \"kubernetes.io/projected/dd52b1ad-222e-4b57-91e0-869bd8094adc-kube-api-access-g7hp6\") pod \"openstack-operator-controller-manager-6b75585dc8-tzrcv\" (UID: \"dd52b1ad-222e-4b57-91e0-869bd8094adc\") " pod="openstack-operators/openstack-operator-controller-manager-6b75585dc8-tzrcv" Jan 26 13:14:42 crc kubenswrapper[4844]: I0126 13:14:42.077232 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/dd52b1ad-222e-4b57-91e0-869bd8094adc-webhook-certs\") pod \"openstack-operator-controller-manager-6b75585dc8-tzrcv\" (UID: \"dd52b1ad-222e-4b57-91e0-869bd8094adc\") " pod="openstack-operators/openstack-operator-controller-manager-6b75585dc8-tzrcv" Jan 26 13:14:42 crc kubenswrapper[4844]: I0126 13:14:42.077263 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8b9f2639-4aaa-463a-b950-fc39fca31805-cert\") pod \"infra-operator-controller-manager-694cf4f878-vzncj\" (UID: \"8b9f2639-4aaa-463a-b950-fc39fca31805\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-vzncj" Jan 26 
13:14:42 crc kubenswrapper[4844]: I0126 13:14:42.077287 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dd52b1ad-222e-4b57-91e0-869bd8094adc-metrics-certs\") pod \"openstack-operator-controller-manager-6b75585dc8-tzrcv\" (UID: \"dd52b1ad-222e-4b57-91e0-869bd8094adc\") " pod="openstack-operators/openstack-operator-controller-manager-6b75585dc8-tzrcv" Jan 26 13:14:42 crc kubenswrapper[4844]: E0126 13:14:42.077439 4844 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 26 13:14:42 crc kubenswrapper[4844]: E0126 13:14:42.077483 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dd52b1ad-222e-4b57-91e0-869bd8094adc-metrics-certs podName:dd52b1ad-222e-4b57-91e0-869bd8094adc nodeName:}" failed. No retries permitted until 2026-01-26 13:14:42.5774694 +0000 UTC m=+1859.510837002 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/dd52b1ad-222e-4b57-91e0-869bd8094adc-metrics-certs") pod "openstack-operator-controller-manager-6b75585dc8-tzrcv" (UID: "dd52b1ad-222e-4b57-91e0-869bd8094adc") : secret "metrics-server-cert" not found Jan 26 13:14:42 crc kubenswrapper[4844]: E0126 13:14:42.077949 4844 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 26 13:14:42 crc kubenswrapper[4844]: E0126 13:14:42.077980 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8b9f2639-4aaa-463a-b950-fc39fca31805-cert podName:8b9f2639-4aaa-463a-b950-fc39fca31805 nodeName:}" failed. No retries permitted until 2026-01-26 13:14:43.077971692 +0000 UTC m=+1860.011339304 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8b9f2639-4aaa-463a-b950-fc39fca31805-cert") pod "infra-operator-controller-manager-694cf4f878-vzncj" (UID: "8b9f2639-4aaa-463a-b950-fc39fca31805") : secret "infra-operator-webhook-server-cert" not found Jan 26 13:14:42 crc kubenswrapper[4844]: E0126 13:14:42.078038 4844 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 26 13:14:42 crc kubenswrapper[4844]: E0126 13:14:42.078118 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dd52b1ad-222e-4b57-91e0-869bd8094adc-webhook-certs podName:dd52b1ad-222e-4b57-91e0-869bd8094adc nodeName:}" failed. No retries permitted until 2026-01-26 13:14:42.578085534 +0000 UTC m=+1859.511453146 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/dd52b1ad-222e-4b57-91e0-869bd8094adc-webhook-certs") pod "openstack-operator-controller-manager-6b75585dc8-tzrcv" (UID: "dd52b1ad-222e-4b57-91e0-869bd8094adc") : secret "webhook-server-cert" not found Jan 26 13:14:42 crc kubenswrapper[4844]: I0126 13:14:42.105481 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7hp6\" (UniqueName: \"kubernetes.io/projected/dd52b1ad-222e-4b57-91e0-869bd8094adc-kube-api-access-g7hp6\") pod \"openstack-operator-controller-manager-6b75585dc8-tzrcv\" (UID: \"dd52b1ad-222e-4b57-91e0-869bd8094adc\") " pod="openstack-operators/openstack-operator-controller-manager-6b75585dc8-tzrcv" Jan 26 13:14:42 crc kubenswrapper[4844]: I0126 13:14:42.125972 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-k8f6n"] Jan 26 13:14:42 crc kubenswrapper[4844]: I0126 13:14:42.155009 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-gmfsm"] Jan 26 13:14:42 crc kubenswrapper[4844]: I0126 13:14:42.174317 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-mwszm"] Jan 26 13:14:42 crc kubenswrapper[4844]: I0126 13:14:42.179105 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/12e4b3b0-81a4-4752-8cea-e1a3178d38ba-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b85478v8f\" (UID: \"12e4b3b0-81a4-4752-8cea-e1a3178d38ba\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b85478v8f" Jan 26 13:14:42 crc kubenswrapper[4844]: I0126 13:14:42.179157 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45xxs\" (UniqueName: \"kubernetes.io/projected/e99dde4f-0ab1-45ad-b6c0-e5225fbfc77d-kube-api-access-45xxs\") pod \"rabbitmq-cluster-operator-manager-668c99d594-8s4vt\" (UID: \"e99dde4f-0ab1-45ad-b6c0-e5225fbfc77d\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-8s4vt" Jan 26 13:14:42 crc kubenswrapper[4844]: E0126 13:14:42.179611 4844 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 13:14:42 crc kubenswrapper[4844]: E0126 13:14:42.179654 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/12e4b3b0-81a4-4752-8cea-e1a3178d38ba-cert podName:12e4b3b0-81a4-4752-8cea-e1a3178d38ba nodeName:}" failed. No retries permitted until 2026-01-26 13:14:43.179639661 +0000 UTC m=+1860.113007273 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/12e4b3b0-81a4-4752-8cea-e1a3178d38ba-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b85478v8f" (UID: "12e4b3b0-81a4-4752-8cea-e1a3178d38ba") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 13:14:42 crc kubenswrapper[4844]: I0126 13:14:42.180055 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-7478f7dbf9-sm4lj"] Jan 26 13:14:42 crc kubenswrapper[4844]: I0126 13:14:42.197173 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45xxs\" (UniqueName: \"kubernetes.io/projected/e99dde4f-0ab1-45ad-b6c0-e5225fbfc77d-kube-api-access-45xxs\") pod \"rabbitmq-cluster-operator-manager-668c99d594-8s4vt\" (UID: \"e99dde4f-0ab1-45ad-b6c0-e5225fbfc77d\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-8s4vt" Jan 26 13:14:42 crc kubenswrapper[4844]: I0126 13:14:42.207906 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-8s4vt" Jan 26 13:14:42 crc kubenswrapper[4844]: I0126 13:14:42.243586 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-rk7rt"] Jan 26 13:14:42 crc kubenswrapper[4844]: W0126 13:14:42.266082 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod981956b6_e5c7_4908_a72d_458026f29e4d.slice/crio-0cf8704c9bceeb77f86f4252db416e2d67f330a9279595fa6096dea0f98b65bf WatchSource:0}: Error finding container 0cf8704c9bceeb77f86f4252db416e2d67f330a9279595fa6096dea0f98b65bf: Status 404 returned error can't find the container with id 0cf8704c9bceeb77f86f4252db416e2d67f330a9279595fa6096dea0f98b65bf Jan 26 13:14:42 crc kubenswrapper[4844]: I0126 13:14:42.322750 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-ht7r9"] Jan 26 13:14:42 crc kubenswrapper[4844]: I0126 13:14:42.411471 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-wtp6f"] Jan 26 13:14:42 crc kubenswrapper[4844]: W0126 13:14:42.418907 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2a343b60_ecc4_4634_9a54_7814555dd3bc.slice/crio-0ff3ac188ba185a7acc7b14e493e0cc46600d8a1ecab4684e7557b6060827e2e WatchSource:0}: Error finding container 0ff3ac188ba185a7acc7b14e493e0cc46600d8a1ecab4684e7557b6060827e2e: Status 404 returned error can't find the container with id 0ff3ac188ba185a7acc7b14e493e0cc46600d8a1ecab4684e7557b6060827e2e Jan 26 13:14:42 crc kubenswrapper[4844]: I0126 13:14:42.434212 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-bcdf4"] Jan 26 13:14:42 crc kubenswrapper[4844]: W0126 13:14:42.444258 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod154eb771_ca89_43f9_b002_e6f11d943cbe.slice/crio-02ead84e5e33315c27af6e20fe285a63e8acd67f16a897ec285880f9e4d3d65a WatchSource:0}: Error finding container 02ead84e5e33315c27af6e20fe285a63e8acd67f16a897ec285880f9e4d3d65a: Status 404 returned error can't find the container with id 
02ead84e5e33315c27af6e20fe285a63e8acd67f16a897ec285880f9e4d3d65a Jan 26 13:14:42 crc kubenswrapper[4844]: I0126 13:14:42.487942 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-pffmq"] Jan 26 13:14:42 crc kubenswrapper[4844]: I0126 13:14:42.585308 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/dd52b1ad-222e-4b57-91e0-869bd8094adc-webhook-certs\") pod \"openstack-operator-controller-manager-6b75585dc8-tzrcv\" (UID: \"dd52b1ad-222e-4b57-91e0-869bd8094adc\") " pod="openstack-operators/openstack-operator-controller-manager-6b75585dc8-tzrcv" Jan 26 13:14:42 crc kubenswrapper[4844]: I0126 13:14:42.585425 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dd52b1ad-222e-4b57-91e0-869bd8094adc-metrics-certs\") pod \"openstack-operator-controller-manager-6b75585dc8-tzrcv\" (UID: \"dd52b1ad-222e-4b57-91e0-869bd8094adc\") " pod="openstack-operators/openstack-operator-controller-manager-6b75585dc8-tzrcv" Jan 26 13:14:42 crc kubenswrapper[4844]: E0126 13:14:42.585494 4844 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 26 13:14:42 crc kubenswrapper[4844]: E0126 13:14:42.585588 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dd52b1ad-222e-4b57-91e0-869bd8094adc-webhook-certs podName:dd52b1ad-222e-4b57-91e0-869bd8094adc nodeName:}" failed. No retries permitted until 2026-01-26 13:14:43.585562659 +0000 UTC m=+1860.518930361 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/dd52b1ad-222e-4b57-91e0-869bd8094adc-webhook-certs") pod "openstack-operator-controller-manager-6b75585dc8-tzrcv" (UID: "dd52b1ad-222e-4b57-91e0-869bd8094adc") : secret "webhook-server-cert" not found Jan 26 13:14:42 crc kubenswrapper[4844]: E0126 13:14:42.585629 4844 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 26 13:14:42 crc kubenswrapper[4844]: E0126 13:14:42.585703 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dd52b1ad-222e-4b57-91e0-869bd8094adc-metrics-certs podName:dd52b1ad-222e-4b57-91e0-869bd8094adc nodeName:}" failed. No retries permitted until 2026-01-26 13:14:43.585681333 +0000 UTC m=+1860.519048985 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/dd52b1ad-222e-4b57-91e0-869bd8094adc-metrics-certs") pod "openstack-operator-controller-manager-6b75585dc8-tzrcv" (UID: "dd52b1ad-222e-4b57-91e0-869bd8094adc") : secret "metrics-server-cert" not found Jan 26 13:14:42 crc kubenswrapper[4844]: I0126 13:14:42.683337 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-7bdb645866-x5shx"] Jan 26 13:14:42 crc kubenswrapper[4844]: W0126 13:14:42.691359 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4bf529eb_b7b9_4ca7_a55a_73fd7d58ac81.slice/crio-2be100bcce84a8ddf0edc333ec67b8d42214faa53c20bd49ed7c4a7190bae56e WatchSource:0}: Error finding container 2be100bcce84a8ddf0edc333ec67b8d42214faa53c20bd49ed7c4a7190bae56e: Status 404 returned error can't find the container with id 2be100bcce84a8ddf0edc333ec67b8d42214faa53c20bd49ed7c4a7190bae56e Jan 26 13:14:42 crc kubenswrapper[4844]: I0126 13:14:42.695772 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4cd88d46-566vm"] Jan 26 13:14:42 crc kubenswrapper[4844]: I0126 13:14:42.703430 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-l7w8f"] Jan 26 13:14:42 crc kubenswrapper[4844]: W0126 13:14:42.706746 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod73721700_0f73_468c_9c69_2d3f078a7516.slice/crio-9a1f9cd6a9bcd1e9d3898a02e2946e6ae4bdae823cb6aeb169e9412075bdedb1 WatchSource:0}: Error finding container 9a1f9cd6a9bcd1e9d3898a02e2946e6ae4bdae823cb6aeb169e9412075bdedb1: Status 404 returned error can't find the container with id 9a1f9cd6a9bcd1e9d3898a02e2946e6ae4bdae823cb6aeb169e9412075bdedb1 Jan 26 13:14:42 crc kubenswrapper[4844]: I0126 13:14:42.709826 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-krn66"] Jan 26 13:14:42 crc kubenswrapper[4844]: E0126 13:14:42.715648 4844 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:fa46fc14710961e6b4a76a3522dca3aa3cfa71436c7cf7ade533d3712822f327,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ndkmp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-6f75f45d54-l7w8f_openstack-operators(89ab862c-0d6a-4a44-9f28-9195e0213328): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 26 13:14:42 crc kubenswrapper[4844]: E0126 13:14:42.717112 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-l7w8f" podUID="89ab862c-0d6a-4a44-9f28-9195e0213328" Jan 26 13:14:42 crc kubenswrapper[4844]: I0126 13:14:42.747002 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-k8f6n" event={"ID":"9de97e7e-c381-4f7d-9380-9aadf848b3a6","Type":"ContainerStarted","Data":"10557bb191e60b662d1974094fc3d7bba310dee07e0f15051bbae4d97ac007c0"} Jan 26 13:14:42 crc kubenswrapper[4844]: I0126 13:14:42.748035 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-x5shx" event={"ID":"73721700-0f73-468c-9c69-2d3f078a7516","Type":"ContainerStarted","Data":"9a1f9cd6a9bcd1e9d3898a02e2946e6ae4bdae823cb6aeb169e9412075bdedb1"} Jan 26 13:14:42 crc kubenswrapper[4844]: I0126 13:14:42.749049 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-wtp6f" event={"ID":"2a343b60-ecc4-4634-9a54-7814555dd3bc","Type":"ContainerStarted","Data":"0ff3ac188ba185a7acc7b14e493e0cc46600d8a1ecab4684e7557b6060827e2e"} Jan 26 13:14:42 crc kubenswrapper[4844]: I0126 13:14:42.750096 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-l7w8f" event={"ID":"89ab862c-0d6a-4a44-9f28-9195e0213328","Type":"ContainerStarted","Data":"33dc5268274ef60729db4af5f15fd4026fba904da0979f93f3328c24c7c39119"} Jan 26 13:14:42 crc kubenswrapper[4844]: E0126 13:14:42.752212 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:fa46fc14710961e6b4a76a3522dca3aa3cfa71436c7cf7ade533d3712822f327\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-l7w8f" podUID="89ab862c-0d6a-4a44-9f28-9195e0213328" Jan 26 13:14:42 crc kubenswrapper[4844]: I0126 13:14:42.755953 4844 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-pffmq" event={"ID":"8ac12453-5418-4c50-8b2a-61dfad6bf1e1","Type":"ContainerStarted","Data":"472aff9c65f0b285f9ee90eb8c52b1dbeff45b004f3d8ae62f845c37b9f1b8a0"} Jan 26 13:14:42 crc kubenswrapper[4844]: I0126 13:14:42.759229 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-5tq86" event={"ID":"a29e2eac-c303-4ae6-9c3b-439a258ce420","Type":"ContainerStarted","Data":"ab8dc31c4788e1f553e6a8ea8bedfedf1ac08c838690c84803ae1bf4cb87b843"} Jan 26 13:14:42 crc kubenswrapper[4844]: I0126 13:14:42.760499 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-566vm" event={"ID":"4bf529eb-b7b9-4ca7-a55a-73fd7d58ac81","Type":"ContainerStarted","Data":"2be100bcce84a8ddf0edc333ec67b8d42214faa53c20bd49ed7c4a7190bae56e"} Jan 26 13:14:42 crc kubenswrapper[4844]: I0126 13:14:42.762062 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-bcdf4" event={"ID":"154eb771-ca89-43f9-b002-e6f11d943cbe","Type":"ContainerStarted","Data":"02ead84e5e33315c27af6e20fe285a63e8acd67f16a897ec285880f9e4d3d65a"} Jan 26 13:14:42 crc kubenswrapper[4844]: I0126 13:14:42.762899 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-gmfsm" event={"ID":"c39cee42-2147-463f-90f5-62b0ad31ec96","Type":"ContainerStarted","Data":"171a86bbd0899c00fac31ff7eb3d09fe29fc33d9ca6fdb07292805cfac6e3a49"} Jan 26 13:14:42 crc kubenswrapper[4844]: I0126 13:14:42.763564 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-sm4lj" event={"ID":"aa463929-97db-4af2-8308-840d51ae717a","Type":"ContainerStarted","Data":"357466b2a0f3915e9653de5bcf050336265eee5f33119405ad02de2273e9f6e2"} Jan 26 13:14:42 crc kubenswrapper[4844]: I0126 13:14:42.764312 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-rk7rt" event={"ID":"981956b6-e5c7-4908-a72d-458026f29e4d","Type":"ContainerStarted","Data":"0cf8704c9bceeb77f86f4252db416e2d67f330a9279595fa6096dea0f98b65bf"} Jan 26 13:14:42 crc kubenswrapper[4844]: I0126 13:14:42.766452 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-mwszm" event={"ID":"f8b1471a-3483-4c9e-b662-02906d9b18c0","Type":"ContainerStarted","Data":"7655c5984ce8637bcd9c5dbc51b82f064aede8b86ab1fbdd8d39e717d7dae14e"} Jan 26 13:14:42 crc kubenswrapper[4844]: I0126 13:14:42.767377 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-ht7r9" event={"ID":"a60ef848-810d-4c2c-8c23-341d8168e7e7","Type":"ContainerStarted","Data":"f2fc603e7b93a2edd266b88fd919118382fe2af4cac1fcbbc9efb2279e34ae5e"} Jan 26 13:14:42 crc kubenswrapper[4844]: I0126 13:14:42.768270 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-krn66" event={"ID":"1eca115f-b8cd-4a50-8adc-2d31e297657f","Type":"ContainerStarted","Data":"2160c854955916a55eeb026e04bca94d9c94614fab5d494decb3e18141b056e9"} Jan 26 13:14:42 crc kubenswrapper[4844]: E0126 13:14:42.788363 4844 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7pkf2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-85cd9769bb-fj29j_openstack-operators(9fb0454b-90d4-48f3-b069-86aada20e9f9): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 26 13:14:42 crc kubenswrapper[4844]: E0126 13:14:42.789859 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-fj29j" podUID="9fb0454b-90d4-48f3-b069-86aada20e9f9" Jan 26 13:14:42 crc kubenswrapper[4844]: I0126 13:14:42.789872 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-fj29j"] Jan 26 13:14:42 crc kubenswrapper[4844]: W0126 13:14:42.795740 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod00b0af83_1dea_44ab_b074_fa7b5c9cf46d.slice/crio-99e1efd7d6aa1cad8f41f3749e8f11cb7f75b06e94c3273e22d1a05d9125c205 WatchSource:0}: Error finding container 99e1efd7d6aa1cad8f41f3749e8f11cb7f75b06e94c3273e22d1a05d9125c205: Status 404 returned error can't find the container with id 99e1efd7d6aa1cad8f41f3749e8f11cb7f75b06e94c3273e22d1a05d9125c205 Jan 26 13:14:42 crc kubenswrapper[4844]: I0126 
13:14:42.796412 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-88kvh"] Jan 26 13:14:42 crc kubenswrapper[4844]: E0126 13:14:42.798707 4844 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-prjm2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-547cbdb99f-88kvh_openstack-operators(00b0af83-1dea-44ab-b074-fa7b5c9cf46d): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 26 13:14:42 crc kubenswrapper[4844]: W0126 13:14:42.798824 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3a13e1fa_35b1_4adc_a21d_a09aa4ec91a7.slice/crio-5ba4719edc6c1e0e2af25bdd4ce64303d2bc988699879950e8ddb77fef54049c WatchSource:0}: Error finding container 5ba4719edc6c1e0e2af25bdd4ce64303d2bc988699879950e8ddb77fef54049c: Status 404 returned error can't find the container with id 5ba4719edc6c1e0e2af25bdd4ce64303d2bc988699879950e8ddb77fef54049c Jan 26 13:14:42 crc kubenswrapper[4844]: E0126 13:14:42.799925 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-88kvh" 
podUID="00b0af83-1dea-44ab-b074-fa7b5c9cf46d" Jan 26 13:14:42 crc kubenswrapper[4844]: I0126 13:14:42.803810 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-mkcr9"] Jan 26 13:14:42 crc kubenswrapper[4844]: I0126 13:14:42.877726 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-8s4vt"] Jan 26 13:14:42 crc kubenswrapper[4844]: W0126 13:14:42.881701 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode99dde4f_0ab1_45ad_b6c0_e5225fbfc77d.slice/crio-122b76a1fefe1387a32885a82a41b095fdbc9a9e608f7c037e207060d745241a WatchSource:0}: Error finding container 122b76a1fefe1387a32885a82a41b095fdbc9a9e608f7c037e207060d745241a: Status 404 returned error can't find the container with id 122b76a1fefe1387a32885a82a41b095fdbc9a9e608f7c037e207060d745241a Jan 26 13:14:42 crc kubenswrapper[4844]: I0126 13:14:42.882912 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-dgglg"] Jan 26 13:14:42 crc kubenswrapper[4844]: E0126 13:14:42.884315 4844 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-45xxs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-8s4vt_openstack-operators(e99dde4f-0ab1-45ad-b6c0-e5225fbfc77d): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 26 13:14:42 crc kubenswrapper[4844]: W0126 13:14:42.884901 4844 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc74ba998_8b13_4a63_a4b3_d027f70ff41d.slice/crio-168bfee2d949ca0d936ce7516ed3261fadb78ad248ea10b2e4ad0dff110124bb WatchSource:0}: Error finding container 168bfee2d949ca0d936ce7516ed3261fadb78ad248ea10b2e4ad0dff110124bb: Status 404 returned error can't find the container with id 168bfee2d949ca0d936ce7516ed3261fadb78ad248ea10b2e4ad0dff110124bb Jan 26 13:14:42 crc kubenswrapper[4844]: E0126 13:14:42.885393 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-8s4vt" podUID="e99dde4f-0ab1-45ad-b6c0-e5225fbfc77d" Jan 26 13:14:42 crc kubenswrapper[4844]: W0126 13:14:42.886588 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod915eea77_c5eb_4e5c_b9f2_404ba732dac8.slice/crio-a1a72f4f0fc4d6bb0fb101ad90b629740c0a9ce202c6c4c0f603f53ba555b4f4 WatchSource:0}: Error finding container a1a72f4f0fc4d6bb0fb101ad90b629740c0a9ce202c6c4c0f603f53ba555b4f4: Status 404 returned error can't find the container with id a1a72f4f0fc4d6bb0fb101ad90b629740c0a9ce202c6c4c0f603f53ba555b4f4 Jan 26 13:14:42 crc kubenswrapper[4844]: E0126 13:14:42.887307 4844 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.9:5001/openstack-k8s-operators/watcher-operator:add353f857c04debbf620f926c6c19f4f45c7f75,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xcv4t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-5fc5788b68-9qjpz_openstack-operators(c74ba998-8b13-4a63-a4b3-d027f70ff41d): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 26 13:14:42 crc kubenswrapper[4844]: I0126 13:14:42.887353 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5fc5788b68-9qjpz"] Jan 26 13:14:42 crc kubenswrapper[4844]: E0126 13:14:42.889357 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-5fc5788b68-9qjpz" podUID="c74ba998-8b13-4a63-a4b3-d027f70ff41d" Jan 26 13:14:42 crc kubenswrapper[4844]: E0126 13:14:42.890827 4844 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-d9mr7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-69797bbcbd-dgglg_openstack-operators(915eea77-c5eb-4e5c-b9f2-404ba732dac8): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 26 13:14:42 crc kubenswrapper[4844]: E0126 13:14:42.892316 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-dgglg" podUID="915eea77-c5eb-4e5c-b9f2-404ba732dac8" Jan 26 13:14:43 crc kubenswrapper[4844]: I0126 13:14:43.092220 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8b9f2639-4aaa-463a-b950-fc39fca31805-cert\") pod \"infra-operator-controller-manager-694cf4f878-vzncj\" (UID: \"8b9f2639-4aaa-463a-b950-fc39fca31805\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-vzncj" Jan 26 13:14:43 crc kubenswrapper[4844]: E0126 13:14:43.092440 4844 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 26 13:14:43 crc kubenswrapper[4844]: E0126 13:14:43.092522 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8b9f2639-4aaa-463a-b950-fc39fca31805-cert podName:8b9f2639-4aaa-463a-b950-fc39fca31805 nodeName:}" failed. No retries permitted until 2026-01-26 13:14:45.092503032 +0000 UTC m=+1862.025870724 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8b9f2639-4aaa-463a-b950-fc39fca31805-cert") pod "infra-operator-controller-manager-694cf4f878-vzncj" (UID: "8b9f2639-4aaa-463a-b950-fc39fca31805") : secret "infra-operator-webhook-server-cert" not found Jan 26 13:14:43 crc kubenswrapper[4844]: I0126 13:14:43.193441 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/12e4b3b0-81a4-4752-8cea-e1a3178d38ba-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b85478v8f\" (UID: \"12e4b3b0-81a4-4752-8cea-e1a3178d38ba\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b85478v8f" Jan 26 13:14:43 crc kubenswrapper[4844]: E0126 13:14:43.193586 4844 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 13:14:43 crc kubenswrapper[4844]: E0126 13:14:43.204732 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/12e4b3b0-81a4-4752-8cea-e1a3178d38ba-cert podName:12e4b3b0-81a4-4752-8cea-e1a3178d38ba nodeName:}" failed. 
No retries permitted until 2026-01-26 13:14:45.204705924 +0000 UTC m=+1862.138073536 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/12e4b3b0-81a4-4752-8cea-e1a3178d38ba-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b85478v8f" (UID: "12e4b3b0-81a4-4752-8cea-e1a3178d38ba") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 13:14:43 crc kubenswrapper[4844]: I0126 13:14:43.637107 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/dd52b1ad-222e-4b57-91e0-869bd8094adc-webhook-certs\") pod \"openstack-operator-controller-manager-6b75585dc8-tzrcv\" (UID: \"dd52b1ad-222e-4b57-91e0-869bd8094adc\") " pod="openstack-operators/openstack-operator-controller-manager-6b75585dc8-tzrcv" Jan 26 13:14:43 crc kubenswrapper[4844]: I0126 13:14:43.637458 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dd52b1ad-222e-4b57-91e0-869bd8094adc-metrics-certs\") pod \"openstack-operator-controller-manager-6b75585dc8-tzrcv\" (UID: \"dd52b1ad-222e-4b57-91e0-869bd8094adc\") " pod="openstack-operators/openstack-operator-controller-manager-6b75585dc8-tzrcv" Jan 26 13:14:43 crc kubenswrapper[4844]: E0126 13:14:43.637251 4844 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 26 13:14:43 crc kubenswrapper[4844]: E0126 13:14:43.637548 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dd52b1ad-222e-4b57-91e0-869bd8094adc-webhook-certs podName:dd52b1ad-222e-4b57-91e0-869bd8094adc nodeName:}" failed. No retries permitted until 2026-01-26 13:14:45.637533848 +0000 UTC m=+1862.570901450 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/dd52b1ad-222e-4b57-91e0-869bd8094adc-webhook-certs") pod "openstack-operator-controller-manager-6b75585dc8-tzrcv" (UID: "dd52b1ad-222e-4b57-91e0-869bd8094adc") : secret "webhook-server-cert" not found Jan 26 13:14:43 crc kubenswrapper[4844]: E0126 13:14:43.637677 4844 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 26 13:14:43 crc kubenswrapper[4844]: E0126 13:14:43.637774 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dd52b1ad-222e-4b57-91e0-869bd8094adc-metrics-certs podName:dd52b1ad-222e-4b57-91e0-869bd8094adc nodeName:}" failed. No retries permitted until 2026-01-26 13:14:45.637742283 +0000 UTC m=+1862.571109895 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/dd52b1ad-222e-4b57-91e0-869bd8094adc-metrics-certs") pod "openstack-operator-controller-manager-6b75585dc8-tzrcv" (UID: "dd52b1ad-222e-4b57-91e0-869bd8094adc") : secret "metrics-server-cert" not found Jan 26 13:14:43 crc kubenswrapper[4844]: I0126 13:14:43.779340 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5fc5788b68-9qjpz" event={"ID":"c74ba998-8b13-4a63-a4b3-d027f70ff41d","Type":"ContainerStarted","Data":"168bfee2d949ca0d936ce7516ed3261fadb78ad248ea10b2e4ad0dff110124bb"} Jan 26 13:14:43 crc kubenswrapper[4844]: E0126 13:14:43.781564 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.9:5001/openstack-k8s-operators/watcher-operator:add353f857c04debbf620f926c6c19f4f45c7f75\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-5fc5788b68-9qjpz" podUID="c74ba998-8b13-4a63-a4b3-d027f70ff41d" Jan 26 13:14:43 crc kubenswrapper[4844]: I0126 13:14:43.783434 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-8s4vt" event={"ID":"e99dde4f-0ab1-45ad-b6c0-e5225fbfc77d","Type":"ContainerStarted","Data":"122b76a1fefe1387a32885a82a41b095fdbc9a9e608f7c037e207060d745241a"} Jan 26 13:14:43 crc kubenswrapper[4844]: E0126 13:14:43.784651 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-8s4vt" podUID="e99dde4f-0ab1-45ad-b6c0-e5225fbfc77d" Jan 26 13:14:43 crc kubenswrapper[4844]: I0126 13:14:43.787361 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-fj29j" event={"ID":"9fb0454b-90d4-48f3-b069-86aada20e9f9","Type":"ContainerStarted","Data":"5c152f2282297cc1634084da6941f7e7c233b1ebc8192ea4ebbd61440cba2103"} Jan 26 13:14:43 crc kubenswrapper[4844]: E0126 13:14:43.788812 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-fj29j" podUID="9fb0454b-90d4-48f3-b069-86aada20e9f9" Jan 26 13:14:43 crc kubenswrapper[4844]: I0126 13:14:43.790883 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-88kvh" event={"ID":"00b0af83-1dea-44ab-b074-fa7b5c9cf46d","Type":"ContainerStarted","Data":"99e1efd7d6aa1cad8f41f3749e8f11cb7f75b06e94c3273e22d1a05d9125c205"} Jan 26 13:14:43 crc kubenswrapper[4844]: I0126 13:14:43.797078 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-mkcr9" event={"ID":"3a13e1fa-35b1-4adc-a21d-a09aa4ec91a7","Type":"ContainerStarted","Data":"5ba4719edc6c1e0e2af25bdd4ce64303d2bc988699879950e8ddb77fef54049c"} Jan 26 13:14:43 crc kubenswrapper[4844]: I0126 13:14:43.799201 4844 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-dgglg" event={"ID":"915eea77-c5eb-4e5c-b9f2-404ba732dac8","Type":"ContainerStarted","Data":"a1a72f4f0fc4d6bb0fb101ad90b629740c0a9ce202c6c4c0f603f53ba555b4f4"} Jan 26 13:14:43 crc kubenswrapper[4844]: E0126 13:14:43.800286 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922\\\"\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-88kvh" podUID="00b0af83-1dea-44ab-b074-fa7b5c9cf46d" Jan 26 13:14:43 crc kubenswrapper[4844]: E0126 13:14:43.801273 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:fa46fc14710961e6b4a76a3522dca3aa3cfa71436c7cf7ade533d3712822f327\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-l7w8f" podUID="89ab862c-0d6a-4a44-9f28-9195e0213328" Jan 26 13:14:43 crc kubenswrapper[4844]: E0126 13:14:43.803717 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d\\\"\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-dgglg" podUID="915eea77-c5eb-4e5c-b9f2-404ba732dac8" Jan 26 13:14:44 crc kubenswrapper[4844]: E0126 13:14:44.810189 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-fj29j" podUID="9fb0454b-90d4-48f3-b069-86aada20e9f9" Jan 26 13:14:44 crc kubenswrapper[4844]: E0126 13:14:44.812063 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d\\\"\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-dgglg" podUID="915eea77-c5eb-4e5c-b9f2-404ba732dac8" Jan 26 13:14:44 crc kubenswrapper[4844]: E0126 13:14:44.813254 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-8s4vt" podUID="e99dde4f-0ab1-45ad-b6c0-e5225fbfc77d" Jan 26 13:14:44 crc kubenswrapper[4844]: E0126 13:14:44.813255 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922\\\"\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-88kvh" podUID="00b0af83-1dea-44ab-b074-fa7b5c9cf46d" Jan 26 
13:14:44 crc kubenswrapper[4844]: E0126 13:14:44.813280 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.9:5001/openstack-k8s-operators/watcher-operator:add353f857c04debbf620f926c6c19f4f45c7f75\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-5fc5788b68-9qjpz" podUID="c74ba998-8b13-4a63-a4b3-d027f70ff41d" Jan 26 13:14:45 crc kubenswrapper[4844]: I0126 13:14:45.175273 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8b9f2639-4aaa-463a-b950-fc39fca31805-cert\") pod \"infra-operator-controller-manager-694cf4f878-vzncj\" (UID: \"8b9f2639-4aaa-463a-b950-fc39fca31805\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-vzncj" Jan 26 13:14:45 crc kubenswrapper[4844]: E0126 13:14:45.175456 4844 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 26 13:14:45 crc kubenswrapper[4844]: E0126 13:14:45.175538 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8b9f2639-4aaa-463a-b950-fc39fca31805-cert podName:8b9f2639-4aaa-463a-b950-fc39fca31805 nodeName:}" failed. No retries permitted until 2026-01-26 13:14:49.175516926 +0000 UTC m=+1866.108884528 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8b9f2639-4aaa-463a-b950-fc39fca31805-cert") pod "infra-operator-controller-manager-694cf4f878-vzncj" (UID: "8b9f2639-4aaa-463a-b950-fc39fca31805") : secret "infra-operator-webhook-server-cert" not found Jan 26 13:14:45 crc kubenswrapper[4844]: I0126 13:14:45.276758 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/12e4b3b0-81a4-4752-8cea-e1a3178d38ba-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b85478v8f\" (UID: \"12e4b3b0-81a4-4752-8cea-e1a3178d38ba\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b85478v8f" Jan 26 13:14:45 crc kubenswrapper[4844]: E0126 13:14:45.276906 4844 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 13:14:45 crc kubenswrapper[4844]: E0126 13:14:45.277006 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/12e4b3b0-81a4-4752-8cea-e1a3178d38ba-cert podName:12e4b3b0-81a4-4752-8cea-e1a3178d38ba nodeName:}" failed. No retries permitted until 2026-01-26 13:14:49.276988551 +0000 UTC m=+1866.210356163 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/12e4b3b0-81a4-4752-8cea-e1a3178d38ba-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b85478v8f" (UID: "12e4b3b0-81a4-4752-8cea-e1a3178d38ba") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 13:14:45 crc kubenswrapper[4844]: I0126 13:14:45.689641 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/dd52b1ad-222e-4b57-91e0-869bd8094adc-webhook-certs\") pod \"openstack-operator-controller-manager-6b75585dc8-tzrcv\" (UID: \"dd52b1ad-222e-4b57-91e0-869bd8094adc\") " pod="openstack-operators/openstack-operator-controller-manager-6b75585dc8-tzrcv" Jan 26 13:14:45 crc kubenswrapper[4844]: I0126 13:14:45.689721 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dd52b1ad-222e-4b57-91e0-869bd8094adc-metrics-certs\") pod \"openstack-operator-controller-manager-6b75585dc8-tzrcv\" (UID: \"dd52b1ad-222e-4b57-91e0-869bd8094adc\") " pod="openstack-operators/openstack-operator-controller-manager-6b75585dc8-tzrcv" Jan 26 13:14:45 crc kubenswrapper[4844]: E0126 13:14:45.689826 4844 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 26 13:14:45 crc kubenswrapper[4844]: E0126 13:14:45.689922 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dd52b1ad-222e-4b57-91e0-869bd8094adc-webhook-certs podName:dd52b1ad-222e-4b57-91e0-869bd8094adc nodeName:}" failed. No retries permitted until 2026-01-26 13:14:49.689899508 +0000 UTC m=+1866.623267220 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/dd52b1ad-222e-4b57-91e0-869bd8094adc-webhook-certs") pod "openstack-operator-controller-manager-6b75585dc8-tzrcv" (UID: "dd52b1ad-222e-4b57-91e0-869bd8094adc") : secret "webhook-server-cert" not found Jan 26 13:14:45 crc kubenswrapper[4844]: E0126 13:14:45.689845 4844 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 26 13:14:45 crc kubenswrapper[4844]: E0126 13:14:45.690031 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dd52b1ad-222e-4b57-91e0-869bd8094adc-metrics-certs podName:dd52b1ad-222e-4b57-91e0-869bd8094adc nodeName:}" failed. No retries permitted until 2026-01-26 13:14:49.69001554 +0000 UTC m=+1866.623383152 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/dd52b1ad-222e-4b57-91e0-869bd8094adc-metrics-certs") pod "openstack-operator-controller-manager-6b75585dc8-tzrcv" (UID: "dd52b1ad-222e-4b57-91e0-869bd8094adc") : secret "metrics-server-cert" not found Jan 26 13:14:49 crc kubenswrapper[4844]: I0126 13:14:49.253002 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8b9f2639-4aaa-463a-b950-fc39fca31805-cert\") pod \"infra-operator-controller-manager-694cf4f878-vzncj\" (UID: \"8b9f2639-4aaa-463a-b950-fc39fca31805\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-vzncj" Jan 26 13:14:49 crc kubenswrapper[4844]: E0126 13:14:49.253300 4844 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 26 13:14:49 crc kubenswrapper[4844]: E0126 13:14:49.253986 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8b9f2639-4aaa-463a-b950-fc39fca31805-cert podName:8b9f2639-4aaa-463a-b950-fc39fca31805 nodeName:}" failed. No retries permitted until 2026-01-26 13:14:57.253964075 +0000 UTC m=+1874.187331717 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8b9f2639-4aaa-463a-b950-fc39fca31805-cert") pod "infra-operator-controller-manager-694cf4f878-vzncj" (UID: "8b9f2639-4aaa-463a-b950-fc39fca31805") : secret "infra-operator-webhook-server-cert" not found Jan 26 13:14:49 crc kubenswrapper[4844]: I0126 13:14:49.355501 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/12e4b3b0-81a4-4752-8cea-e1a3178d38ba-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b85478v8f\" (UID: \"12e4b3b0-81a4-4752-8cea-e1a3178d38ba\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b85478v8f" Jan 26 13:14:49 crc kubenswrapper[4844]: E0126 13:14:49.355697 4844 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 13:14:49 crc kubenswrapper[4844]: E0126 13:14:49.355778 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/12e4b3b0-81a4-4752-8cea-e1a3178d38ba-cert podName:12e4b3b0-81a4-4752-8cea-e1a3178d38ba nodeName:}" failed. No retries permitted until 2026-01-26 13:14:57.355756827 +0000 UTC m=+1874.289124439 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/12e4b3b0-81a4-4752-8cea-e1a3178d38ba-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b85478v8f" (UID: "12e4b3b0-81a4-4752-8cea-e1a3178d38ba") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 13:14:49 crc kubenswrapper[4844]: I0126 13:14:49.764715 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/dd52b1ad-222e-4b57-91e0-869bd8094adc-webhook-certs\") pod \"openstack-operator-controller-manager-6b75585dc8-tzrcv\" (UID: \"dd52b1ad-222e-4b57-91e0-869bd8094adc\") " pod="openstack-operators/openstack-operator-controller-manager-6b75585dc8-tzrcv" Jan 26 13:14:49 crc kubenswrapper[4844]: I0126 13:14:49.765236 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dd52b1ad-222e-4b57-91e0-869bd8094adc-metrics-certs\") pod \"openstack-operator-controller-manager-6b75585dc8-tzrcv\" (UID: \"dd52b1ad-222e-4b57-91e0-869bd8094adc\") " pod="openstack-operators/openstack-operator-controller-manager-6b75585dc8-tzrcv" Jan 26 13:14:49 crc kubenswrapper[4844]: E0126 13:14:49.765374 4844 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 26 13:14:49 crc kubenswrapper[4844]: E0126 13:14:49.765452 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dd52b1ad-222e-4b57-91e0-869bd8094adc-webhook-certs podName:dd52b1ad-222e-4b57-91e0-869bd8094adc nodeName:}" failed. No retries permitted until 2026-01-26 13:14:57.765434735 +0000 UTC m=+1874.698802347 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/dd52b1ad-222e-4b57-91e0-869bd8094adc-webhook-certs") pod "openstack-operator-controller-manager-6b75585dc8-tzrcv" (UID: "dd52b1ad-222e-4b57-91e0-869bd8094adc") : secret "webhook-server-cert" not found Jan 26 13:14:49 crc kubenswrapper[4844]: E0126 13:14:49.765556 4844 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 26 13:14:49 crc kubenswrapper[4844]: E0126 13:14:49.765641 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dd52b1ad-222e-4b57-91e0-869bd8094adc-metrics-certs podName:dd52b1ad-222e-4b57-91e0-869bd8094adc nodeName:}" failed. No retries permitted until 2026-01-26 13:14:57.765626001 +0000 UTC m=+1874.698993713 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/dd52b1ad-222e-4b57-91e0-869bd8094adc-metrics-certs") pod "openstack-operator-controller-manager-6b75585dc8-tzrcv" (UID: "dd52b1ad-222e-4b57-91e0-869bd8094adc") : secret "metrics-server-cert" not found Jan 26 13:14:56 crc kubenswrapper[4844]: E0126 13:14:56.276498 4844 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:8abfbec47f0119a6c22c61a0ff80a4b1c6c14439a327bc75d4c529c5d8f59658" Jan 26 13:14:56 crc kubenswrapper[4844]: E0126 13:14:56.276913 4844 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:8abfbec47f0119a6c22c61a0ff80a4b1c6c14439a327bc75d4c529c5d8f59658,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-swjm6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-7bdb645866-x5shx_openstack-operators(73721700-0f73-468c-9c69-2d3f078a7516): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 13:14:56 crc kubenswrapper[4844]: E0126 13:14:56.278064 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-x5shx" 
podUID="73721700-0f73-468c-9c69-2d3f078a7516" Jan 26 13:14:56 crc kubenswrapper[4844]: E0126 13:14:56.901483 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:8abfbec47f0119a6c22c61a0ff80a4b1c6c14439a327bc75d4c529c5d8f59658\\\"\"" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-x5shx" podUID="73721700-0f73-468c-9c69-2d3f078a7516" Jan 26 13:14:57 crc kubenswrapper[4844]: E0126 13:14:57.027727 4844 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349" Jan 26 13:14:57 crc kubenswrapper[4844]: E0126 13:14:57.027938 4844 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kddd9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-b8b6d4659-ht7r9_openstack-operators(a60ef848-810d-4c2c-8c23-341d8168e7e7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 13:14:57 crc kubenswrapper[4844]: E0126 13:14:57.030646 4844 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-ht7r9" podUID="a60ef848-810d-4c2c-8c23-341d8168e7e7" Jan 26 13:14:57 crc kubenswrapper[4844]: I0126 13:14:57.280094 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8b9f2639-4aaa-463a-b950-fc39fca31805-cert\") pod \"infra-operator-controller-manager-694cf4f878-vzncj\" (UID: \"8b9f2639-4aaa-463a-b950-fc39fca31805\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-vzncj" Jan 26 13:14:57 crc kubenswrapper[4844]: I0126 13:14:57.302898 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8b9f2639-4aaa-463a-b950-fc39fca31805-cert\") pod \"infra-operator-controller-manager-694cf4f878-vzncj\" (UID: \"8b9f2639-4aaa-463a-b950-fc39fca31805\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-vzncj" Jan 26 13:14:57 crc kubenswrapper[4844]: I0126 13:14:57.381985 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/12e4b3b0-81a4-4752-8cea-e1a3178d38ba-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b85478v8f\" (UID: \"12e4b3b0-81a4-4752-8cea-e1a3178d38ba\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b85478v8f" Jan 26 13:14:57 crc kubenswrapper[4844]: E0126 13:14:57.382139 4844 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 13:14:57 crc kubenswrapper[4844]: E0126 13:14:57.382198 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/12e4b3b0-81a4-4752-8cea-e1a3178d38ba-cert podName:12e4b3b0-81a4-4752-8cea-e1a3178d38ba nodeName:}" failed. No retries permitted until 2026-01-26 13:15:13.382183783 +0000 UTC m=+1890.315551395 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/12e4b3b0-81a4-4752-8cea-e1a3178d38ba-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b85478v8f" (UID: "12e4b3b0-81a4-4752-8cea-e1a3178d38ba") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 13:14:57 crc kubenswrapper[4844]: I0126 13:14:57.465997 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-4bzjc" Jan 26 13:14:57 crc kubenswrapper[4844]: I0126 13:14:57.475357 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-vzncj" Jan 26 13:14:57 crc kubenswrapper[4844]: I0126 13:14:57.788615 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/dd52b1ad-222e-4b57-91e0-869bd8094adc-webhook-certs\") pod \"openstack-operator-controller-manager-6b75585dc8-tzrcv\" (UID: \"dd52b1ad-222e-4b57-91e0-869bd8094adc\") " pod="openstack-operators/openstack-operator-controller-manager-6b75585dc8-tzrcv" Jan 26 13:14:57 crc kubenswrapper[4844]: I0126 13:14:57.788699 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dd52b1ad-222e-4b57-91e0-869bd8094adc-metrics-certs\") pod \"openstack-operator-controller-manager-6b75585dc8-tzrcv\" (UID: \"dd52b1ad-222e-4b57-91e0-869bd8094adc\") " pod="openstack-operators/openstack-operator-controller-manager-6b75585dc8-tzrcv" Jan 26 13:14:57 crc kubenswrapper[4844]: I0126 13:14:57.795609 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dd52b1ad-222e-4b57-91e0-869bd8094adc-metrics-certs\") pod \"openstack-operator-controller-manager-6b75585dc8-tzrcv\" (UID: \"dd52b1ad-222e-4b57-91e0-869bd8094adc\") " pod="openstack-operators/openstack-operator-controller-manager-6b75585dc8-tzrcv" Jan 26 13:14:57 crc kubenswrapper[4844]: I0126 13:14:57.797151 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/dd52b1ad-222e-4b57-91e0-869bd8094adc-webhook-certs\") pod \"openstack-operator-controller-manager-6b75585dc8-tzrcv\" (UID: \"dd52b1ad-222e-4b57-91e0-869bd8094adc\") " pod="openstack-operators/openstack-operator-controller-manager-6b75585dc8-tzrcv" Jan 26 13:14:57 crc kubenswrapper[4844]: E0126 13:14:57.910506 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-ht7r9" podUID="a60ef848-810d-4c2c-8c23-341d8168e7e7" Jan 26 13:14:58 crc kubenswrapper[4844]: I0126 13:14:58.041322 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-8t9gl" Jan 26 13:14:58 crc kubenswrapper[4844]: I0126 13:14:58.049975 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-6b75585dc8-tzrcv" Jan 26 13:14:59 crc kubenswrapper[4844]: I0126 13:14:59.515507 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6b75585dc8-tzrcv"] Jan 26 13:15:00 crc kubenswrapper[4844]: I0126 13:15:00.139399 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490555-7t4js"] Jan 26 13:15:00 crc kubenswrapper[4844]: I0126 13:15:00.140560 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490555-7t4js" Jan 26 13:15:00 crc kubenswrapper[4844]: I0126 13:15:00.143350 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 26 13:15:00 crc kubenswrapper[4844]: I0126 13:15:00.143764 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 26 13:15:00 crc kubenswrapper[4844]: I0126 13:15:00.146990 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490555-7t4js"] Jan 26 13:15:00 crc kubenswrapper[4844]: I0126 13:15:00.325049 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/90ad5427-9763-4ad8-81c9-557978090fbc-config-volume\") pod \"collect-profiles-29490555-7t4js\" (UID: \"90ad5427-9763-4ad8-81c9-557978090fbc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490555-7t4js" Jan 26 13:15:00 crc kubenswrapper[4844]: I0126 13:15:00.325354 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnsks\" (UniqueName: \"kubernetes.io/projected/90ad5427-9763-4ad8-81c9-557978090fbc-kube-api-access-rnsks\") pod \"collect-profiles-29490555-7t4js\" (UID: \"90ad5427-9763-4ad8-81c9-557978090fbc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490555-7t4js" Jan 26 13:15:00 crc kubenswrapper[4844]: I0126 13:15:00.325409 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/90ad5427-9763-4ad8-81c9-557978090fbc-secret-volume\") pod \"collect-profiles-29490555-7t4js\" (UID: \"90ad5427-9763-4ad8-81c9-557978090fbc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490555-7t4js" Jan 26 13:15:00 crc kubenswrapper[4844]: I0126 13:15:00.426927 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnsks\" (UniqueName: \"kubernetes.io/projected/90ad5427-9763-4ad8-81c9-557978090fbc-kube-api-access-rnsks\") pod \"collect-profiles-29490555-7t4js\" (UID: \"90ad5427-9763-4ad8-81c9-557978090fbc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490555-7t4js" Jan 26 13:15:00 crc kubenswrapper[4844]: I0126 13:15:00.427034 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/90ad5427-9763-4ad8-81c9-557978090fbc-secret-volume\") pod \"collect-profiles-29490555-7t4js\" (UID: \"90ad5427-9763-4ad8-81c9-557978090fbc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490555-7t4js" Jan 26 13:15:00 crc kubenswrapper[4844]: I0126 13:15:00.427105 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/90ad5427-9763-4ad8-81c9-557978090fbc-config-volume\") pod \"collect-profiles-29490555-7t4js\" (UID: \"90ad5427-9763-4ad8-81c9-557978090fbc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490555-7t4js" Jan 26 13:15:00 crc kubenswrapper[4844]: I0126 13:15:00.428043 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/90ad5427-9763-4ad8-81c9-557978090fbc-config-volume\") pod 
\"collect-profiles-29490555-7t4js\" (UID: \"90ad5427-9763-4ad8-81c9-557978090fbc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490555-7t4js" Jan 26 13:15:00 crc kubenswrapper[4844]: I0126 13:15:00.437355 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/90ad5427-9763-4ad8-81c9-557978090fbc-secret-volume\") pod \"collect-profiles-29490555-7t4js\" (UID: \"90ad5427-9763-4ad8-81c9-557978090fbc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490555-7t4js" Jan 26 13:15:00 crc kubenswrapper[4844]: I0126 13:15:00.444211 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnsks\" (UniqueName: \"kubernetes.io/projected/90ad5427-9763-4ad8-81c9-557978090fbc-kube-api-access-rnsks\") pod \"collect-profiles-29490555-7t4js\" (UID: \"90ad5427-9763-4ad8-81c9-557978090fbc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490555-7t4js" Jan 26 13:15:00 crc kubenswrapper[4844]: I0126 13:15:00.469186 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490555-7t4js" Jan 26 13:15:06 crc kubenswrapper[4844]: I0126 13:15:06.974533 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6b75585dc8-tzrcv" event={"ID":"dd52b1ad-222e-4b57-91e0-869bd8094adc","Type":"ContainerStarted","Data":"4a703c2ed98cf5eeddd417256dd7067547a828c3a3ac196c3acdb7034f701fb7"} Jan 26 13:15:09 crc kubenswrapper[4844]: E0126 13:15:09.191884 4844 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Jan 26 13:15:09 crc kubenswrapper[4844]: E0126 13:15:09.192042 4844 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-45xxs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-8s4vt_openstack-operators(e99dde4f-0ab1-45ad-b6c0-e5225fbfc77d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 13:15:09 crc kubenswrapper[4844]: E0126 13:15:09.193333 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-8s4vt" podUID="e99dde4f-0ab1-45ad-b6c0-e5225fbfc77d" Jan 26 13:15:09 crc kubenswrapper[4844]: E0126 13:15:09.853557 4844 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127" Jan 26 13:15:09 crc kubenswrapper[4844]: E0126 13:15:09.853914 4844 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7pkf2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-85cd9769bb-fj29j_openstack-operators(9fb0454b-90d4-48f3-b069-86aada20e9f9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 13:15:09 crc kubenswrapper[4844]: E0126 13:15:09.855099 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-fj29j" podUID="9fb0454b-90d4-48f3-b069-86aada20e9f9" Jan 26 13:15:10 crc kubenswrapper[4844]: I0126 13:15:10.266897 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-694cf4f878-vzncj"] Jan 26 13:15:10 crc kubenswrapper[4844]: W0126 13:15:10.502048 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8b9f2639_4aaa_463a_b950_fc39fca31805.slice/crio-7f91c1889b92fb18276f65cd8145aaa86273010f2439e6db0c209688bf40a6ea WatchSource:0}: Error finding container 7f91c1889b92fb18276f65cd8145aaa86273010f2439e6db0c209688bf40a6ea: Status 404 returned error can't find the container with id 7f91c1889b92fb18276f65cd8145aaa86273010f2439e6db0c209688bf40a6ea Jan 26 13:15:11 crc kubenswrapper[4844]: I0126 13:15:11.019166 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-mkcr9" event={"ID":"3a13e1fa-35b1-4adc-a21d-a09aa4ec91a7","Type":"ContainerStarted","Data":"5beca1b022fe13300519dac85e0ee859c59ff5244956d432e046f9b72490c927"} Jan 26 13:15:11 crc kubenswrapper[4844]: I0126 13:15:11.019515 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-mkcr9" Jan 26 13:15:11 crc kubenswrapper[4844]: I0126 13:15:11.025409 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-566vm" event={"ID":"4bf529eb-b7b9-4ca7-a55a-73fd7d58ac81","Type":"ContainerStarted","Data":"7d7b2e83edc3ef2ee15e5ba68ce04473847cebb2bcd9b927596c9f970bb8da36"} Jan 26 13:15:11 crc kubenswrapper[4844]: I0126 13:15:11.025640 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-566vm" Jan 26 13:15:11 crc kubenswrapper[4844]: I0126 13:15:11.027766 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490555-7t4js"] Jan 26 13:15:11 crc kubenswrapper[4844]: I0126 13:15:11.032641 4844 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-gmfsm" event={"ID":"c39cee42-2147-463f-90f5-62b0ad31ec96","Type":"ContainerStarted","Data":"46a9030074780d036d4bd8c7182fff5bfd81cfe182b0bddf84787527711d6271"} Jan 26 13:15:11 crc kubenswrapper[4844]: I0126 13:15:11.033229 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-gmfsm" Jan 26 13:15:11 crc kubenswrapper[4844]: I0126 13:15:11.034179 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-vzncj" event={"ID":"8b9f2639-4aaa-463a-b950-fc39fca31805","Type":"ContainerStarted","Data":"7f91c1889b92fb18276f65cd8145aaa86273010f2439e6db0c209688bf40a6ea"} Jan 26 13:15:11 crc kubenswrapper[4844]: I0126 13:15:11.035444 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-5tq86" event={"ID":"a29e2eac-c303-4ae6-9c3b-439a258ce420","Type":"ContainerStarted","Data":"4ea61e83ff56fb49d591c8496eb6917506342d05c0e55e8843195f7191990f46"} Jan 26 13:15:11 crc kubenswrapper[4844]: I0126 13:15:11.036205 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-5tq86" Jan 26 13:15:11 crc kubenswrapper[4844]: I0126 13:15:11.045227 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-krn66" Jan 26 13:15:11 crc kubenswrapper[4844]: I0126 13:15:11.050822 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-k8f6n" event={"ID":"9de97e7e-c381-4f7d-9380-9aadf848b3a6","Type":"ContainerStarted","Data":"012146058a7862429eaa46036151d10845b1747033852be615ef60dd82b5bcfe"} Jan 26 13:15:11 crc kubenswrapper[4844]: I0126 13:15:11.051436 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-k8f6n" Jan 26 13:15:11 crc kubenswrapper[4844]: I0126 13:15:11.052408 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-mkcr9" podStartSLOduration=14.466313611 podStartE2EDuration="30.052391975s" podCreationTimestamp="2026-01-26 13:14:41 +0000 UTC" firstStartedPulling="2026-01-26 13:14:42.802004182 +0000 UTC m=+1859.735371794" lastFinishedPulling="2026-01-26 13:14:58.388082516 +0000 UTC m=+1875.321450158" observedRunningTime="2026-01-26 13:15:11.048071061 +0000 UTC m=+1887.981438693" watchObservedRunningTime="2026-01-26 13:15:11.052391975 +0000 UTC m=+1887.985759587" Jan 26 13:15:11 crc kubenswrapper[4844]: I0126 13:15:11.076641 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-gmfsm" podStartSLOduration=15.302454181 podStartE2EDuration="30.076623646s" podCreationTimestamp="2026-01-26 13:14:41 +0000 UTC" firstStartedPulling="2026-01-26 13:14:42.219912817 +0000 UTC m=+1859.153280429" lastFinishedPulling="2026-01-26 13:14:56.994082282 +0000 UTC m=+1873.927449894" observedRunningTime="2026-01-26 13:15:11.071980394 +0000 UTC m=+1888.005348006" watchObservedRunningTime="2026-01-26 13:15:11.076623646 +0000 UTC m=+1888.009991258" Jan 26 13:15:11 crc kubenswrapper[4844]: W0126 13:15:11.079502 4844 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod90ad5427_9763_4ad8_81c9_557978090fbc.slice/crio-88d85306e708258571254e32e1e175f9223d14a12cde8480f7be904a48adbf96 WatchSource:0}: Error finding container 88d85306e708258571254e32e1e175f9223d14a12cde8480f7be904a48adbf96: Status 404 returned error can't find the container with id 88d85306e708258571254e32e1e175f9223d14a12cde8480f7be904a48adbf96 Jan 26 13:15:11 crc kubenswrapper[4844]: I0126 13:15:11.108926 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-krn66" podStartSLOduration=14.478059403 podStartE2EDuration="30.10890767s" podCreationTimestamp="2026-01-26 13:14:41 +0000 UTC" firstStartedPulling="2026-01-26 13:14:42.71477173 +0000 UTC m=+1859.648139342" lastFinishedPulling="2026-01-26 13:14:58.345619997 +0000 UTC m=+1875.278987609" observedRunningTime="2026-01-26 13:15:11.102972908 +0000 UTC m=+1888.036340520" watchObservedRunningTime="2026-01-26 13:15:11.10890767 +0000 UTC m=+1888.042275282" Jan 26 13:15:11 crc kubenswrapper[4844]: I0126 13:15:11.131231 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-566vm" podStartSLOduration=14.43752716 podStartE2EDuration="30.131216665s" podCreationTimestamp="2026-01-26 13:14:41 +0000 UTC" firstStartedPulling="2026-01-26 13:14:42.69519195 +0000 UTC m=+1859.628559562" lastFinishedPulling="2026-01-26 13:14:58.388881425 +0000 UTC m=+1875.322249067" observedRunningTime="2026-01-26 13:15:11.125608121 +0000 UTC m=+1888.058975743" watchObservedRunningTime="2026-01-26 13:15:11.131216665 +0000 UTC m=+1888.064584267" Jan 26 13:15:11 crc kubenswrapper[4844]: I0126 13:15:11.200746 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-k8f6n" podStartSLOduration=15.427122521 podStartE2EDuration="30.200731673s" podCreationTimestamp="2026-01-26 13:14:41 +0000 UTC" firstStartedPulling="2026-01-26 13:14:42.229694461 +0000 UTC m=+1859.163062073" lastFinishedPulling="2026-01-26 13:14:57.003303603 +0000 UTC m=+1873.936671225" observedRunningTime="2026-01-26 13:15:11.198226673 +0000 UTC m=+1888.131594285" watchObservedRunningTime="2026-01-26 13:15:11.200731673 +0000 UTC m=+1888.134099285" Jan 26 13:15:11 crc kubenswrapper[4844]: I0126 13:15:11.202776 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-5tq86" podStartSLOduration=15.900835417 podStartE2EDuration="30.202753002s" podCreationTimestamp="2026-01-26 13:14:41 +0000 UTC" firstStartedPulling="2026-01-26 13:14:41.932888181 +0000 UTC m=+1858.866255793" lastFinishedPulling="2026-01-26 13:14:56.234805766 +0000 UTC m=+1873.168173378" observedRunningTime="2026-01-26 13:15:11.14641861 +0000 UTC m=+1888.079786222" watchObservedRunningTime="2026-01-26 13:15:11.202753002 +0000 UTC m=+1888.136120614" Jan 26 13:15:12 crc kubenswrapper[4844]: I0126 13:15:12.066898 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-x5shx" event={"ID":"73721700-0f73-468c-9c69-2d3f078a7516","Type":"ContainerStarted","Data":"c91df5ae03181b8a25536f5c734eb3152e0599ed7e3afb122510a398479522b7"} Jan 26 13:15:12 crc kubenswrapper[4844]: I0126 13:15:12.067895 4844 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-x5shx" Jan 26 13:15:12 crc kubenswrapper[4844]: I0126 13:15:12.075844 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5fc5788b68-9qjpz" event={"ID":"c74ba998-8b13-4a63-a4b3-d027f70ff41d","Type":"ContainerStarted","Data":"ee70b6c5672e567dddefe6975c5b546ceb5cf07fc4152377371afa1d657c1865"} Jan 26 13:15:12 crc kubenswrapper[4844]: I0126 13:15:12.076433 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-5fc5788b68-9qjpz" Jan 26 13:15:12 crc kubenswrapper[4844]: I0126 13:15:12.085581 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-sm4lj" event={"ID":"aa463929-97db-4af2-8308-840d51ae717a","Type":"ContainerStarted","Data":"c260a3bbee683b938646dfdef6a05e4d09e3a3a99c004804a27ef25dad5669f3"} Jan 26 13:15:12 crc kubenswrapper[4844]: I0126 13:15:12.095563 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490555-7t4js" event={"ID":"90ad5427-9763-4ad8-81c9-557978090fbc","Type":"ContainerStarted","Data":"45390072dfb01c4be7a1919aa93b0635d4251eab817368449799b3b552c48972"} Jan 26 13:15:12 crc kubenswrapper[4844]: I0126 13:15:12.095705 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490555-7t4js" event={"ID":"90ad5427-9763-4ad8-81c9-557978090fbc","Type":"ContainerStarted","Data":"88d85306e708258571254e32e1e175f9223d14a12cde8480f7be904a48adbf96"} Jan 26 13:15:12 crc kubenswrapper[4844]: I0126 13:15:12.097400 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-bcdf4" event={"ID":"154eb771-ca89-43f9-b002-e6f11d943cbe","Type":"ContainerStarted","Data":"4a4e808c5fa6b93a997cb4136adb3d8a1ac789553313dcaff098d42055b4782b"} Jan 26 13:15:12 crc kubenswrapper[4844]: I0126 13:15:12.097699 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-bcdf4" Jan 26 13:15:12 crc kubenswrapper[4844]: I0126 13:15:12.098533 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-88kvh" event={"ID":"00b0af83-1dea-44ab-b074-fa7b5c9cf46d","Type":"ContainerStarted","Data":"eb9a2c4cd257a02b36172cb78880e0e7a77ac12128955340164d186a12a5bb92"} Jan 26 13:15:12 crc kubenswrapper[4844]: I0126 13:15:12.099154 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-88kvh" Jan 26 13:15:12 crc kubenswrapper[4844]: I0126 13:15:12.103902 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-dgglg" event={"ID":"915eea77-c5eb-4e5c-b9f2-404ba732dac8","Type":"ContainerStarted","Data":"d0bce0cf86b581b5ced876f8e913b2dd3fe9143a2e86d617ef84c2a05f7654a6"} Jan 26 13:15:12 crc kubenswrapper[4844]: I0126 13:15:12.105176 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-x5shx" podStartSLOduration=2.968450197 podStartE2EDuration="31.105156172s" podCreationTimestamp="2026-01-26 13:14:41 +0000 UTC" firstStartedPulling="2026-01-26 
13:14:42.710855626 +0000 UTC m=+1859.644223238" lastFinishedPulling="2026-01-26 13:15:10.847561601 +0000 UTC m=+1887.780929213" observedRunningTime="2026-01-26 13:15:12.099400184 +0000 UTC m=+1889.032767796" watchObservedRunningTime="2026-01-26 13:15:12.105156172 +0000 UTC m=+1889.038523784" Jan 26 13:15:12 crc kubenswrapper[4844]: I0126 13:15:12.105712 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-dgglg" Jan 26 13:15:12 crc kubenswrapper[4844]: I0126 13:15:12.123023 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-mwszm" event={"ID":"f8b1471a-3483-4c9e-b662-02906d9b18c0","Type":"ContainerStarted","Data":"aa200d77d01dc75d4d14846481735f72d8a8afdd25dcd7a6a4ea49e653801640"} Jan 26 13:15:12 crc kubenswrapper[4844]: I0126 13:15:12.123078 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-mwszm" Jan 26 13:15:12 crc kubenswrapper[4844]: I0126 13:15:12.136101 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6b75585dc8-tzrcv" event={"ID":"dd52b1ad-222e-4b57-91e0-869bd8094adc","Type":"ContainerStarted","Data":"e112d8ebfda14c4b129c3bb4431fcb4298a26dbd00965245840f8852d119ac12"} Jan 26 13:15:12 crc kubenswrapper[4844]: I0126 13:15:12.136844 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-6b75585dc8-tzrcv" Jan 26 13:15:12 crc kubenswrapper[4844]: I0126 13:15:12.142939 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-pffmq" event={"ID":"8ac12453-5418-4c50-8b2a-61dfad6bf1e1","Type":"ContainerStarted","Data":"473fb547f089c3d267acec0dd1d41f0f2807a52b5a68363c9d406f77cd5411f6"} Jan 26 13:15:12 crc kubenswrapper[4844]: I0126 13:15:12.143712 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-pffmq" Jan 26 13:15:12 crc kubenswrapper[4844]: I0126 13:15:12.157751 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-krn66" event={"ID":"1eca115f-b8cd-4a50-8adc-2d31e297657f","Type":"ContainerStarted","Data":"d3eea411e87426e3f098ee1d30ff4e2b9ed72f5e33fe63c4f3354927e60ef8c2"} Jan 26 13:15:12 crc kubenswrapper[4844]: I0126 13:15:12.168868 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-l7w8f" event={"ID":"89ab862c-0d6a-4a44-9f28-9195e0213328","Type":"ContainerStarted","Data":"5468e2a51829307cc2bf4fe0629afc54ea9f289074ed0ff2a95f2032649df7d7"} Jan 26 13:15:12 crc kubenswrapper[4844]: I0126 13:15:12.169082 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-l7w8f" Jan 26 13:15:12 crc kubenswrapper[4844]: I0126 13:15:12.171392 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-5fc5788b68-9qjpz" podStartSLOduration=4.190006494 podStartE2EDuration="31.17137397s" podCreationTimestamp="2026-01-26 13:14:41 +0000 UTC" firstStartedPulling="2026-01-26 13:14:42.887149175 +0000 UTC m=+1859.820516807" lastFinishedPulling="2026-01-26 13:15:09.868516671 +0000 
UTC m=+1886.801884283" observedRunningTime="2026-01-26 13:15:12.164801082 +0000 UTC m=+1889.098168694" watchObservedRunningTime="2026-01-26 13:15:12.17137397 +0000 UTC m=+1889.104741582" Jan 26 13:15:12 crc kubenswrapper[4844]: I0126 13:15:12.176949 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-bcdf4" podStartSLOduration=16.644893178 podStartE2EDuration="31.176931924s" podCreationTimestamp="2026-01-26 13:14:41 +0000 UTC" firstStartedPulling="2026-01-26 13:14:42.447821775 +0000 UTC m=+1859.381189387" lastFinishedPulling="2026-01-26 13:14:56.979860481 +0000 UTC m=+1873.913228133" observedRunningTime="2026-01-26 13:15:12.135193542 +0000 UTC m=+1889.068561154" watchObservedRunningTime="2026-01-26 13:15:12.176931924 +0000 UTC m=+1889.110299536" Jan 26 13:15:12 crc kubenswrapper[4844]: I0126 13:15:12.196334 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29490555-7t4js" podStartSLOduration=12.196314239 podStartE2EDuration="12.196314239s" podCreationTimestamp="2026-01-26 13:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 13:15:12.186887012 +0000 UTC m=+1889.120254624" watchObservedRunningTime="2026-01-26 13:15:12.196314239 +0000 UTC m=+1889.129681851" Jan 26 13:15:12 crc kubenswrapper[4844]: I0126 13:15:12.204454 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-rk7rt" event={"ID":"981956b6-e5c7-4908-a72d-458026f29e4d","Type":"ContainerStarted","Data":"d074bd4f2d61d96637bf60a9ff1f8d767edb92db30483fee5a8e6040109e770b"} Jan 26 13:15:12 crc kubenswrapper[4844]: I0126 13:15:12.205143 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-rk7rt" Jan 26 13:15:12 crc kubenswrapper[4844]: I0126 13:15:12.219191 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-ht7r9" event={"ID":"a60ef848-810d-4c2c-8c23-341d8168e7e7","Type":"ContainerStarted","Data":"439ab1c34c67cd5c59ac5429c46b63637f5720eec82538285708c082ee35beb9"} Jan 26 13:15:12 crc kubenswrapper[4844]: I0126 13:15:12.219736 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-ht7r9" Jan 26 13:15:12 crc kubenswrapper[4844]: I0126 13:15:12.222986 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-wtp6f" event={"ID":"2a343b60-ecc4-4634-9a54-7814555dd3bc","Type":"ContainerStarted","Data":"29375203a90cdbba78e8b9e0c2877f397f7851e0a1a244b020c3e65c1ba38ce3"} Jan 26 13:15:12 crc kubenswrapper[4844]: I0126 13:15:12.231038 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-sm4lj" podStartSLOduration=15.085593459 podStartE2EDuration="31.231017432s" podCreationTimestamp="2026-01-26 13:14:41 +0000 UTC" firstStartedPulling="2026-01-26 13:14:42.201103536 +0000 UTC m=+1859.134471148" lastFinishedPulling="2026-01-26 13:14:58.346527519 +0000 UTC m=+1875.279895121" observedRunningTime="2026-01-26 13:15:12.229899065 +0000 UTC m=+1889.163266687" watchObservedRunningTime="2026-01-26 13:15:12.231017432 
+0000 UTC m=+1889.164385044" Jan 26 13:15:12 crc kubenswrapper[4844]: I0126 13:15:12.264233 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-88kvh" podStartSLOduration=3.43038136 podStartE2EDuration="31.264208007s" podCreationTimestamp="2026-01-26 13:14:41 +0000 UTC" firstStartedPulling="2026-01-26 13:14:42.798585981 +0000 UTC m=+1859.731953593" lastFinishedPulling="2026-01-26 13:15:10.632412628 +0000 UTC m=+1887.565780240" observedRunningTime="2026-01-26 13:15:12.256489623 +0000 UTC m=+1889.189857235" watchObservedRunningTime="2026-01-26 13:15:12.264208007 +0000 UTC m=+1889.197575619" Jan 26 13:15:12 crc kubenswrapper[4844]: I0126 13:15:12.303898 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-l7w8f" podStartSLOduration=4.152559626 podStartE2EDuration="31.30388041s" podCreationTimestamp="2026-01-26 13:14:41 +0000 UTC" firstStartedPulling="2026-01-26 13:14:42.715508077 +0000 UTC m=+1859.648875689" lastFinishedPulling="2026-01-26 13:15:09.866828851 +0000 UTC m=+1886.800196473" observedRunningTime="2026-01-26 13:15:12.298110811 +0000 UTC m=+1889.231478423" watchObservedRunningTime="2026-01-26 13:15:12.30388041 +0000 UTC m=+1889.237248022" Jan 26 13:15:12 crc kubenswrapper[4844]: I0126 13:15:12.339304 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-wtp6f" podStartSLOduration=16.763819532 podStartE2EDuration="31.339276649s" podCreationTimestamp="2026-01-26 13:14:41 +0000 UTC" firstStartedPulling="2026-01-26 13:14:42.427275382 +0000 UTC m=+1859.360642994" lastFinishedPulling="2026-01-26 13:14:57.002732499 +0000 UTC m=+1873.936100111" observedRunningTime="2026-01-26 13:15:12.332890205 +0000 UTC m=+1889.266257817" watchObservedRunningTime="2026-01-26 13:15:12.339276649 +0000 UTC m=+1889.272644261" Jan 26 13:15:12 crc kubenswrapper[4844]: I0126 13:15:12.358086 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-ht7r9" podStartSLOduration=3.147973354 podStartE2EDuration="31.358067539s" podCreationTimestamp="2026-01-26 13:14:41 +0000 UTC" firstStartedPulling="2026-01-26 13:14:42.384539337 +0000 UTC m=+1859.317906949" lastFinishedPulling="2026-01-26 13:15:10.594633512 +0000 UTC m=+1887.528001134" observedRunningTime="2026-01-26 13:15:12.357804023 +0000 UTC m=+1889.291171635" watchObservedRunningTime="2026-01-26 13:15:12.358067539 +0000 UTC m=+1889.291435151" Jan 26 13:15:12 crc kubenswrapper[4844]: I0126 13:15:12.384435 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-pffmq" podStartSLOduration=15.483295559 podStartE2EDuration="31.384417051s" podCreationTimestamp="2026-01-26 13:14:41 +0000 UTC" firstStartedPulling="2026-01-26 13:14:42.486988305 +0000 UTC m=+1859.420355917" lastFinishedPulling="2026-01-26 13:14:58.388109797 +0000 UTC m=+1875.321477409" observedRunningTime="2026-01-26 13:15:12.381560123 +0000 UTC m=+1889.314927755" watchObservedRunningTime="2026-01-26 13:15:12.384417051 +0000 UTC m=+1889.317784663" Jan 26 13:15:12 crc kubenswrapper[4844]: I0126 13:15:12.440445 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-6b75585dc8-tzrcv" 
podStartSLOduration=31.440431885 podStartE2EDuration="31.440431885s" podCreationTimestamp="2026-01-26 13:14:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 13:15:12.435243082 +0000 UTC m=+1889.368610684" watchObservedRunningTime="2026-01-26 13:15:12.440431885 +0000 UTC m=+1889.373799497" Jan 26 13:15:12 crc kubenswrapper[4844]: I0126 13:15:12.454981 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-rk7rt" podStartSLOduration=15.358659469 podStartE2EDuration="31.454966865s" podCreationTimestamp="2026-01-26 13:14:41 +0000 UTC" firstStartedPulling="2026-01-26 13:14:42.291450183 +0000 UTC m=+1859.224817795" lastFinishedPulling="2026-01-26 13:14:58.387757549 +0000 UTC m=+1875.321125191" observedRunningTime="2026-01-26 13:15:12.454501443 +0000 UTC m=+1889.387869055" watchObservedRunningTime="2026-01-26 13:15:12.454966865 +0000 UTC m=+1889.388334477" Jan 26 13:15:12 crc kubenswrapper[4844]: I0126 13:15:12.485380 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-mwszm" podStartSLOduration=16.693036383 podStartE2EDuration="31.485362083s" podCreationTimestamp="2026-01-26 13:14:41 +0000 UTC" firstStartedPulling="2026-01-26 13:14:42.201354922 +0000 UTC m=+1859.134722534" lastFinishedPulling="2026-01-26 13:14:56.993680592 +0000 UTC m=+1873.927048234" observedRunningTime="2026-01-26 13:15:12.483375336 +0000 UTC m=+1889.416742958" watchObservedRunningTime="2026-01-26 13:15:12.485362083 +0000 UTC m=+1889.418729695" Jan 26 13:15:12 crc kubenswrapper[4844]: I0126 13:15:12.509253 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-dgglg" podStartSLOduration=4.557221603 podStartE2EDuration="31.509239096s" podCreationTimestamp="2026-01-26 13:14:41 +0000 UTC" firstStartedPulling="2026-01-26 13:14:42.89071733 +0000 UTC m=+1859.824084952" lastFinishedPulling="2026-01-26 13:15:09.842734833 +0000 UTC m=+1886.776102445" observedRunningTime="2026-01-26 13:15:12.506562872 +0000 UTC m=+1889.439930484" watchObservedRunningTime="2026-01-26 13:15:12.509239096 +0000 UTC m=+1889.442606698" Jan 26 13:15:13 crc kubenswrapper[4844]: I0126 13:15:13.229679 4844 generic.go:334] "Generic (PLEG): container finished" podID="90ad5427-9763-4ad8-81c9-557978090fbc" containerID="45390072dfb01c4be7a1919aa93b0635d4251eab817368449799b3b552c48972" exitCode=0 Jan 26 13:15:13 crc kubenswrapper[4844]: I0126 13:15:13.230833 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490555-7t4js" event={"ID":"90ad5427-9763-4ad8-81c9-557978090fbc","Type":"ContainerDied","Data":"45390072dfb01c4be7a1919aa93b0635d4251eab817368449799b3b552c48972"} Jan 26 13:15:13 crc kubenswrapper[4844]: I0126 13:15:13.233643 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-sm4lj" Jan 26 13:15:13 crc kubenswrapper[4844]: I0126 13:15:13.233683 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-wtp6f" Jan 26 13:15:13 crc kubenswrapper[4844]: I0126 13:15:13.436867 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/12e4b3b0-81a4-4752-8cea-e1a3178d38ba-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b85478v8f\" (UID: \"12e4b3b0-81a4-4752-8cea-e1a3178d38ba\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b85478v8f" Jan 26 13:15:13 crc kubenswrapper[4844]: I0126 13:15:13.442903 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/12e4b3b0-81a4-4752-8cea-e1a3178d38ba-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b85478v8f\" (UID: \"12e4b3b0-81a4-4752-8cea-e1a3178d38ba\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b85478v8f" Jan 26 13:15:13 crc kubenswrapper[4844]: I0126 13:15:13.638116 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-scdj6" Jan 26 13:15:13 crc kubenswrapper[4844]: I0126 13:15:13.647784 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b85478v8f" Jan 26 13:15:14 crc kubenswrapper[4844]: I0126 13:15:14.770616 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490555-7t4js" Jan 26 13:15:14 crc kubenswrapper[4844]: I0126 13:15:14.962640 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnsks\" (UniqueName: \"kubernetes.io/projected/90ad5427-9763-4ad8-81c9-557978090fbc-kube-api-access-rnsks\") pod \"90ad5427-9763-4ad8-81c9-557978090fbc\" (UID: \"90ad5427-9763-4ad8-81c9-557978090fbc\") " Jan 26 13:15:14 crc kubenswrapper[4844]: I0126 13:15:14.962705 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/90ad5427-9763-4ad8-81c9-557978090fbc-config-volume\") pod \"90ad5427-9763-4ad8-81c9-557978090fbc\" (UID: \"90ad5427-9763-4ad8-81c9-557978090fbc\") " Jan 26 13:15:14 crc kubenswrapper[4844]: I0126 13:15:14.962745 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/90ad5427-9763-4ad8-81c9-557978090fbc-secret-volume\") pod \"90ad5427-9763-4ad8-81c9-557978090fbc\" (UID: \"90ad5427-9763-4ad8-81c9-557978090fbc\") " Jan 26 13:15:14 crc kubenswrapper[4844]: I0126 13:15:14.963630 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90ad5427-9763-4ad8-81c9-557978090fbc-config-volume" (OuterVolumeSpecName: "config-volume") pod "90ad5427-9763-4ad8-81c9-557978090fbc" (UID: "90ad5427-9763-4ad8-81c9-557978090fbc"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:15:14 crc kubenswrapper[4844]: I0126 13:15:14.977809 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90ad5427-9763-4ad8-81c9-557978090fbc-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "90ad5427-9763-4ad8-81c9-557978090fbc" (UID: "90ad5427-9763-4ad8-81c9-557978090fbc"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:15:14 crc kubenswrapper[4844]: I0126 13:15:14.990832 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90ad5427-9763-4ad8-81c9-557978090fbc-kube-api-access-rnsks" (OuterVolumeSpecName: "kube-api-access-rnsks") pod "90ad5427-9763-4ad8-81c9-557978090fbc" (UID: "90ad5427-9763-4ad8-81c9-557978090fbc"). InnerVolumeSpecName "kube-api-access-rnsks". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:15:15 crc kubenswrapper[4844]: I0126 13:15:15.064079 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnsks\" (UniqueName: \"kubernetes.io/projected/90ad5427-9763-4ad8-81c9-557978090fbc-kube-api-access-rnsks\") on node \"crc\" DevicePath \"\"" Jan 26 13:15:15 crc kubenswrapper[4844]: I0126 13:15:15.064115 4844 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/90ad5427-9763-4ad8-81c9-557978090fbc-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 13:15:15 crc kubenswrapper[4844]: I0126 13:15:15.064125 4844 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/90ad5427-9763-4ad8-81c9-557978090fbc-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 13:15:15 crc kubenswrapper[4844]: I0126 13:15:15.251982 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-vzncj" event={"ID":"8b9f2639-4aaa-463a-b950-fc39fca31805","Type":"ContainerStarted","Data":"252d40c252a5e7eecced6ea857a7416eb651e573929fa2e365c868b03fd98fff"} Jan 26 13:15:15 crc kubenswrapper[4844]: I0126 13:15:15.252163 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-vzncj" Jan 26 13:15:15 crc kubenswrapper[4844]: I0126 13:15:15.253824 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490555-7t4js" event={"ID":"90ad5427-9763-4ad8-81c9-557978090fbc","Type":"ContainerDied","Data":"88d85306e708258571254e32e1e175f9223d14a12cde8480f7be904a48adbf96"} Jan 26 13:15:15 crc kubenswrapper[4844]: I0126 13:15:15.253852 4844 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="88d85306e708258571254e32e1e175f9223d14a12cde8480f7be904a48adbf96" Jan 26 13:15:15 crc kubenswrapper[4844]: I0126 13:15:15.253898 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490555-7t4js" Jan 26 13:15:15 crc kubenswrapper[4844]: I0126 13:15:15.266457 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-vzncj" podStartSLOduration=30.02596401 podStartE2EDuration="34.266437346s" podCreationTimestamp="2026-01-26 13:14:41 +0000 UTC" firstStartedPulling="2026-01-26 13:15:10.537998503 +0000 UTC m=+1887.471366125" lastFinishedPulling="2026-01-26 13:15:14.778471849 +0000 UTC m=+1891.711839461" observedRunningTime="2026-01-26 13:15:15.265145215 +0000 UTC m=+1892.198512867" watchObservedRunningTime="2026-01-26 13:15:15.266437346 +0000 UTC m=+1892.199804958" Jan 26 13:15:15 crc kubenswrapper[4844]: I0126 13:15:15.305092 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b85478v8f"] Jan 26 13:15:15 crc kubenswrapper[4844]: W0126 13:15:15.308924 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod12e4b3b0_81a4_4752_8cea_e1a3178d38ba.slice/crio-48575176fb150fd9507eeec893d23d216fc03475b6a79c64b8536364b144a337 WatchSource:0}: Error finding container 48575176fb150fd9507eeec893d23d216fc03475b6a79c64b8536364b144a337: Status 404 returned error can't find the container with id 48575176fb150fd9507eeec893d23d216fc03475b6a79c64b8536364b144a337 Jan 26 13:15:16 crc kubenswrapper[4844]: I0126 13:15:16.267457 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b85478v8f" event={"ID":"12e4b3b0-81a4-4752-8cea-e1a3178d38ba","Type":"ContainerStarted","Data":"48575176fb150fd9507eeec893d23d216fc03475b6a79c64b8536364b144a337"} Jan 26 13:15:17 crc kubenswrapper[4844]: I0126 13:15:17.278154 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b85478v8f" event={"ID":"12e4b3b0-81a4-4752-8cea-e1a3178d38ba","Type":"ContainerStarted","Data":"24151e22c0972d464b7c23072298b8ce04b6dc90abf28c7cc8df4d1b8a474bb7"} Jan 26 13:15:17 crc kubenswrapper[4844]: I0126 13:15:17.278640 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b85478v8f" Jan 26 13:15:17 crc kubenswrapper[4844]: I0126 13:15:17.318526 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b85478v8f" podStartSLOduration=34.624612538 podStartE2EDuration="36.318496938s" podCreationTimestamp="2026-01-26 13:14:41 +0000 UTC" firstStartedPulling="2026-01-26 13:15:15.311670141 +0000 UTC m=+1892.245037783" lastFinishedPulling="2026-01-26 13:15:17.005554571 +0000 UTC m=+1893.938922183" observedRunningTime="2026-01-26 13:15:17.30689746 +0000 UTC m=+1894.240265112" watchObservedRunningTime="2026-01-26 13:15:17.318496938 +0000 UTC m=+1894.251864590" Jan 26 13:15:18 crc kubenswrapper[4844]: I0126 13:15:18.058471 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-6b75585dc8-tzrcv" Jan 26 13:15:21 crc kubenswrapper[4844]: I0126 13:15:21.385077 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-sm4lj" 
Jan 26 13:15:21 crc kubenswrapper[4844]: I0126 13:15:21.399816 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-5tq86" Jan 26 13:15:21 crc kubenswrapper[4844]: I0126 13:15:21.424173 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-gmfsm" Jan 26 13:15:21 crc kubenswrapper[4844]: I0126 13:15:21.457025 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-mwszm" Jan 26 13:15:21 crc kubenswrapper[4844]: I0126 13:15:21.492226 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-k8f6n" Jan 26 13:15:21 crc kubenswrapper[4844]: I0126 13:15:21.517326 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-rk7rt" Jan 26 13:15:21 crc kubenswrapper[4844]: I0126 13:15:21.656421 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-ht7r9" Jan 26 13:15:21 crc kubenswrapper[4844]: I0126 13:15:21.680085 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-wtp6f" Jan 26 13:15:21 crc kubenswrapper[4844]: I0126 13:15:21.770751 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-bcdf4" Jan 26 13:15:21 crc kubenswrapper[4844]: I0126 13:15:21.790900 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-pffmq" Jan 26 13:15:21 crc kubenswrapper[4844]: I0126 13:15:21.815860 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-x5shx" Jan 26 13:15:21 crc kubenswrapper[4844]: I0126 13:15:21.825897 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-566vm" Jan 26 13:15:21 crc kubenswrapper[4844]: I0126 13:15:21.843093 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-l7w8f" Jan 26 13:15:21 crc kubenswrapper[4844]: I0126 13:15:21.877830 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-krn66" Jan 26 13:15:21 crc kubenswrapper[4844]: I0126 13:15:21.918397 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-mkcr9" Jan 26 13:15:21 crc kubenswrapper[4844]: I0126 13:15:21.945917 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-88kvh" Jan 26 13:15:21 crc kubenswrapper[4844]: I0126 13:15:21.981296 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-dgglg" Jan 26 13:15:22 crc kubenswrapper[4844]: I0126 13:15:22.072453 4844 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-5fc5788b68-9qjpz" Jan 26 13:15:23 crc kubenswrapper[4844]: I0126 13:15:23.656911 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b85478v8f" Jan 26 13:15:24 crc kubenswrapper[4844]: E0126 13:15:24.316337 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-8s4vt" podUID="e99dde4f-0ab1-45ad-b6c0-e5225fbfc77d" Jan 26 13:15:25 crc kubenswrapper[4844]: E0126 13:15:25.315158 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-fj29j" podUID="9fb0454b-90d4-48f3-b069-86aada20e9f9" Jan 26 13:15:27 crc kubenswrapper[4844]: I0126 13:15:27.484912 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-vzncj" Jan 26 13:15:44 crc kubenswrapper[4844]: I0126 13:15:44.533338 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-fj29j" event={"ID":"9fb0454b-90d4-48f3-b069-86aada20e9f9","Type":"ContainerStarted","Data":"dd71940a6e913ab6562d9454ffe09a76d8b70df80d70bdf12b00dd65d3fbf390"} Jan 26 13:15:44 crc kubenswrapper[4844]: I0126 13:15:44.534087 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-fj29j" Jan 26 13:15:44 crc kubenswrapper[4844]: I0126 13:15:44.535277 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-8s4vt" event={"ID":"e99dde4f-0ab1-45ad-b6c0-e5225fbfc77d","Type":"ContainerStarted","Data":"064cb48a83dd19a7ff270011b3e32dfa29ec287ff9c1321a96d033c7737865b7"} Jan 26 13:15:44 crc kubenswrapper[4844]: I0126 13:15:44.558572 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-fj29j" podStartSLOduration=4.30112702 podStartE2EDuration="1m3.55854788s" podCreationTimestamp="2026-01-26 13:14:41 +0000 UTC" firstStartedPulling="2026-01-26 13:14:42.788214571 +0000 UTC m=+1859.721582183" lastFinishedPulling="2026-01-26 13:15:42.045635421 +0000 UTC m=+1918.979003043" observedRunningTime="2026-01-26 13:15:44.552034864 +0000 UTC m=+1921.485402506" watchObservedRunningTime="2026-01-26 13:15:44.55854788 +0000 UTC m=+1921.491915532" Jan 26 13:15:44 crc kubenswrapper[4844]: I0126 13:15:44.583651 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-8s4vt" podStartSLOduration=4.777195313 podStartE2EDuration="1m3.583584761s" podCreationTimestamp="2026-01-26 13:14:41 +0000 UTC" firstStartedPulling="2026-01-26 13:14:42.884169794 +0000 UTC m=+1859.817537416" lastFinishedPulling="2026-01-26 13:15:41.690559212 
+0000 UTC m=+1918.623926864" observedRunningTime="2026-01-26 13:15:44.574881822 +0000 UTC m=+1921.508249484" watchObservedRunningTime="2026-01-26 13:15:44.583584761 +0000 UTC m=+1921.516952403" Jan 26 13:15:51 crc kubenswrapper[4844]: I0126 13:15:51.957974 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-fj29j" Jan 26 13:16:15 crc kubenswrapper[4844]: I0126 13:16:15.122381 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-668dp"] Jan 26 13:16:15 crc kubenswrapper[4844]: E0126 13:16:15.123207 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90ad5427-9763-4ad8-81c9-557978090fbc" containerName="collect-profiles" Jan 26 13:16:15 crc kubenswrapper[4844]: I0126 13:16:15.123222 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="90ad5427-9763-4ad8-81c9-557978090fbc" containerName="collect-profiles" Jan 26 13:16:15 crc kubenswrapper[4844]: I0126 13:16:15.123420 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="90ad5427-9763-4ad8-81c9-557978090fbc" containerName="collect-profiles" Jan 26 13:16:15 crc kubenswrapper[4844]: I0126 13:16:15.125776 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-668dp" Jan 26 13:16:15 crc kubenswrapper[4844]: I0126 13:16:15.132985 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-668dp"] Jan 26 13:16:15 crc kubenswrapper[4844]: I0126 13:16:15.298476 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7168ed0-bfcd-4904-8708-4ab86671ebcc-catalog-content\") pod \"certified-operators-668dp\" (UID: \"c7168ed0-bfcd-4904-8708-4ab86671ebcc\") " pod="openshift-marketplace/certified-operators-668dp" Jan 26 13:16:15 crc kubenswrapper[4844]: I0126 13:16:15.298535 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnqcq\" (UniqueName: \"kubernetes.io/projected/c7168ed0-bfcd-4904-8708-4ab86671ebcc-kube-api-access-cnqcq\") pod \"certified-operators-668dp\" (UID: \"c7168ed0-bfcd-4904-8708-4ab86671ebcc\") " pod="openshift-marketplace/certified-operators-668dp" Jan 26 13:16:15 crc kubenswrapper[4844]: I0126 13:16:15.298861 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7168ed0-bfcd-4904-8708-4ab86671ebcc-utilities\") pod \"certified-operators-668dp\" (UID: \"c7168ed0-bfcd-4904-8708-4ab86671ebcc\") " pod="openshift-marketplace/certified-operators-668dp" Jan 26 13:16:15 crc kubenswrapper[4844]: I0126 13:16:15.400095 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7168ed0-bfcd-4904-8708-4ab86671ebcc-catalog-content\") pod \"certified-operators-668dp\" (UID: \"c7168ed0-bfcd-4904-8708-4ab86671ebcc\") " pod="openshift-marketplace/certified-operators-668dp" Jan 26 13:16:15 crc kubenswrapper[4844]: I0126 13:16:15.400151 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cnqcq\" (UniqueName: \"kubernetes.io/projected/c7168ed0-bfcd-4904-8708-4ab86671ebcc-kube-api-access-cnqcq\") pod \"certified-operators-668dp\" (UID: \"c7168ed0-bfcd-4904-8708-4ab86671ebcc\") " 
pod="openshift-marketplace/certified-operators-668dp" Jan 26 13:16:15 crc kubenswrapper[4844]: I0126 13:16:15.400274 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7168ed0-bfcd-4904-8708-4ab86671ebcc-utilities\") pod \"certified-operators-668dp\" (UID: \"c7168ed0-bfcd-4904-8708-4ab86671ebcc\") " pod="openshift-marketplace/certified-operators-668dp" Jan 26 13:16:15 crc kubenswrapper[4844]: I0126 13:16:15.400610 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7168ed0-bfcd-4904-8708-4ab86671ebcc-catalog-content\") pod \"certified-operators-668dp\" (UID: \"c7168ed0-bfcd-4904-8708-4ab86671ebcc\") " pod="openshift-marketplace/certified-operators-668dp" Jan 26 13:16:15 crc kubenswrapper[4844]: I0126 13:16:15.400703 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7168ed0-bfcd-4904-8708-4ab86671ebcc-utilities\") pod \"certified-operators-668dp\" (UID: \"c7168ed0-bfcd-4904-8708-4ab86671ebcc\") " pod="openshift-marketplace/certified-operators-668dp" Jan 26 13:16:15 crc kubenswrapper[4844]: I0126 13:16:15.428590 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cnqcq\" (UniqueName: \"kubernetes.io/projected/c7168ed0-bfcd-4904-8708-4ab86671ebcc-kube-api-access-cnqcq\") pod \"certified-operators-668dp\" (UID: \"c7168ed0-bfcd-4904-8708-4ab86671ebcc\") " pod="openshift-marketplace/certified-operators-668dp" Jan 26 13:16:15 crc kubenswrapper[4844]: I0126 13:16:15.450765 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-668dp" Jan 26 13:16:15 crc kubenswrapper[4844]: I0126 13:16:15.948439 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-668dp"] Jan 26 13:16:16 crc kubenswrapper[4844]: I0126 13:16:16.761079 4844 generic.go:334] "Generic (PLEG): container finished" podID="c7168ed0-bfcd-4904-8708-4ab86671ebcc" containerID="5ef36018ce036cb1ff761655f82e8ba58088c557727e7ae3f1ee2a475d418203" exitCode=0 Jan 26 13:16:16 crc kubenswrapper[4844]: I0126 13:16:16.761185 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-668dp" event={"ID":"c7168ed0-bfcd-4904-8708-4ab86671ebcc","Type":"ContainerDied","Data":"5ef36018ce036cb1ff761655f82e8ba58088c557727e7ae3f1ee2a475d418203"} Jan 26 13:16:16 crc kubenswrapper[4844]: I0126 13:16:16.761344 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-668dp" event={"ID":"c7168ed0-bfcd-4904-8708-4ab86671ebcc","Type":"ContainerStarted","Data":"bf73745db7b7d06a0c319274c546e57d0bf6277265bddd87f5a34b25acfd82ce"} Jan 26 13:16:17 crc kubenswrapper[4844]: I0126 13:16:17.520798 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9fhfr"] Jan 26 13:16:17 crc kubenswrapper[4844]: I0126 13:16:17.523104 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9fhfr" Jan 26 13:16:17 crc kubenswrapper[4844]: I0126 13:16:17.572484 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9fhfr"] Jan 26 13:16:17 crc kubenswrapper[4844]: I0126 13:16:17.641783 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtdgd\" (UniqueName: \"kubernetes.io/projected/1a72d1a1-fc4b-451c-95f5-fe163e63e95d-kube-api-access-xtdgd\") pod \"community-operators-9fhfr\" (UID: \"1a72d1a1-fc4b-451c-95f5-fe163e63e95d\") " pod="openshift-marketplace/community-operators-9fhfr" Jan 26 13:16:17 crc kubenswrapper[4844]: I0126 13:16:17.641838 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a72d1a1-fc4b-451c-95f5-fe163e63e95d-catalog-content\") pod \"community-operators-9fhfr\" (UID: \"1a72d1a1-fc4b-451c-95f5-fe163e63e95d\") " pod="openshift-marketplace/community-operators-9fhfr" Jan 26 13:16:17 crc kubenswrapper[4844]: I0126 13:16:17.641919 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a72d1a1-fc4b-451c-95f5-fe163e63e95d-utilities\") pod \"community-operators-9fhfr\" (UID: \"1a72d1a1-fc4b-451c-95f5-fe163e63e95d\") " pod="openshift-marketplace/community-operators-9fhfr" Jan 26 13:16:17 crc kubenswrapper[4844]: I0126 13:16:17.743732 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtdgd\" (UniqueName: \"kubernetes.io/projected/1a72d1a1-fc4b-451c-95f5-fe163e63e95d-kube-api-access-xtdgd\") pod \"community-operators-9fhfr\" (UID: \"1a72d1a1-fc4b-451c-95f5-fe163e63e95d\") " pod="openshift-marketplace/community-operators-9fhfr" Jan 26 13:16:17 crc kubenswrapper[4844]: I0126 13:16:17.743778 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a72d1a1-fc4b-451c-95f5-fe163e63e95d-catalog-content\") pod \"community-operators-9fhfr\" (UID: \"1a72d1a1-fc4b-451c-95f5-fe163e63e95d\") " pod="openshift-marketplace/community-operators-9fhfr" Jan 26 13:16:17 crc kubenswrapper[4844]: I0126 13:16:17.743825 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a72d1a1-fc4b-451c-95f5-fe163e63e95d-utilities\") pod \"community-operators-9fhfr\" (UID: \"1a72d1a1-fc4b-451c-95f5-fe163e63e95d\") " pod="openshift-marketplace/community-operators-9fhfr" Jan 26 13:16:17 crc kubenswrapper[4844]: I0126 13:16:17.744316 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a72d1a1-fc4b-451c-95f5-fe163e63e95d-utilities\") pod \"community-operators-9fhfr\" (UID: \"1a72d1a1-fc4b-451c-95f5-fe163e63e95d\") " pod="openshift-marketplace/community-operators-9fhfr" Jan 26 13:16:17 crc kubenswrapper[4844]: I0126 13:16:17.744912 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a72d1a1-fc4b-451c-95f5-fe163e63e95d-catalog-content\") pod \"community-operators-9fhfr\" (UID: \"1a72d1a1-fc4b-451c-95f5-fe163e63e95d\") " pod="openshift-marketplace/community-operators-9fhfr" Jan 26 13:16:17 crc kubenswrapper[4844]: I0126 13:16:17.762014 4844 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-xtdgd\" (UniqueName: \"kubernetes.io/projected/1a72d1a1-fc4b-451c-95f5-fe163e63e95d-kube-api-access-xtdgd\") pod \"community-operators-9fhfr\" (UID: \"1a72d1a1-fc4b-451c-95f5-fe163e63e95d\") " pod="openshift-marketplace/community-operators-9fhfr" Jan 26 13:16:17 crc kubenswrapper[4844]: I0126 13:16:17.769659 4844 generic.go:334] "Generic (PLEG): container finished" podID="c7168ed0-bfcd-4904-8708-4ab86671ebcc" containerID="16eef6f4f314fff596ace4a1a75ed812f752feb4c2a9a6d055290cabc52750d5" exitCode=0 Jan 26 13:16:17 crc kubenswrapper[4844]: I0126 13:16:17.769725 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-668dp" event={"ID":"c7168ed0-bfcd-4904-8708-4ab86671ebcc","Type":"ContainerDied","Data":"16eef6f4f314fff596ace4a1a75ed812f752feb4c2a9a6d055290cabc52750d5"} Jan 26 13:16:17 crc kubenswrapper[4844]: I0126 13:16:17.839705 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9fhfr" Jan 26 13:16:18 crc kubenswrapper[4844]: I0126 13:16:18.304916 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9fhfr"] Jan 26 13:16:18 crc kubenswrapper[4844]: W0126 13:16:18.311786 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1a72d1a1_fc4b_451c_95f5_fe163e63e95d.slice/crio-686a867f65e8ca9a83b446c4040f85fa4e2223ff52420d328ed68fb421ecaa38 WatchSource:0}: Error finding container 686a867f65e8ca9a83b446c4040f85fa4e2223ff52420d328ed68fb421ecaa38: Status 404 returned error can't find the container with id 686a867f65e8ca9a83b446c4040f85fa4e2223ff52420d328ed68fb421ecaa38 Jan 26 13:16:18 crc kubenswrapper[4844]: I0126 13:16:18.778136 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-668dp" event={"ID":"c7168ed0-bfcd-4904-8708-4ab86671ebcc","Type":"ContainerStarted","Data":"e2ef081259d65d0c797a38b066cee2af793e4fcfb487d0c5415d54db072b05b0"} Jan 26 13:16:18 crc kubenswrapper[4844]: I0126 13:16:18.779992 4844 generic.go:334] "Generic (PLEG): container finished" podID="1a72d1a1-fc4b-451c-95f5-fe163e63e95d" containerID="c178b7dd58fa440d5a2c1f87d63df18d4ee2a9cd1328a01cbdfc98db47f26831" exitCode=0 Jan 26 13:16:18 crc kubenswrapper[4844]: I0126 13:16:18.780031 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9fhfr" event={"ID":"1a72d1a1-fc4b-451c-95f5-fe163e63e95d","Type":"ContainerDied","Data":"c178b7dd58fa440d5a2c1f87d63df18d4ee2a9cd1328a01cbdfc98db47f26831"} Jan 26 13:16:18 crc kubenswrapper[4844]: I0126 13:16:18.780068 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9fhfr" event={"ID":"1a72d1a1-fc4b-451c-95f5-fe163e63e95d","Type":"ContainerStarted","Data":"686a867f65e8ca9a83b446c4040f85fa4e2223ff52420d328ed68fb421ecaa38"} Jan 26 13:16:18 crc kubenswrapper[4844]: I0126 13:16:18.816167 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-668dp" podStartSLOduration=2.190258949 podStartE2EDuration="3.816145426s" podCreationTimestamp="2026-01-26 13:16:15 +0000 UTC" firstStartedPulling="2026-01-26 13:16:16.763854528 +0000 UTC m=+1953.697222140" lastFinishedPulling="2026-01-26 13:16:18.389740995 +0000 UTC m=+1955.323108617" observedRunningTime="2026-01-26 13:16:18.815048979 +0000 UTC 
m=+1955.748416611" watchObservedRunningTime="2026-01-26 13:16:18.816145426 +0000 UTC m=+1955.749513038" Jan 26 13:16:19 crc kubenswrapper[4844]: I0126 13:16:19.787568 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9fhfr" event={"ID":"1a72d1a1-fc4b-451c-95f5-fe163e63e95d","Type":"ContainerStarted","Data":"74b0199c206325bb06fce5efe35539b492787372777c0876d6e6795662ede299"} Jan 26 13:16:20 crc kubenswrapper[4844]: I0126 13:16:20.799243 4844 generic.go:334] "Generic (PLEG): container finished" podID="1a72d1a1-fc4b-451c-95f5-fe163e63e95d" containerID="74b0199c206325bb06fce5efe35539b492787372777c0876d6e6795662ede299" exitCode=0 Jan 26 13:16:20 crc kubenswrapper[4844]: I0126 13:16:20.799293 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9fhfr" event={"ID":"1a72d1a1-fc4b-451c-95f5-fe163e63e95d","Type":"ContainerDied","Data":"74b0199c206325bb06fce5efe35539b492787372777c0876d6e6795662ede299"} Jan 26 13:16:21 crc kubenswrapper[4844]: I0126 13:16:21.807237 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9fhfr" event={"ID":"1a72d1a1-fc4b-451c-95f5-fe163e63e95d","Type":"ContainerStarted","Data":"af747ed94b42d1607943eb878b8baba218784aef58c779ca40a277e5e6282acd"} Jan 26 13:16:21 crc kubenswrapper[4844]: I0126 13:16:21.831824 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9fhfr" podStartSLOduration=2.411213913 podStartE2EDuration="4.831806127s" podCreationTimestamp="2026-01-26 13:16:17 +0000 UTC" firstStartedPulling="2026-01-26 13:16:18.782759295 +0000 UTC m=+1955.716126907" lastFinishedPulling="2026-01-26 13:16:21.203351509 +0000 UTC m=+1958.136719121" observedRunningTime="2026-01-26 13:16:21.830358751 +0000 UTC m=+1958.763726363" watchObservedRunningTime="2026-01-26 13:16:21.831806127 +0000 UTC m=+1958.765173749" Jan 26 13:16:24 crc kubenswrapper[4844]: I0126 13:16:24.178253 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-75f87779c-fqxxt"] Jan 26 13:16:24 crc kubenswrapper[4844]: I0126 13:16:24.182888 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75f87779c-fqxxt" Jan 26 13:16:24 crc kubenswrapper[4844]: I0126 13:16:24.188080 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 26 13:16:24 crc kubenswrapper[4844]: I0126 13:16:24.188157 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-5cf6j" Jan 26 13:16:24 crc kubenswrapper[4844]: I0126 13:16:24.188354 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 26 13:16:24 crc kubenswrapper[4844]: I0126 13:16:24.188672 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 26 13:16:24 crc kubenswrapper[4844]: I0126 13:16:24.189703 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75f87779c-fqxxt"] Jan 26 13:16:24 crc kubenswrapper[4844]: I0126 13:16:24.235391 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8mw8\" (UniqueName: \"kubernetes.io/projected/c1c80673-1b5a-43ca-9bf2-79762e902cd1-kube-api-access-w8mw8\") pod \"dnsmasq-dns-75f87779c-fqxxt\" (UID: \"c1c80673-1b5a-43ca-9bf2-79762e902cd1\") " pod="openstack/dnsmasq-dns-75f87779c-fqxxt" Jan 26 13:16:24 crc kubenswrapper[4844]: I0126 13:16:24.237215 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1c80673-1b5a-43ca-9bf2-79762e902cd1-config\") pod \"dnsmasq-dns-75f87779c-fqxxt\" (UID: \"c1c80673-1b5a-43ca-9bf2-79762e902cd1\") " pod="openstack/dnsmasq-dns-75f87779c-fqxxt" Jan 26 13:16:24 crc kubenswrapper[4844]: I0126 13:16:24.248806 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-586ffd88f7-b82rf"] Jan 26 13:16:24 crc kubenswrapper[4844]: I0126 13:16:24.250158 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-586ffd88f7-b82rf" Jan 26 13:16:24 crc kubenswrapper[4844]: I0126 13:16:24.252816 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 26 13:16:24 crc kubenswrapper[4844]: I0126 13:16:24.269713 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-586ffd88f7-b82rf"] Jan 26 13:16:24 crc kubenswrapper[4844]: I0126 13:16:24.338270 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1c80673-1b5a-43ca-9bf2-79762e902cd1-config\") pod \"dnsmasq-dns-75f87779c-fqxxt\" (UID: \"c1c80673-1b5a-43ca-9bf2-79762e902cd1\") " pod="openstack/dnsmasq-dns-75f87779c-fqxxt" Jan 26 13:16:24 crc kubenswrapper[4844]: I0126 13:16:24.338343 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/19db5512-9121-4f15-90a3-0ce718ae58d8-dns-svc\") pod \"dnsmasq-dns-586ffd88f7-b82rf\" (UID: \"19db5512-9121-4f15-90a3-0ce718ae58d8\") " pod="openstack/dnsmasq-dns-586ffd88f7-b82rf" Jan 26 13:16:24 crc kubenswrapper[4844]: I0126 13:16:24.338393 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jws49\" (UniqueName: \"kubernetes.io/projected/19db5512-9121-4f15-90a3-0ce718ae58d8-kube-api-access-jws49\") pod \"dnsmasq-dns-586ffd88f7-b82rf\" (UID: \"19db5512-9121-4f15-90a3-0ce718ae58d8\") " pod="openstack/dnsmasq-dns-586ffd88f7-b82rf" Jan 26 13:16:24 crc kubenswrapper[4844]: I0126 13:16:24.338422 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19db5512-9121-4f15-90a3-0ce718ae58d8-config\") pod \"dnsmasq-dns-586ffd88f7-b82rf\" (UID: \"19db5512-9121-4f15-90a3-0ce718ae58d8\") " pod="openstack/dnsmasq-dns-586ffd88f7-b82rf" Jan 26 13:16:24 crc kubenswrapper[4844]: I0126 13:16:24.338469 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8mw8\" (UniqueName: \"kubernetes.io/projected/c1c80673-1b5a-43ca-9bf2-79762e902cd1-kube-api-access-w8mw8\") pod \"dnsmasq-dns-75f87779c-fqxxt\" (UID: \"c1c80673-1b5a-43ca-9bf2-79762e902cd1\") " pod="openstack/dnsmasq-dns-75f87779c-fqxxt" Jan 26 13:16:24 crc kubenswrapper[4844]: I0126 13:16:24.339205 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1c80673-1b5a-43ca-9bf2-79762e902cd1-config\") pod \"dnsmasq-dns-75f87779c-fqxxt\" (UID: \"c1c80673-1b5a-43ca-9bf2-79762e902cd1\") " pod="openstack/dnsmasq-dns-75f87779c-fqxxt" Jan 26 13:16:24 crc kubenswrapper[4844]: I0126 13:16:24.357934 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8mw8\" (UniqueName: \"kubernetes.io/projected/c1c80673-1b5a-43ca-9bf2-79762e902cd1-kube-api-access-w8mw8\") pod \"dnsmasq-dns-75f87779c-fqxxt\" (UID: \"c1c80673-1b5a-43ca-9bf2-79762e902cd1\") " pod="openstack/dnsmasq-dns-75f87779c-fqxxt" Jan 26 13:16:24 crc kubenswrapper[4844]: I0126 13:16:24.439665 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/19db5512-9121-4f15-90a3-0ce718ae58d8-dns-svc\") pod \"dnsmasq-dns-586ffd88f7-b82rf\" (UID: \"19db5512-9121-4f15-90a3-0ce718ae58d8\") " pod="openstack/dnsmasq-dns-586ffd88f7-b82rf" Jan 26 13:16:24 crc kubenswrapper[4844]: I0126 
13:16:24.439724 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jws49\" (UniqueName: \"kubernetes.io/projected/19db5512-9121-4f15-90a3-0ce718ae58d8-kube-api-access-jws49\") pod \"dnsmasq-dns-586ffd88f7-b82rf\" (UID: \"19db5512-9121-4f15-90a3-0ce718ae58d8\") " pod="openstack/dnsmasq-dns-586ffd88f7-b82rf" Jan 26 13:16:24 crc kubenswrapper[4844]: I0126 13:16:24.439762 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19db5512-9121-4f15-90a3-0ce718ae58d8-config\") pod \"dnsmasq-dns-586ffd88f7-b82rf\" (UID: \"19db5512-9121-4f15-90a3-0ce718ae58d8\") " pod="openstack/dnsmasq-dns-586ffd88f7-b82rf" Jan 26 13:16:24 crc kubenswrapper[4844]: I0126 13:16:24.440572 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19db5512-9121-4f15-90a3-0ce718ae58d8-config\") pod \"dnsmasq-dns-586ffd88f7-b82rf\" (UID: \"19db5512-9121-4f15-90a3-0ce718ae58d8\") " pod="openstack/dnsmasq-dns-586ffd88f7-b82rf" Jan 26 13:16:24 crc kubenswrapper[4844]: I0126 13:16:24.440665 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/19db5512-9121-4f15-90a3-0ce718ae58d8-dns-svc\") pod \"dnsmasq-dns-586ffd88f7-b82rf\" (UID: \"19db5512-9121-4f15-90a3-0ce718ae58d8\") " pod="openstack/dnsmasq-dns-586ffd88f7-b82rf" Jan 26 13:16:24 crc kubenswrapper[4844]: I0126 13:16:24.455236 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jws49\" (UniqueName: \"kubernetes.io/projected/19db5512-9121-4f15-90a3-0ce718ae58d8-kube-api-access-jws49\") pod \"dnsmasq-dns-586ffd88f7-b82rf\" (UID: \"19db5512-9121-4f15-90a3-0ce718ae58d8\") " pod="openstack/dnsmasq-dns-586ffd88f7-b82rf" Jan 26 13:16:24 crc kubenswrapper[4844]: I0126 13:16:24.511417 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75f87779c-fqxxt" Jan 26 13:16:24 crc kubenswrapper[4844]: I0126 13:16:24.569803 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-586ffd88f7-b82rf" Jan 26 13:16:24 crc kubenswrapper[4844]: I0126 13:16:24.738417 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75f87779c-fqxxt"] Jan 26 13:16:24 crc kubenswrapper[4844]: W0126 13:16:24.740172 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1c80673_1b5a_43ca_9bf2_79762e902cd1.slice/crio-322587710b612c4317cf0196ab7020ef2a5dec4137ea76001a95ad75fff634ef WatchSource:0}: Error finding container 322587710b612c4317cf0196ab7020ef2a5dec4137ea76001a95ad75fff634ef: Status 404 returned error can't find the container with id 322587710b612c4317cf0196ab7020ef2a5dec4137ea76001a95ad75fff634ef Jan 26 13:16:24 crc kubenswrapper[4844]: I0126 13:16:24.831755 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75f87779c-fqxxt" event={"ID":"c1c80673-1b5a-43ca-9bf2-79762e902cd1","Type":"ContainerStarted","Data":"322587710b612c4317cf0196ab7020ef2a5dec4137ea76001a95ad75fff634ef"} Jan 26 13:16:25 crc kubenswrapper[4844]: W0126 13:16:25.045644 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod19db5512_9121_4f15_90a3_0ce718ae58d8.slice/crio-b8bf793ee2f4ebd01722d4337549c374b51886fdcbe33117c24eac3faf38beed WatchSource:0}: Error finding container b8bf793ee2f4ebd01722d4337549c374b51886fdcbe33117c24eac3faf38beed: Status 404 returned error can't find the container with id b8bf793ee2f4ebd01722d4337549c374b51886fdcbe33117c24eac3faf38beed Jan 26 13:16:25 crc kubenswrapper[4844]: I0126 13:16:25.049476 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-586ffd88f7-b82rf"] Jan 26 13:16:25 crc kubenswrapper[4844]: I0126 13:16:25.451429 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-668dp" Jan 26 13:16:25 crc kubenswrapper[4844]: I0126 13:16:25.451485 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-668dp" Jan 26 13:16:25 crc kubenswrapper[4844]: I0126 13:16:25.530371 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-668dp" Jan 26 13:16:25 crc kubenswrapper[4844]: I0126 13:16:25.840124 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586ffd88f7-b82rf" event={"ID":"19db5512-9121-4f15-90a3-0ce718ae58d8","Type":"ContainerStarted","Data":"b8bf793ee2f4ebd01722d4337549c374b51886fdcbe33117c24eac3faf38beed"} Jan 26 13:16:25 crc kubenswrapper[4844]: I0126 13:16:25.880061 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-668dp" Jan 26 13:16:26 crc kubenswrapper[4844]: I0126 13:16:26.918790 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-668dp"] Jan 26 13:16:27 crc kubenswrapper[4844]: I0126 13:16:27.809355 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-586ffd88f7-b82rf"] Jan 26 13:16:27 crc kubenswrapper[4844]: I0126 13:16:27.839527 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6bccbb886f-mstqs"] Jan 26 13:16:27 crc kubenswrapper[4844]: I0126 13:16:27.841085 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bccbb886f-mstqs" Jan 26 13:16:27 crc kubenswrapper[4844]: I0126 13:16:27.841217 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9fhfr" Jan 26 13:16:27 crc kubenswrapper[4844]: I0126 13:16:27.841801 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9fhfr" Jan 26 13:16:27 crc kubenswrapper[4844]: I0126 13:16:27.856597 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-668dp" podUID="c7168ed0-bfcd-4904-8708-4ab86671ebcc" containerName="registry-server" containerID="cri-o://e2ef081259d65d0c797a38b066cee2af793e4fcfb487d0c5415d54db072b05b0" gracePeriod=2 Jan 26 13:16:27 crc kubenswrapper[4844]: I0126 13:16:27.863193 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bccbb886f-mstqs"] Jan 26 13:16:27 crc kubenswrapper[4844]: I0126 13:16:27.913324 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9fhfr" Jan 26 13:16:27 crc kubenswrapper[4844]: I0126 13:16:27.998967 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35ee2046-1d54-4ff1-a512-060c6c8ad0a3-config\") pod \"dnsmasq-dns-6bccbb886f-mstqs\" (UID: \"35ee2046-1d54-4ff1-a512-060c6c8ad0a3\") " pod="openstack/dnsmasq-dns-6bccbb886f-mstqs" Jan 26 13:16:27 crc kubenswrapper[4844]: I0126 13:16:27.999299 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/35ee2046-1d54-4ff1-a512-060c6c8ad0a3-dns-svc\") pod \"dnsmasq-dns-6bccbb886f-mstqs\" (UID: \"35ee2046-1d54-4ff1-a512-060c6c8ad0a3\") " pod="openstack/dnsmasq-dns-6bccbb886f-mstqs" Jan 26 13:16:28 crc kubenswrapper[4844]: I0126 13:16:27.999323 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbrcs\" (UniqueName: \"kubernetes.io/projected/35ee2046-1d54-4ff1-a512-060c6c8ad0a3-kube-api-access-sbrcs\") pod \"dnsmasq-dns-6bccbb886f-mstqs\" (UID: \"35ee2046-1d54-4ff1-a512-060c6c8ad0a3\") " pod="openstack/dnsmasq-dns-6bccbb886f-mstqs" Jan 26 13:16:28 crc kubenswrapper[4844]: I0126 13:16:28.100977 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/35ee2046-1d54-4ff1-a512-060c6c8ad0a3-dns-svc\") pod \"dnsmasq-dns-6bccbb886f-mstqs\" (UID: \"35ee2046-1d54-4ff1-a512-060c6c8ad0a3\") " pod="openstack/dnsmasq-dns-6bccbb886f-mstqs" Jan 26 13:16:28 crc kubenswrapper[4844]: I0126 13:16:28.101023 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbrcs\" (UniqueName: \"kubernetes.io/projected/35ee2046-1d54-4ff1-a512-060c6c8ad0a3-kube-api-access-sbrcs\") pod \"dnsmasq-dns-6bccbb886f-mstqs\" (UID: \"35ee2046-1d54-4ff1-a512-060c6c8ad0a3\") " pod="openstack/dnsmasq-dns-6bccbb886f-mstqs" Jan 26 13:16:28 crc kubenswrapper[4844]: I0126 13:16:28.101126 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35ee2046-1d54-4ff1-a512-060c6c8ad0a3-config\") pod \"dnsmasq-dns-6bccbb886f-mstqs\" (UID: \"35ee2046-1d54-4ff1-a512-060c6c8ad0a3\") " pod="openstack/dnsmasq-dns-6bccbb886f-mstqs" Jan 26 13:16:28 crc 
kubenswrapper[4844]: I0126 13:16:28.103127 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/35ee2046-1d54-4ff1-a512-060c6c8ad0a3-dns-svc\") pod \"dnsmasq-dns-6bccbb886f-mstqs\" (UID: \"35ee2046-1d54-4ff1-a512-060c6c8ad0a3\") " pod="openstack/dnsmasq-dns-6bccbb886f-mstqs" Jan 26 13:16:28 crc kubenswrapper[4844]: I0126 13:16:28.103159 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35ee2046-1d54-4ff1-a512-060c6c8ad0a3-config\") pod \"dnsmasq-dns-6bccbb886f-mstqs\" (UID: \"35ee2046-1d54-4ff1-a512-060c6c8ad0a3\") " pod="openstack/dnsmasq-dns-6bccbb886f-mstqs" Jan 26 13:16:28 crc kubenswrapper[4844]: I0126 13:16:28.124749 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbrcs\" (UniqueName: \"kubernetes.io/projected/35ee2046-1d54-4ff1-a512-060c6c8ad0a3-kube-api-access-sbrcs\") pod \"dnsmasq-dns-6bccbb886f-mstqs\" (UID: \"35ee2046-1d54-4ff1-a512-060c6c8ad0a3\") " pod="openstack/dnsmasq-dns-6bccbb886f-mstqs" Jan 26 13:16:28 crc kubenswrapper[4844]: I0126 13:16:28.176941 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bccbb886f-mstqs" Jan 26 13:16:28 crc kubenswrapper[4844]: I0126 13:16:28.180377 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75f87779c-fqxxt"] Jan 26 13:16:28 crc kubenswrapper[4844]: I0126 13:16:28.218098 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-559648544f-cwdch"] Jan 26 13:16:28 crc kubenswrapper[4844]: I0126 13:16:28.219942 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-559648544f-cwdch" Jan 26 13:16:28 crc kubenswrapper[4844]: I0126 13:16:28.226067 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-559648544f-cwdch"] Jan 26 13:16:28 crc kubenswrapper[4844]: I0126 13:16:28.410338 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/58298af3-1f5e-464f-9af7-70f300b48267-dns-svc\") pod \"dnsmasq-dns-559648544f-cwdch\" (UID: \"58298af3-1f5e-464f-9af7-70f300b48267\") " pod="openstack/dnsmasq-dns-559648544f-cwdch" Jan 26 13:16:28 crc kubenswrapper[4844]: I0126 13:16:28.410667 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqh8z\" (UniqueName: \"kubernetes.io/projected/58298af3-1f5e-464f-9af7-70f300b48267-kube-api-access-qqh8z\") pod \"dnsmasq-dns-559648544f-cwdch\" (UID: \"58298af3-1f5e-464f-9af7-70f300b48267\") " pod="openstack/dnsmasq-dns-559648544f-cwdch" Jan 26 13:16:28 crc kubenswrapper[4844]: I0126 13:16:28.410720 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58298af3-1f5e-464f-9af7-70f300b48267-config\") pod \"dnsmasq-dns-559648544f-cwdch\" (UID: \"58298af3-1f5e-464f-9af7-70f300b48267\") " pod="openstack/dnsmasq-dns-559648544f-cwdch" Jan 26 13:16:28 crc kubenswrapper[4844]: I0126 13:16:28.467182 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bccbb886f-mstqs"] Jan 26 13:16:28 crc kubenswrapper[4844]: I0126 13:16:28.482009 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-668dp" Jan 26 13:16:28 crc kubenswrapper[4844]: I0126 13:16:28.498892 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6d9656c78f-bv48c"] Jan 26 13:16:28 crc kubenswrapper[4844]: E0126 13:16:28.499162 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7168ed0-bfcd-4904-8708-4ab86671ebcc" containerName="extract-utilities" Jan 26 13:16:28 crc kubenswrapper[4844]: I0126 13:16:28.499173 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7168ed0-bfcd-4904-8708-4ab86671ebcc" containerName="extract-utilities" Jan 26 13:16:28 crc kubenswrapper[4844]: E0126 13:16:28.499187 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7168ed0-bfcd-4904-8708-4ab86671ebcc" containerName="registry-server" Jan 26 13:16:28 crc kubenswrapper[4844]: I0126 13:16:28.499193 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7168ed0-bfcd-4904-8708-4ab86671ebcc" containerName="registry-server" Jan 26 13:16:28 crc kubenswrapper[4844]: E0126 13:16:28.499207 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7168ed0-bfcd-4904-8708-4ab86671ebcc" containerName="extract-content" Jan 26 13:16:28 crc kubenswrapper[4844]: I0126 13:16:28.499213 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7168ed0-bfcd-4904-8708-4ab86671ebcc" containerName="extract-content" Jan 26 13:16:28 crc kubenswrapper[4844]: I0126 13:16:28.499369 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7168ed0-bfcd-4904-8708-4ab86671ebcc" containerName="registry-server" Jan 26 13:16:28 crc kubenswrapper[4844]: I0126 13:16:28.514616 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7168ed0-bfcd-4904-8708-4ab86671ebcc-utilities\") pod \"c7168ed0-bfcd-4904-8708-4ab86671ebcc\" (UID: \"c7168ed0-bfcd-4904-8708-4ab86671ebcc\") " Jan 26 13:16:28 crc kubenswrapper[4844]: I0126 13:16:28.514677 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7168ed0-bfcd-4904-8708-4ab86671ebcc-catalog-content\") pod \"c7168ed0-bfcd-4904-8708-4ab86671ebcc\" (UID: \"c7168ed0-bfcd-4904-8708-4ab86671ebcc\") " Jan 26 13:16:28 crc kubenswrapper[4844]: I0126 13:16:28.514708 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cnqcq\" (UniqueName: \"kubernetes.io/projected/c7168ed0-bfcd-4904-8708-4ab86671ebcc-kube-api-access-cnqcq\") pod \"c7168ed0-bfcd-4904-8708-4ab86671ebcc\" (UID: \"c7168ed0-bfcd-4904-8708-4ab86671ebcc\") " Jan 26 13:16:28 crc kubenswrapper[4844]: I0126 13:16:28.514804 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/58298af3-1f5e-464f-9af7-70f300b48267-dns-svc\") pod \"dnsmasq-dns-559648544f-cwdch\" (UID: \"58298af3-1f5e-464f-9af7-70f300b48267\") " pod="openstack/dnsmasq-dns-559648544f-cwdch" Jan 26 13:16:28 crc kubenswrapper[4844]: I0126 13:16:28.514859 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqh8z\" (UniqueName: \"kubernetes.io/projected/58298af3-1f5e-464f-9af7-70f300b48267-kube-api-access-qqh8z\") pod \"dnsmasq-dns-559648544f-cwdch\" (UID: \"58298af3-1f5e-464f-9af7-70f300b48267\") " pod="openstack/dnsmasq-dns-559648544f-cwdch" Jan 26 13:16:28 crc kubenswrapper[4844]: I0126 13:16:28.514878 4844 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58298af3-1f5e-464f-9af7-70f300b48267-config\") pod \"dnsmasq-dns-559648544f-cwdch\" (UID: \"58298af3-1f5e-464f-9af7-70f300b48267\") " pod="openstack/dnsmasq-dns-559648544f-cwdch" Jan 26 13:16:28 crc kubenswrapper[4844]: I0126 13:16:28.516040 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58298af3-1f5e-464f-9af7-70f300b48267-config\") pod \"dnsmasq-dns-559648544f-cwdch\" (UID: \"58298af3-1f5e-464f-9af7-70f300b48267\") " pod="openstack/dnsmasq-dns-559648544f-cwdch" Jan 26 13:16:28 crc kubenswrapper[4844]: I0126 13:16:28.516349 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/58298af3-1f5e-464f-9af7-70f300b48267-dns-svc\") pod \"dnsmasq-dns-559648544f-cwdch\" (UID: \"58298af3-1f5e-464f-9af7-70f300b48267\") " pod="openstack/dnsmasq-dns-559648544f-cwdch" Jan 26 13:16:28 crc kubenswrapper[4844]: I0126 13:16:28.530340 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7168ed0-bfcd-4904-8708-4ab86671ebcc-utilities" (OuterVolumeSpecName: "utilities") pod "c7168ed0-bfcd-4904-8708-4ab86671ebcc" (UID: "c7168ed0-bfcd-4904-8708-4ab86671ebcc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:16:28 crc kubenswrapper[4844]: I0126 13:16:28.539003 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7168ed0-bfcd-4904-8708-4ab86671ebcc-kube-api-access-cnqcq" (OuterVolumeSpecName: "kube-api-access-cnqcq") pod "c7168ed0-bfcd-4904-8708-4ab86671ebcc" (UID: "c7168ed0-bfcd-4904-8708-4ab86671ebcc"). InnerVolumeSpecName "kube-api-access-cnqcq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:16:28 crc kubenswrapper[4844]: I0126 13:16:28.549297 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d9656c78f-bv48c"] Jan 26 13:16:28 crc kubenswrapper[4844]: I0126 13:16:28.549403 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d9656c78f-bv48c" Jan 26 13:16:28 crc kubenswrapper[4844]: I0126 13:16:28.550750 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqh8z\" (UniqueName: \"kubernetes.io/projected/58298af3-1f5e-464f-9af7-70f300b48267-kube-api-access-qqh8z\") pod \"dnsmasq-dns-559648544f-cwdch\" (UID: \"58298af3-1f5e-464f-9af7-70f300b48267\") " pod="openstack/dnsmasq-dns-559648544f-cwdch" Jan 26 13:16:28 crc kubenswrapper[4844]: I0126 13:16:28.621414 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7168ed0-bfcd-4904-8708-4ab86671ebcc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c7168ed0-bfcd-4904-8708-4ab86671ebcc" (UID: "c7168ed0-bfcd-4904-8708-4ab86671ebcc"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:16:28 crc kubenswrapper[4844]: I0126 13:16:28.622082 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/149ed01d-9763-4c6d-b17f-79b6e76b110f-config\") pod \"dnsmasq-dns-6d9656c78f-bv48c\" (UID: \"149ed01d-9763-4c6d-b17f-79b6e76b110f\") " pod="openstack/dnsmasq-dns-6d9656c78f-bv48c" Jan 26 13:16:28 crc kubenswrapper[4844]: I0126 13:16:28.622161 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfljv\" (UniqueName: \"kubernetes.io/projected/149ed01d-9763-4c6d-b17f-79b6e76b110f-kube-api-access-vfljv\") pod \"dnsmasq-dns-6d9656c78f-bv48c\" (UID: \"149ed01d-9763-4c6d-b17f-79b6e76b110f\") " pod="openstack/dnsmasq-dns-6d9656c78f-bv48c" Jan 26 13:16:28 crc kubenswrapper[4844]: I0126 13:16:28.622220 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/149ed01d-9763-4c6d-b17f-79b6e76b110f-dns-svc\") pod \"dnsmasq-dns-6d9656c78f-bv48c\" (UID: \"149ed01d-9763-4c6d-b17f-79b6e76b110f\") " pod="openstack/dnsmasq-dns-6d9656c78f-bv48c" Jan 26 13:16:28 crc kubenswrapper[4844]: I0126 13:16:28.622395 4844 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7168ed0-bfcd-4904-8708-4ab86671ebcc-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 13:16:28 crc kubenswrapper[4844]: I0126 13:16:28.622410 4844 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7168ed0-bfcd-4904-8708-4ab86671ebcc-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 13:16:28 crc kubenswrapper[4844]: I0126 13:16:28.622420 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cnqcq\" (UniqueName: \"kubernetes.io/projected/c7168ed0-bfcd-4904-8708-4ab86671ebcc-kube-api-access-cnqcq\") on node \"crc\" DevicePath \"\"" Jan 26 13:16:28 crc kubenswrapper[4844]: I0126 13:16:28.624514 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-559648544f-cwdch" Jan 26 13:16:28 crc kubenswrapper[4844]: I0126 13:16:28.726332 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfljv\" (UniqueName: \"kubernetes.io/projected/149ed01d-9763-4c6d-b17f-79b6e76b110f-kube-api-access-vfljv\") pod \"dnsmasq-dns-6d9656c78f-bv48c\" (UID: \"149ed01d-9763-4c6d-b17f-79b6e76b110f\") " pod="openstack/dnsmasq-dns-6d9656c78f-bv48c" Jan 26 13:16:28 crc kubenswrapper[4844]: I0126 13:16:28.726391 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/149ed01d-9763-4c6d-b17f-79b6e76b110f-dns-svc\") pod \"dnsmasq-dns-6d9656c78f-bv48c\" (UID: \"149ed01d-9763-4c6d-b17f-79b6e76b110f\") " pod="openstack/dnsmasq-dns-6d9656c78f-bv48c" Jan 26 13:16:28 crc kubenswrapper[4844]: I0126 13:16:28.726454 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/149ed01d-9763-4c6d-b17f-79b6e76b110f-config\") pod \"dnsmasq-dns-6d9656c78f-bv48c\" (UID: \"149ed01d-9763-4c6d-b17f-79b6e76b110f\") " pod="openstack/dnsmasq-dns-6d9656c78f-bv48c" Jan 26 13:16:28 crc kubenswrapper[4844]: I0126 13:16:28.727236 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/149ed01d-9763-4c6d-b17f-79b6e76b110f-config\") pod \"dnsmasq-dns-6d9656c78f-bv48c\" (UID: \"149ed01d-9763-4c6d-b17f-79b6e76b110f\") " pod="openstack/dnsmasq-dns-6d9656c78f-bv48c" Jan 26 13:16:28 crc kubenswrapper[4844]: I0126 13:16:28.727983 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/149ed01d-9763-4c6d-b17f-79b6e76b110f-dns-svc\") pod \"dnsmasq-dns-6d9656c78f-bv48c\" (UID: \"149ed01d-9763-4c6d-b17f-79b6e76b110f\") " pod="openstack/dnsmasq-dns-6d9656c78f-bv48c" Jan 26 13:16:28 crc kubenswrapper[4844]: I0126 13:16:28.750849 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfljv\" (UniqueName: \"kubernetes.io/projected/149ed01d-9763-4c6d-b17f-79b6e76b110f-kube-api-access-vfljv\") pod \"dnsmasq-dns-6d9656c78f-bv48c\" (UID: \"149ed01d-9763-4c6d-b17f-79b6e76b110f\") " pod="openstack/dnsmasq-dns-6d9656c78f-bv48c" Jan 26 13:16:28 crc kubenswrapper[4844]: I0126 13:16:28.847149 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bccbb886f-mstqs"] Jan 26 13:16:28 crc kubenswrapper[4844]: W0126 13:16:28.877151 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod35ee2046_1d54_4ff1_a512_060c6c8ad0a3.slice/crio-6905b2281da11493ad02c2ca2b173bc9dcd71916fd676f45cb8f312ac28c91e5 WatchSource:0}: Error finding container 6905b2281da11493ad02c2ca2b173bc9dcd71916fd676f45cb8f312ac28c91e5: Status 404 returned error can't find the container with id 6905b2281da11493ad02c2ca2b173bc9dcd71916fd676f45cb8f312ac28c91e5 Jan 26 13:16:28 crc kubenswrapper[4844]: I0126 13:16:28.882172 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d9656c78f-bv48c" Jan 26 13:16:28 crc kubenswrapper[4844]: I0126 13:16:28.893875 4844 generic.go:334] "Generic (PLEG): container finished" podID="c7168ed0-bfcd-4904-8708-4ab86671ebcc" containerID="e2ef081259d65d0c797a38b066cee2af793e4fcfb487d0c5415d54db072b05b0" exitCode=0 Jan 26 13:16:28 crc kubenswrapper[4844]: I0126 13:16:28.894667 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-668dp" Jan 26 13:16:28 crc kubenswrapper[4844]: I0126 13:16:28.895704 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-668dp" event={"ID":"c7168ed0-bfcd-4904-8708-4ab86671ebcc","Type":"ContainerDied","Data":"e2ef081259d65d0c797a38b066cee2af793e4fcfb487d0c5415d54db072b05b0"} Jan 26 13:16:28 crc kubenswrapper[4844]: I0126 13:16:28.895759 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-668dp" event={"ID":"c7168ed0-bfcd-4904-8708-4ab86671ebcc","Type":"ContainerDied","Data":"bf73745db7b7d06a0c319274c546e57d0bf6277265bddd87f5a34b25acfd82ce"} Jan 26 13:16:28 crc kubenswrapper[4844]: I0126 13:16:28.895779 4844 scope.go:117] "RemoveContainer" containerID="e2ef081259d65d0c797a38b066cee2af793e4fcfb487d0c5415d54db072b05b0" Jan 26 13:16:28 crc kubenswrapper[4844]: I0126 13:16:28.937672 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-668dp"] Jan 26 13:16:28 crc kubenswrapper[4844]: I0126 13:16:28.939433 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-668dp"] Jan 26 13:16:28 crc kubenswrapper[4844]: I0126 13:16:28.953733 4844 scope.go:117] "RemoveContainer" containerID="16eef6f4f314fff596ace4a1a75ed812f752feb4c2a9a6d055290cabc52750d5" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.004450 4844 scope.go:117] "RemoveContainer" containerID="5ef36018ce036cb1ff761655f82e8ba58088c557727e7ae3f1ee2a475d418203" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.007937 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.009419 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.016569 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.023847 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9fhfr" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.024002 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-4hbj2" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.029699 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.030011 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.030167 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.030302 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.030426 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.030522 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.056314 4844 scope.go:117] "RemoveContainer" containerID="e2ef081259d65d0c797a38b066cee2af793e4fcfb487d0c5415d54db072b05b0" Jan 26 13:16:29 crc kubenswrapper[4844]: E0126 13:16:29.057776 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2ef081259d65d0c797a38b066cee2af793e4fcfb487d0c5415d54db072b05b0\": container with ID starting with e2ef081259d65d0c797a38b066cee2af793e4fcfb487d0c5415d54db072b05b0 not found: ID does not exist" containerID="e2ef081259d65d0c797a38b066cee2af793e4fcfb487d0c5415d54db072b05b0" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.057810 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2ef081259d65d0c797a38b066cee2af793e4fcfb487d0c5415d54db072b05b0"} err="failed to get container status \"e2ef081259d65d0c797a38b066cee2af793e4fcfb487d0c5415d54db072b05b0\": rpc error: code = NotFound desc = could not find container \"e2ef081259d65d0c797a38b066cee2af793e4fcfb487d0c5415d54db072b05b0\": container with ID starting with e2ef081259d65d0c797a38b066cee2af793e4fcfb487d0c5415d54db072b05b0 not found: ID does not exist" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.057830 4844 scope.go:117] "RemoveContainer" containerID="16eef6f4f314fff596ace4a1a75ed812f752feb4c2a9a6d055290cabc52750d5" Jan 26 13:16:29 crc kubenswrapper[4844]: E0126 13:16:29.058219 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"16eef6f4f314fff596ace4a1a75ed812f752feb4c2a9a6d055290cabc52750d5\": container with ID starting with 16eef6f4f314fff596ace4a1a75ed812f752feb4c2a9a6d055290cabc52750d5 not found: ID does not exist" containerID="16eef6f4f314fff596ace4a1a75ed812f752feb4c2a9a6d055290cabc52750d5" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.058247 4844 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16eef6f4f314fff596ace4a1a75ed812f752feb4c2a9a6d055290cabc52750d5"} err="failed to get container status \"16eef6f4f314fff596ace4a1a75ed812f752feb4c2a9a6d055290cabc52750d5\": rpc error: code = NotFound desc = could not find container \"16eef6f4f314fff596ace4a1a75ed812f752feb4c2a9a6d055290cabc52750d5\": container with ID starting with 16eef6f4f314fff596ace4a1a75ed812f752feb4c2a9a6d055290cabc52750d5 not found: ID does not exist" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.058262 4844 scope.go:117] "RemoveContainer" containerID="5ef36018ce036cb1ff761655f82e8ba58088c557727e7ae3f1ee2a475d418203" Jan 26 13:16:29 crc kubenswrapper[4844]: E0126 13:16:29.058524 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ef36018ce036cb1ff761655f82e8ba58088c557727e7ae3f1ee2a475d418203\": container with ID starting with 5ef36018ce036cb1ff761655f82e8ba58088c557727e7ae3f1ee2a475d418203 not found: ID does not exist" containerID="5ef36018ce036cb1ff761655f82e8ba58088c557727e7ae3f1ee2a475d418203" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.058547 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ef36018ce036cb1ff761655f82e8ba58088c557727e7ae3f1ee2a475d418203"} err="failed to get container status \"5ef36018ce036cb1ff761655f82e8ba58088c557727e7ae3f1ee2a475d418203\": rpc error: code = NotFound desc = could not find container \"5ef36018ce036cb1ff761655f82e8ba58088c557727e7ae3f1ee2a475d418203\": container with ID starting with 5ef36018ce036cb1ff761655f82e8ba58088c557727e7ae3f1ee2a475d418203 not found: ID does not exist" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.136820 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e48f1161-14d0-42c1-b6ac-bdb8bce26985-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"e48f1161-14d0-42c1-b6ac-bdb8bce26985\") " pod="openstack/rabbitmq-server-0" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.136873 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e48f1161-14d0-42c1-b6ac-bdb8bce26985-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"e48f1161-14d0-42c1-b6ac-bdb8bce26985\") " pod="openstack/rabbitmq-server-0" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.136901 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e48f1161-14d0-42c1-b6ac-bdb8bce26985-config-data\") pod \"rabbitmq-server-0\" (UID: \"e48f1161-14d0-42c1-b6ac-bdb8bce26985\") " pod="openstack/rabbitmq-server-0" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.136939 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e48f1161-14d0-42c1-b6ac-bdb8bce26985-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"e48f1161-14d0-42c1-b6ac-bdb8bce26985\") " pod="openstack/rabbitmq-server-0" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.136968 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/e48f1161-14d0-42c1-b6ac-bdb8bce26985-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"e48f1161-14d0-42c1-b6ac-bdb8bce26985\") " pod="openstack/rabbitmq-server-0" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.136990 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e48f1161-14d0-42c1-b6ac-bdb8bce26985-server-conf\") pod \"rabbitmq-server-0\" (UID: \"e48f1161-14d0-42c1-b6ac-bdb8bce26985\") " pod="openstack/rabbitmq-server-0" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.137004 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e48f1161-14d0-42c1-b6ac-bdb8bce26985-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"e48f1161-14d0-42c1-b6ac-bdb8bce26985\") " pod="openstack/rabbitmq-server-0" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.137029 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4726\" (UniqueName: \"kubernetes.io/projected/e48f1161-14d0-42c1-b6ac-bdb8bce26985-kube-api-access-l4726\") pod \"rabbitmq-server-0\" (UID: \"e48f1161-14d0-42c1-b6ac-bdb8bce26985\") " pod="openstack/rabbitmq-server-0" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.137050 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"e48f1161-14d0-42c1-b6ac-bdb8bce26985\") " pod="openstack/rabbitmq-server-0" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.137065 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e48f1161-14d0-42c1-b6ac-bdb8bce26985-pod-info\") pod \"rabbitmq-server-0\" (UID: \"e48f1161-14d0-42c1-b6ac-bdb8bce26985\") " pod="openstack/rabbitmq-server-0" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.137084 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e48f1161-14d0-42c1-b6ac-bdb8bce26985-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"e48f1161-14d0-42c1-b6ac-bdb8bce26985\") " pod="openstack/rabbitmq-server-0" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.238809 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e48f1161-14d0-42c1-b6ac-bdb8bce26985-server-conf\") pod \"rabbitmq-server-0\" (UID: \"e48f1161-14d0-42c1-b6ac-bdb8bce26985\") " pod="openstack/rabbitmq-server-0" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.238862 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e48f1161-14d0-42c1-b6ac-bdb8bce26985-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"e48f1161-14d0-42c1-b6ac-bdb8bce26985\") " pod="openstack/rabbitmq-server-0" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.238944 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4726\" (UniqueName: \"kubernetes.io/projected/e48f1161-14d0-42c1-b6ac-bdb8bce26985-kube-api-access-l4726\") pod \"rabbitmq-server-0\" (UID: 
\"e48f1161-14d0-42c1-b6ac-bdb8bce26985\") " pod="openstack/rabbitmq-server-0" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.238982 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"e48f1161-14d0-42c1-b6ac-bdb8bce26985\") " pod="openstack/rabbitmq-server-0" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.239007 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e48f1161-14d0-42c1-b6ac-bdb8bce26985-pod-info\") pod \"rabbitmq-server-0\" (UID: \"e48f1161-14d0-42c1-b6ac-bdb8bce26985\") " pod="openstack/rabbitmq-server-0" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.239033 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e48f1161-14d0-42c1-b6ac-bdb8bce26985-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"e48f1161-14d0-42c1-b6ac-bdb8bce26985\") " pod="openstack/rabbitmq-server-0" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.239072 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e48f1161-14d0-42c1-b6ac-bdb8bce26985-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"e48f1161-14d0-42c1-b6ac-bdb8bce26985\") " pod="openstack/rabbitmq-server-0" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.239109 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e48f1161-14d0-42c1-b6ac-bdb8bce26985-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"e48f1161-14d0-42c1-b6ac-bdb8bce26985\") " pod="openstack/rabbitmq-server-0" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.239147 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e48f1161-14d0-42c1-b6ac-bdb8bce26985-config-data\") pod \"rabbitmq-server-0\" (UID: \"e48f1161-14d0-42c1-b6ac-bdb8bce26985\") " pod="openstack/rabbitmq-server-0" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.239205 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e48f1161-14d0-42c1-b6ac-bdb8bce26985-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"e48f1161-14d0-42c1-b6ac-bdb8bce26985\") " pod="openstack/rabbitmq-server-0" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.239245 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e48f1161-14d0-42c1-b6ac-bdb8bce26985-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"e48f1161-14d0-42c1-b6ac-bdb8bce26985\") " pod="openstack/rabbitmq-server-0" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.241567 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e48f1161-14d0-42c1-b6ac-bdb8bce26985-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"e48f1161-14d0-42c1-b6ac-bdb8bce26985\") " pod="openstack/rabbitmq-server-0" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.242410 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/e48f1161-14d0-42c1-b6ac-bdb8bce26985-server-conf\") pod \"rabbitmq-server-0\" (UID: \"e48f1161-14d0-42c1-b6ac-bdb8bce26985\") " pod="openstack/rabbitmq-server-0" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.243434 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e48f1161-14d0-42c1-b6ac-bdb8bce26985-config-data\") pod \"rabbitmq-server-0\" (UID: \"e48f1161-14d0-42c1-b6ac-bdb8bce26985\") " pod="openstack/rabbitmq-server-0" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.243671 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e48f1161-14d0-42c1-b6ac-bdb8bce26985-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"e48f1161-14d0-42c1-b6ac-bdb8bce26985\") " pod="openstack/rabbitmq-server-0" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.244066 4844 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"e48f1161-14d0-42c1-b6ac-bdb8bce26985\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/rabbitmq-server-0" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.244267 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e48f1161-14d0-42c1-b6ac-bdb8bce26985-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"e48f1161-14d0-42c1-b6ac-bdb8bce26985\") " pod="openstack/rabbitmq-server-0" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.245332 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e48f1161-14d0-42c1-b6ac-bdb8bce26985-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"e48f1161-14d0-42c1-b6ac-bdb8bce26985\") " pod="openstack/rabbitmq-server-0" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.245339 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e48f1161-14d0-42c1-b6ac-bdb8bce26985-pod-info\") pod \"rabbitmq-server-0\" (UID: \"e48f1161-14d0-42c1-b6ac-bdb8bce26985\") " pod="openstack/rabbitmq-server-0" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.249564 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e48f1161-14d0-42c1-b6ac-bdb8bce26985-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"e48f1161-14d0-42c1-b6ac-bdb8bce26985\") " pod="openstack/rabbitmq-server-0" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.249734 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e48f1161-14d0-42c1-b6ac-bdb8bce26985-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"e48f1161-14d0-42c1-b6ac-bdb8bce26985\") " pod="openstack/rabbitmq-server-0" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.262650 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4726\" (UniqueName: \"kubernetes.io/projected/e48f1161-14d0-42c1-b6ac-bdb8bce26985-kube-api-access-l4726\") pod \"rabbitmq-server-0\" (UID: \"e48f1161-14d0-42c1-b6ac-bdb8bce26985\") " pod="openstack/rabbitmq-server-0" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.283310 4844 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"e48f1161-14d0-42c1-b6ac-bdb8bce26985\") " pod="openstack/rabbitmq-server-0" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.339833 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7168ed0-bfcd-4904-8708-4ab86671ebcc" path="/var/lib/kubelet/pods/c7168ed0-bfcd-4904-8708-4ab86671ebcc/volumes" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.340606 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-559648544f-cwdch"] Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.340667 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.346021 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.346111 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.348904 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.351925 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.352536 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.352565 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.352697 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.352786 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-qdtbn" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.352897 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.370817 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.443545 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e8e36a62-9367-4c94-9aff-de8e6166af27-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"e8e36a62-9367-4c94-9aff-de8e6166af27\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.443613 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e8e36a62-9367-4c94-9aff-de8e6166af27-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"e8e36a62-9367-4c94-9aff-de8e6166af27\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.443764 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e8e36a62-9367-4c94-9aff-de8e6166af27-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"e8e36a62-9367-4c94-9aff-de8e6166af27\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.443803 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e8e36a62-9367-4c94-9aff-de8e6166af27-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"e8e36a62-9367-4c94-9aff-de8e6166af27\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.443884 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"e8e36a62-9367-4c94-9aff-de8e6166af27\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.443933 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xffks\" (UniqueName: \"kubernetes.io/projected/e8e36a62-9367-4c94-9aff-de8e6166af27-kube-api-access-xffks\") pod \"rabbitmq-cell1-server-0\" (UID: \"e8e36a62-9367-4c94-9aff-de8e6166af27\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.443977 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e8e36a62-9367-4c94-9aff-de8e6166af27-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e8e36a62-9367-4c94-9aff-de8e6166af27\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.444009 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e8e36a62-9367-4c94-9aff-de8e6166af27-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e8e36a62-9367-4c94-9aff-de8e6166af27\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.444093 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e8e36a62-9367-4c94-9aff-de8e6166af27-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"e8e36a62-9367-4c94-9aff-de8e6166af27\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.444148 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e8e36a62-9367-4c94-9aff-de8e6166af27-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"e8e36a62-9367-4c94-9aff-de8e6166af27\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.444184 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e8e36a62-9367-4c94-9aff-de8e6166af27-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"e8e36a62-9367-4c94-9aff-de8e6166af27\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.477126 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d9656c78f-bv48c"] Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.545123 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xffks\" (UniqueName: \"kubernetes.io/projected/e8e36a62-9367-4c94-9aff-de8e6166af27-kube-api-access-xffks\") pod \"rabbitmq-cell1-server-0\" (UID: \"e8e36a62-9367-4c94-9aff-de8e6166af27\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.545173 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e8e36a62-9367-4c94-9aff-de8e6166af27-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e8e36a62-9367-4c94-9aff-de8e6166af27\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.545194 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e8e36a62-9367-4c94-9aff-de8e6166af27-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e8e36a62-9367-4c94-9aff-de8e6166af27\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.545234 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e8e36a62-9367-4c94-9aff-de8e6166af27-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"e8e36a62-9367-4c94-9aff-de8e6166af27\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.545262 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e8e36a62-9367-4c94-9aff-de8e6166af27-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"e8e36a62-9367-4c94-9aff-de8e6166af27\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.545280 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e8e36a62-9367-4c94-9aff-de8e6166af27-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"e8e36a62-9367-4c94-9aff-de8e6166af27\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.545302 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/e8e36a62-9367-4c94-9aff-de8e6166af27-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"e8e36a62-9367-4c94-9aff-de8e6166af27\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.545322 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e8e36a62-9367-4c94-9aff-de8e6166af27-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"e8e36a62-9367-4c94-9aff-de8e6166af27\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.545350 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e8e36a62-9367-4c94-9aff-de8e6166af27-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"e8e36a62-9367-4c94-9aff-de8e6166af27\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.545366 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e8e36a62-9367-4c94-9aff-de8e6166af27-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"e8e36a62-9367-4c94-9aff-de8e6166af27\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.545431 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"e8e36a62-9367-4c94-9aff-de8e6166af27\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.545697 4844 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"e8e36a62-9367-4c94-9aff-de8e6166af27\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/rabbitmq-cell1-server-0" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.547193 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e8e36a62-9367-4c94-9aff-de8e6166af27-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"e8e36a62-9367-4c94-9aff-de8e6166af27\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.547467 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e8e36a62-9367-4c94-9aff-de8e6166af27-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"e8e36a62-9367-4c94-9aff-de8e6166af27\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.547573 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e8e36a62-9367-4c94-9aff-de8e6166af27-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e8e36a62-9367-4c94-9aff-de8e6166af27\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.548622 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e8e36a62-9367-4c94-9aff-de8e6166af27-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"e8e36a62-9367-4c94-9aff-de8e6166af27\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.550277 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e8e36a62-9367-4c94-9aff-de8e6166af27-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"e8e36a62-9367-4c94-9aff-de8e6166af27\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.551719 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e8e36a62-9367-4c94-9aff-de8e6166af27-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"e8e36a62-9367-4c94-9aff-de8e6166af27\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.552342 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e8e36a62-9367-4c94-9aff-de8e6166af27-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e8e36a62-9367-4c94-9aff-de8e6166af27\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.552509 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e8e36a62-9367-4c94-9aff-de8e6166af27-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"e8e36a62-9367-4c94-9aff-de8e6166af27\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.575557 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e8e36a62-9367-4c94-9aff-de8e6166af27-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"e8e36a62-9367-4c94-9aff-de8e6166af27\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.588935 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xffks\" (UniqueName: \"kubernetes.io/projected/e8e36a62-9367-4c94-9aff-de8e6166af27-kube-api-access-xffks\") pod \"rabbitmq-cell1-server-0\" (UID: \"e8e36a62-9367-4c94-9aff-de8e6166af27\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.597965 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"e8e36a62-9367-4c94-9aff-de8e6166af27\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.633902 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-notifications-server-0"] Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.635266 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-notifications-server-0" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.637417 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-notifications-erlang-cookie" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.637746 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-notifications-plugins-conf" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.639688 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-notifications-server-dockercfg-mxsmb" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.639821 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-notifications-config-data" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.639889 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-notifications-default-user" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.639850 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-notifications-svc" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.640902 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-notifications-server-conf" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.645299 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-notifications-server-0"] Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.674130 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.749011 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/185637e1-efed-452c-ba52-7688909bad2c-config-data\") pod \"rabbitmq-notifications-server-0\" (UID: \"185637e1-efed-452c-ba52-7688909bad2c\") " pod="openstack/rabbitmq-notifications-server-0" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.749402 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/185637e1-efed-452c-ba52-7688909bad2c-server-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"185637e1-efed-452c-ba52-7688909bad2c\") " pod="openstack/rabbitmq-notifications-server-0" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.749464 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/185637e1-efed-452c-ba52-7688909bad2c-plugins-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"185637e1-efed-452c-ba52-7688909bad2c\") " pod="openstack/rabbitmq-notifications-server-0" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.749535 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-notifications-server-0\" (UID: \"185637e1-efed-452c-ba52-7688909bad2c\") " pod="openstack/rabbitmq-notifications-server-0" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.749630 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/185637e1-efed-452c-ba52-7688909bad2c-rabbitmq-confd\") pod \"rabbitmq-notifications-server-0\" (UID: \"185637e1-efed-452c-ba52-7688909bad2c\") " pod="openstack/rabbitmq-notifications-server-0" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.749700 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/185637e1-efed-452c-ba52-7688909bad2c-erlang-cookie-secret\") pod \"rabbitmq-notifications-server-0\" (UID: \"185637e1-efed-452c-ba52-7688909bad2c\") " pod="openstack/rabbitmq-notifications-server-0" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.749729 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9bbr\" (UniqueName: \"kubernetes.io/projected/185637e1-efed-452c-ba52-7688909bad2c-kube-api-access-c9bbr\") pod \"rabbitmq-notifications-server-0\" (UID: \"185637e1-efed-452c-ba52-7688909bad2c\") " pod="openstack/rabbitmq-notifications-server-0" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.749779 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/185637e1-efed-452c-ba52-7688909bad2c-rabbitmq-tls\") pod \"rabbitmq-notifications-server-0\" (UID: \"185637e1-efed-452c-ba52-7688909bad2c\") " pod="openstack/rabbitmq-notifications-server-0" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.749949 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/185637e1-efed-452c-ba52-7688909bad2c-rabbitmq-plugins\") pod \"rabbitmq-notifications-server-0\" (UID: \"185637e1-efed-452c-ba52-7688909bad2c\") " pod="openstack/rabbitmq-notifications-server-0" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.750038 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/185637e1-efed-452c-ba52-7688909bad2c-rabbitmq-erlang-cookie\") pod \"rabbitmq-notifications-server-0\" (UID: \"185637e1-efed-452c-ba52-7688909bad2c\") " pod="openstack/rabbitmq-notifications-server-0" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.750150 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/185637e1-efed-452c-ba52-7688909bad2c-pod-info\") pod \"rabbitmq-notifications-server-0\" (UID: \"185637e1-efed-452c-ba52-7688909bad2c\") " pod="openstack/rabbitmq-notifications-server-0" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.840254 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.853712 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/185637e1-efed-452c-ba52-7688909bad2c-rabbitmq-tls\") pod \"rabbitmq-notifications-server-0\" (UID: \"185637e1-efed-452c-ba52-7688909bad2c\") " pod="openstack/rabbitmq-notifications-server-0" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.853831 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/185637e1-efed-452c-ba52-7688909bad2c-rabbitmq-plugins\") pod 
\"rabbitmq-notifications-server-0\" (UID: \"185637e1-efed-452c-ba52-7688909bad2c\") " pod="openstack/rabbitmq-notifications-server-0" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.853855 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/185637e1-efed-452c-ba52-7688909bad2c-rabbitmq-erlang-cookie\") pod \"rabbitmq-notifications-server-0\" (UID: \"185637e1-efed-452c-ba52-7688909bad2c\") " pod="openstack/rabbitmq-notifications-server-0" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.853885 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/185637e1-efed-452c-ba52-7688909bad2c-pod-info\") pod \"rabbitmq-notifications-server-0\" (UID: \"185637e1-efed-452c-ba52-7688909bad2c\") " pod="openstack/rabbitmq-notifications-server-0" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.853929 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/185637e1-efed-452c-ba52-7688909bad2c-config-data\") pod \"rabbitmq-notifications-server-0\" (UID: \"185637e1-efed-452c-ba52-7688909bad2c\") " pod="openstack/rabbitmq-notifications-server-0" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.853951 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/185637e1-efed-452c-ba52-7688909bad2c-server-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"185637e1-efed-452c-ba52-7688909bad2c\") " pod="openstack/rabbitmq-notifications-server-0" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.853993 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/185637e1-efed-452c-ba52-7688909bad2c-plugins-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"185637e1-efed-452c-ba52-7688909bad2c\") " pod="openstack/rabbitmq-notifications-server-0" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.854013 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-notifications-server-0\" (UID: \"185637e1-efed-452c-ba52-7688909bad2c\") " pod="openstack/rabbitmq-notifications-server-0" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.854034 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/185637e1-efed-452c-ba52-7688909bad2c-rabbitmq-confd\") pod \"rabbitmq-notifications-server-0\" (UID: \"185637e1-efed-452c-ba52-7688909bad2c\") " pod="openstack/rabbitmq-notifications-server-0" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.854061 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/185637e1-efed-452c-ba52-7688909bad2c-erlang-cookie-secret\") pod \"rabbitmq-notifications-server-0\" (UID: \"185637e1-efed-452c-ba52-7688909bad2c\") " pod="openstack/rabbitmq-notifications-server-0" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.854076 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9bbr\" (UniqueName: \"kubernetes.io/projected/185637e1-efed-452c-ba52-7688909bad2c-kube-api-access-c9bbr\") pod 
\"rabbitmq-notifications-server-0\" (UID: \"185637e1-efed-452c-ba52-7688909bad2c\") " pod="openstack/rabbitmq-notifications-server-0" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.857808 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/185637e1-efed-452c-ba52-7688909bad2c-server-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"185637e1-efed-452c-ba52-7688909bad2c\") " pod="openstack/rabbitmq-notifications-server-0" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.858038 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/185637e1-efed-452c-ba52-7688909bad2c-plugins-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"185637e1-efed-452c-ba52-7688909bad2c\") " pod="openstack/rabbitmq-notifications-server-0" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.858089 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/185637e1-efed-452c-ba52-7688909bad2c-rabbitmq-plugins\") pod \"rabbitmq-notifications-server-0\" (UID: \"185637e1-efed-452c-ba52-7688909bad2c\") " pod="openstack/rabbitmq-notifications-server-0" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.858276 4844 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-notifications-server-0\" (UID: \"185637e1-efed-452c-ba52-7688909bad2c\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/rabbitmq-notifications-server-0" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.858913 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/185637e1-efed-452c-ba52-7688909bad2c-rabbitmq-erlang-cookie\") pod \"rabbitmq-notifications-server-0\" (UID: \"185637e1-efed-452c-ba52-7688909bad2c\") " pod="openstack/rabbitmq-notifications-server-0" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.862535 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/185637e1-efed-452c-ba52-7688909bad2c-config-data\") pod \"rabbitmq-notifications-server-0\" (UID: \"185637e1-efed-452c-ba52-7688909bad2c\") " pod="openstack/rabbitmq-notifications-server-0" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.866536 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/185637e1-efed-452c-ba52-7688909bad2c-rabbitmq-tls\") pod \"rabbitmq-notifications-server-0\" (UID: \"185637e1-efed-452c-ba52-7688909bad2c\") " pod="openstack/rabbitmq-notifications-server-0" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.869046 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/185637e1-efed-452c-ba52-7688909bad2c-pod-info\") pod \"rabbitmq-notifications-server-0\" (UID: \"185637e1-efed-452c-ba52-7688909bad2c\") " pod="openstack/rabbitmq-notifications-server-0" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.871379 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/185637e1-efed-452c-ba52-7688909bad2c-erlang-cookie-secret\") pod \"rabbitmq-notifications-server-0\" (UID: 
\"185637e1-efed-452c-ba52-7688909bad2c\") " pod="openstack/rabbitmq-notifications-server-0" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.879408 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/185637e1-efed-452c-ba52-7688909bad2c-rabbitmq-confd\") pod \"rabbitmq-notifications-server-0\" (UID: \"185637e1-efed-452c-ba52-7688909bad2c\") " pod="openstack/rabbitmq-notifications-server-0" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.885541 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9bbr\" (UniqueName: \"kubernetes.io/projected/185637e1-efed-452c-ba52-7688909bad2c-kube-api-access-c9bbr\") pod \"rabbitmq-notifications-server-0\" (UID: \"185637e1-efed-452c-ba52-7688909bad2c\") " pod="openstack/rabbitmq-notifications-server-0" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.908426 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-559648544f-cwdch" event={"ID":"58298af3-1f5e-464f-9af7-70f300b48267","Type":"ContainerStarted","Data":"b311d993e6e5078b9daf68ce04927a63a13f727f71025337a5334ceb86a88d8d"} Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.908480 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-notifications-server-0\" (UID: \"185637e1-efed-452c-ba52-7688909bad2c\") " pod="openstack/rabbitmq-notifications-server-0" Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.910139 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bccbb886f-mstqs" event={"ID":"35ee2046-1d54-4ff1-a512-060c6c8ad0a3","Type":"ContainerStarted","Data":"6905b2281da11493ad02c2ca2b173bc9dcd71916fd676f45cb8f312ac28c91e5"} Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.917971 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d9656c78f-bv48c" event={"ID":"149ed01d-9763-4c6d-b17f-79b6e76b110f","Type":"ContainerStarted","Data":"bb56dcb2c38f05a6ae8b3a58557b93f500a8acbee264f82b454c1c02c885054d"} Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.920091 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e48f1161-14d0-42c1-b6ac-bdb8bce26985","Type":"ContainerStarted","Data":"f038c4bfb9b42aa2adb867b5ff99cb4b7376dfdced5df30a83c1787eabed4214"} Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.923663 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9fhfr"] Jan 26 13:16:29 crc kubenswrapper[4844]: I0126 13:16:29.969803 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-notifications-server-0" Jan 26 13:16:30 crc kubenswrapper[4844]: I0126 13:16:30.186159 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 26 13:16:30 crc kubenswrapper[4844]: I0126 13:16:30.542468 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-notifications-server-0"] Jan 26 13:16:30 crc kubenswrapper[4844]: W0126 13:16:30.550659 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod185637e1_efed_452c_ba52_7688909bad2c.slice/crio-88019874e2ebc26ddb0a7fb749dcdb91e6356a89e8852488e9779e1a30943c7b WatchSource:0}: Error finding container 88019874e2ebc26ddb0a7fb749dcdb91e6356a89e8852488e9779e1a30943c7b: Status 404 returned error can't find the container with id 88019874e2ebc26ddb0a7fb749dcdb91e6356a89e8852488e9779e1a30943c7b Jan 26 13:16:30 crc kubenswrapper[4844]: I0126 13:16:30.929831 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e8e36a62-9367-4c94-9aff-de8e6166af27","Type":"ContainerStarted","Data":"c6174e2ee6e8cf26deebd5aa8da5645beddd300ab6400a0ea5227615a329e3a1"} Jan 26 13:16:30 crc kubenswrapper[4844]: I0126 13:16:30.932253 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-notifications-server-0" event={"ID":"185637e1-efed-452c-ba52-7688909bad2c","Type":"ContainerStarted","Data":"88019874e2ebc26ddb0a7fb749dcdb91e6356a89e8852488e9779e1a30943c7b"} Jan 26 13:16:30 crc kubenswrapper[4844]: I0126 13:16:30.932771 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9fhfr" podUID="1a72d1a1-fc4b-451c-95f5-fe163e63e95d" containerName="registry-server" containerID="cri-o://af747ed94b42d1607943eb878b8baba218784aef58c779ca40a277e5e6282acd" gracePeriod=2 Jan 26 13:16:31 crc kubenswrapper[4844]: I0126 13:16:31.131263 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 26 13:16:31 crc kubenswrapper[4844]: I0126 13:16:31.145075 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 26 13:16:31 crc kubenswrapper[4844]: I0126 13:16:31.145206 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 26 13:16:31 crc kubenswrapper[4844]: I0126 13:16:31.151099 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 26 13:16:31 crc kubenswrapper[4844]: I0126 13:16:31.151365 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 26 13:16:31 crc kubenswrapper[4844]: I0126 13:16:31.151568 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 26 13:16:31 crc kubenswrapper[4844]: I0126 13:16:31.151725 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-x4gkw" Jan 26 13:16:31 crc kubenswrapper[4844]: I0126 13:16:31.154731 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 26 13:16:31 crc kubenswrapper[4844]: I0126 13:16:31.281250 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e22ff40-cacd-405d-98f5-f603b17b4e4a-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"7e22ff40-cacd-405d-98f5-f603b17b4e4a\") " pod="openstack/openstack-galera-0" Jan 26 13:16:31 crc kubenswrapper[4844]: I0126 13:16:31.281317 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/7e22ff40-cacd-405d-98f5-f603b17b4e4a-config-data-generated\") pod \"openstack-galera-0\" (UID: \"7e22ff40-cacd-405d-98f5-f603b17b4e4a\") " pod="openstack/openstack-galera-0" Jan 26 13:16:31 crc kubenswrapper[4844]: I0126 13:16:31.281420 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b56bd\" (UniqueName: \"kubernetes.io/projected/7e22ff40-cacd-405d-98f5-f603b17b4e4a-kube-api-access-b56bd\") pod \"openstack-galera-0\" (UID: \"7e22ff40-cacd-405d-98f5-f603b17b4e4a\") " pod="openstack/openstack-galera-0" Jan 26 13:16:31 crc kubenswrapper[4844]: I0126 13:16:31.281453 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/7e22ff40-cacd-405d-98f5-f603b17b4e4a-kolla-config\") pod \"openstack-galera-0\" (UID: \"7e22ff40-cacd-405d-98f5-f603b17b4e4a\") " pod="openstack/openstack-galera-0" Jan 26 13:16:31 crc kubenswrapper[4844]: I0126 13:16:31.281468 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7e22ff40-cacd-405d-98f5-f603b17b4e4a-operator-scripts\") pod \"openstack-galera-0\" (UID: \"7e22ff40-cacd-405d-98f5-f603b17b4e4a\") " pod="openstack/openstack-galera-0" Jan 26 13:16:31 crc kubenswrapper[4844]: I0126 13:16:31.281488 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-galera-0\" (UID: \"7e22ff40-cacd-405d-98f5-f603b17b4e4a\") " pod="openstack/openstack-galera-0" Jan 26 13:16:31 crc kubenswrapper[4844]: I0126 13:16:31.281505 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/7e22ff40-cacd-405d-98f5-f603b17b4e4a-config-data-default\") pod \"openstack-galera-0\" 
(UID: \"7e22ff40-cacd-405d-98f5-f603b17b4e4a\") " pod="openstack/openstack-galera-0" Jan 26 13:16:31 crc kubenswrapper[4844]: I0126 13:16:31.281807 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e22ff40-cacd-405d-98f5-f603b17b4e4a-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"7e22ff40-cacd-405d-98f5-f603b17b4e4a\") " pod="openstack/openstack-galera-0" Jan 26 13:16:31 crc kubenswrapper[4844]: I0126 13:16:31.389457 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/7e22ff40-cacd-405d-98f5-f603b17b4e4a-config-data-generated\") pod \"openstack-galera-0\" (UID: \"7e22ff40-cacd-405d-98f5-f603b17b4e4a\") " pod="openstack/openstack-galera-0" Jan 26 13:16:31 crc kubenswrapper[4844]: I0126 13:16:31.390864 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b56bd\" (UniqueName: \"kubernetes.io/projected/7e22ff40-cacd-405d-98f5-f603b17b4e4a-kube-api-access-b56bd\") pod \"openstack-galera-0\" (UID: \"7e22ff40-cacd-405d-98f5-f603b17b4e4a\") " pod="openstack/openstack-galera-0" Jan 26 13:16:31 crc kubenswrapper[4844]: I0126 13:16:31.390944 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/7e22ff40-cacd-405d-98f5-f603b17b4e4a-kolla-config\") pod \"openstack-galera-0\" (UID: \"7e22ff40-cacd-405d-98f5-f603b17b4e4a\") " pod="openstack/openstack-galera-0" Jan 26 13:16:31 crc kubenswrapper[4844]: I0126 13:16:31.390996 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7e22ff40-cacd-405d-98f5-f603b17b4e4a-operator-scripts\") pod \"openstack-galera-0\" (UID: \"7e22ff40-cacd-405d-98f5-f603b17b4e4a\") " pod="openstack/openstack-galera-0" Jan 26 13:16:31 crc kubenswrapper[4844]: I0126 13:16:31.391055 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-galera-0\" (UID: \"7e22ff40-cacd-405d-98f5-f603b17b4e4a\") " pod="openstack/openstack-galera-0" Jan 26 13:16:31 crc kubenswrapper[4844]: I0126 13:16:31.391081 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/7e22ff40-cacd-405d-98f5-f603b17b4e4a-config-data-default\") pod \"openstack-galera-0\" (UID: \"7e22ff40-cacd-405d-98f5-f603b17b4e4a\") " pod="openstack/openstack-galera-0" Jan 26 13:16:31 crc kubenswrapper[4844]: I0126 13:16:31.391159 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e22ff40-cacd-405d-98f5-f603b17b4e4a-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"7e22ff40-cacd-405d-98f5-f603b17b4e4a\") " pod="openstack/openstack-galera-0" Jan 26 13:16:31 crc kubenswrapper[4844]: I0126 13:16:31.392226 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e22ff40-cacd-405d-98f5-f603b17b4e4a-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"7e22ff40-cacd-405d-98f5-f603b17b4e4a\") " pod="openstack/openstack-galera-0" Jan 26 13:16:31 crc kubenswrapper[4844]: I0126 13:16:31.394232 4844 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/7e22ff40-cacd-405d-98f5-f603b17b4e4a-config-data-generated\") pod \"openstack-galera-0\" (UID: \"7e22ff40-cacd-405d-98f5-f603b17b4e4a\") " pod="openstack/openstack-galera-0" Jan 26 13:16:31 crc kubenswrapper[4844]: I0126 13:16:31.395215 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/7e22ff40-cacd-405d-98f5-f603b17b4e4a-config-data-default\") pod \"openstack-galera-0\" (UID: \"7e22ff40-cacd-405d-98f5-f603b17b4e4a\") " pod="openstack/openstack-galera-0" Jan 26 13:16:31 crc kubenswrapper[4844]: I0126 13:16:31.395972 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/7e22ff40-cacd-405d-98f5-f603b17b4e4a-kolla-config\") pod \"openstack-galera-0\" (UID: \"7e22ff40-cacd-405d-98f5-f603b17b4e4a\") " pod="openstack/openstack-galera-0" Jan 26 13:16:31 crc kubenswrapper[4844]: I0126 13:16:31.399531 4844 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-galera-0\" (UID: \"7e22ff40-cacd-405d-98f5-f603b17b4e4a\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/openstack-galera-0" Jan 26 13:16:31 crc kubenswrapper[4844]: I0126 13:16:31.405624 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7e22ff40-cacd-405d-98f5-f603b17b4e4a-operator-scripts\") pod \"openstack-galera-0\" (UID: \"7e22ff40-cacd-405d-98f5-f603b17b4e4a\") " pod="openstack/openstack-galera-0" Jan 26 13:16:31 crc kubenswrapper[4844]: I0126 13:16:31.406107 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e22ff40-cacd-405d-98f5-f603b17b4e4a-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"7e22ff40-cacd-405d-98f5-f603b17b4e4a\") " pod="openstack/openstack-galera-0" Jan 26 13:16:31 crc kubenswrapper[4844]: I0126 13:16:31.407804 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e22ff40-cacd-405d-98f5-f603b17b4e4a-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"7e22ff40-cacd-405d-98f5-f603b17b4e4a\") " pod="openstack/openstack-galera-0" Jan 26 13:16:31 crc kubenswrapper[4844]: I0126 13:16:31.423349 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b56bd\" (UniqueName: \"kubernetes.io/projected/7e22ff40-cacd-405d-98f5-f603b17b4e4a-kube-api-access-b56bd\") pod \"openstack-galera-0\" (UID: \"7e22ff40-cacd-405d-98f5-f603b17b4e4a\") " pod="openstack/openstack-galera-0" Jan 26 13:16:31 crc kubenswrapper[4844]: I0126 13:16:31.510757 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-galera-0\" (UID: \"7e22ff40-cacd-405d-98f5-f603b17b4e4a\") " pod="openstack/openstack-galera-0" Jan 26 13:16:31 crc kubenswrapper[4844]: I0126 13:16:31.782270 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 26 13:16:32 crc kubenswrapper[4844]: I0126 13:16:32.009574 4844 generic.go:334] "Generic (PLEG): container finished" podID="1a72d1a1-fc4b-451c-95f5-fe163e63e95d" containerID="af747ed94b42d1607943eb878b8baba218784aef58c779ca40a277e5e6282acd" exitCode=0 Jan 26 13:16:32 crc kubenswrapper[4844]: I0126 13:16:32.009665 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9fhfr" event={"ID":"1a72d1a1-fc4b-451c-95f5-fe163e63e95d","Type":"ContainerDied","Data":"af747ed94b42d1607943eb878b8baba218784aef58c779ca40a277e5e6282acd"} Jan 26 13:16:32 crc kubenswrapper[4844]: I0126 13:16:32.492417 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 26 13:16:32 crc kubenswrapper[4844]: I0126 13:16:32.495292 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 26 13:16:32 crc kubenswrapper[4844]: I0126 13:16:32.498819 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 26 13:16:32 crc kubenswrapper[4844]: I0126 13:16:32.498969 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 26 13:16:32 crc kubenswrapper[4844]: I0126 13:16:32.499079 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 26 13:16:32 crc kubenswrapper[4844]: I0126 13:16:32.499146 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-8kjfz" Jan 26 13:16:32 crc kubenswrapper[4844]: I0126 13:16:32.507633 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 26 13:16:32 crc kubenswrapper[4844]: I0126 13:16:32.622097 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f80a52fc-df6a-4218-913e-2ee03174e341-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"f80a52fc-df6a-4218-913e-2ee03174e341\") " pod="openstack/openstack-cell1-galera-0" Jan 26 13:16:32 crc kubenswrapper[4844]: I0126 13:16:32.622152 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-cell1-galera-0\" (UID: \"f80a52fc-df6a-4218-913e-2ee03174e341\") " pod="openstack/openstack-cell1-galera-0" Jan 26 13:16:32 crc kubenswrapper[4844]: I0126 13:16:32.622219 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f80a52fc-df6a-4218-913e-2ee03174e341-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"f80a52fc-df6a-4218-913e-2ee03174e341\") " pod="openstack/openstack-cell1-galera-0" Jan 26 13:16:32 crc kubenswrapper[4844]: I0126 13:16:32.622253 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2fms\" (UniqueName: \"kubernetes.io/projected/f80a52fc-df6a-4218-913e-2ee03174e341-kube-api-access-p2fms\") pod \"openstack-cell1-galera-0\" (UID: \"f80a52fc-df6a-4218-913e-2ee03174e341\") " pod="openstack/openstack-cell1-galera-0" Jan 26 13:16:32 crc kubenswrapper[4844]: I0126 13:16:32.622297 4844 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f80a52fc-df6a-4218-913e-2ee03174e341-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"f80a52fc-df6a-4218-913e-2ee03174e341\") " pod="openstack/openstack-cell1-galera-0" Jan 26 13:16:32 crc kubenswrapper[4844]: I0126 13:16:32.622324 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/f80a52fc-df6a-4218-913e-2ee03174e341-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"f80a52fc-df6a-4218-913e-2ee03174e341\") " pod="openstack/openstack-cell1-galera-0" Jan 26 13:16:32 crc kubenswrapper[4844]: I0126 13:16:32.622372 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f80a52fc-df6a-4218-913e-2ee03174e341-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"f80a52fc-df6a-4218-913e-2ee03174e341\") " pod="openstack/openstack-cell1-galera-0" Jan 26 13:16:32 crc kubenswrapper[4844]: I0126 13:16:32.622398 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f80a52fc-df6a-4218-913e-2ee03174e341-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"f80a52fc-df6a-4218-913e-2ee03174e341\") " pod="openstack/openstack-cell1-galera-0" Jan 26 13:16:32 crc kubenswrapper[4844]: I0126 13:16:32.681640 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 26 13:16:32 crc kubenswrapper[4844]: I0126 13:16:32.687451 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Jan 26 13:16:32 crc kubenswrapper[4844]: I0126 13:16:32.687787 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 26 13:16:32 crc kubenswrapper[4844]: I0126 13:16:32.693693 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-b2m8v" Jan 26 13:16:32 crc kubenswrapper[4844]: I0126 13:16:32.694237 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 26 13:16:32 crc kubenswrapper[4844]: I0126 13:16:32.694403 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 26 13:16:32 crc kubenswrapper[4844]: I0126 13:16:32.723442 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f80a52fc-df6a-4218-913e-2ee03174e341-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"f80a52fc-df6a-4218-913e-2ee03174e341\") " pod="openstack/openstack-cell1-galera-0" Jan 26 13:16:32 crc kubenswrapper[4844]: I0126 13:16:32.723486 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2fms\" (UniqueName: \"kubernetes.io/projected/f80a52fc-df6a-4218-913e-2ee03174e341-kube-api-access-p2fms\") pod \"openstack-cell1-galera-0\" (UID: \"f80a52fc-df6a-4218-913e-2ee03174e341\") " pod="openstack/openstack-cell1-galera-0" Jan 26 13:16:32 crc kubenswrapper[4844]: I0126 13:16:32.723519 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2bd5019-39c7-4b78-8610-4a7db01f5a85-memcached-tls-certs\") pod \"memcached-0\" (UID: \"f2bd5019-39c7-4b78-8610-4a7db01f5a85\") " pod="openstack/memcached-0" Jan 26 13:16:32 crc kubenswrapper[4844]: I0126 13:16:32.723548 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f80a52fc-df6a-4218-913e-2ee03174e341-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"f80a52fc-df6a-4218-913e-2ee03174e341\") " pod="openstack/openstack-cell1-galera-0" Jan 26 13:16:32 crc kubenswrapper[4844]: I0126 13:16:32.723572 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/f80a52fc-df6a-4218-913e-2ee03174e341-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"f80a52fc-df6a-4218-913e-2ee03174e341\") " pod="openstack/openstack-cell1-galera-0" Jan 26 13:16:32 crc kubenswrapper[4844]: I0126 13:16:32.723594 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrzql\" (UniqueName: \"kubernetes.io/projected/f2bd5019-39c7-4b78-8610-4a7db01f5a85-kube-api-access-rrzql\") pod \"memcached-0\" (UID: \"f2bd5019-39c7-4b78-8610-4a7db01f5a85\") " pod="openstack/memcached-0" Jan 26 13:16:32 crc kubenswrapper[4844]: I0126 13:16:32.723631 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f2bd5019-39c7-4b78-8610-4a7db01f5a85-config-data\") pod \"memcached-0\" (UID: \"f2bd5019-39c7-4b78-8610-4a7db01f5a85\") " pod="openstack/memcached-0" Jan 26 13:16:32 crc kubenswrapper[4844]: I0126 13:16:32.723651 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f80a52fc-df6a-4218-913e-2ee03174e341-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"f80a52fc-df6a-4218-913e-2ee03174e341\") " pod="openstack/openstack-cell1-galera-0" Jan 26 13:16:32 crc kubenswrapper[4844]: I0126 13:16:32.723675 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f80a52fc-df6a-4218-913e-2ee03174e341-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"f80a52fc-df6a-4218-913e-2ee03174e341\") " pod="openstack/openstack-cell1-galera-0" Jan 26 13:16:32 crc kubenswrapper[4844]: I0126 13:16:32.723705 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2bd5019-39c7-4b78-8610-4a7db01f5a85-combined-ca-bundle\") pod \"memcached-0\" (UID: \"f2bd5019-39c7-4b78-8610-4a7db01f5a85\") " pod="openstack/memcached-0" Jan 26 13:16:32 crc kubenswrapper[4844]: I0126 13:16:32.723734 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f80a52fc-df6a-4218-913e-2ee03174e341-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"f80a52fc-df6a-4218-913e-2ee03174e341\") " pod="openstack/openstack-cell1-galera-0" Jan 26 13:16:32 crc kubenswrapper[4844]: I0126 13:16:32.723758 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-cell1-galera-0\" (UID: \"f80a52fc-df6a-4218-913e-2ee03174e341\") " pod="openstack/openstack-cell1-galera-0" Jan 26 13:16:32 crc kubenswrapper[4844]: I0126 13:16:32.723779 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f2bd5019-39c7-4b78-8610-4a7db01f5a85-kolla-config\") pod \"memcached-0\" (UID: \"f2bd5019-39c7-4b78-8610-4a7db01f5a85\") " pod="openstack/memcached-0" Jan 26 13:16:32 crc kubenswrapper[4844]: I0126 13:16:32.724211 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f80a52fc-df6a-4218-913e-2ee03174e341-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"f80a52fc-df6a-4218-913e-2ee03174e341\") " pod="openstack/openstack-cell1-galera-0" Jan 26 13:16:32 crc kubenswrapper[4844]: I0126 13:16:32.724233 4844 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-cell1-galera-0\" (UID: \"f80a52fc-df6a-4218-913e-2ee03174e341\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/openstack-cell1-galera-0" Jan 26 13:16:32 crc kubenswrapper[4844]: I0126 13:16:32.724857 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f80a52fc-df6a-4218-913e-2ee03174e341-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"f80a52fc-df6a-4218-913e-2ee03174e341\") " pod="openstack/openstack-cell1-galera-0" Jan 26 13:16:32 crc kubenswrapper[4844]: I0126 13:16:32.725083 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f80a52fc-df6a-4218-913e-2ee03174e341-config-data-default\") pod 
\"openstack-cell1-galera-0\" (UID: \"f80a52fc-df6a-4218-913e-2ee03174e341\") " pod="openstack/openstack-cell1-galera-0" Jan 26 13:16:32 crc kubenswrapper[4844]: I0126 13:16:32.738583 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f80a52fc-df6a-4218-913e-2ee03174e341-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"f80a52fc-df6a-4218-913e-2ee03174e341\") " pod="openstack/openstack-cell1-galera-0" Jan 26 13:16:32 crc kubenswrapper[4844]: I0126 13:16:32.745601 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f80a52fc-df6a-4218-913e-2ee03174e341-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"f80a52fc-df6a-4218-913e-2ee03174e341\") " pod="openstack/openstack-cell1-galera-0" Jan 26 13:16:32 crc kubenswrapper[4844]: I0126 13:16:32.749990 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2fms\" (UniqueName: \"kubernetes.io/projected/f80a52fc-df6a-4218-913e-2ee03174e341-kube-api-access-p2fms\") pod \"openstack-cell1-galera-0\" (UID: \"f80a52fc-df6a-4218-913e-2ee03174e341\") " pod="openstack/openstack-cell1-galera-0" Jan 26 13:16:32 crc kubenswrapper[4844]: I0126 13:16:32.753216 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/f80a52fc-df6a-4218-913e-2ee03174e341-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"f80a52fc-df6a-4218-913e-2ee03174e341\") " pod="openstack/openstack-cell1-galera-0" Jan 26 13:16:32 crc kubenswrapper[4844]: I0126 13:16:32.783713 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-cell1-galera-0\" (UID: \"f80a52fc-df6a-4218-913e-2ee03174e341\") " pod="openstack/openstack-cell1-galera-0" Jan 26 13:16:32 crc kubenswrapper[4844]: I0126 13:16:32.825287 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 26 13:16:32 crc kubenswrapper[4844]: I0126 13:16:32.825605 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrzql\" (UniqueName: \"kubernetes.io/projected/f2bd5019-39c7-4b78-8610-4a7db01f5a85-kube-api-access-rrzql\") pod \"memcached-0\" (UID: \"f2bd5019-39c7-4b78-8610-4a7db01f5a85\") " pod="openstack/memcached-0" Jan 26 13:16:32 crc kubenswrapper[4844]: I0126 13:16:32.825666 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f2bd5019-39c7-4b78-8610-4a7db01f5a85-config-data\") pod \"memcached-0\" (UID: \"f2bd5019-39c7-4b78-8610-4a7db01f5a85\") " pod="openstack/memcached-0" Jan 26 13:16:32 crc kubenswrapper[4844]: I0126 13:16:32.825712 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2bd5019-39c7-4b78-8610-4a7db01f5a85-combined-ca-bundle\") pod \"memcached-0\" (UID: \"f2bd5019-39c7-4b78-8610-4a7db01f5a85\") " pod="openstack/memcached-0" Jan 26 13:16:32 crc kubenswrapper[4844]: I0126 13:16:32.825771 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f2bd5019-39c7-4b78-8610-4a7db01f5a85-kolla-config\") pod \"memcached-0\" (UID: \"f2bd5019-39c7-4b78-8610-4a7db01f5a85\") " pod="openstack/memcached-0" Jan 26 13:16:32 crc kubenswrapper[4844]: I0126 13:16:32.825812 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2bd5019-39c7-4b78-8610-4a7db01f5a85-memcached-tls-certs\") pod \"memcached-0\" (UID: \"f2bd5019-39c7-4b78-8610-4a7db01f5a85\") " pod="openstack/memcached-0" Jan 26 13:16:32 crc kubenswrapper[4844]: I0126 13:16:32.826528 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f2bd5019-39c7-4b78-8610-4a7db01f5a85-config-data\") pod \"memcached-0\" (UID: \"f2bd5019-39c7-4b78-8610-4a7db01f5a85\") " pod="openstack/memcached-0" Jan 26 13:16:32 crc kubenswrapper[4844]: I0126 13:16:32.826998 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f2bd5019-39c7-4b78-8610-4a7db01f5a85-kolla-config\") pod \"memcached-0\" (UID: \"f2bd5019-39c7-4b78-8610-4a7db01f5a85\") " pod="openstack/memcached-0" Jan 26 13:16:32 crc kubenswrapper[4844]: I0126 13:16:32.835136 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2bd5019-39c7-4b78-8610-4a7db01f5a85-combined-ca-bundle\") pod \"memcached-0\" (UID: \"f2bd5019-39c7-4b78-8610-4a7db01f5a85\") " pod="openstack/memcached-0" Jan 26 13:16:32 crc kubenswrapper[4844]: I0126 13:16:32.842320 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrzql\" (UniqueName: \"kubernetes.io/projected/f2bd5019-39c7-4b78-8610-4a7db01f5a85-kube-api-access-rrzql\") pod \"memcached-0\" (UID: \"f2bd5019-39c7-4b78-8610-4a7db01f5a85\") " pod="openstack/memcached-0" Jan 26 13:16:32 crc kubenswrapper[4844]: I0126 13:16:32.843499 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2bd5019-39c7-4b78-8610-4a7db01f5a85-memcached-tls-certs\") pod \"memcached-0\" (UID: 
\"f2bd5019-39c7-4b78-8610-4a7db01f5a85\") " pod="openstack/memcached-0" Jan 26 13:16:33 crc kubenswrapper[4844]: I0126 13:16:33.020967 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 26 13:16:34 crc kubenswrapper[4844]: I0126 13:16:34.719044 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 26 13:16:34 crc kubenswrapper[4844]: I0126 13:16:34.720334 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 26 13:16:34 crc kubenswrapper[4844]: I0126 13:16:34.723634 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-75jcr" Jan 26 13:16:34 crc kubenswrapper[4844]: I0126 13:16:34.738220 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 26 13:16:34 crc kubenswrapper[4844]: I0126 13:16:34.773364 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwhrk\" (UniqueName: \"kubernetes.io/projected/88528049-6527-4f6d-b28f-9a7ca4d46cf8-kube-api-access-hwhrk\") pod \"kube-state-metrics-0\" (UID: \"88528049-6527-4f6d-b28f-9a7ca4d46cf8\") " pod="openstack/kube-state-metrics-0" Jan 26 13:16:34 crc kubenswrapper[4844]: I0126 13:16:34.874689 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwhrk\" (UniqueName: \"kubernetes.io/projected/88528049-6527-4f6d-b28f-9a7ca4d46cf8-kube-api-access-hwhrk\") pod \"kube-state-metrics-0\" (UID: \"88528049-6527-4f6d-b28f-9a7ca4d46cf8\") " pod="openstack/kube-state-metrics-0" Jan 26 13:16:34 crc kubenswrapper[4844]: I0126 13:16:34.900910 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwhrk\" (UniqueName: \"kubernetes.io/projected/88528049-6527-4f6d-b28f-9a7ca4d46cf8-kube-api-access-hwhrk\") pod \"kube-state-metrics-0\" (UID: \"88528049-6527-4f6d-b28f-9a7ca4d46cf8\") " pod="openstack/kube-state-metrics-0" Jan 26 13:16:35 crc kubenswrapper[4844]: I0126 13:16:35.063316 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 26 13:16:36 crc kubenswrapper[4844]: I0126 13:16:36.043200 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 26 13:16:36 crc kubenswrapper[4844]: I0126 13:16:36.045639 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 26 13:16:36 crc kubenswrapper[4844]: I0126 13:16:36.048472 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Jan 26 13:16:36 crc kubenswrapper[4844]: I0126 13:16:36.048651 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Jan 26 13:16:36 crc kubenswrapper[4844]: I0126 13:16:36.048666 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Jan 26 13:16:36 crc kubenswrapper[4844]: I0126 13:16:36.048814 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Jan 26 13:16:36 crc kubenswrapper[4844]: I0126 13:16:36.048927 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-lh4xm" Jan 26 13:16:36 crc kubenswrapper[4844]: I0126 13:16:36.048958 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Jan 26 13:16:36 crc kubenswrapper[4844]: I0126 13:16:36.061461 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 26 13:16:36 crc kubenswrapper[4844]: I0126 13:16:36.061684 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Jan 26 13:16:36 crc kubenswrapper[4844]: I0126 13:16:36.048997 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Jan 26 13:16:36 crc kubenswrapper[4844]: I0126 13:16:36.102230 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11-config\") pod \"prometheus-metric-storage-0\" (UID: \"66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:16:36 crc kubenswrapper[4844]: I0126 13:16:36.102650 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:16:36 crc kubenswrapper[4844]: I0126 13:16:36.102685 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:16:36 crc kubenswrapper[4844]: I0126 13:16:36.102712 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:16:36 crc kubenswrapper[4844]: I0126 13:16:36.102744 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: 
\"kubernetes.io/projected/66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:16:36 crc kubenswrapper[4844]: I0126 13:16:36.102818 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkstg\" (UniqueName: \"kubernetes.io/projected/66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11-kube-api-access-gkstg\") pod \"prometheus-metric-storage-0\" (UID: \"66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:16:36 crc kubenswrapper[4844]: I0126 13:16:36.102843 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:16:36 crc kubenswrapper[4844]: I0126 13:16:36.102923 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-19181cb9-d02a-4fb9-9922-1c51ae1db65f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-19181cb9-d02a-4fb9-9922-1c51ae1db65f\") pod \"prometheus-metric-storage-0\" (UID: \"66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:16:36 crc kubenswrapper[4844]: I0126 13:16:36.102964 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:16:36 crc kubenswrapper[4844]: I0126 13:16:36.102998 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:16:36 crc kubenswrapper[4844]: I0126 13:16:36.205551 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11-config\") pod \"prometheus-metric-storage-0\" (UID: \"66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:16:36 crc kubenswrapper[4844]: I0126 13:16:36.205593 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:16:36 crc kubenswrapper[4844]: I0126 13:16:36.205632 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: 
\"66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:16:36 crc kubenswrapper[4844]: I0126 13:16:36.205654 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:16:36 crc kubenswrapper[4844]: I0126 13:16:36.205671 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:16:36 crc kubenswrapper[4844]: I0126 13:16:36.205872 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkstg\" (UniqueName: \"kubernetes.io/projected/66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11-kube-api-access-gkstg\") pod \"prometheus-metric-storage-0\" (UID: \"66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:16:36 crc kubenswrapper[4844]: I0126 13:16:36.205967 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:16:36 crc kubenswrapper[4844]: I0126 13:16:36.206047 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-19181cb9-d02a-4fb9-9922-1c51ae1db65f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-19181cb9-d02a-4fb9-9922-1c51ae1db65f\") pod \"prometheus-metric-storage-0\" (UID: \"66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:16:36 crc kubenswrapper[4844]: I0126 13:16:36.206092 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:16:36 crc kubenswrapper[4844]: I0126 13:16:36.206115 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:16:36 crc kubenswrapper[4844]: I0126 13:16:36.207142 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:16:36 crc kubenswrapper[4844]: I0126 13:16:36.207218 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:16:36 crc kubenswrapper[4844]: I0126 13:16:36.207274 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:16:36 crc kubenswrapper[4844]: I0126 13:16:36.211217 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:16:36 crc kubenswrapper[4844]: I0126 13:16:36.212073 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11-config\") pod \"prometheus-metric-storage-0\" (UID: \"66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:16:36 crc kubenswrapper[4844]: I0126 13:16:36.212448 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:16:36 crc kubenswrapper[4844]: I0126 13:16:36.221058 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:16:36 crc kubenswrapper[4844]: I0126 13:16:36.221194 4844 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 26 13:16:36 crc kubenswrapper[4844]: I0126 13:16:36.221229 4844 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-19181cb9-d02a-4fb9-9922-1c51ae1db65f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-19181cb9-d02a-4fb9-9922-1c51ae1db65f\") pod \"prometheus-metric-storage-0\" (UID: \"66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/60456fde86fe7a040b59fc70316475c6486458b501f0e0cd47e77b114ad32f41/globalmount\"" pod="openstack/prometheus-metric-storage-0" Jan 26 13:16:36 crc kubenswrapper[4844]: I0126 13:16:36.224731 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:16:36 crc kubenswrapper[4844]: I0126 13:16:36.229552 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkstg\" (UniqueName: \"kubernetes.io/projected/66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11-kube-api-access-gkstg\") pod \"prometheus-metric-storage-0\" (UID: \"66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:16:36 crc kubenswrapper[4844]: I0126 13:16:36.275571 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-19181cb9-d02a-4fb9-9922-1c51ae1db65f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-19181cb9-d02a-4fb9-9922-1c51ae1db65f\") pod \"prometheus-metric-storage-0\" (UID: \"66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:16:36 crc kubenswrapper[4844]: I0126 13:16:36.409493 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 26 13:16:37 crc kubenswrapper[4844]: E0126 13:16:37.840943 4844 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of af747ed94b42d1607943eb878b8baba218784aef58c779ca40a277e5e6282acd is running failed: container process not found" containerID="af747ed94b42d1607943eb878b8baba218784aef58c779ca40a277e5e6282acd" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 13:16:37 crc kubenswrapper[4844]: E0126 13:16:37.841407 4844 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of af747ed94b42d1607943eb878b8baba218784aef58c779ca40a277e5e6282acd is running failed: container process not found" containerID="af747ed94b42d1607943eb878b8baba218784aef58c779ca40a277e5e6282acd" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 13:16:37 crc kubenswrapper[4844]: E0126 13:16:37.841705 4844 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of af747ed94b42d1607943eb878b8baba218784aef58c779ca40a277e5e6282acd is running failed: container process not found" containerID="af747ed94b42d1607943eb878b8baba218784aef58c779ca40a277e5e6282acd" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 13:16:37 crc kubenswrapper[4844]: E0126 13:16:37.841728 4844 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of af747ed94b42d1607943eb878b8baba218784aef58c779ca40a277e5e6282acd is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-9fhfr" podUID="1a72d1a1-fc4b-451c-95f5-fe163e63e95d" containerName="registry-server" Jan 26 13:16:39 crc kubenswrapper[4844]: I0126 13:16:39.109342 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-vnff8"] Jan 26 13:16:39 crc kubenswrapper[4844]: I0126 13:16:39.111034 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-vnff8" Jan 26 13:16:39 crc kubenswrapper[4844]: I0126 13:16:39.113849 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 26 13:16:39 crc kubenswrapper[4844]: I0126 13:16:39.114119 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 26 13:16:39 crc kubenswrapper[4844]: I0126 13:16:39.114362 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-pwv74" Jan 26 13:16:39 crc kubenswrapper[4844]: I0126 13:16:39.123841 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-vnff8"] Jan 26 13:16:39 crc kubenswrapper[4844]: I0126 13:16:39.133537 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-bq8zv"] Jan 26 13:16:39 crc kubenswrapper[4844]: I0126 13:16:39.135442 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-bq8zv" Jan 26 13:16:39 crc kubenswrapper[4844]: I0126 13:16:39.157760 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-bq8zv"] Jan 26 13:16:39 crc kubenswrapper[4844]: I0126 13:16:39.262936 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6696649d-b30c-4ef9-beda-3cec75d656b4-var-log-ovn\") pod \"ovn-controller-vnff8\" (UID: \"6696649d-b30c-4ef9-beda-3cec75d656b4\") " pod="openstack/ovn-controller-vnff8" Jan 26 13:16:39 crc kubenswrapper[4844]: I0126 13:16:39.262995 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f0224b88-8aeb-4de5-b2d4-2d5f7b69cf8e-scripts\") pod \"ovn-controller-ovs-bq8zv\" (UID: \"f0224b88-8aeb-4de5-b2d4-2d5f7b69cf8e\") " pod="openstack/ovn-controller-ovs-bq8zv" Jan 26 13:16:39 crc kubenswrapper[4844]: I0126 13:16:39.263062 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h65kl\" (UniqueName: \"kubernetes.io/projected/f0224b88-8aeb-4de5-b2d4-2d5f7b69cf8e-kube-api-access-h65kl\") pod \"ovn-controller-ovs-bq8zv\" (UID: \"f0224b88-8aeb-4de5-b2d4-2d5f7b69cf8e\") " pod="openstack/ovn-controller-ovs-bq8zv" Jan 26 13:16:39 crc kubenswrapper[4844]: I0126 13:16:39.263302 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6696649d-b30c-4ef9-beda-3cec75d656b4-scripts\") pod \"ovn-controller-vnff8\" (UID: \"6696649d-b30c-4ef9-beda-3cec75d656b4\") " pod="openstack/ovn-controller-vnff8" Jan 26 13:16:39 crc kubenswrapper[4844]: I0126 13:16:39.263398 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6696649d-b30c-4ef9-beda-3cec75d656b4-var-run\") pod \"ovn-controller-vnff8\" (UID: \"6696649d-b30c-4ef9-beda-3cec75d656b4\") " pod="openstack/ovn-controller-vnff8" Jan 26 13:16:39 crc kubenswrapper[4844]: I0126 13:16:39.263439 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/f0224b88-8aeb-4de5-b2d4-2d5f7b69cf8e-etc-ovs\") pod \"ovn-controller-ovs-bq8zv\" (UID: \"f0224b88-8aeb-4de5-b2d4-2d5f7b69cf8e\") " pod="openstack/ovn-controller-ovs-bq8zv" Jan 26 13:16:39 crc kubenswrapper[4844]: I0126 13:16:39.263478 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/6696649d-b30c-4ef9-beda-3cec75d656b4-ovn-controller-tls-certs\") pod \"ovn-controller-vnff8\" (UID: \"6696649d-b30c-4ef9-beda-3cec75d656b4\") " pod="openstack/ovn-controller-vnff8" Jan 26 13:16:39 crc kubenswrapper[4844]: I0126 13:16:39.263574 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f0224b88-8aeb-4de5-b2d4-2d5f7b69cf8e-var-run\") pod \"ovn-controller-ovs-bq8zv\" (UID: \"f0224b88-8aeb-4de5-b2d4-2d5f7b69cf8e\") " pod="openstack/ovn-controller-ovs-bq8zv" Jan 26 13:16:39 crc kubenswrapper[4844]: I0126 13:16:39.263620 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: 
\"kubernetes.io/host-path/f0224b88-8aeb-4de5-b2d4-2d5f7b69cf8e-var-lib\") pod \"ovn-controller-ovs-bq8zv\" (UID: \"f0224b88-8aeb-4de5-b2d4-2d5f7b69cf8e\") " pod="openstack/ovn-controller-ovs-bq8zv" Jan 26 13:16:39 crc kubenswrapper[4844]: I0126 13:16:39.263641 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fddjv\" (UniqueName: \"kubernetes.io/projected/6696649d-b30c-4ef9-beda-3cec75d656b4-kube-api-access-fddjv\") pod \"ovn-controller-vnff8\" (UID: \"6696649d-b30c-4ef9-beda-3cec75d656b4\") " pod="openstack/ovn-controller-vnff8" Jan 26 13:16:39 crc kubenswrapper[4844]: I0126 13:16:39.263665 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f0224b88-8aeb-4de5-b2d4-2d5f7b69cf8e-var-log\") pod \"ovn-controller-ovs-bq8zv\" (UID: \"f0224b88-8aeb-4de5-b2d4-2d5f7b69cf8e\") " pod="openstack/ovn-controller-ovs-bq8zv" Jan 26 13:16:39 crc kubenswrapper[4844]: I0126 13:16:39.263693 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6696649d-b30c-4ef9-beda-3cec75d656b4-combined-ca-bundle\") pod \"ovn-controller-vnff8\" (UID: \"6696649d-b30c-4ef9-beda-3cec75d656b4\") " pod="openstack/ovn-controller-vnff8" Jan 26 13:16:39 crc kubenswrapper[4844]: I0126 13:16:39.263894 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6696649d-b30c-4ef9-beda-3cec75d656b4-var-run-ovn\") pod \"ovn-controller-vnff8\" (UID: \"6696649d-b30c-4ef9-beda-3cec75d656b4\") " pod="openstack/ovn-controller-vnff8" Jan 26 13:16:39 crc kubenswrapper[4844]: I0126 13:16:39.365589 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6696649d-b30c-4ef9-beda-3cec75d656b4-var-run-ovn\") pod \"ovn-controller-vnff8\" (UID: \"6696649d-b30c-4ef9-beda-3cec75d656b4\") " pod="openstack/ovn-controller-vnff8" Jan 26 13:16:39 crc kubenswrapper[4844]: I0126 13:16:39.365755 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6696649d-b30c-4ef9-beda-3cec75d656b4-var-log-ovn\") pod \"ovn-controller-vnff8\" (UID: \"6696649d-b30c-4ef9-beda-3cec75d656b4\") " pod="openstack/ovn-controller-vnff8" Jan 26 13:16:39 crc kubenswrapper[4844]: I0126 13:16:39.365793 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f0224b88-8aeb-4de5-b2d4-2d5f7b69cf8e-scripts\") pod \"ovn-controller-ovs-bq8zv\" (UID: \"f0224b88-8aeb-4de5-b2d4-2d5f7b69cf8e\") " pod="openstack/ovn-controller-ovs-bq8zv" Jan 26 13:16:39 crc kubenswrapper[4844]: I0126 13:16:39.365840 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h65kl\" (UniqueName: \"kubernetes.io/projected/f0224b88-8aeb-4de5-b2d4-2d5f7b69cf8e-kube-api-access-h65kl\") pod \"ovn-controller-ovs-bq8zv\" (UID: \"f0224b88-8aeb-4de5-b2d4-2d5f7b69cf8e\") " pod="openstack/ovn-controller-ovs-bq8zv" Jan 26 13:16:39 crc kubenswrapper[4844]: I0126 13:16:39.365877 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6696649d-b30c-4ef9-beda-3cec75d656b4-scripts\") pod \"ovn-controller-vnff8\" (UID: 
\"6696649d-b30c-4ef9-beda-3cec75d656b4\") " pod="openstack/ovn-controller-vnff8" Jan 26 13:16:39 crc kubenswrapper[4844]: I0126 13:16:39.365907 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6696649d-b30c-4ef9-beda-3cec75d656b4-var-run\") pod \"ovn-controller-vnff8\" (UID: \"6696649d-b30c-4ef9-beda-3cec75d656b4\") " pod="openstack/ovn-controller-vnff8" Jan 26 13:16:39 crc kubenswrapper[4844]: I0126 13:16:39.365937 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/f0224b88-8aeb-4de5-b2d4-2d5f7b69cf8e-etc-ovs\") pod \"ovn-controller-ovs-bq8zv\" (UID: \"f0224b88-8aeb-4de5-b2d4-2d5f7b69cf8e\") " pod="openstack/ovn-controller-ovs-bq8zv" Jan 26 13:16:39 crc kubenswrapper[4844]: I0126 13:16:39.365993 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/6696649d-b30c-4ef9-beda-3cec75d656b4-ovn-controller-tls-certs\") pod \"ovn-controller-vnff8\" (UID: \"6696649d-b30c-4ef9-beda-3cec75d656b4\") " pod="openstack/ovn-controller-vnff8" Jan 26 13:16:39 crc kubenswrapper[4844]: I0126 13:16:39.366034 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f0224b88-8aeb-4de5-b2d4-2d5f7b69cf8e-var-run\") pod \"ovn-controller-ovs-bq8zv\" (UID: \"f0224b88-8aeb-4de5-b2d4-2d5f7b69cf8e\") " pod="openstack/ovn-controller-ovs-bq8zv" Jan 26 13:16:39 crc kubenswrapper[4844]: I0126 13:16:39.366053 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/f0224b88-8aeb-4de5-b2d4-2d5f7b69cf8e-var-lib\") pod \"ovn-controller-ovs-bq8zv\" (UID: \"f0224b88-8aeb-4de5-b2d4-2d5f7b69cf8e\") " pod="openstack/ovn-controller-ovs-bq8zv" Jan 26 13:16:39 crc kubenswrapper[4844]: I0126 13:16:39.366079 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fddjv\" (UniqueName: \"kubernetes.io/projected/6696649d-b30c-4ef9-beda-3cec75d656b4-kube-api-access-fddjv\") pod \"ovn-controller-vnff8\" (UID: \"6696649d-b30c-4ef9-beda-3cec75d656b4\") " pod="openstack/ovn-controller-vnff8" Jan 26 13:16:39 crc kubenswrapper[4844]: I0126 13:16:39.366078 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6696649d-b30c-4ef9-beda-3cec75d656b4-var-run-ovn\") pod \"ovn-controller-vnff8\" (UID: \"6696649d-b30c-4ef9-beda-3cec75d656b4\") " pod="openstack/ovn-controller-vnff8" Jan 26 13:16:39 crc kubenswrapper[4844]: I0126 13:16:39.366111 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f0224b88-8aeb-4de5-b2d4-2d5f7b69cf8e-var-log\") pod \"ovn-controller-ovs-bq8zv\" (UID: \"f0224b88-8aeb-4de5-b2d4-2d5f7b69cf8e\") " pod="openstack/ovn-controller-ovs-bq8zv" Jan 26 13:16:39 crc kubenswrapper[4844]: I0126 13:16:39.366171 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6696649d-b30c-4ef9-beda-3cec75d656b4-var-run\") pod \"ovn-controller-vnff8\" (UID: \"6696649d-b30c-4ef9-beda-3cec75d656b4\") " pod="openstack/ovn-controller-vnff8" Jan 26 13:16:39 crc kubenswrapper[4844]: I0126 13:16:39.366229 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6696649d-b30c-4ef9-beda-3cec75d656b4-combined-ca-bundle\") pod \"ovn-controller-vnff8\" (UID: \"6696649d-b30c-4ef9-beda-3cec75d656b4\") " pod="openstack/ovn-controller-vnff8" Jan 26 13:16:39 crc kubenswrapper[4844]: I0126 13:16:39.366211 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6696649d-b30c-4ef9-beda-3cec75d656b4-var-log-ovn\") pod \"ovn-controller-vnff8\" (UID: \"6696649d-b30c-4ef9-beda-3cec75d656b4\") " pod="openstack/ovn-controller-vnff8" Jan 26 13:16:39 crc kubenswrapper[4844]: I0126 13:16:39.366271 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f0224b88-8aeb-4de5-b2d4-2d5f7b69cf8e-var-run\") pod \"ovn-controller-ovs-bq8zv\" (UID: \"f0224b88-8aeb-4de5-b2d4-2d5f7b69cf8e\") " pod="openstack/ovn-controller-ovs-bq8zv" Jan 26 13:16:39 crc kubenswrapper[4844]: I0126 13:16:39.366457 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/f0224b88-8aeb-4de5-b2d4-2d5f7b69cf8e-etc-ovs\") pod \"ovn-controller-ovs-bq8zv\" (UID: \"f0224b88-8aeb-4de5-b2d4-2d5f7b69cf8e\") " pod="openstack/ovn-controller-ovs-bq8zv" Jan 26 13:16:39 crc kubenswrapper[4844]: I0126 13:16:39.366475 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f0224b88-8aeb-4de5-b2d4-2d5f7b69cf8e-var-log\") pod \"ovn-controller-ovs-bq8zv\" (UID: \"f0224b88-8aeb-4de5-b2d4-2d5f7b69cf8e\") " pod="openstack/ovn-controller-ovs-bq8zv" Jan 26 13:16:39 crc kubenswrapper[4844]: I0126 13:16:39.366546 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/f0224b88-8aeb-4de5-b2d4-2d5f7b69cf8e-var-lib\") pod \"ovn-controller-ovs-bq8zv\" (UID: \"f0224b88-8aeb-4de5-b2d4-2d5f7b69cf8e\") " pod="openstack/ovn-controller-ovs-bq8zv" Jan 26 13:16:39 crc kubenswrapper[4844]: I0126 13:16:39.367842 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6696649d-b30c-4ef9-beda-3cec75d656b4-scripts\") pod \"ovn-controller-vnff8\" (UID: \"6696649d-b30c-4ef9-beda-3cec75d656b4\") " pod="openstack/ovn-controller-vnff8" Jan 26 13:16:39 crc kubenswrapper[4844]: I0126 13:16:39.369569 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f0224b88-8aeb-4de5-b2d4-2d5f7b69cf8e-scripts\") pod \"ovn-controller-ovs-bq8zv\" (UID: \"f0224b88-8aeb-4de5-b2d4-2d5f7b69cf8e\") " pod="openstack/ovn-controller-ovs-bq8zv" Jan 26 13:16:39 crc kubenswrapper[4844]: I0126 13:16:39.371253 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6696649d-b30c-4ef9-beda-3cec75d656b4-combined-ca-bundle\") pod \"ovn-controller-vnff8\" (UID: \"6696649d-b30c-4ef9-beda-3cec75d656b4\") " pod="openstack/ovn-controller-vnff8" Jan 26 13:16:39 crc kubenswrapper[4844]: I0126 13:16:39.375146 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/6696649d-b30c-4ef9-beda-3cec75d656b4-ovn-controller-tls-certs\") pod \"ovn-controller-vnff8\" (UID: \"6696649d-b30c-4ef9-beda-3cec75d656b4\") " pod="openstack/ovn-controller-vnff8" Jan 26 13:16:39 crc kubenswrapper[4844]: I0126 13:16:39.397523 
4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h65kl\" (UniqueName: \"kubernetes.io/projected/f0224b88-8aeb-4de5-b2d4-2d5f7b69cf8e-kube-api-access-h65kl\") pod \"ovn-controller-ovs-bq8zv\" (UID: \"f0224b88-8aeb-4de5-b2d4-2d5f7b69cf8e\") " pod="openstack/ovn-controller-ovs-bq8zv" Jan 26 13:16:39 crc kubenswrapper[4844]: I0126 13:16:39.399191 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fddjv\" (UniqueName: \"kubernetes.io/projected/6696649d-b30c-4ef9-beda-3cec75d656b4-kube-api-access-fddjv\") pod \"ovn-controller-vnff8\" (UID: \"6696649d-b30c-4ef9-beda-3cec75d656b4\") " pod="openstack/ovn-controller-vnff8" Jan 26 13:16:39 crc kubenswrapper[4844]: I0126 13:16:39.433440 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-vnff8" Jan 26 13:16:39 crc kubenswrapper[4844]: I0126 13:16:39.460087 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-bq8zv" Jan 26 13:16:41 crc kubenswrapper[4844]: I0126 13:16:41.839774 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9fhfr" Jan 26 13:16:41 crc kubenswrapper[4844]: I0126 13:16:41.944704 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 26 13:16:41 crc kubenswrapper[4844]: E0126 13:16:41.945134 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a72d1a1-fc4b-451c-95f5-fe163e63e95d" containerName="extract-utilities" Jan 26 13:16:41 crc kubenswrapper[4844]: I0126 13:16:41.945152 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a72d1a1-fc4b-451c-95f5-fe163e63e95d" containerName="extract-utilities" Jan 26 13:16:41 crc kubenswrapper[4844]: E0126 13:16:41.945174 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a72d1a1-fc4b-451c-95f5-fe163e63e95d" containerName="extract-content" Jan 26 13:16:41 crc kubenswrapper[4844]: I0126 13:16:41.945185 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a72d1a1-fc4b-451c-95f5-fe163e63e95d" containerName="extract-content" Jan 26 13:16:41 crc kubenswrapper[4844]: E0126 13:16:41.945196 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a72d1a1-fc4b-451c-95f5-fe163e63e95d" containerName="registry-server" Jan 26 13:16:41 crc kubenswrapper[4844]: I0126 13:16:41.945204 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a72d1a1-fc4b-451c-95f5-fe163e63e95d" containerName="registry-server" Jan 26 13:16:41 crc kubenswrapper[4844]: I0126 13:16:41.945413 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a72d1a1-fc4b-451c-95f5-fe163e63e95d" containerName="registry-server" Jan 26 13:16:41 crc kubenswrapper[4844]: I0126 13:16:41.946583 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 26 13:16:41 crc kubenswrapper[4844]: I0126 13:16:41.949514 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 26 13:16:41 crc kubenswrapper[4844]: I0126 13:16:41.955058 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-p8zwb" Jan 26 13:16:41 crc kubenswrapper[4844]: I0126 13:16:41.955223 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 26 13:16:41 crc kubenswrapper[4844]: I0126 13:16:41.955312 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 26 13:16:41 crc kubenswrapper[4844]: I0126 13:16:41.955385 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 26 13:16:41 crc kubenswrapper[4844]: I0126 13:16:41.955424 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 26 13:16:42 crc kubenswrapper[4844]: I0126 13:16:42.010204 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xtdgd\" (UniqueName: \"kubernetes.io/projected/1a72d1a1-fc4b-451c-95f5-fe163e63e95d-kube-api-access-xtdgd\") pod \"1a72d1a1-fc4b-451c-95f5-fe163e63e95d\" (UID: \"1a72d1a1-fc4b-451c-95f5-fe163e63e95d\") " Jan 26 13:16:42 crc kubenswrapper[4844]: I0126 13:16:42.010295 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a72d1a1-fc4b-451c-95f5-fe163e63e95d-catalog-content\") pod \"1a72d1a1-fc4b-451c-95f5-fe163e63e95d\" (UID: \"1a72d1a1-fc4b-451c-95f5-fe163e63e95d\") " Jan 26 13:16:42 crc kubenswrapper[4844]: I0126 13:16:42.010346 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a72d1a1-fc4b-451c-95f5-fe163e63e95d-utilities\") pod \"1a72d1a1-fc4b-451c-95f5-fe163e63e95d\" (UID: \"1a72d1a1-fc4b-451c-95f5-fe163e63e95d\") " Jan 26 13:16:42 crc kubenswrapper[4844]: I0126 13:16:42.011730 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a72d1a1-fc4b-451c-95f5-fe163e63e95d-utilities" (OuterVolumeSpecName: "utilities") pod "1a72d1a1-fc4b-451c-95f5-fe163e63e95d" (UID: "1a72d1a1-fc4b-451c-95f5-fe163e63e95d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:16:42 crc kubenswrapper[4844]: I0126 13:16:42.025933 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a72d1a1-fc4b-451c-95f5-fe163e63e95d-kube-api-access-xtdgd" (OuterVolumeSpecName: "kube-api-access-xtdgd") pod "1a72d1a1-fc4b-451c-95f5-fe163e63e95d" (UID: "1a72d1a1-fc4b-451c-95f5-fe163e63e95d"). InnerVolumeSpecName "kube-api-access-xtdgd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:16:42 crc kubenswrapper[4844]: I0126 13:16:42.073104 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a72d1a1-fc4b-451c-95f5-fe163e63e95d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1a72d1a1-fc4b-451c-95f5-fe163e63e95d" (UID: "1a72d1a1-fc4b-451c-95f5-fe163e63e95d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:16:42 crc kubenswrapper[4844]: I0126 13:16:42.113062 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/490e8905-58e4-44a6-a4a4-ea873a5eaa94-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"490e8905-58e4-44a6-a4a4-ea873a5eaa94\") " pod="openstack/ovsdbserver-nb-0" Jan 26 13:16:42 crc kubenswrapper[4844]: I0126 13:16:42.113123 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/490e8905-58e4-44a6-a4a4-ea873a5eaa94-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"490e8905-58e4-44a6-a4a4-ea873a5eaa94\") " pod="openstack/ovsdbserver-nb-0" Jan 26 13:16:42 crc kubenswrapper[4844]: I0126 13:16:42.113295 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-nb-0\" (UID: \"490e8905-58e4-44a6-a4a4-ea873a5eaa94\") " pod="openstack/ovsdbserver-nb-0" Jan 26 13:16:42 crc kubenswrapper[4844]: I0126 13:16:42.113440 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/490e8905-58e4-44a6-a4a4-ea873a5eaa94-config\") pod \"ovsdbserver-nb-0\" (UID: \"490e8905-58e4-44a6-a4a4-ea873a5eaa94\") " pod="openstack/ovsdbserver-nb-0" Jan 26 13:16:42 crc kubenswrapper[4844]: I0126 13:16:42.113482 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/490e8905-58e4-44a6-a4a4-ea873a5eaa94-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"490e8905-58e4-44a6-a4a4-ea873a5eaa94\") " pod="openstack/ovsdbserver-nb-0" Jan 26 13:16:42 crc kubenswrapper[4844]: I0126 13:16:42.113737 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/490e8905-58e4-44a6-a4a4-ea873a5eaa94-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"490e8905-58e4-44a6-a4a4-ea873a5eaa94\") " pod="openstack/ovsdbserver-nb-0" Jan 26 13:16:42 crc kubenswrapper[4844]: I0126 13:16:42.113787 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pgpb\" (UniqueName: \"kubernetes.io/projected/490e8905-58e4-44a6-a4a4-ea873a5eaa94-kube-api-access-9pgpb\") pod \"ovsdbserver-nb-0\" (UID: \"490e8905-58e4-44a6-a4a4-ea873a5eaa94\") " pod="openstack/ovsdbserver-nb-0" Jan 26 13:16:42 crc kubenswrapper[4844]: I0126 13:16:42.113971 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/490e8905-58e4-44a6-a4a4-ea873a5eaa94-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"490e8905-58e4-44a6-a4a4-ea873a5eaa94\") " pod="openstack/ovsdbserver-nb-0" Jan 26 13:16:42 crc kubenswrapper[4844]: I0126 13:16:42.114243 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xtdgd\" (UniqueName: \"kubernetes.io/projected/1a72d1a1-fc4b-451c-95f5-fe163e63e95d-kube-api-access-xtdgd\") on node \"crc\" DevicePath \"\"" Jan 26 13:16:42 crc kubenswrapper[4844]: I0126 13:16:42.114271 4844 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/1a72d1a1-fc4b-451c-95f5-fe163e63e95d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 13:16:42 crc kubenswrapper[4844]: I0126 13:16:42.114289 4844 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a72d1a1-fc4b-451c-95f5-fe163e63e95d-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 13:16:42 crc kubenswrapper[4844]: I0126 13:16:42.122205 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 26 13:16:42 crc kubenswrapper[4844]: I0126 13:16:42.123567 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 26 13:16:42 crc kubenswrapper[4844]: I0126 13:16:42.125645 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 26 13:16:42 crc kubenswrapper[4844]: I0126 13:16:42.125849 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-w24fd" Jan 26 13:16:42 crc kubenswrapper[4844]: I0126 13:16:42.125976 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 26 13:16:42 crc kubenswrapper[4844]: I0126 13:16:42.127037 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 26 13:16:42 crc kubenswrapper[4844]: I0126 13:16:42.138207 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 26 13:16:42 crc kubenswrapper[4844]: I0126 13:16:42.188891 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9fhfr" event={"ID":"1a72d1a1-fc4b-451c-95f5-fe163e63e95d","Type":"ContainerDied","Data":"686a867f65e8ca9a83b446c4040f85fa4e2223ff52420d328ed68fb421ecaa38"} Jan 26 13:16:42 crc kubenswrapper[4844]: I0126 13:16:42.188950 4844 scope.go:117] "RemoveContainer" containerID="af747ed94b42d1607943eb878b8baba218784aef58c779ca40a277e5e6282acd" Jan 26 13:16:42 crc kubenswrapper[4844]: I0126 13:16:42.188965 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9fhfr" Jan 26 13:16:42 crc kubenswrapper[4844]: I0126 13:16:42.215376 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b89a5fa-2181-432a-a613-6bbeeb0f56bb-config\") pod \"ovsdbserver-sb-0\" (UID: \"6b89a5fa-2181-432a-a613-6bbeeb0f56bb\") " pod="openstack/ovsdbserver-sb-0" Jan 26 13:16:42 crc kubenswrapper[4844]: I0126 13:16:42.215418 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-sb-0\" (UID: \"6b89a5fa-2181-432a-a613-6bbeeb0f56bb\") " pod="openstack/ovsdbserver-sb-0" Jan 26 13:16:42 crc kubenswrapper[4844]: I0126 13:16:42.215461 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b89a5fa-2181-432a-a613-6bbeeb0f56bb-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"6b89a5fa-2181-432a-a613-6bbeeb0f56bb\") " pod="openstack/ovsdbserver-sb-0" Jan 26 13:16:42 crc kubenswrapper[4844]: I0126 13:16:42.216133 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6b89a5fa-2181-432a-a613-6bbeeb0f56bb-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"6b89a5fa-2181-432a-a613-6bbeeb0f56bb\") " pod="openstack/ovsdbserver-sb-0" Jan 26 13:16:42 crc kubenswrapper[4844]: I0126 13:16:42.216167 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/490e8905-58e4-44a6-a4a4-ea873a5eaa94-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"490e8905-58e4-44a6-a4a4-ea873a5eaa94\") " pod="openstack/ovsdbserver-nb-0" Jan 26 13:16:42 crc kubenswrapper[4844]: I0126 13:16:42.216234 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/490e8905-58e4-44a6-a4a4-ea873a5eaa94-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"490e8905-58e4-44a6-a4a4-ea873a5eaa94\") " pod="openstack/ovsdbserver-nb-0" Jan 26 13:16:42 crc kubenswrapper[4844]: I0126 13:16:42.216446 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-nb-0\" (UID: \"490e8905-58e4-44a6-a4a4-ea873a5eaa94\") " pod="openstack/ovsdbserver-nb-0" Jan 26 13:16:42 crc kubenswrapper[4844]: I0126 13:16:42.216506 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b89a5fa-2181-432a-a613-6bbeeb0f56bb-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"6b89a5fa-2181-432a-a613-6bbeeb0f56bb\") " pod="openstack/ovsdbserver-sb-0" Jan 26 13:16:42 crc kubenswrapper[4844]: I0126 13:16:42.216527 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/490e8905-58e4-44a6-a4a4-ea873a5eaa94-config\") pod \"ovsdbserver-nb-0\" (UID: \"490e8905-58e4-44a6-a4a4-ea873a5eaa94\") " pod="openstack/ovsdbserver-nb-0" Jan 26 13:16:42 crc kubenswrapper[4844]: I0126 13:16:42.216557 4844 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74cnv\" (UniqueName: \"kubernetes.io/projected/6b89a5fa-2181-432a-a613-6bbeeb0f56bb-kube-api-access-74cnv\") pod \"ovsdbserver-sb-0\" (UID: \"6b89a5fa-2181-432a-a613-6bbeeb0f56bb\") " pod="openstack/ovsdbserver-sb-0" Jan 26 13:16:42 crc kubenswrapper[4844]: I0126 13:16:42.216606 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/490e8905-58e4-44a6-a4a4-ea873a5eaa94-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"490e8905-58e4-44a6-a4a4-ea873a5eaa94\") " pod="openstack/ovsdbserver-nb-0" Jan 26 13:16:42 crc kubenswrapper[4844]: I0126 13:16:42.216641 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6b89a5fa-2181-432a-a613-6bbeeb0f56bb-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"6b89a5fa-2181-432a-a613-6bbeeb0f56bb\") " pod="openstack/ovsdbserver-sb-0" Jan 26 13:16:42 crc kubenswrapper[4844]: I0126 13:16:42.216696 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/490e8905-58e4-44a6-a4a4-ea873a5eaa94-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"490e8905-58e4-44a6-a4a4-ea873a5eaa94\") " pod="openstack/ovsdbserver-nb-0" Jan 26 13:16:42 crc kubenswrapper[4844]: I0126 13:16:42.216753 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9pgpb\" (UniqueName: \"kubernetes.io/projected/490e8905-58e4-44a6-a4a4-ea873a5eaa94-kube-api-access-9pgpb\") pod \"ovsdbserver-nb-0\" (UID: \"490e8905-58e4-44a6-a4a4-ea873a5eaa94\") " pod="openstack/ovsdbserver-nb-0" Jan 26 13:16:42 crc kubenswrapper[4844]: I0126 13:16:42.216781 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/490e8905-58e4-44a6-a4a4-ea873a5eaa94-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"490e8905-58e4-44a6-a4a4-ea873a5eaa94\") " pod="openstack/ovsdbserver-nb-0" Jan 26 13:16:42 crc kubenswrapper[4844]: I0126 13:16:42.217067 4844 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-nb-0\" (UID: \"490e8905-58e4-44a6-a4a4-ea873a5eaa94\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/ovsdbserver-nb-0" Jan 26 13:16:42 crc kubenswrapper[4844]: I0126 13:16:42.217828 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b89a5fa-2181-432a-a613-6bbeeb0f56bb-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"6b89a5fa-2181-432a-a613-6bbeeb0f56bb\") " pod="openstack/ovsdbserver-sb-0" Jan 26 13:16:42 crc kubenswrapper[4844]: I0126 13:16:42.218488 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/490e8905-58e4-44a6-a4a4-ea873a5eaa94-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"490e8905-58e4-44a6-a4a4-ea873a5eaa94\") " pod="openstack/ovsdbserver-nb-0" Jan 26 13:16:42 crc kubenswrapper[4844]: I0126 13:16:42.219518 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/490e8905-58e4-44a6-a4a4-ea873a5eaa94-config\") pod \"ovsdbserver-nb-0\" (UID: 
\"490e8905-58e4-44a6-a4a4-ea873a5eaa94\") " pod="openstack/ovsdbserver-nb-0" Jan 26 13:16:42 crc kubenswrapper[4844]: I0126 13:16:42.223755 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/490e8905-58e4-44a6-a4a4-ea873a5eaa94-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"490e8905-58e4-44a6-a4a4-ea873a5eaa94\") " pod="openstack/ovsdbserver-nb-0" Jan 26 13:16:42 crc kubenswrapper[4844]: I0126 13:16:42.223963 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/490e8905-58e4-44a6-a4a4-ea873a5eaa94-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"490e8905-58e4-44a6-a4a4-ea873a5eaa94\") " pod="openstack/ovsdbserver-nb-0" Jan 26 13:16:42 crc kubenswrapper[4844]: I0126 13:16:42.224064 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/490e8905-58e4-44a6-a4a4-ea873a5eaa94-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"490e8905-58e4-44a6-a4a4-ea873a5eaa94\") " pod="openstack/ovsdbserver-nb-0" Jan 26 13:16:42 crc kubenswrapper[4844]: I0126 13:16:42.225976 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9fhfr"] Jan 26 13:16:42 crc kubenswrapper[4844]: I0126 13:16:42.230957 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/490e8905-58e4-44a6-a4a4-ea873a5eaa94-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"490e8905-58e4-44a6-a4a4-ea873a5eaa94\") " pod="openstack/ovsdbserver-nb-0" Jan 26 13:16:42 crc kubenswrapper[4844]: I0126 13:16:42.235545 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9pgpb\" (UniqueName: \"kubernetes.io/projected/490e8905-58e4-44a6-a4a4-ea873a5eaa94-kube-api-access-9pgpb\") pod \"ovsdbserver-nb-0\" (UID: \"490e8905-58e4-44a6-a4a4-ea873a5eaa94\") " pod="openstack/ovsdbserver-nb-0" Jan 26 13:16:42 crc kubenswrapper[4844]: I0126 13:16:42.236384 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9fhfr"] Jan 26 13:16:42 crc kubenswrapper[4844]: I0126 13:16:42.239679 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-nb-0\" (UID: \"490e8905-58e4-44a6-a4a4-ea873a5eaa94\") " pod="openstack/ovsdbserver-nb-0" Jan 26 13:16:42 crc kubenswrapper[4844]: I0126 13:16:42.297282 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 26 13:16:42 crc kubenswrapper[4844]: I0126 13:16:42.319418 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6b89a5fa-2181-432a-a613-6bbeeb0f56bb-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"6b89a5fa-2181-432a-a613-6bbeeb0f56bb\") " pod="openstack/ovsdbserver-sb-0" Jan 26 13:16:42 crc kubenswrapper[4844]: I0126 13:16:42.319490 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b89a5fa-2181-432a-a613-6bbeeb0f56bb-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"6b89a5fa-2181-432a-a613-6bbeeb0f56bb\") " pod="openstack/ovsdbserver-sb-0" Jan 26 13:16:42 crc kubenswrapper[4844]: I0126 13:16:42.319516 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74cnv\" (UniqueName: \"kubernetes.io/projected/6b89a5fa-2181-432a-a613-6bbeeb0f56bb-kube-api-access-74cnv\") pod \"ovsdbserver-sb-0\" (UID: \"6b89a5fa-2181-432a-a613-6bbeeb0f56bb\") " pod="openstack/ovsdbserver-sb-0" Jan 26 13:16:42 crc kubenswrapper[4844]: I0126 13:16:42.319553 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6b89a5fa-2181-432a-a613-6bbeeb0f56bb-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"6b89a5fa-2181-432a-a613-6bbeeb0f56bb\") " pod="openstack/ovsdbserver-sb-0" Jan 26 13:16:42 crc kubenswrapper[4844]: I0126 13:16:42.319639 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b89a5fa-2181-432a-a613-6bbeeb0f56bb-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"6b89a5fa-2181-432a-a613-6bbeeb0f56bb\") " pod="openstack/ovsdbserver-sb-0" Jan 26 13:16:42 crc kubenswrapper[4844]: I0126 13:16:42.319694 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b89a5fa-2181-432a-a613-6bbeeb0f56bb-config\") pod \"ovsdbserver-sb-0\" (UID: \"6b89a5fa-2181-432a-a613-6bbeeb0f56bb\") " pod="openstack/ovsdbserver-sb-0" Jan 26 13:16:42 crc kubenswrapper[4844]: I0126 13:16:42.319728 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-sb-0\" (UID: \"6b89a5fa-2181-432a-a613-6bbeeb0f56bb\") " pod="openstack/ovsdbserver-sb-0" Jan 26 13:16:42 crc kubenswrapper[4844]: I0126 13:16:42.319777 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b89a5fa-2181-432a-a613-6bbeeb0f56bb-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"6b89a5fa-2181-432a-a613-6bbeeb0f56bb\") " pod="openstack/ovsdbserver-sb-0" Jan 26 13:16:42 crc kubenswrapper[4844]: I0126 13:16:42.320011 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6b89a5fa-2181-432a-a613-6bbeeb0f56bb-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"6b89a5fa-2181-432a-a613-6bbeeb0f56bb\") " pod="openstack/ovsdbserver-sb-0" Jan 26 13:16:42 crc kubenswrapper[4844]: I0126 13:16:42.320127 4844 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") 
pod \"ovsdbserver-sb-0\" (UID: \"6b89a5fa-2181-432a-a613-6bbeeb0f56bb\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/ovsdbserver-sb-0" Jan 26 13:16:42 crc kubenswrapper[4844]: I0126 13:16:42.322168 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6b89a5fa-2181-432a-a613-6bbeeb0f56bb-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"6b89a5fa-2181-432a-a613-6bbeeb0f56bb\") " pod="openstack/ovsdbserver-sb-0" Jan 26 13:16:42 crc kubenswrapper[4844]: I0126 13:16:42.322374 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b89a5fa-2181-432a-a613-6bbeeb0f56bb-config\") pod \"ovsdbserver-sb-0\" (UID: \"6b89a5fa-2181-432a-a613-6bbeeb0f56bb\") " pod="openstack/ovsdbserver-sb-0" Jan 26 13:16:42 crc kubenswrapper[4844]: I0126 13:16:42.323986 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b89a5fa-2181-432a-a613-6bbeeb0f56bb-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"6b89a5fa-2181-432a-a613-6bbeeb0f56bb\") " pod="openstack/ovsdbserver-sb-0" Jan 26 13:16:42 crc kubenswrapper[4844]: I0126 13:16:42.325474 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b89a5fa-2181-432a-a613-6bbeeb0f56bb-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"6b89a5fa-2181-432a-a613-6bbeeb0f56bb\") " pod="openstack/ovsdbserver-sb-0" Jan 26 13:16:42 crc kubenswrapper[4844]: I0126 13:16:42.332298 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b89a5fa-2181-432a-a613-6bbeeb0f56bb-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"6b89a5fa-2181-432a-a613-6bbeeb0f56bb\") " pod="openstack/ovsdbserver-sb-0" Jan 26 13:16:42 crc kubenswrapper[4844]: I0126 13:16:42.334616 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74cnv\" (UniqueName: \"kubernetes.io/projected/6b89a5fa-2181-432a-a613-6bbeeb0f56bb-kube-api-access-74cnv\") pod \"ovsdbserver-sb-0\" (UID: \"6b89a5fa-2181-432a-a613-6bbeeb0f56bb\") " pod="openstack/ovsdbserver-sb-0" Jan 26 13:16:42 crc kubenswrapper[4844]: I0126 13:16:42.339877 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-sb-0\" (UID: \"6b89a5fa-2181-432a-a613-6bbeeb0f56bb\") " pod="openstack/ovsdbserver-sb-0" Jan 26 13:16:42 crc kubenswrapper[4844]: I0126 13:16:42.441970 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 26 13:16:43 crc kubenswrapper[4844]: I0126 13:16:43.325876 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a72d1a1-fc4b-451c-95f5-fe163e63e95d" path="/var/lib/kubelet/pods/1a72d1a1-fc4b-451c-95f5-fe163e63e95d/volumes" Jan 26 13:16:55 crc kubenswrapper[4844]: E0126 13:16:55.776687 4844 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.9:5001/podified-master-centos10/openstack-rabbitmq:watcher_latest" Jan 26 13:16:55 crc kubenswrapper[4844]: E0126 13:16:55.777065 4844 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.9:5001/podified-master-centos10/openstack-rabbitmq:watcher_latest" Jan 26 13:16:55 crc kubenswrapper[4844]: E0126 13:16:55.777181 4844 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:38.102.83.9:5001/podified-master-centos10/openstack-rabbitmq:watcher_latest,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c9bbr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-notifications-server-0_openstack(185637e1-efed-452c-ba52-7688909bad2c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 13:16:55 crc kubenswrapper[4844]: E0126 13:16:55.778515 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-notifications-server-0" podUID="185637e1-efed-452c-ba52-7688909bad2c" Jan 26 13:16:55 crc kubenswrapper[4844]: E0126 13:16:55.790038 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.9:5001/podified-master-centos10/openstack-rabbitmq:watcher_latest\\\"\"" pod="openstack/rabbitmq-notifications-server-0" podUID="185637e1-efed-452c-ba52-7688909bad2c" Jan 26 13:16:55 crc kubenswrapper[4844]: I0126 13:16:55.800923 4844 scope.go:117] "RemoveContainer" containerID="74b0199c206325bb06fce5efe35539b492787372777c0876d6e6795662ede299" Jan 26 13:16:55 crc kubenswrapper[4844]: E0126 13:16:55.922632 4844 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.9:5001/podified-master-centos10/openstack-rabbitmq:watcher_latest" Jan 26 13:16:55 crc kubenswrapper[4844]: E0126 13:16:55.922706 4844 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.9:5001/podified-master-centos10/openstack-rabbitmq:watcher_latest" Jan 26 13:16:55 crc kubenswrapper[4844]: E0126 13:16:55.922840 4844 
kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:38.102.83.9:5001/podified-master-centos10/openstack-rabbitmq:watcher_latest,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l4726,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(e48f1161-14d0-42c1-b6ac-bdb8bce26985): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 13:16:55 crc kubenswrapper[4844]: E0126 13:16:55.924067 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="e48f1161-14d0-42c1-b6ac-bdb8bce26985" Jan 26 13:16:55 crc kubenswrapper[4844]: E0126 13:16:55.992168 4844 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.9:5001/podified-master-centos10/openstack-rabbitmq:watcher_latest" Jan 26 13:16:55 crc kubenswrapper[4844]: E0126 13:16:55.992237 4844 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.9:5001/podified-master-centos10/openstack-rabbitmq:watcher_latest" Jan 26 13:16:55 crc kubenswrapper[4844]: E0126 13:16:55.992399 4844 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:38.102.83.9:5001/podified-master-centos10/openstack-rabbitmq:watcher_latest,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xffks,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(e8e36a62-9367-4c94-9aff-de8e6166af27): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 13:16:55 crc kubenswrapper[4844]: E0126 13:16:55.993667 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="e8e36a62-9367-4c94-9aff-de8e6166af27" Jan 26 13:16:56 crc kubenswrapper[4844]: E0126 
13:16:56.800420 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.9:5001/podified-master-centos10/openstack-rabbitmq:watcher_latest\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="e8e36a62-9367-4c94-9aff-de8e6166af27" Jan 26 13:16:56 crc kubenswrapper[4844]: E0126 13:16:56.800828 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.9:5001/podified-master-centos10/openstack-rabbitmq:watcher_latest\\\"\"" pod="openstack/rabbitmq-server-0" podUID="e48f1161-14d0-42c1-b6ac-bdb8bce26985" Jan 26 13:17:00 crc kubenswrapper[4844]: I0126 13:17:00.550794 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-vnff8"] Jan 26 13:17:00 crc kubenswrapper[4844]: E0126 13:17:00.850552 4844 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.9:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Jan 26 13:17:00 crc kubenswrapper[4844]: E0126 13:17:00.850922 4844 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.9:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Jan 26 13:17:00 crc kubenswrapper[4844]: E0126 13:17:00.851083 4844 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:38.102.83.9:5001/podified-master-centos10/openstack-neutron-server:watcher_latest,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w8mw8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
dnsmasq-dns-75f87779c-fqxxt_openstack(c1c80673-1b5a-43ca-9bf2-79762e902cd1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 13:17:00 crc kubenswrapper[4844]: E0126 13:17:00.852318 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-75f87779c-fqxxt" podUID="c1c80673-1b5a-43ca-9bf2-79762e902cd1" Jan 26 13:17:00 crc kubenswrapper[4844]: I0126 13:17:00.857801 4844 scope.go:117] "RemoveContainer" containerID="c178b7dd58fa440d5a2c1f87d63df18d4ee2a9cd1328a01cbdfc98db47f26831" Jan 26 13:17:00 crc kubenswrapper[4844]: E0126 13:17:00.894886 4844 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.9:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Jan 26 13:17:00 crc kubenswrapper[4844]: E0126 13:17:00.894941 4844 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.9:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Jan 26 13:17:00 crc kubenswrapper[4844]: E0126 13:17:00.895051 4844 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:38.102.83.9:5001/podified-master-centos10/openstack-neutron-server:watcher_latest,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5c7h56dh5cfh8bh54fhbbhf4h5b9hdch67fhd7h55fh55fh6ch9h548h54ch665h647h6h8fhd6h5dfh5cdh58bh577h66fh695h5fbh55h77h5fcq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vfljv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
dnsmasq-dns-6d9656c78f-bv48c_openstack(149ed01d-9763-4c6d-b17f-79b6e76b110f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 13:17:00 crc kubenswrapper[4844]: E0126 13:17:00.896402 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-6d9656c78f-bv48c" podUID="149ed01d-9763-4c6d-b17f-79b6e76b110f" Jan 26 13:17:00 crc kubenswrapper[4844]: W0126 13:17:00.908704 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6696649d_b30c_4ef9_beda_3cec75d656b4.slice/crio-d0ac62ef562e298db97e712ca8a83c3c613ec2ecd15592292f2fcfe11e5713ed WatchSource:0}: Error finding container d0ac62ef562e298db97e712ca8a83c3c613ec2ecd15592292f2fcfe11e5713ed: Status 404 returned error can't find the container with id d0ac62ef562e298db97e712ca8a83c3c613ec2ecd15592292f2fcfe11e5713ed Jan 26 13:17:00 crc kubenswrapper[4844]: E0126 13:17:00.920698 4844 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.9:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Jan 26 13:17:00 crc kubenswrapper[4844]: E0126 13:17:00.920735 4844 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.9:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Jan 26 13:17:00 crc kubenswrapper[4844]: E0126 13:17:00.920835 4844 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:38.102.83.9:5001/podified-master-centos10/openstack-neutron-server:watcher_latest,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jws49,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-586ffd88f7-b82rf_openstack(19db5512-9121-4f15-90a3-0ce718ae58d8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 13:17:00 crc kubenswrapper[4844]: E0126 13:17:00.922080 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-586ffd88f7-b82rf" podUID="19db5512-9121-4f15-90a3-0ce718ae58d8" Jan 26 13:17:00 crc kubenswrapper[4844]: E0126 13:17:00.956126 4844 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.9:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Jan 26 13:17:00 crc kubenswrapper[4844]: E0126 13:17:00.956172 4844 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.9:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Jan 26 13:17:00 crc kubenswrapper[4844]: E0126 13:17:00.956285 4844 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:38.102.83.9:5001/podified-master-centos10/openstack-neutron-server:watcher_latest,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sbrcs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-6bccbb886f-mstqs_openstack(35ee2046-1d54-4ff1-a512-060c6c8ad0a3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 13:17:00 crc kubenswrapper[4844]: E0126 13:17:00.958326 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-6bccbb886f-mstqs" podUID="35ee2046-1d54-4ff1-a512-060c6c8ad0a3" Jan 26 13:17:01 crc kubenswrapper[4844]: E0126 13:17:01.115762 4844 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.9:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Jan 26 13:17:01 crc kubenswrapper[4844]: E0126 13:17:01.116140 4844 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.9:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Jan 26 13:17:01 crc kubenswrapper[4844]: E0126 13:17:01.116295 4844 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:38.102.83.9:5001/podified-master-centos10/openstack-neutron-server:watcher_latest,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qqh8z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-559648544f-cwdch_openstack(58298af3-1f5e-464f-9af7-70f300b48267): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 13:17:01 crc kubenswrapper[4844]: E0126 13:17:01.117891 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-559648544f-cwdch" podUID="58298af3-1f5e-464f-9af7-70f300b48267" Jan 26 13:17:01 crc kubenswrapper[4844]: I0126 13:17:01.340015 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 26 13:17:01 crc kubenswrapper[4844]: I0126 13:17:01.340046 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 26 13:17:01 crc kubenswrapper[4844]: W0126 13:17:01.340711 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7e22ff40_cacd_405d_98f5_f603b17b4e4a.slice/crio-8918a36725639e3250d3e62f50ba6cbd1909958112a54cbf45174060b3b3cca2 WatchSource:0}: Error finding container 8918a36725639e3250d3e62f50ba6cbd1909958112a54cbf45174060b3b3cca2: Status 404 returned error can't find the container with id 8918a36725639e3250d3e62f50ba6cbd1909958112a54cbf45174060b3b3cca2 Jan 26 13:17:01 crc kubenswrapper[4844]: I0126 13:17:01.431254 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-bq8zv"] Jan 26 13:17:01 crc kubenswrapper[4844]: I0126 13:17:01.453815 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 26 13:17:01 crc kubenswrapper[4844]: I0126 13:17:01.461651 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/openstack-cell1-galera-0"] Jan 26 13:17:01 crc kubenswrapper[4844]: W0126 13:17:01.467313 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod88528049_6527_4f6d_b28f_9a7ca4d46cf8.slice/crio-731d3dbac1606921825c604d4df1600e99857b907e8bd41d74a970c3d2ab4fd8 WatchSource:0}: Error finding container 731d3dbac1606921825c604d4df1600e99857b907e8bd41d74a970c3d2ab4fd8: Status 404 returned error can't find the container with id 731d3dbac1606921825c604d4df1600e99857b907e8bd41d74a970c3d2ab4fd8 Jan 26 13:17:01 crc kubenswrapper[4844]: I0126 13:17:01.502916 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 26 13:17:01 crc kubenswrapper[4844]: W0126 13:17:01.521891 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod490e8905_58e4_44a6_a4a4_ea873a5eaa94.slice/crio-e1ce8075979100dc38767dfb1220e4d57310ded95b98340093f4ec9939f44671 WatchSource:0}: Error finding container e1ce8075979100dc38767dfb1220e4d57310ded95b98340093f4ec9939f44671: Status 404 returned error can't find the container with id e1ce8075979100dc38767dfb1220e4d57310ded95b98340093f4ec9939f44671 Jan 26 13:17:01 crc kubenswrapper[4844]: I0126 13:17:01.525353 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 26 13:17:01 crc kubenswrapper[4844]: I0126 13:17:01.844611 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"490e8905-58e4-44a6-a4a4-ea873a5eaa94","Type":"ContainerStarted","Data":"e1ce8075979100dc38767dfb1220e4d57310ded95b98340093f4ec9939f44671"} Jan 26 13:17:01 crc kubenswrapper[4844]: I0126 13:17:01.845737 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-bq8zv" event={"ID":"f0224b88-8aeb-4de5-b2d4-2d5f7b69cf8e","Type":"ContainerStarted","Data":"be58c42a0dbc1804f862fb72e44a26e9eda7c6aa2981c9fef4bf4882eb175f5f"} Jan 26 13:17:01 crc kubenswrapper[4844]: I0126 13:17:01.846766 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"f2bd5019-39c7-4b78-8610-4a7db01f5a85","Type":"ContainerStarted","Data":"8d56d798e9f429c180466b7f876f663355101b2795f1cf97606d4bc5b8056800"} Jan 26 13:17:01 crc kubenswrapper[4844]: I0126 13:17:01.847764 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-vnff8" event={"ID":"6696649d-b30c-4ef9-beda-3cec75d656b4","Type":"ContainerStarted","Data":"d0ac62ef562e298db97e712ca8a83c3c613ec2ecd15592292f2fcfe11e5713ed"} Jan 26 13:17:01 crc kubenswrapper[4844]: I0126 13:17:01.850608 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11","Type":"ContainerStarted","Data":"b446a782f03336c11f7449e8a89ce8fd5473e575977f9e7fc3903436d89c7f9b"} Jan 26 13:17:01 crc kubenswrapper[4844]: I0126 13:17:01.851807 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"f80a52fc-df6a-4218-913e-2ee03174e341","Type":"ContainerStarted","Data":"66fb5d508a0ab56e33d63ee2efe53654851d65c167eb1b6264b9dc0e9a6a800e"} Jan 26 13:17:01 crc kubenswrapper[4844]: I0126 13:17:01.852925 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" 
event={"ID":"7e22ff40-cacd-405d-98f5-f603b17b4e4a","Type":"ContainerStarted","Data":"8918a36725639e3250d3e62f50ba6cbd1909958112a54cbf45174060b3b3cca2"} Jan 26 13:17:01 crc kubenswrapper[4844]: I0126 13:17:01.854169 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"88528049-6527-4f6d-b28f-9a7ca4d46cf8","Type":"ContainerStarted","Data":"731d3dbac1606921825c604d4df1600e99857b907e8bd41d74a970c3d2ab4fd8"} Jan 26 13:17:01 crc kubenswrapper[4844]: E0126 13:17:01.855992 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.9:5001/podified-master-centos10/openstack-neutron-server:watcher_latest\\\"\"" pod="openstack/dnsmasq-dns-559648544f-cwdch" podUID="58298af3-1f5e-464f-9af7-70f300b48267" Jan 26 13:17:01 crc kubenswrapper[4844]: E0126 13:17:01.856063 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.9:5001/podified-master-centos10/openstack-neutron-server:watcher_latest\\\"\"" pod="openstack/dnsmasq-dns-6d9656c78f-bv48c" podUID="149ed01d-9763-4c6d-b17f-79b6e76b110f" Jan 26 13:17:02 crc kubenswrapper[4844]: I0126 13:17:02.101162 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 26 13:17:02 crc kubenswrapper[4844]: W0126 13:17:02.129696 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6b89a5fa_2181_432a_a613_6bbeeb0f56bb.slice/crio-31255b2f29702c419bcc4a27178f22db5744aa480c28c936277b80d367af3ee4 WatchSource:0}: Error finding container 31255b2f29702c419bcc4a27178f22db5744aa480c28c936277b80d367af3ee4: Status 404 returned error can't find the container with id 31255b2f29702c419bcc4a27178f22db5744aa480c28c936277b80d367af3ee4 Jan 26 13:17:02 crc kubenswrapper[4844]: I0126 13:17:02.263575 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-586ffd88f7-b82rf" Jan 26 13:17:02 crc kubenswrapper[4844]: I0126 13:17:02.271484 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bccbb886f-mstqs" Jan 26 13:17:02 crc kubenswrapper[4844]: I0126 13:17:02.282188 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75f87779c-fqxxt" Jan 26 13:17:02 crc kubenswrapper[4844]: I0126 13:17:02.461986 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w8mw8\" (UniqueName: \"kubernetes.io/projected/c1c80673-1b5a-43ca-9bf2-79762e902cd1-kube-api-access-w8mw8\") pod \"c1c80673-1b5a-43ca-9bf2-79762e902cd1\" (UID: \"c1c80673-1b5a-43ca-9bf2-79762e902cd1\") " Jan 26 13:17:02 crc kubenswrapper[4844]: I0126 13:17:02.462057 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jws49\" (UniqueName: \"kubernetes.io/projected/19db5512-9121-4f15-90a3-0ce718ae58d8-kube-api-access-jws49\") pod \"19db5512-9121-4f15-90a3-0ce718ae58d8\" (UID: \"19db5512-9121-4f15-90a3-0ce718ae58d8\") " Jan 26 13:17:02 crc kubenswrapper[4844]: I0126 13:17:02.462143 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1c80673-1b5a-43ca-9bf2-79762e902cd1-config\") pod \"c1c80673-1b5a-43ca-9bf2-79762e902cd1\" (UID: \"c1c80673-1b5a-43ca-9bf2-79762e902cd1\") " Jan 26 13:17:02 crc kubenswrapper[4844]: I0126 13:17:02.462167 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/19db5512-9121-4f15-90a3-0ce718ae58d8-dns-svc\") pod \"19db5512-9121-4f15-90a3-0ce718ae58d8\" (UID: \"19db5512-9121-4f15-90a3-0ce718ae58d8\") " Jan 26 13:17:02 crc kubenswrapper[4844]: I0126 13:17:02.462191 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbrcs\" (UniqueName: \"kubernetes.io/projected/35ee2046-1d54-4ff1-a512-060c6c8ad0a3-kube-api-access-sbrcs\") pod \"35ee2046-1d54-4ff1-a512-060c6c8ad0a3\" (UID: \"35ee2046-1d54-4ff1-a512-060c6c8ad0a3\") " Jan 26 13:17:02 crc kubenswrapper[4844]: I0126 13:17:02.462251 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19db5512-9121-4f15-90a3-0ce718ae58d8-config\") pod \"19db5512-9121-4f15-90a3-0ce718ae58d8\" (UID: \"19db5512-9121-4f15-90a3-0ce718ae58d8\") " Jan 26 13:17:02 crc kubenswrapper[4844]: I0126 13:17:02.462273 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/35ee2046-1d54-4ff1-a512-060c6c8ad0a3-dns-svc\") pod \"35ee2046-1d54-4ff1-a512-060c6c8ad0a3\" (UID: \"35ee2046-1d54-4ff1-a512-060c6c8ad0a3\") " Jan 26 13:17:02 crc kubenswrapper[4844]: I0126 13:17:02.462326 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35ee2046-1d54-4ff1-a512-060c6c8ad0a3-config\") pod \"35ee2046-1d54-4ff1-a512-060c6c8ad0a3\" (UID: \"35ee2046-1d54-4ff1-a512-060c6c8ad0a3\") " Jan 26 13:17:02 crc kubenswrapper[4844]: I0126 13:17:02.462939 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19db5512-9121-4f15-90a3-0ce718ae58d8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "19db5512-9121-4f15-90a3-0ce718ae58d8" (UID: "19db5512-9121-4f15-90a3-0ce718ae58d8"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:17:02 crc kubenswrapper[4844]: I0126 13:17:02.463168 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1c80673-1b5a-43ca-9bf2-79762e902cd1-config" (OuterVolumeSpecName: "config") pod "c1c80673-1b5a-43ca-9bf2-79762e902cd1" (UID: "c1c80673-1b5a-43ca-9bf2-79762e902cd1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:17:02 crc kubenswrapper[4844]: I0126 13:17:02.463416 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/35ee2046-1d54-4ff1-a512-060c6c8ad0a3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "35ee2046-1d54-4ff1-a512-060c6c8ad0a3" (UID: "35ee2046-1d54-4ff1-a512-060c6c8ad0a3"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:17:02 crc kubenswrapper[4844]: I0126 13:17:02.464125 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19db5512-9121-4f15-90a3-0ce718ae58d8-config" (OuterVolumeSpecName: "config") pod "19db5512-9121-4f15-90a3-0ce718ae58d8" (UID: "19db5512-9121-4f15-90a3-0ce718ae58d8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:17:02 crc kubenswrapper[4844]: I0126 13:17:02.464568 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/35ee2046-1d54-4ff1-a512-060c6c8ad0a3-config" (OuterVolumeSpecName: "config") pod "35ee2046-1d54-4ff1-a512-060c6c8ad0a3" (UID: "35ee2046-1d54-4ff1-a512-060c6c8ad0a3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:17:02 crc kubenswrapper[4844]: I0126 13:17:02.468331 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35ee2046-1d54-4ff1-a512-060c6c8ad0a3-kube-api-access-sbrcs" (OuterVolumeSpecName: "kube-api-access-sbrcs") pod "35ee2046-1d54-4ff1-a512-060c6c8ad0a3" (UID: "35ee2046-1d54-4ff1-a512-060c6c8ad0a3"). InnerVolumeSpecName "kube-api-access-sbrcs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:17:02 crc kubenswrapper[4844]: I0126 13:17:02.468427 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19db5512-9121-4f15-90a3-0ce718ae58d8-kube-api-access-jws49" (OuterVolumeSpecName: "kube-api-access-jws49") pod "19db5512-9121-4f15-90a3-0ce718ae58d8" (UID: "19db5512-9121-4f15-90a3-0ce718ae58d8"). InnerVolumeSpecName "kube-api-access-jws49". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:17:02 crc kubenswrapper[4844]: I0126 13:17:02.468414 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1c80673-1b5a-43ca-9bf2-79762e902cd1-kube-api-access-w8mw8" (OuterVolumeSpecName: "kube-api-access-w8mw8") pod "c1c80673-1b5a-43ca-9bf2-79762e902cd1" (UID: "c1c80673-1b5a-43ca-9bf2-79762e902cd1"). InnerVolumeSpecName "kube-api-access-w8mw8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:17:02 crc kubenswrapper[4844]: I0126 13:17:02.563643 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jws49\" (UniqueName: \"kubernetes.io/projected/19db5512-9121-4f15-90a3-0ce718ae58d8-kube-api-access-jws49\") on node \"crc\" DevicePath \"\"" Jan 26 13:17:02 crc kubenswrapper[4844]: I0126 13:17:02.563676 4844 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1c80673-1b5a-43ca-9bf2-79762e902cd1-config\") on node \"crc\" DevicePath \"\"" Jan 26 13:17:02 crc kubenswrapper[4844]: I0126 13:17:02.563686 4844 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/19db5512-9121-4f15-90a3-0ce718ae58d8-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 13:17:02 crc kubenswrapper[4844]: I0126 13:17:02.563696 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sbrcs\" (UniqueName: \"kubernetes.io/projected/35ee2046-1d54-4ff1-a512-060c6c8ad0a3-kube-api-access-sbrcs\") on node \"crc\" DevicePath \"\"" Jan 26 13:17:02 crc kubenswrapper[4844]: I0126 13:17:02.563704 4844 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19db5512-9121-4f15-90a3-0ce718ae58d8-config\") on node \"crc\" DevicePath \"\"" Jan 26 13:17:02 crc kubenswrapper[4844]: I0126 13:17:02.563712 4844 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/35ee2046-1d54-4ff1-a512-060c6c8ad0a3-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 13:17:02 crc kubenswrapper[4844]: I0126 13:17:02.563721 4844 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35ee2046-1d54-4ff1-a512-060c6c8ad0a3-config\") on node \"crc\" DevicePath \"\"" Jan 26 13:17:02 crc kubenswrapper[4844]: I0126 13:17:02.563729 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w8mw8\" (UniqueName: \"kubernetes.io/projected/c1c80673-1b5a-43ca-9bf2-79762e902cd1-kube-api-access-w8mw8\") on node \"crc\" DevicePath \"\"" Jan 26 13:17:02 crc kubenswrapper[4844]: I0126 13:17:02.861549 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586ffd88f7-b82rf" event={"ID":"19db5512-9121-4f15-90a3-0ce718ae58d8","Type":"ContainerDied","Data":"b8bf793ee2f4ebd01722d4337549c374b51886fdcbe33117c24eac3faf38beed"} Jan 26 13:17:02 crc kubenswrapper[4844]: I0126 13:17:02.861638 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-586ffd88f7-b82rf" Jan 26 13:17:02 crc kubenswrapper[4844]: I0126 13:17:02.866703 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bccbb886f-mstqs" event={"ID":"35ee2046-1d54-4ff1-a512-060c6c8ad0a3","Type":"ContainerDied","Data":"6905b2281da11493ad02c2ca2b173bc9dcd71916fd676f45cb8f312ac28c91e5"} Jan 26 13:17:02 crc kubenswrapper[4844]: I0126 13:17:02.866749 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bccbb886f-mstqs" Jan 26 13:17:02 crc kubenswrapper[4844]: I0126 13:17:02.867578 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75f87779c-fqxxt" event={"ID":"c1c80673-1b5a-43ca-9bf2-79762e902cd1","Type":"ContainerDied","Data":"322587710b612c4317cf0196ab7020ef2a5dec4137ea76001a95ad75fff634ef"} Jan 26 13:17:02 crc kubenswrapper[4844]: I0126 13:17:02.867667 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75f87779c-fqxxt" Jan 26 13:17:02 crc kubenswrapper[4844]: I0126 13:17:02.869461 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"6b89a5fa-2181-432a-a613-6bbeeb0f56bb","Type":"ContainerStarted","Data":"31255b2f29702c419bcc4a27178f22db5744aa480c28c936277b80d367af3ee4"} Jan 26 13:17:02 crc kubenswrapper[4844]: I0126 13:17:02.922221 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-586ffd88f7-b82rf"] Jan 26 13:17:02 crc kubenswrapper[4844]: I0126 13:17:02.939522 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-586ffd88f7-b82rf"] Jan 26 13:17:02 crc kubenswrapper[4844]: I0126 13:17:02.953761 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75f87779c-fqxxt"] Jan 26 13:17:02 crc kubenswrapper[4844]: I0126 13:17:02.964225 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-75f87779c-fqxxt"] Jan 26 13:17:02 crc kubenswrapper[4844]: I0126 13:17:02.973493 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bccbb886f-mstqs"] Jan 26 13:17:02 crc kubenswrapper[4844]: I0126 13:17:02.978014 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6bccbb886f-mstqs"] Jan 26 13:17:03 crc kubenswrapper[4844]: I0126 13:17:03.345672 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19db5512-9121-4f15-90a3-0ce718ae58d8" path="/var/lib/kubelet/pods/19db5512-9121-4f15-90a3-0ce718ae58d8/volumes" Jan 26 13:17:03 crc kubenswrapper[4844]: I0126 13:17:03.346359 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35ee2046-1d54-4ff1-a512-060c6c8ad0a3" path="/var/lib/kubelet/pods/35ee2046-1d54-4ff1-a512-060c6c8ad0a3/volumes" Jan 26 13:17:03 crc kubenswrapper[4844]: I0126 13:17:03.348170 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1c80673-1b5a-43ca-9bf2-79762e902cd1" path="/var/lib/kubelet/pods/c1c80673-1b5a-43ca-9bf2-79762e902cd1/volumes" Jan 26 13:17:06 crc kubenswrapper[4844]: I0126 13:17:06.365004 4844 patch_prober.go:28] interesting pod/machine-config-daemon-j7r9j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 13:17:06 crc kubenswrapper[4844]: I0126 13:17:06.365443 4844 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 13:17:07 crc kubenswrapper[4844]: I0126 13:17:07.909750 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" 
event={"ID":"f2bd5019-39c7-4b78-8610-4a7db01f5a85","Type":"ContainerStarted","Data":"ef9a55232aa4897370455cbd1c55585dfdd9de6fc1046607b3053baf0e0f1d9f"} Jan 26 13:17:07 crc kubenswrapper[4844]: I0126 13:17:07.910081 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 26 13:17:07 crc kubenswrapper[4844]: I0126 13:17:07.912003 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-vnff8" event={"ID":"6696649d-b30c-4ef9-beda-3cec75d656b4","Type":"ContainerStarted","Data":"5f6c78dc490b879fc068fe1e9dd9035b63464472d8dbebaedb37fefb41d2ec01"} Jan 26 13:17:07 crc kubenswrapper[4844]: I0126 13:17:07.912405 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-vnff8" Jan 26 13:17:07 crc kubenswrapper[4844]: I0126 13:17:07.913454 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"490e8905-58e4-44a6-a4a4-ea873a5eaa94","Type":"ContainerStarted","Data":"1b4cce5f2912dab64bfb47aa8f1336a45d201785a9b7cbf2adedf8295349dca6"} Jan 26 13:17:07 crc kubenswrapper[4844]: I0126 13:17:07.914835 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"f80a52fc-df6a-4218-913e-2ee03174e341","Type":"ContainerStarted","Data":"ad30b2f35c00274a9cb5332926d9a620288947686eeaf365aa01f54ebc077105"} Jan 26 13:17:07 crc kubenswrapper[4844]: I0126 13:17:07.915959 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"7e22ff40-cacd-405d-98f5-f603b17b4e4a","Type":"ContainerStarted","Data":"43f7ef1eb64021b35d8a066e444b1bb3aed3c77da5431e8dfe173c305336260c"} Jan 26 13:17:07 crc kubenswrapper[4844]: I0126 13:17:07.917045 4844 generic.go:334] "Generic (PLEG): container finished" podID="f0224b88-8aeb-4de5-b2d4-2d5f7b69cf8e" containerID="547054bb9e23c7b3284936a97d2b6110f97bec8826523a37b42563133a8caf2b" exitCode=0 Jan 26 13:17:07 crc kubenswrapper[4844]: I0126 13:17:07.917070 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-bq8zv" event={"ID":"f0224b88-8aeb-4de5-b2d4-2d5f7b69cf8e","Type":"ContainerDied","Data":"547054bb9e23c7b3284936a97d2b6110f97bec8826523a37b42563133a8caf2b"} Jan 26 13:17:07 crc kubenswrapper[4844]: I0126 13:17:07.935179 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=31.096342779 podStartE2EDuration="35.935162279s" podCreationTimestamp="2026-01-26 13:16:32 +0000 UTC" firstStartedPulling="2026-01-26 13:17:01.465453341 +0000 UTC m=+1998.398820953" lastFinishedPulling="2026-01-26 13:17:06.304272841 +0000 UTC m=+2003.237640453" observedRunningTime="2026-01-26 13:17:07.931552433 +0000 UTC m=+2004.864920065" watchObservedRunningTime="2026-01-26 13:17:07.935162279 +0000 UTC m=+2004.868529891" Jan 26 13:17:08 crc kubenswrapper[4844]: I0126 13:17:08.013640 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-vnff8" podStartSLOduration=23.623409001 podStartE2EDuration="29.013614641s" podCreationTimestamp="2026-01-26 13:16:39 +0000 UTC" firstStartedPulling="2026-01-26 13:17:00.914546234 +0000 UTC m=+1997.847913846" lastFinishedPulling="2026-01-26 13:17:06.304751874 +0000 UTC m=+2003.238119486" observedRunningTime="2026-01-26 13:17:08.00646466 +0000 UTC m=+2004.939832292" watchObservedRunningTime="2026-01-26 13:17:08.013614641 +0000 UTC m=+2004.946982253" Jan 26 13:17:08 crc kubenswrapper[4844]: I0126 
13:17:08.928849 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-bq8zv" event={"ID":"f0224b88-8aeb-4de5-b2d4-2d5f7b69cf8e","Type":"ContainerStarted","Data":"ba61a9aeadce65525844673d60dcafa5d9848cc16b6c971b5ce74fbe0dc340c8"} Jan 26 13:17:08 crc kubenswrapper[4844]: I0126 13:17:08.930459 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"88528049-6527-4f6d-b28f-9a7ca4d46cf8","Type":"ContainerStarted","Data":"3526d27446b4d5bda5b69b0697e58a4e33ba2861c8a717975bb8d0d5d52e0b77"} Jan 26 13:17:08 crc kubenswrapper[4844]: I0126 13:17:08.930619 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 26 13:17:08 crc kubenswrapper[4844]: I0126 13:17:08.933216 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"6b89a5fa-2181-432a-a613-6bbeeb0f56bb","Type":"ContainerStarted","Data":"8e19db97a3b3b4a7aa38074e94b570a2224174e5f209209d4bdc10a7fa4e7c6d"} Jan 26 13:17:08 crc kubenswrapper[4844]: I0126 13:17:08.948973 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=28.316746534 podStartE2EDuration="34.948949491s" podCreationTimestamp="2026-01-26 13:16:34 +0000 UTC" firstStartedPulling="2026-01-26 13:17:01.485055831 +0000 UTC m=+1998.418423443" lastFinishedPulling="2026-01-26 13:17:08.117258778 +0000 UTC m=+2005.050626400" observedRunningTime="2026-01-26 13:17:08.943291926 +0000 UTC m=+2005.876659548" watchObservedRunningTime="2026-01-26 13:17:08.948949491 +0000 UTC m=+2005.882317103" Jan 26 13:17:09 crc kubenswrapper[4844]: I0126 13:17:09.942669 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11","Type":"ContainerStarted","Data":"812b8a02174bdb9d9317991bd7d045861aa6c7f61eafb34caa41e709bbbe6d17"} Jan 26 13:17:09 crc kubenswrapper[4844]: I0126 13:17:09.948026 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-bq8zv" event={"ID":"f0224b88-8aeb-4de5-b2d4-2d5f7b69cf8e","Type":"ContainerStarted","Data":"be2ca79453fa4e088a90ba7ea10cbca7c41d34ed07ebfadd6cb7933c858fb4d0"} Jan 26 13:17:09 crc kubenswrapper[4844]: I0126 13:17:09.948086 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-bq8zv" Jan 26 13:17:09 crc kubenswrapper[4844]: I0126 13:17:09.948122 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-bq8zv" Jan 26 13:17:10 crc kubenswrapper[4844]: I0126 13:17:10.002524 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-bq8zv" podStartSLOduration=26.171132036 podStartE2EDuration="31.002505338s" podCreationTimestamp="2026-01-26 13:16:39 +0000 UTC" firstStartedPulling="2026-01-26 13:17:01.472416258 +0000 UTC m=+1998.405783870" lastFinishedPulling="2026-01-26 13:17:06.30378956 +0000 UTC m=+2003.237157172" observedRunningTime="2026-01-26 13:17:09.998100103 +0000 UTC m=+2006.931467715" watchObservedRunningTime="2026-01-26 13:17:10.002505338 +0000 UTC m=+2006.935872970" Jan 26 13:17:11 crc kubenswrapper[4844]: I0126 13:17:11.968911 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" 
event={"ID":"6b89a5fa-2181-432a-a613-6bbeeb0f56bb","Type":"ContainerStarted","Data":"cd92933a6d7998abb2f7391ca6305c593a4a3cde7e42e1556e9dbf7c430faa54"} Jan 26 13:17:11 crc kubenswrapper[4844]: I0126 13:17:11.973106 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"490e8905-58e4-44a6-a4a4-ea873a5eaa94","Type":"ContainerStarted","Data":"4e18a217994a49503934beb67dc724230a060205655ff4dbe1b65413c4f75e31"} Jan 26 13:17:11 crc kubenswrapper[4844]: I0126 13:17:11.975217 4844 generic.go:334] "Generic (PLEG): container finished" podID="f80a52fc-df6a-4218-913e-2ee03174e341" containerID="ad30b2f35c00274a9cb5332926d9a620288947686eeaf365aa01f54ebc077105" exitCode=0 Jan 26 13:17:11 crc kubenswrapper[4844]: I0126 13:17:11.975301 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"f80a52fc-df6a-4218-913e-2ee03174e341","Type":"ContainerDied","Data":"ad30b2f35c00274a9cb5332926d9a620288947686eeaf365aa01f54ebc077105"} Jan 26 13:17:11 crc kubenswrapper[4844]: I0126 13:17:11.976917 4844 generic.go:334] "Generic (PLEG): container finished" podID="7e22ff40-cacd-405d-98f5-f603b17b4e4a" containerID="43f7ef1eb64021b35d8a066e444b1bb3aed3c77da5431e8dfe173c305336260c" exitCode=0 Jan 26 13:17:11 crc kubenswrapper[4844]: I0126 13:17:11.977011 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"7e22ff40-cacd-405d-98f5-f603b17b4e4a","Type":"ContainerDied","Data":"43f7ef1eb64021b35d8a066e444b1bb3aed3c77da5431e8dfe173c305336260c"} Jan 26 13:17:12 crc kubenswrapper[4844]: I0126 13:17:12.019740 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=22.324684028 podStartE2EDuration="31.019710395s" podCreationTimestamp="2026-01-26 13:16:41 +0000 UTC" firstStartedPulling="2026-01-26 13:17:02.147112585 +0000 UTC m=+1999.080480197" lastFinishedPulling="2026-01-26 13:17:10.842138952 +0000 UTC m=+2007.775506564" observedRunningTime="2026-01-26 13:17:12.005504724 +0000 UTC m=+2008.938872416" watchObservedRunningTime="2026-01-26 13:17:12.019710395 +0000 UTC m=+2008.953078047" Jan 26 13:17:12 crc kubenswrapper[4844]: I0126 13:17:12.041150 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=22.782315115 podStartE2EDuration="32.041120579s" podCreationTimestamp="2026-01-26 13:16:40 +0000 UTC" firstStartedPulling="2026-01-26 13:17:01.524299642 +0000 UTC m=+1998.457667254" lastFinishedPulling="2026-01-26 13:17:10.783105106 +0000 UTC m=+2007.716472718" observedRunningTime="2026-01-26 13:17:12.037380719 +0000 UTC m=+2008.970748371" watchObservedRunningTime="2026-01-26 13:17:12.041120579 +0000 UTC m=+2008.974488231" Jan 26 13:17:12 crc kubenswrapper[4844]: I0126 13:17:12.297823 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 26 13:17:12 crc kubenswrapper[4844]: I0126 13:17:12.298637 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 26 13:17:12 crc kubenswrapper[4844]: I0126 13:17:12.353507 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 26 13:17:12 crc kubenswrapper[4844]: I0126 13:17:12.442548 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 26 13:17:12 crc kubenswrapper[4844]: I0126 13:17:12.442606 4844 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 26 13:17:12 crc kubenswrapper[4844]: I0126 13:17:12.500524 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 26 13:17:12 crc kubenswrapper[4844]: I0126 13:17:12.986740 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e8e36a62-9367-4c94-9aff-de8e6166af27","Type":"ContainerStarted","Data":"8037333977f59346e11bb0d4d8078b561374ca9115b317429eb3ea0e2a3fc400"} Jan 26 13:17:12 crc kubenswrapper[4844]: I0126 13:17:12.989278 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e48f1161-14d0-42c1-b6ac-bdb8bce26985","Type":"ContainerStarted","Data":"438ed061427135c543fb34c1f5a9679a2e6315a4f3935f61296d309523cd31e0"} Jan 26 13:17:12 crc kubenswrapper[4844]: I0126 13:17:12.991529 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-notifications-server-0" event={"ID":"185637e1-efed-452c-ba52-7688909bad2c","Type":"ContainerStarted","Data":"b9ba7092d058ca611541e96848fae9ae6e472b992eb4b97bdb6a21e93a6ff189"} Jan 26 13:17:12 crc kubenswrapper[4844]: I0126 13:17:12.994035 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"f80a52fc-df6a-4218-913e-2ee03174e341","Type":"ContainerStarted","Data":"50b7abd0fbb4bd4fb37dda745434aca0dc17b82dd8a36c54d6da9c91d8150c0f"} Jan 26 13:17:12 crc kubenswrapper[4844]: I0126 13:17:12.996365 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"7e22ff40-cacd-405d-98f5-f603b17b4e4a","Type":"ContainerStarted","Data":"05ea29d864e4ec2d97a7ee1ef84368f07a3ad9be7eb12b23fba181b731467e78"} Jan 26 13:17:13 crc kubenswrapper[4844]: I0126 13:17:13.022828 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 26 13:17:13 crc kubenswrapper[4844]: I0126 13:17:13.052674 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=38.091055131 podStartE2EDuration="43.052652297s" podCreationTimestamp="2026-01-26 13:16:30 +0000 UTC" firstStartedPulling="2026-01-26 13:17:01.342891271 +0000 UTC m=+1998.276258883" lastFinishedPulling="2026-01-26 13:17:06.304488437 +0000 UTC m=+2003.237856049" observedRunningTime="2026-01-26 13:17:13.042911923 +0000 UTC m=+2009.976279545" watchObservedRunningTime="2026-01-26 13:17:13.052652297 +0000 UTC m=+2009.986019909" Jan 26 13:17:13 crc kubenswrapper[4844]: I0126 13:17:13.073631 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 26 13:17:13 crc kubenswrapper[4844]: I0126 13:17:13.094279 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=37.264515054 podStartE2EDuration="42.094251756s" podCreationTimestamp="2026-01-26 13:16:31 +0000 UTC" firstStartedPulling="2026-01-26 13:17:01.474051848 +0000 UTC m=+1998.407419460" lastFinishedPulling="2026-01-26 13:17:06.30378855 +0000 UTC m=+2003.237156162" observedRunningTime="2026-01-26 13:17:13.080005634 +0000 UTC m=+2010.013373316" watchObservedRunningTime="2026-01-26 13:17:13.094251756 +0000 UTC m=+2010.027619408" Jan 26 13:17:13 crc kubenswrapper[4844]: I0126 13:17:13.463771 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-559648544f-cwdch"] Jan 26 13:17:13 crc kubenswrapper[4844]: I0126 13:17:13.482204 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-wnqpc"] Jan 26 13:17:13 crc kubenswrapper[4844]: I0126 13:17:13.483421 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-wnqpc" Jan 26 13:17:13 crc kubenswrapper[4844]: I0126 13:17:13.491128 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 26 13:17:13 crc kubenswrapper[4844]: I0126 13:17:13.495264 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-wnqpc"] Jan 26 13:17:13 crc kubenswrapper[4844]: I0126 13:17:13.513268 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c5d5c9f8f-9m69m"] Jan 26 13:17:13 crc kubenswrapper[4844]: I0126 13:17:13.514556 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c5d5c9f8f-9m69m" Jan 26 13:17:13 crc kubenswrapper[4844]: I0126 13:17:13.519533 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 26 13:17:13 crc kubenswrapper[4844]: I0126 13:17:13.521704 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c5d5c9f8f-9m69m"] Jan 26 13:17:13 crc kubenswrapper[4844]: I0126 13:17:13.564177 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77361a0b-a3eb-49da-971b-705eca5894eb-config\") pod \"ovn-controller-metrics-wnqpc\" (UID: \"77361a0b-a3eb-49da-971b-705eca5894eb\") " pod="openstack/ovn-controller-metrics-wnqpc" Jan 26 13:17:13 crc kubenswrapper[4844]: I0126 13:17:13.564251 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/77361a0b-a3eb-49da-971b-705eca5894eb-ovs-rundir\") pod \"ovn-controller-metrics-wnqpc\" (UID: \"77361a0b-a3eb-49da-971b-705eca5894eb\") " pod="openstack/ovn-controller-metrics-wnqpc" Jan 26 13:17:13 crc kubenswrapper[4844]: I0126 13:17:13.564280 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/77361a0b-a3eb-49da-971b-705eca5894eb-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-wnqpc\" (UID: \"77361a0b-a3eb-49da-971b-705eca5894eb\") " pod="openstack/ovn-controller-metrics-wnqpc" Jan 26 13:17:13 crc kubenswrapper[4844]: I0126 13:17:13.564441 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77361a0b-a3eb-49da-971b-705eca5894eb-combined-ca-bundle\") pod \"ovn-controller-metrics-wnqpc\" (UID: \"77361a0b-a3eb-49da-971b-705eca5894eb\") " pod="openstack/ovn-controller-metrics-wnqpc" Jan 26 13:17:13 crc kubenswrapper[4844]: I0126 13:17:13.564545 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/77361a0b-a3eb-49da-971b-705eca5894eb-ovn-rundir\") pod \"ovn-controller-metrics-wnqpc\" (UID: \"77361a0b-a3eb-49da-971b-705eca5894eb\") " pod="openstack/ovn-controller-metrics-wnqpc" Jan 26 13:17:13 crc kubenswrapper[4844]: I0126 13:17:13.564577 4844 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hw67t\" (UniqueName: \"kubernetes.io/projected/77361a0b-a3eb-49da-971b-705eca5894eb-kube-api-access-hw67t\") pod \"ovn-controller-metrics-wnqpc\" (UID: \"77361a0b-a3eb-49da-971b-705eca5894eb\") " pod="openstack/ovn-controller-metrics-wnqpc" Jan 26 13:17:13 crc kubenswrapper[4844]: I0126 13:17:13.666323 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77361a0b-a3eb-49da-971b-705eca5894eb-combined-ca-bundle\") pod \"ovn-controller-metrics-wnqpc\" (UID: \"77361a0b-a3eb-49da-971b-705eca5894eb\") " pod="openstack/ovn-controller-metrics-wnqpc" Jan 26 13:17:13 crc kubenswrapper[4844]: I0126 13:17:13.666401 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73dd3353-ef91-44cb-8772-fc2c7426c367-config\") pod \"dnsmasq-dns-5c5d5c9f8f-9m69m\" (UID: \"73dd3353-ef91-44cb-8772-fc2c7426c367\") " pod="openstack/dnsmasq-dns-5c5d5c9f8f-9m69m" Jan 26 13:17:13 crc kubenswrapper[4844]: I0126 13:17:13.666455 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/73dd3353-ef91-44cb-8772-fc2c7426c367-dns-svc\") pod \"dnsmasq-dns-5c5d5c9f8f-9m69m\" (UID: \"73dd3353-ef91-44cb-8772-fc2c7426c367\") " pod="openstack/dnsmasq-dns-5c5d5c9f8f-9m69m" Jan 26 13:17:13 crc kubenswrapper[4844]: I0126 13:17:13.666497 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/77361a0b-a3eb-49da-971b-705eca5894eb-ovn-rundir\") pod \"ovn-controller-metrics-wnqpc\" (UID: \"77361a0b-a3eb-49da-971b-705eca5894eb\") " pod="openstack/ovn-controller-metrics-wnqpc" Jan 26 13:17:13 crc kubenswrapper[4844]: I0126 13:17:13.666525 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hw67t\" (UniqueName: \"kubernetes.io/projected/77361a0b-a3eb-49da-971b-705eca5894eb-kube-api-access-hw67t\") pod \"ovn-controller-metrics-wnqpc\" (UID: \"77361a0b-a3eb-49da-971b-705eca5894eb\") " pod="openstack/ovn-controller-metrics-wnqpc" Jan 26 13:17:13 crc kubenswrapper[4844]: I0126 13:17:13.666559 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77361a0b-a3eb-49da-971b-705eca5894eb-config\") pod \"ovn-controller-metrics-wnqpc\" (UID: \"77361a0b-a3eb-49da-971b-705eca5894eb\") " pod="openstack/ovn-controller-metrics-wnqpc" Jan 26 13:17:13 crc kubenswrapper[4844]: I0126 13:17:13.666580 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/73dd3353-ef91-44cb-8772-fc2c7426c367-ovsdbserver-nb\") pod \"dnsmasq-dns-5c5d5c9f8f-9m69m\" (UID: \"73dd3353-ef91-44cb-8772-fc2c7426c367\") " pod="openstack/dnsmasq-dns-5c5d5c9f8f-9m69m" Jan 26 13:17:13 crc kubenswrapper[4844]: I0126 13:17:13.666622 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/77361a0b-a3eb-49da-971b-705eca5894eb-ovs-rundir\") pod \"ovn-controller-metrics-wnqpc\" (UID: \"77361a0b-a3eb-49da-971b-705eca5894eb\") " pod="openstack/ovn-controller-metrics-wnqpc" Jan 26 13:17:13 crc kubenswrapper[4844]: I0126 13:17:13.666644 4844 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/77361a0b-a3eb-49da-971b-705eca5894eb-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-wnqpc\" (UID: \"77361a0b-a3eb-49da-971b-705eca5894eb\") " pod="openstack/ovn-controller-metrics-wnqpc" Jan 26 13:17:13 crc kubenswrapper[4844]: I0126 13:17:13.666696 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jr98d\" (UniqueName: \"kubernetes.io/projected/73dd3353-ef91-44cb-8772-fc2c7426c367-kube-api-access-jr98d\") pod \"dnsmasq-dns-5c5d5c9f8f-9m69m\" (UID: \"73dd3353-ef91-44cb-8772-fc2c7426c367\") " pod="openstack/dnsmasq-dns-5c5d5c9f8f-9m69m" Jan 26 13:17:13 crc kubenswrapper[4844]: I0126 13:17:13.667046 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/77361a0b-a3eb-49da-971b-705eca5894eb-ovs-rundir\") pod \"ovn-controller-metrics-wnqpc\" (UID: \"77361a0b-a3eb-49da-971b-705eca5894eb\") " pod="openstack/ovn-controller-metrics-wnqpc" Jan 26 13:17:13 crc kubenswrapper[4844]: I0126 13:17:13.667046 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/77361a0b-a3eb-49da-971b-705eca5894eb-ovn-rundir\") pod \"ovn-controller-metrics-wnqpc\" (UID: \"77361a0b-a3eb-49da-971b-705eca5894eb\") " pod="openstack/ovn-controller-metrics-wnqpc" Jan 26 13:17:13 crc kubenswrapper[4844]: I0126 13:17:13.668135 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77361a0b-a3eb-49da-971b-705eca5894eb-config\") pod \"ovn-controller-metrics-wnqpc\" (UID: \"77361a0b-a3eb-49da-971b-705eca5894eb\") " pod="openstack/ovn-controller-metrics-wnqpc" Jan 26 13:17:13 crc kubenswrapper[4844]: I0126 13:17:13.672194 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77361a0b-a3eb-49da-971b-705eca5894eb-combined-ca-bundle\") pod \"ovn-controller-metrics-wnqpc\" (UID: \"77361a0b-a3eb-49da-971b-705eca5894eb\") " pod="openstack/ovn-controller-metrics-wnqpc" Jan 26 13:17:13 crc kubenswrapper[4844]: I0126 13:17:13.680144 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/77361a0b-a3eb-49da-971b-705eca5894eb-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-wnqpc\" (UID: \"77361a0b-a3eb-49da-971b-705eca5894eb\") " pod="openstack/ovn-controller-metrics-wnqpc" Jan 26 13:17:13 crc kubenswrapper[4844]: I0126 13:17:13.686816 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hw67t\" (UniqueName: \"kubernetes.io/projected/77361a0b-a3eb-49da-971b-705eca5894eb-kube-api-access-hw67t\") pod \"ovn-controller-metrics-wnqpc\" (UID: \"77361a0b-a3eb-49da-971b-705eca5894eb\") " pod="openstack/ovn-controller-metrics-wnqpc" Jan 26 13:17:13 crc kubenswrapper[4844]: I0126 13:17:13.768694 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jr98d\" (UniqueName: \"kubernetes.io/projected/73dd3353-ef91-44cb-8772-fc2c7426c367-kube-api-access-jr98d\") pod \"dnsmasq-dns-5c5d5c9f8f-9m69m\" (UID: \"73dd3353-ef91-44cb-8772-fc2c7426c367\") " pod="openstack/dnsmasq-dns-5c5d5c9f8f-9m69m" Jan 26 13:17:13 crc kubenswrapper[4844]: I0126 13:17:13.768766 4844 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73dd3353-ef91-44cb-8772-fc2c7426c367-config\") pod \"dnsmasq-dns-5c5d5c9f8f-9m69m\" (UID: \"73dd3353-ef91-44cb-8772-fc2c7426c367\") " pod="openstack/dnsmasq-dns-5c5d5c9f8f-9m69m" Jan 26 13:17:13 crc kubenswrapper[4844]: I0126 13:17:13.768811 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/73dd3353-ef91-44cb-8772-fc2c7426c367-dns-svc\") pod \"dnsmasq-dns-5c5d5c9f8f-9m69m\" (UID: \"73dd3353-ef91-44cb-8772-fc2c7426c367\") " pod="openstack/dnsmasq-dns-5c5d5c9f8f-9m69m" Jan 26 13:17:13 crc kubenswrapper[4844]: I0126 13:17:13.768879 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/73dd3353-ef91-44cb-8772-fc2c7426c367-ovsdbserver-nb\") pod \"dnsmasq-dns-5c5d5c9f8f-9m69m\" (UID: \"73dd3353-ef91-44cb-8772-fc2c7426c367\") " pod="openstack/dnsmasq-dns-5c5d5c9f8f-9m69m" Jan 26 13:17:13 crc kubenswrapper[4844]: I0126 13:17:13.770512 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/73dd3353-ef91-44cb-8772-fc2c7426c367-ovsdbserver-nb\") pod \"dnsmasq-dns-5c5d5c9f8f-9m69m\" (UID: \"73dd3353-ef91-44cb-8772-fc2c7426c367\") " pod="openstack/dnsmasq-dns-5c5d5c9f8f-9m69m" Jan 26 13:17:13 crc kubenswrapper[4844]: I0126 13:17:13.774102 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/73dd3353-ef91-44cb-8772-fc2c7426c367-dns-svc\") pod \"dnsmasq-dns-5c5d5c9f8f-9m69m\" (UID: \"73dd3353-ef91-44cb-8772-fc2c7426c367\") " pod="openstack/dnsmasq-dns-5c5d5c9f8f-9m69m" Jan 26 13:17:13 crc kubenswrapper[4844]: I0126 13:17:13.774118 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73dd3353-ef91-44cb-8772-fc2c7426c367-config\") pod \"dnsmasq-dns-5c5d5c9f8f-9m69m\" (UID: \"73dd3353-ef91-44cb-8772-fc2c7426c367\") " pod="openstack/dnsmasq-dns-5c5d5c9f8f-9m69m" Jan 26 13:17:13 crc kubenswrapper[4844]: I0126 13:17:13.790478 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jr98d\" (UniqueName: \"kubernetes.io/projected/73dd3353-ef91-44cb-8772-fc2c7426c367-kube-api-access-jr98d\") pod \"dnsmasq-dns-5c5d5c9f8f-9m69m\" (UID: \"73dd3353-ef91-44cb-8772-fc2c7426c367\") " pod="openstack/dnsmasq-dns-5c5d5c9f8f-9m69m" Jan 26 13:17:13 crc kubenswrapper[4844]: I0126 13:17:13.803956 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-559648544f-cwdch" Jan 26 13:17:13 crc kubenswrapper[4844]: I0126 13:17:13.834801 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d9656c78f-bv48c"] Jan 26 13:17:13 crc kubenswrapper[4844]: I0126 13:17:13.872071 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-54585cbbc-jk8bg"] Jan 26 13:17:13 crc kubenswrapper[4844]: I0126 13:17:13.873317 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-54585cbbc-jk8bg" Jan 26 13:17:13 crc kubenswrapper[4844]: I0126 13:17:13.875770 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 26 13:17:13 crc kubenswrapper[4844]: I0126 13:17:13.878387 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-wnqpc" Jan 26 13:17:13 crc kubenswrapper[4844]: I0126 13:17:13.883550 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-54585cbbc-jk8bg"] Jan 26 13:17:13 crc kubenswrapper[4844]: I0126 13:17:13.894021 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c5d5c9f8f-9m69m" Jan 26 13:17:13 crc kubenswrapper[4844]: I0126 13:17:13.972209 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/58298af3-1f5e-464f-9af7-70f300b48267-dns-svc\") pod \"58298af3-1f5e-464f-9af7-70f300b48267\" (UID: \"58298af3-1f5e-464f-9af7-70f300b48267\") " Jan 26 13:17:13 crc kubenswrapper[4844]: I0126 13:17:13.972284 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58298af3-1f5e-464f-9af7-70f300b48267-config\") pod \"58298af3-1f5e-464f-9af7-70f300b48267\" (UID: \"58298af3-1f5e-464f-9af7-70f300b48267\") " Jan 26 13:17:13 crc kubenswrapper[4844]: I0126 13:17:13.972397 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqh8z\" (UniqueName: \"kubernetes.io/projected/58298af3-1f5e-464f-9af7-70f300b48267-kube-api-access-qqh8z\") pod \"58298af3-1f5e-464f-9af7-70f300b48267\" (UID: \"58298af3-1f5e-464f-9af7-70f300b48267\") " Jan 26 13:17:13 crc kubenswrapper[4844]: I0126 13:17:13.972641 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/02fbab67-06d4-40b7-a1d0-9bdb9cf91def-ovsdbserver-nb\") pod \"dnsmasq-dns-54585cbbc-jk8bg\" (UID: \"02fbab67-06d4-40b7-a1d0-9bdb9cf91def\") " pod="openstack/dnsmasq-dns-54585cbbc-jk8bg" Jan 26 13:17:13 crc kubenswrapper[4844]: I0126 13:17:13.972681 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/02fbab67-06d4-40b7-a1d0-9bdb9cf91def-ovsdbserver-sb\") pod \"dnsmasq-dns-54585cbbc-jk8bg\" (UID: \"02fbab67-06d4-40b7-a1d0-9bdb9cf91def\") " pod="openstack/dnsmasq-dns-54585cbbc-jk8bg" Jan 26 13:17:13 crc kubenswrapper[4844]: I0126 13:17:13.972715 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/02fbab67-06d4-40b7-a1d0-9bdb9cf91def-dns-svc\") pod \"dnsmasq-dns-54585cbbc-jk8bg\" (UID: \"02fbab67-06d4-40b7-a1d0-9bdb9cf91def\") " pod="openstack/dnsmasq-dns-54585cbbc-jk8bg" Jan 26 13:17:13 crc kubenswrapper[4844]: I0126 13:17:13.972768 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sffh7\" (UniqueName: \"kubernetes.io/projected/02fbab67-06d4-40b7-a1d0-9bdb9cf91def-kube-api-access-sffh7\") pod \"dnsmasq-dns-54585cbbc-jk8bg\" (UID: \"02fbab67-06d4-40b7-a1d0-9bdb9cf91def\") " pod="openstack/dnsmasq-dns-54585cbbc-jk8bg" Jan 26 13:17:13 crc kubenswrapper[4844]: I0126 13:17:13.972805 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02fbab67-06d4-40b7-a1d0-9bdb9cf91def-config\") pod \"dnsmasq-dns-54585cbbc-jk8bg\" (UID: \"02fbab67-06d4-40b7-a1d0-9bdb9cf91def\") " pod="openstack/dnsmasq-dns-54585cbbc-jk8bg" Jan 26 13:17:13 crc kubenswrapper[4844]: I0126 13:17:13.973427 4844 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58298af3-1f5e-464f-9af7-70f300b48267-config" (OuterVolumeSpecName: "config") pod "58298af3-1f5e-464f-9af7-70f300b48267" (UID: "58298af3-1f5e-464f-9af7-70f300b48267"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:17:13 crc kubenswrapper[4844]: I0126 13:17:13.974069 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58298af3-1f5e-464f-9af7-70f300b48267-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "58298af3-1f5e-464f-9af7-70f300b48267" (UID: "58298af3-1f5e-464f-9af7-70f300b48267"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:17:13 crc kubenswrapper[4844]: I0126 13:17:13.977459 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58298af3-1f5e-464f-9af7-70f300b48267-kube-api-access-qqh8z" (OuterVolumeSpecName: "kube-api-access-qqh8z") pod "58298af3-1f5e-464f-9af7-70f300b48267" (UID: "58298af3-1f5e-464f-9af7-70f300b48267"). InnerVolumeSpecName "kube-api-access-qqh8z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:17:14 crc kubenswrapper[4844]: I0126 13:17:14.012909 4844 generic.go:334] "Generic (PLEG): container finished" podID="149ed01d-9763-4c6d-b17f-79b6e76b110f" containerID="a2378850496400f63119754f2d2d611d720f59e11b1c5c21ae59c83f0f8ef551" exitCode=0 Jan 26 13:17:14 crc kubenswrapper[4844]: I0126 13:17:14.013005 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d9656c78f-bv48c" event={"ID":"149ed01d-9763-4c6d-b17f-79b6e76b110f","Type":"ContainerDied","Data":"a2378850496400f63119754f2d2d611d720f59e11b1c5c21ae59c83f0f8ef551"} Jan 26 13:17:14 crc kubenswrapper[4844]: I0126 13:17:14.023131 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-559648544f-cwdch" event={"ID":"58298af3-1f5e-464f-9af7-70f300b48267","Type":"ContainerDied","Data":"b311d993e6e5078b9daf68ce04927a63a13f727f71025337a5334ceb86a88d8d"} Jan 26 13:17:14 crc kubenswrapper[4844]: I0126 13:17:14.028877 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-559648544f-cwdch" Jan 26 13:17:14 crc kubenswrapper[4844]: I0126 13:17:14.075948 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02fbab67-06d4-40b7-a1d0-9bdb9cf91def-config\") pod \"dnsmasq-dns-54585cbbc-jk8bg\" (UID: \"02fbab67-06d4-40b7-a1d0-9bdb9cf91def\") " pod="openstack/dnsmasq-dns-54585cbbc-jk8bg" Jan 26 13:17:14 crc kubenswrapper[4844]: I0126 13:17:14.076029 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/02fbab67-06d4-40b7-a1d0-9bdb9cf91def-ovsdbserver-nb\") pod \"dnsmasq-dns-54585cbbc-jk8bg\" (UID: \"02fbab67-06d4-40b7-a1d0-9bdb9cf91def\") " pod="openstack/dnsmasq-dns-54585cbbc-jk8bg" Jan 26 13:17:14 crc kubenswrapper[4844]: I0126 13:17:14.076065 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/02fbab67-06d4-40b7-a1d0-9bdb9cf91def-ovsdbserver-sb\") pod \"dnsmasq-dns-54585cbbc-jk8bg\" (UID: \"02fbab67-06d4-40b7-a1d0-9bdb9cf91def\") " pod="openstack/dnsmasq-dns-54585cbbc-jk8bg" Jan 26 13:17:14 crc kubenswrapper[4844]: I0126 13:17:14.076103 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/02fbab67-06d4-40b7-a1d0-9bdb9cf91def-dns-svc\") pod \"dnsmasq-dns-54585cbbc-jk8bg\" (UID: \"02fbab67-06d4-40b7-a1d0-9bdb9cf91def\") " pod="openstack/dnsmasq-dns-54585cbbc-jk8bg" Jan 26 13:17:14 crc kubenswrapper[4844]: I0126 13:17:14.076162 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sffh7\" (UniqueName: \"kubernetes.io/projected/02fbab67-06d4-40b7-a1d0-9bdb9cf91def-kube-api-access-sffh7\") pod \"dnsmasq-dns-54585cbbc-jk8bg\" (UID: \"02fbab67-06d4-40b7-a1d0-9bdb9cf91def\") " pod="openstack/dnsmasq-dns-54585cbbc-jk8bg" Jan 26 13:17:14 crc kubenswrapper[4844]: I0126 13:17:14.076227 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qqh8z\" (UniqueName: \"kubernetes.io/projected/58298af3-1f5e-464f-9af7-70f300b48267-kube-api-access-qqh8z\") on node \"crc\" DevicePath \"\"" Jan 26 13:17:14 crc kubenswrapper[4844]: I0126 13:17:14.076242 4844 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/58298af3-1f5e-464f-9af7-70f300b48267-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 13:17:14 crc kubenswrapper[4844]: I0126 13:17:14.076252 4844 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58298af3-1f5e-464f-9af7-70f300b48267-config\") on node \"crc\" DevicePath \"\"" Jan 26 13:17:14 crc kubenswrapper[4844]: I0126 13:17:14.077305 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02fbab67-06d4-40b7-a1d0-9bdb9cf91def-config\") pod \"dnsmasq-dns-54585cbbc-jk8bg\" (UID: \"02fbab67-06d4-40b7-a1d0-9bdb9cf91def\") " pod="openstack/dnsmasq-dns-54585cbbc-jk8bg" Jan 26 13:17:14 crc kubenswrapper[4844]: I0126 13:17:14.077396 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/02fbab67-06d4-40b7-a1d0-9bdb9cf91def-ovsdbserver-sb\") pod \"dnsmasq-dns-54585cbbc-jk8bg\" (UID: \"02fbab67-06d4-40b7-a1d0-9bdb9cf91def\") " pod="openstack/dnsmasq-dns-54585cbbc-jk8bg" Jan 26 13:17:14 crc 
kubenswrapper[4844]: I0126 13:17:14.077892 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/02fbab67-06d4-40b7-a1d0-9bdb9cf91def-dns-svc\") pod \"dnsmasq-dns-54585cbbc-jk8bg\" (UID: \"02fbab67-06d4-40b7-a1d0-9bdb9cf91def\") " pod="openstack/dnsmasq-dns-54585cbbc-jk8bg" Jan 26 13:17:14 crc kubenswrapper[4844]: I0126 13:17:14.078024 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/02fbab67-06d4-40b7-a1d0-9bdb9cf91def-ovsdbserver-nb\") pod \"dnsmasq-dns-54585cbbc-jk8bg\" (UID: \"02fbab67-06d4-40b7-a1d0-9bdb9cf91def\") " pod="openstack/dnsmasq-dns-54585cbbc-jk8bg" Jan 26 13:17:14 crc kubenswrapper[4844]: I0126 13:17:14.098357 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sffh7\" (UniqueName: \"kubernetes.io/projected/02fbab67-06d4-40b7-a1d0-9bdb9cf91def-kube-api-access-sffh7\") pod \"dnsmasq-dns-54585cbbc-jk8bg\" (UID: \"02fbab67-06d4-40b7-a1d0-9bdb9cf91def\") " pod="openstack/dnsmasq-dns-54585cbbc-jk8bg" Jan 26 13:17:14 crc kubenswrapper[4844]: I0126 13:17:14.134255 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 26 13:17:14 crc kubenswrapper[4844]: I0126 13:17:14.148172 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-559648544f-cwdch"] Jan 26 13:17:14 crc kubenswrapper[4844]: I0126 13:17:14.168430 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-559648544f-cwdch"] Jan 26 13:17:14 crc kubenswrapper[4844]: W0126 13:17:14.183938 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod77361a0b_a3eb_49da_971b_705eca5894eb.slice/crio-38a64b9eb4fc0ec275e280a254818667bb3160f4c752ed36438c3b87f4552d5d WatchSource:0}: Error finding container 38a64b9eb4fc0ec275e280a254818667bb3160f4c752ed36438c3b87f4552d5d: Status 404 returned error can't find the container with id 38a64b9eb4fc0ec275e280a254818667bb3160f4c752ed36438c3b87f4552d5d Jan 26 13:17:14 crc kubenswrapper[4844]: I0126 13:17:14.184901 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-wnqpc"] Jan 26 13:17:14 crc kubenswrapper[4844]: I0126 13:17:14.190405 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-54585cbbc-jk8bg" Jan 26 13:17:14 crc kubenswrapper[4844]: I0126 13:17:14.477397 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 26 13:17:14 crc kubenswrapper[4844]: I0126 13:17:14.479718 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 26 13:17:14 crc kubenswrapper[4844]: I0126 13:17:14.482644 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 26 13:17:14 crc kubenswrapper[4844]: W0126 13:17:14.482864 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod73dd3353_ef91_44cb_8772_fc2c7426c367.slice/crio-167cbd9fe279413f2950f81ebdf49e0501179f58bd8dc8e7482e331e9391f5bd WatchSource:0}: Error finding container 167cbd9fe279413f2950f81ebdf49e0501179f58bd8dc8e7482e331e9391f5bd: Status 404 returned error can't find the container with id 167cbd9fe279413f2950f81ebdf49e0501179f58bd8dc8e7482e331e9391f5bd Jan 26 13:17:14 crc kubenswrapper[4844]: I0126 13:17:14.482907 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 26 13:17:14 crc kubenswrapper[4844]: I0126 13:17:14.482991 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-8p22t" Jan 26 13:17:14 crc kubenswrapper[4844]: I0126 13:17:14.485574 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 26 13:17:14 crc kubenswrapper[4844]: I0126 13:17:14.486671 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c5d5c9f8f-9m69m"] Jan 26 13:17:14 crc kubenswrapper[4844]: I0126 13:17:14.496996 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 26 13:17:14 crc kubenswrapper[4844]: I0126 13:17:14.500036 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d9656c78f-bv48c" Jan 26 13:17:14 crc kubenswrapper[4844]: I0126 13:17:14.547421 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-54585cbbc-jk8bg"] Jan 26 13:17:14 crc kubenswrapper[4844]: I0126 13:17:14.584473 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/149ed01d-9763-4c6d-b17f-79b6e76b110f-dns-svc\") pod \"149ed01d-9763-4c6d-b17f-79b6e76b110f\" (UID: \"149ed01d-9763-4c6d-b17f-79b6e76b110f\") " Jan 26 13:17:14 crc kubenswrapper[4844]: I0126 13:17:14.584533 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vfljv\" (UniqueName: \"kubernetes.io/projected/149ed01d-9763-4c6d-b17f-79b6e76b110f-kube-api-access-vfljv\") pod \"149ed01d-9763-4c6d-b17f-79b6e76b110f\" (UID: \"149ed01d-9763-4c6d-b17f-79b6e76b110f\") " Jan 26 13:17:14 crc kubenswrapper[4844]: I0126 13:17:14.584610 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/149ed01d-9763-4c6d-b17f-79b6e76b110f-config\") pod \"149ed01d-9763-4c6d-b17f-79b6e76b110f\" (UID: \"149ed01d-9763-4c6d-b17f-79b6e76b110f\") " Jan 26 13:17:14 crc kubenswrapper[4844]: I0126 13:17:14.584785 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0913fcd-1ca6-46f8-80a8-0c2ced36fea9-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"a0913fcd-1ca6-46f8-80a8-0c2ced36fea9\") " pod="openstack/ovn-northd-0" Jan 26 13:17:14 crc kubenswrapper[4844]: I0126 13:17:14.584819 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ck5h\" 
(UniqueName: \"kubernetes.io/projected/a0913fcd-1ca6-46f8-80a8-0c2ced36fea9-kube-api-access-7ck5h\") pod \"ovn-northd-0\" (UID: \"a0913fcd-1ca6-46f8-80a8-0c2ced36fea9\") " pod="openstack/ovn-northd-0" Jan 26 13:17:14 crc kubenswrapper[4844]: I0126 13:17:14.584860 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0913fcd-1ca6-46f8-80a8-0c2ced36fea9-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"a0913fcd-1ca6-46f8-80a8-0c2ced36fea9\") " pod="openstack/ovn-northd-0" Jan 26 13:17:14 crc kubenswrapper[4844]: I0126 13:17:14.584898 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a0913fcd-1ca6-46f8-80a8-0c2ced36fea9-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"a0913fcd-1ca6-46f8-80a8-0c2ced36fea9\") " pod="openstack/ovn-northd-0" Jan 26 13:17:14 crc kubenswrapper[4844]: I0126 13:17:14.584927 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0913fcd-1ca6-46f8-80a8-0c2ced36fea9-config\") pod \"ovn-northd-0\" (UID: \"a0913fcd-1ca6-46f8-80a8-0c2ced36fea9\") " pod="openstack/ovn-northd-0" Jan 26 13:17:14 crc kubenswrapper[4844]: I0126 13:17:14.584942 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0913fcd-1ca6-46f8-80a8-0c2ced36fea9-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"a0913fcd-1ca6-46f8-80a8-0c2ced36fea9\") " pod="openstack/ovn-northd-0" Jan 26 13:17:14 crc kubenswrapper[4844]: I0126 13:17:14.584990 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a0913fcd-1ca6-46f8-80a8-0c2ced36fea9-scripts\") pod \"ovn-northd-0\" (UID: \"a0913fcd-1ca6-46f8-80a8-0c2ced36fea9\") " pod="openstack/ovn-northd-0" Jan 26 13:17:14 crc kubenswrapper[4844]: I0126 13:17:14.588053 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/149ed01d-9763-4c6d-b17f-79b6e76b110f-kube-api-access-vfljv" (OuterVolumeSpecName: "kube-api-access-vfljv") pod "149ed01d-9763-4c6d-b17f-79b6e76b110f" (UID: "149ed01d-9763-4c6d-b17f-79b6e76b110f"). InnerVolumeSpecName "kube-api-access-vfljv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:17:14 crc kubenswrapper[4844]: I0126 13:17:14.609221 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/149ed01d-9763-4c6d-b17f-79b6e76b110f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "149ed01d-9763-4c6d-b17f-79b6e76b110f" (UID: "149ed01d-9763-4c6d-b17f-79b6e76b110f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:17:14 crc kubenswrapper[4844]: I0126 13:17:14.610575 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/149ed01d-9763-4c6d-b17f-79b6e76b110f-config" (OuterVolumeSpecName: "config") pod "149ed01d-9763-4c6d-b17f-79b6e76b110f" (UID: "149ed01d-9763-4c6d-b17f-79b6e76b110f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:17:14 crc kubenswrapper[4844]: I0126 13:17:14.686400 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0913fcd-1ca6-46f8-80a8-0c2ced36fea9-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"a0913fcd-1ca6-46f8-80a8-0c2ced36fea9\") " pod="openstack/ovn-northd-0" Jan 26 13:17:14 crc kubenswrapper[4844]: I0126 13:17:14.686778 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ck5h\" (UniqueName: \"kubernetes.io/projected/a0913fcd-1ca6-46f8-80a8-0c2ced36fea9-kube-api-access-7ck5h\") pod \"ovn-northd-0\" (UID: \"a0913fcd-1ca6-46f8-80a8-0c2ced36fea9\") " pod="openstack/ovn-northd-0" Jan 26 13:17:14 crc kubenswrapper[4844]: I0126 13:17:14.686958 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0913fcd-1ca6-46f8-80a8-0c2ced36fea9-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"a0913fcd-1ca6-46f8-80a8-0c2ced36fea9\") " pod="openstack/ovn-northd-0" Jan 26 13:17:14 crc kubenswrapper[4844]: I0126 13:17:14.687537 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a0913fcd-1ca6-46f8-80a8-0c2ced36fea9-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"a0913fcd-1ca6-46f8-80a8-0c2ced36fea9\") " pod="openstack/ovn-northd-0" Jan 26 13:17:14 crc kubenswrapper[4844]: I0126 13:17:14.688132 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0913fcd-1ca6-46f8-80a8-0c2ced36fea9-config\") pod \"ovn-northd-0\" (UID: \"a0913fcd-1ca6-46f8-80a8-0c2ced36fea9\") " pod="openstack/ovn-northd-0" Jan 26 13:17:14 crc kubenswrapper[4844]: I0126 13:17:14.688889 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0913fcd-1ca6-46f8-80a8-0c2ced36fea9-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"a0913fcd-1ca6-46f8-80a8-0c2ced36fea9\") " pod="openstack/ovn-northd-0" Jan 26 13:17:14 crc kubenswrapper[4844]: I0126 13:17:14.689353 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a0913fcd-1ca6-46f8-80a8-0c2ced36fea9-scripts\") pod \"ovn-northd-0\" (UID: \"a0913fcd-1ca6-46f8-80a8-0c2ced36fea9\") " pod="openstack/ovn-northd-0" Jan 26 13:17:14 crc kubenswrapper[4844]: I0126 13:17:14.689556 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vfljv\" (UniqueName: \"kubernetes.io/projected/149ed01d-9763-4c6d-b17f-79b6e76b110f-kube-api-access-vfljv\") on node \"crc\" DevicePath \"\"" Jan 26 13:17:14 crc kubenswrapper[4844]: I0126 13:17:14.689665 4844 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/149ed01d-9763-4c6d-b17f-79b6e76b110f-config\") on node \"crc\" DevicePath \"\"" Jan 26 13:17:14 crc kubenswrapper[4844]: I0126 13:17:14.689761 4844 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/149ed01d-9763-4c6d-b17f-79b6e76b110f-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 13:17:14 crc kubenswrapper[4844]: I0126 13:17:14.689770 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/a0913fcd-1ca6-46f8-80a8-0c2ced36fea9-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"a0913fcd-1ca6-46f8-80a8-0c2ced36fea9\") " pod="openstack/ovn-northd-0" Jan 26 13:17:14 crc kubenswrapper[4844]: I0126 13:17:14.688765 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0913fcd-1ca6-46f8-80a8-0c2ced36fea9-config\") pod \"ovn-northd-0\" (UID: \"a0913fcd-1ca6-46f8-80a8-0c2ced36fea9\") " pod="openstack/ovn-northd-0" Jan 26 13:17:14 crc kubenswrapper[4844]: I0126 13:17:14.688066 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a0913fcd-1ca6-46f8-80a8-0c2ced36fea9-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"a0913fcd-1ca6-46f8-80a8-0c2ced36fea9\") " pod="openstack/ovn-northd-0" Jan 26 13:17:14 crc kubenswrapper[4844]: I0126 13:17:14.690422 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a0913fcd-1ca6-46f8-80a8-0c2ced36fea9-scripts\") pod \"ovn-northd-0\" (UID: \"a0913fcd-1ca6-46f8-80a8-0c2ced36fea9\") " pod="openstack/ovn-northd-0" Jan 26 13:17:14 crc kubenswrapper[4844]: I0126 13:17:14.691395 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0913fcd-1ca6-46f8-80a8-0c2ced36fea9-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"a0913fcd-1ca6-46f8-80a8-0c2ced36fea9\") " pod="openstack/ovn-northd-0" Jan 26 13:17:14 crc kubenswrapper[4844]: I0126 13:17:14.692134 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0913fcd-1ca6-46f8-80a8-0c2ced36fea9-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"a0913fcd-1ca6-46f8-80a8-0c2ced36fea9\") " pod="openstack/ovn-northd-0" Jan 26 13:17:14 crc kubenswrapper[4844]: I0126 13:17:14.711996 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ck5h\" (UniqueName: \"kubernetes.io/projected/a0913fcd-1ca6-46f8-80a8-0c2ced36fea9-kube-api-access-7ck5h\") pod \"ovn-northd-0\" (UID: \"a0913fcd-1ca6-46f8-80a8-0c2ced36fea9\") " pod="openstack/ovn-northd-0" Jan 26 13:17:15 crc kubenswrapper[4844]: I0126 13:17:15.002495 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 26 13:17:15 crc kubenswrapper[4844]: I0126 13:17:15.031507 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d9656c78f-bv48c" event={"ID":"149ed01d-9763-4c6d-b17f-79b6e76b110f","Type":"ContainerDied","Data":"bb56dcb2c38f05a6ae8b3a58557b93f500a8acbee264f82b454c1c02c885054d"} Jan 26 13:17:15 crc kubenswrapper[4844]: I0126 13:17:15.031552 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d9656c78f-bv48c" Jan 26 13:17:15 crc kubenswrapper[4844]: I0126 13:17:15.031567 4844 scope.go:117] "RemoveContainer" containerID="a2378850496400f63119754f2d2d611d720f59e11b1c5c21ae59c83f0f8ef551" Jan 26 13:17:15 crc kubenswrapper[4844]: I0126 13:17:15.033585 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54585cbbc-jk8bg" event={"ID":"02fbab67-06d4-40b7-a1d0-9bdb9cf91def","Type":"ContainerStarted","Data":"7cc425ca67b3d70d162e8c734005e35e8efc37350e821cdf6a6c3b43326764ba"} Jan 26 13:17:15 crc kubenswrapper[4844]: I0126 13:17:15.033644 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54585cbbc-jk8bg" event={"ID":"02fbab67-06d4-40b7-a1d0-9bdb9cf91def","Type":"ContainerStarted","Data":"59b67f97fc7af67788f5a41574412f5a2f817bac335c7623c829f19b14b7a112"} Jan 26 13:17:15 crc kubenswrapper[4844]: I0126 13:17:15.035216 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c5d5c9f8f-9m69m" event={"ID":"73dd3353-ef91-44cb-8772-fc2c7426c367","Type":"ContainerStarted","Data":"47fa05eec2fb30998a533567939e95af8f0e0bde972e7b544d9b796c3ecac43d"} Jan 26 13:17:15 crc kubenswrapper[4844]: I0126 13:17:15.035237 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c5d5c9f8f-9m69m" event={"ID":"73dd3353-ef91-44cb-8772-fc2c7426c367","Type":"ContainerStarted","Data":"167cbd9fe279413f2950f81ebdf49e0501179f58bd8dc8e7482e331e9391f5bd"} Jan 26 13:17:15 crc kubenswrapper[4844]: I0126 13:17:15.036970 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-wnqpc" event={"ID":"77361a0b-a3eb-49da-971b-705eca5894eb","Type":"ContainerStarted","Data":"91016f607d5cac83bcb02210b72a23fe5e2bcb002b6b33db1e94697cadc71103"} Jan 26 13:17:15 crc kubenswrapper[4844]: I0126 13:17:15.036995 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-wnqpc" event={"ID":"77361a0b-a3eb-49da-971b-705eca5894eb","Type":"ContainerStarted","Data":"38a64b9eb4fc0ec275e280a254818667bb3160f4c752ed36438c3b87f4552d5d"} Jan 26 13:17:15 crc kubenswrapper[4844]: I0126 13:17:15.062357 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-wnqpc" podStartSLOduration=2.062339603 podStartE2EDuration="2.062339603s" podCreationTimestamp="2026-01-26 13:17:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 13:17:15.059401102 +0000 UTC m=+2011.992768724" watchObservedRunningTime="2026-01-26 13:17:15.062339603 +0000 UTC m=+2011.995707215" Jan 26 13:17:15 crc kubenswrapper[4844]: I0126 13:17:15.087020 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 26 13:17:15 crc kubenswrapper[4844]: I0126 13:17:15.164367 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d9656c78f-bv48c"] Jan 26 13:17:15 crc kubenswrapper[4844]: I0126 13:17:15.187166 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6d9656c78f-bv48c"] Jan 26 13:17:15 crc kubenswrapper[4844]: I0126 13:17:15.242039 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-54585cbbc-jk8bg"] Jan 26 13:17:15 crc kubenswrapper[4844]: I0126 13:17:15.342684 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="149ed01d-9763-4c6d-b17f-79b6e76b110f" 
path="/var/lib/kubelet/pods/149ed01d-9763-4c6d-b17f-79b6e76b110f/volumes" Jan 26 13:17:15 crc kubenswrapper[4844]: I0126 13:17:15.343171 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58298af3-1f5e-464f-9af7-70f300b48267" path="/var/lib/kubelet/pods/58298af3-1f5e-464f-9af7-70f300b48267/volumes" Jan 26 13:17:15 crc kubenswrapper[4844]: I0126 13:17:15.344961 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7c999dbc67-cvzlp"] Jan 26 13:17:15 crc kubenswrapper[4844]: E0126 13:17:15.345295 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="149ed01d-9763-4c6d-b17f-79b6e76b110f" containerName="init" Jan 26 13:17:15 crc kubenswrapper[4844]: I0126 13:17:15.345306 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="149ed01d-9763-4c6d-b17f-79b6e76b110f" containerName="init" Jan 26 13:17:15 crc kubenswrapper[4844]: I0126 13:17:15.345464 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="149ed01d-9763-4c6d-b17f-79b6e76b110f" containerName="init" Jan 26 13:17:15 crc kubenswrapper[4844]: I0126 13:17:15.346307 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c999dbc67-cvzlp" Jan 26 13:17:15 crc kubenswrapper[4844]: I0126 13:17:15.365901 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c999dbc67-cvzlp"] Jan 26 13:17:15 crc kubenswrapper[4844]: I0126 13:17:15.404338 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8461ccab-6d28-4df1-8fab-49cb84f6bfb9-dns-svc\") pod \"dnsmasq-dns-7c999dbc67-cvzlp\" (UID: \"8461ccab-6d28-4df1-8fab-49cb84f6bfb9\") " pod="openstack/dnsmasq-dns-7c999dbc67-cvzlp" Jan 26 13:17:15 crc kubenswrapper[4844]: I0126 13:17:15.404381 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8461ccab-6d28-4df1-8fab-49cb84f6bfb9-ovsdbserver-nb\") pod \"dnsmasq-dns-7c999dbc67-cvzlp\" (UID: \"8461ccab-6d28-4df1-8fab-49cb84f6bfb9\") " pod="openstack/dnsmasq-dns-7c999dbc67-cvzlp" Jan 26 13:17:15 crc kubenswrapper[4844]: I0126 13:17:15.404410 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8461ccab-6d28-4df1-8fab-49cb84f6bfb9-config\") pod \"dnsmasq-dns-7c999dbc67-cvzlp\" (UID: \"8461ccab-6d28-4df1-8fab-49cb84f6bfb9\") " pod="openstack/dnsmasq-dns-7c999dbc67-cvzlp" Jan 26 13:17:15 crc kubenswrapper[4844]: I0126 13:17:15.404450 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8461ccab-6d28-4df1-8fab-49cb84f6bfb9-ovsdbserver-sb\") pod \"dnsmasq-dns-7c999dbc67-cvzlp\" (UID: \"8461ccab-6d28-4df1-8fab-49cb84f6bfb9\") " pod="openstack/dnsmasq-dns-7c999dbc67-cvzlp" Jan 26 13:17:15 crc kubenswrapper[4844]: I0126 13:17:15.404515 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vn8rr\" (UniqueName: \"kubernetes.io/projected/8461ccab-6d28-4df1-8fab-49cb84f6bfb9-kube-api-access-vn8rr\") pod \"dnsmasq-dns-7c999dbc67-cvzlp\" (UID: \"8461ccab-6d28-4df1-8fab-49cb84f6bfb9\") " pod="openstack/dnsmasq-dns-7c999dbc67-cvzlp" Jan 26 13:17:15 crc kubenswrapper[4844]: I0126 13:17:15.509525 4844 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-vn8rr\" (UniqueName: \"kubernetes.io/projected/8461ccab-6d28-4df1-8fab-49cb84f6bfb9-kube-api-access-vn8rr\") pod \"dnsmasq-dns-7c999dbc67-cvzlp\" (UID: \"8461ccab-6d28-4df1-8fab-49cb84f6bfb9\") " pod="openstack/dnsmasq-dns-7c999dbc67-cvzlp" Jan 26 13:17:15 crc kubenswrapper[4844]: I0126 13:17:15.509628 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8461ccab-6d28-4df1-8fab-49cb84f6bfb9-dns-svc\") pod \"dnsmasq-dns-7c999dbc67-cvzlp\" (UID: \"8461ccab-6d28-4df1-8fab-49cb84f6bfb9\") " pod="openstack/dnsmasq-dns-7c999dbc67-cvzlp" Jan 26 13:17:15 crc kubenswrapper[4844]: I0126 13:17:15.509651 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8461ccab-6d28-4df1-8fab-49cb84f6bfb9-ovsdbserver-nb\") pod \"dnsmasq-dns-7c999dbc67-cvzlp\" (UID: \"8461ccab-6d28-4df1-8fab-49cb84f6bfb9\") " pod="openstack/dnsmasq-dns-7c999dbc67-cvzlp" Jan 26 13:17:15 crc kubenswrapper[4844]: I0126 13:17:15.509675 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8461ccab-6d28-4df1-8fab-49cb84f6bfb9-config\") pod \"dnsmasq-dns-7c999dbc67-cvzlp\" (UID: \"8461ccab-6d28-4df1-8fab-49cb84f6bfb9\") " pod="openstack/dnsmasq-dns-7c999dbc67-cvzlp" Jan 26 13:17:15 crc kubenswrapper[4844]: I0126 13:17:15.509714 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8461ccab-6d28-4df1-8fab-49cb84f6bfb9-ovsdbserver-sb\") pod \"dnsmasq-dns-7c999dbc67-cvzlp\" (UID: \"8461ccab-6d28-4df1-8fab-49cb84f6bfb9\") " pod="openstack/dnsmasq-dns-7c999dbc67-cvzlp" Jan 26 13:17:15 crc kubenswrapper[4844]: I0126 13:17:15.510530 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8461ccab-6d28-4df1-8fab-49cb84f6bfb9-ovsdbserver-sb\") pod \"dnsmasq-dns-7c999dbc67-cvzlp\" (UID: \"8461ccab-6d28-4df1-8fab-49cb84f6bfb9\") " pod="openstack/dnsmasq-dns-7c999dbc67-cvzlp" Jan 26 13:17:15 crc kubenswrapper[4844]: I0126 13:17:15.511469 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8461ccab-6d28-4df1-8fab-49cb84f6bfb9-dns-svc\") pod \"dnsmasq-dns-7c999dbc67-cvzlp\" (UID: \"8461ccab-6d28-4df1-8fab-49cb84f6bfb9\") " pod="openstack/dnsmasq-dns-7c999dbc67-cvzlp" Jan 26 13:17:15 crc kubenswrapper[4844]: I0126 13:17:15.512126 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8461ccab-6d28-4df1-8fab-49cb84f6bfb9-ovsdbserver-nb\") pod \"dnsmasq-dns-7c999dbc67-cvzlp\" (UID: \"8461ccab-6d28-4df1-8fab-49cb84f6bfb9\") " pod="openstack/dnsmasq-dns-7c999dbc67-cvzlp" Jan 26 13:17:15 crc kubenswrapper[4844]: I0126 13:17:15.512783 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8461ccab-6d28-4df1-8fab-49cb84f6bfb9-config\") pod \"dnsmasq-dns-7c999dbc67-cvzlp\" (UID: \"8461ccab-6d28-4df1-8fab-49cb84f6bfb9\") " pod="openstack/dnsmasq-dns-7c999dbc67-cvzlp" Jan 26 13:17:15 crc kubenswrapper[4844]: I0126 13:17:15.543485 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vn8rr\" (UniqueName: 
\"kubernetes.io/projected/8461ccab-6d28-4df1-8fab-49cb84f6bfb9-kube-api-access-vn8rr\") pod \"dnsmasq-dns-7c999dbc67-cvzlp\" (UID: \"8461ccab-6d28-4df1-8fab-49cb84f6bfb9\") " pod="openstack/dnsmasq-dns-7c999dbc67-cvzlp" Jan 26 13:17:15 crc kubenswrapper[4844]: W0126 13:17:15.642663 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda0913fcd_1ca6_46f8_80a8_0c2ced36fea9.slice/crio-0fa79a2dd99c8ec69b08fbf6d7fcd8be0a752f62605c2241a7d5d2e69308e0d2 WatchSource:0}: Error finding container 0fa79a2dd99c8ec69b08fbf6d7fcd8be0a752f62605c2241a7d5d2e69308e0d2: Status 404 returned error can't find the container with id 0fa79a2dd99c8ec69b08fbf6d7fcd8be0a752f62605c2241a7d5d2e69308e0d2 Jan 26 13:17:15 crc kubenswrapper[4844]: I0126 13:17:15.658584 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 26 13:17:15 crc kubenswrapper[4844]: I0126 13:17:15.676882 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c999dbc67-cvzlp" Jan 26 13:17:16 crc kubenswrapper[4844]: I0126 13:17:16.048237 4844 generic.go:334] "Generic (PLEG): container finished" podID="73dd3353-ef91-44cb-8772-fc2c7426c367" containerID="47fa05eec2fb30998a533567939e95af8f0e0bde972e7b544d9b796c3ecac43d" exitCode=0 Jan 26 13:17:16 crc kubenswrapper[4844]: I0126 13:17:16.048287 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c5d5c9f8f-9m69m" event={"ID":"73dd3353-ef91-44cb-8772-fc2c7426c367","Type":"ContainerDied","Data":"47fa05eec2fb30998a533567939e95af8f0e0bde972e7b544d9b796c3ecac43d"} Jan 26 13:17:16 crc kubenswrapper[4844]: I0126 13:17:16.050856 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"a0913fcd-1ca6-46f8-80a8-0c2ced36fea9","Type":"ContainerStarted","Data":"0fa79a2dd99c8ec69b08fbf6d7fcd8be0a752f62605c2241a7d5d2e69308e0d2"} Jan 26 13:17:16 crc kubenswrapper[4844]: I0126 13:17:16.062531 4844 generic.go:334] "Generic (PLEG): container finished" podID="02fbab67-06d4-40b7-a1d0-9bdb9cf91def" containerID="7cc425ca67b3d70d162e8c734005e35e8efc37350e821cdf6a6c3b43326764ba" exitCode=0 Jan 26 13:17:16 crc kubenswrapper[4844]: I0126 13:17:16.063632 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54585cbbc-jk8bg" event={"ID":"02fbab67-06d4-40b7-a1d0-9bdb9cf91def","Type":"ContainerDied","Data":"7cc425ca67b3d70d162e8c734005e35e8efc37350e821cdf6a6c3b43326764ba"} Jan 26 13:17:16 crc kubenswrapper[4844]: I0126 13:17:16.131802 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c999dbc67-cvzlp"] Jan 26 13:17:16 crc kubenswrapper[4844]: I0126 13:17:16.511659 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Jan 26 13:17:16 crc kubenswrapper[4844]: I0126 13:17:16.516547 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Jan 26 13:17:16 crc kubenswrapper[4844]: I0126 13:17:16.520350 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Jan 26 13:17:16 crc kubenswrapper[4844]: I0126 13:17:16.521416 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-fm7sp" Jan 26 13:17:16 crc kubenswrapper[4844]: I0126 13:17:16.521478 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Jan 26 13:17:16 crc kubenswrapper[4844]: I0126 13:17:16.521491 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Jan 26 13:17:16 crc kubenswrapper[4844]: I0126 13:17:16.535581 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 26 13:17:16 crc kubenswrapper[4844]: E0126 13:17:16.586169 4844 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Jan 26 13:17:16 crc kubenswrapper[4844]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/73dd3353-ef91-44cb-8772-fc2c7426c367/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Jan 26 13:17:16 crc kubenswrapper[4844]: > podSandboxID="167cbd9fe279413f2950f81ebdf49e0501179f58bd8dc8e7482e331e9391f5bd" Jan 26 13:17:16 crc kubenswrapper[4844]: E0126 13:17:16.586513 4844 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 26 13:17:16 crc kubenswrapper[4844]: container &Container{Name:dnsmasq-dns,Image:38.102.83.9:5001/podified-master-centos10/openstack-neutron-server:watcher_latest,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n59dh59h578h67chf9h6h5cch694h9ch677h67fh657h5bfh65dh67fhb8h68dh5dfhf9h55bhcfh84h698h549h5b9h59bh5c8h647h557h9dh57bh5d5q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-nb,SubPath:ovsdbserver-nb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jr98d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 
},Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-5c5d5c9f8f-9m69m_openstack(73dd3353-ef91-44cb-8772-fc2c7426c367): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/73dd3353-ef91-44cb-8772-fc2c7426c367/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Jan 26 13:17:16 crc kubenswrapper[4844]: > logger="UnhandledError" Jan 26 13:17:16 crc kubenswrapper[4844]: E0126 13:17:16.589764 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/73dd3353-ef91-44cb-8772-fc2c7426c367/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-5c5d5c9f8f-9m69m" podUID="73dd3353-ef91-44cb-8772-fc2c7426c367" Jan 26 13:17:16 crc kubenswrapper[4844]: I0126 13:17:16.628212 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"swift-storage-0\" (UID: \"8606256a-c070-4b18-906b-a4557edd45e7\") " pod="openstack/swift-storage-0" Jan 26 13:17:16 crc kubenswrapper[4844]: I0126 13:17:16.628301 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6k8w4\" (UniqueName: \"kubernetes.io/projected/8606256a-c070-4b18-906b-a4557edd45e7-kube-api-access-6k8w4\") pod \"swift-storage-0\" (UID: \"8606256a-c070-4b18-906b-a4557edd45e7\") " pod="openstack/swift-storage-0" Jan 26 13:17:16 crc kubenswrapper[4844]: I0126 13:17:16.628349 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8606256a-c070-4b18-906b-a4557edd45e7-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"8606256a-c070-4b18-906b-a4557edd45e7\") " pod="openstack/swift-storage-0" Jan 26 13:17:16 crc kubenswrapper[4844]: I0126 13:17:16.628399 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8606256a-c070-4b18-906b-a4557edd45e7-etc-swift\") pod \"swift-storage-0\" (UID: \"8606256a-c070-4b18-906b-a4557edd45e7\") " pod="openstack/swift-storage-0" Jan 26 13:17:16 crc kubenswrapper[4844]: I0126 13:17:16.628674 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/8606256a-c070-4b18-906b-a4557edd45e7-cache\") pod \"swift-storage-0\" (UID: \"8606256a-c070-4b18-906b-a4557edd45e7\") " pod="openstack/swift-storage-0" Jan 26 13:17:16 
crc kubenswrapper[4844]: I0126 13:17:16.628703 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/8606256a-c070-4b18-906b-a4557edd45e7-lock\") pod \"swift-storage-0\" (UID: \"8606256a-c070-4b18-906b-a4557edd45e7\") " pod="openstack/swift-storage-0" Jan 26 13:17:16 crc kubenswrapper[4844]: I0126 13:17:16.739540 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"swift-storage-0\" (UID: \"8606256a-c070-4b18-906b-a4557edd45e7\") " pod="openstack/swift-storage-0" Jan 26 13:17:16 crc kubenswrapper[4844]: I0126 13:17:16.739868 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6k8w4\" (UniqueName: \"kubernetes.io/projected/8606256a-c070-4b18-906b-a4557edd45e7-kube-api-access-6k8w4\") pod \"swift-storage-0\" (UID: \"8606256a-c070-4b18-906b-a4557edd45e7\") " pod="openstack/swift-storage-0" Jan 26 13:17:16 crc kubenswrapper[4844]: I0126 13:17:16.739908 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8606256a-c070-4b18-906b-a4557edd45e7-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"8606256a-c070-4b18-906b-a4557edd45e7\") " pod="openstack/swift-storage-0" Jan 26 13:17:16 crc kubenswrapper[4844]: I0126 13:17:16.739945 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8606256a-c070-4b18-906b-a4557edd45e7-etc-swift\") pod \"swift-storage-0\" (UID: \"8606256a-c070-4b18-906b-a4557edd45e7\") " pod="openstack/swift-storage-0" Jan 26 13:17:16 crc kubenswrapper[4844]: I0126 13:17:16.739968 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/8606256a-c070-4b18-906b-a4557edd45e7-cache\") pod \"swift-storage-0\" (UID: \"8606256a-c070-4b18-906b-a4557edd45e7\") " pod="openstack/swift-storage-0" Jan 26 13:17:16 crc kubenswrapper[4844]: I0126 13:17:16.739987 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/8606256a-c070-4b18-906b-a4557edd45e7-lock\") pod \"swift-storage-0\" (UID: \"8606256a-c070-4b18-906b-a4557edd45e7\") " pod="openstack/swift-storage-0" Jan 26 13:17:16 crc kubenswrapper[4844]: I0126 13:17:16.740532 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/8606256a-c070-4b18-906b-a4557edd45e7-lock\") pod \"swift-storage-0\" (UID: \"8606256a-c070-4b18-906b-a4557edd45e7\") " pod="openstack/swift-storage-0" Jan 26 13:17:16 crc kubenswrapper[4844]: I0126 13:17:16.740815 4844 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"swift-storage-0\" (UID: \"8606256a-c070-4b18-906b-a4557edd45e7\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/swift-storage-0" Jan 26 13:17:16 crc kubenswrapper[4844]: E0126 13:17:16.746264 4844 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 26 13:17:16 crc kubenswrapper[4844]: E0126 13:17:16.746302 4844 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap 
"swift-ring-files" not found Jan 26 13:17:16 crc kubenswrapper[4844]: E0126 13:17:16.746359 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8606256a-c070-4b18-906b-a4557edd45e7-etc-swift podName:8606256a-c070-4b18-906b-a4557edd45e7 nodeName:}" failed. No retries permitted until 2026-01-26 13:17:17.246340134 +0000 UTC m=+2014.179707746 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/8606256a-c070-4b18-906b-a4557edd45e7-etc-swift") pod "swift-storage-0" (UID: "8606256a-c070-4b18-906b-a4557edd45e7") : configmap "swift-ring-files" not found Jan 26 13:17:16 crc kubenswrapper[4844]: I0126 13:17:16.746857 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/8606256a-c070-4b18-906b-a4557edd45e7-cache\") pod \"swift-storage-0\" (UID: \"8606256a-c070-4b18-906b-a4557edd45e7\") " pod="openstack/swift-storage-0" Jan 26 13:17:16 crc kubenswrapper[4844]: I0126 13:17:16.758447 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8606256a-c070-4b18-906b-a4557edd45e7-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"8606256a-c070-4b18-906b-a4557edd45e7\") " pod="openstack/swift-storage-0" Jan 26 13:17:16 crc kubenswrapper[4844]: I0126 13:17:16.768234 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6k8w4\" (UniqueName: \"kubernetes.io/projected/8606256a-c070-4b18-906b-a4557edd45e7-kube-api-access-6k8w4\") pod \"swift-storage-0\" (UID: \"8606256a-c070-4b18-906b-a4557edd45e7\") " pod="openstack/swift-storage-0" Jan 26 13:17:16 crc kubenswrapper[4844]: I0126 13:17:16.790988 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"swift-storage-0\" (UID: \"8606256a-c070-4b18-906b-a4557edd45e7\") " pod="openstack/swift-storage-0" Jan 26 13:17:16 crc kubenswrapper[4844]: I0126 13:17:16.842711 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-54585cbbc-jk8bg" Jan 26 13:17:16 crc kubenswrapper[4844]: I0126 13:17:16.941803 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/02fbab67-06d4-40b7-a1d0-9bdb9cf91def-ovsdbserver-sb\") pod \"02fbab67-06d4-40b7-a1d0-9bdb9cf91def\" (UID: \"02fbab67-06d4-40b7-a1d0-9bdb9cf91def\") " Jan 26 13:17:16 crc kubenswrapper[4844]: I0126 13:17:16.942358 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sffh7\" (UniqueName: \"kubernetes.io/projected/02fbab67-06d4-40b7-a1d0-9bdb9cf91def-kube-api-access-sffh7\") pod \"02fbab67-06d4-40b7-a1d0-9bdb9cf91def\" (UID: \"02fbab67-06d4-40b7-a1d0-9bdb9cf91def\") " Jan 26 13:17:16 crc kubenswrapper[4844]: I0126 13:17:16.942498 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/02fbab67-06d4-40b7-a1d0-9bdb9cf91def-ovsdbserver-nb\") pod \"02fbab67-06d4-40b7-a1d0-9bdb9cf91def\" (UID: \"02fbab67-06d4-40b7-a1d0-9bdb9cf91def\") " Jan 26 13:17:16 crc kubenswrapper[4844]: I0126 13:17:16.942683 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02fbab67-06d4-40b7-a1d0-9bdb9cf91def-config\") pod \"02fbab67-06d4-40b7-a1d0-9bdb9cf91def\" (UID: \"02fbab67-06d4-40b7-a1d0-9bdb9cf91def\") " Jan 26 13:17:16 crc kubenswrapper[4844]: I0126 13:17:16.942874 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/02fbab67-06d4-40b7-a1d0-9bdb9cf91def-dns-svc\") pod \"02fbab67-06d4-40b7-a1d0-9bdb9cf91def\" (UID: \"02fbab67-06d4-40b7-a1d0-9bdb9cf91def\") " Jan 26 13:17:16 crc kubenswrapper[4844]: I0126 13:17:16.958755 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02fbab67-06d4-40b7-a1d0-9bdb9cf91def-kube-api-access-sffh7" (OuterVolumeSpecName: "kube-api-access-sffh7") pod "02fbab67-06d4-40b7-a1d0-9bdb9cf91def" (UID: "02fbab67-06d4-40b7-a1d0-9bdb9cf91def"). InnerVolumeSpecName "kube-api-access-sffh7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:17:16 crc kubenswrapper[4844]: I0126 13:17:16.981431 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02fbab67-06d4-40b7-a1d0-9bdb9cf91def-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "02fbab67-06d4-40b7-a1d0-9bdb9cf91def" (UID: "02fbab67-06d4-40b7-a1d0-9bdb9cf91def"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:17:16 crc kubenswrapper[4844]: I0126 13:17:16.991317 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02fbab67-06d4-40b7-a1d0-9bdb9cf91def-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "02fbab67-06d4-40b7-a1d0-9bdb9cf91def" (UID: "02fbab67-06d4-40b7-a1d0-9bdb9cf91def"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:17:17 crc kubenswrapper[4844]: I0126 13:17:17.002898 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-sscmz"] Jan 26 13:17:17 crc kubenswrapper[4844]: E0126 13:17:17.003312 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02fbab67-06d4-40b7-a1d0-9bdb9cf91def" containerName="init" Jan 26 13:17:17 crc kubenswrapper[4844]: I0126 13:17:17.003324 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="02fbab67-06d4-40b7-a1d0-9bdb9cf91def" containerName="init" Jan 26 13:17:17 crc kubenswrapper[4844]: I0126 13:17:17.003568 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="02fbab67-06d4-40b7-a1d0-9bdb9cf91def" containerName="init" Jan 26 13:17:17 crc kubenswrapper[4844]: I0126 13:17:17.004104 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02fbab67-06d4-40b7-a1d0-9bdb9cf91def-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "02fbab67-06d4-40b7-a1d0-9bdb9cf91def" (UID: "02fbab67-06d4-40b7-a1d0-9bdb9cf91def"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:17:17 crc kubenswrapper[4844]: I0126 13:17:17.006397 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-sscmz" Jan 26 13:17:17 crc kubenswrapper[4844]: I0126 13:17:17.009348 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 26 13:17:17 crc kubenswrapper[4844]: I0126 13:17:17.009692 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Jan 26 13:17:17 crc kubenswrapper[4844]: I0126 13:17:17.009926 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Jan 26 13:17:17 crc kubenswrapper[4844]: I0126 13:17:17.021166 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02fbab67-06d4-40b7-a1d0-9bdb9cf91def-config" (OuterVolumeSpecName: "config") pod "02fbab67-06d4-40b7-a1d0-9bdb9cf91def" (UID: "02fbab67-06d4-40b7-a1d0-9bdb9cf91def"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:17:17 crc kubenswrapper[4844]: I0126 13:17:17.044981 4844 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/02fbab67-06d4-40b7-a1d0-9bdb9cf91def-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 13:17:17 crc kubenswrapper[4844]: I0126 13:17:17.045010 4844 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02fbab67-06d4-40b7-a1d0-9bdb9cf91def-config\") on node \"crc\" DevicePath \"\"" Jan 26 13:17:17 crc kubenswrapper[4844]: I0126 13:17:17.045021 4844 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/02fbab67-06d4-40b7-a1d0-9bdb9cf91def-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 13:17:17 crc kubenswrapper[4844]: I0126 13:17:17.045032 4844 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/02fbab67-06d4-40b7-a1d0-9bdb9cf91def-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 13:17:17 crc kubenswrapper[4844]: I0126 13:17:17.045041 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sffh7\" (UniqueName: \"kubernetes.io/projected/02fbab67-06d4-40b7-a1d0-9bdb9cf91def-kube-api-access-sffh7\") on node \"crc\" DevicePath \"\"" Jan 26 13:17:17 crc kubenswrapper[4844]: I0126 13:17:17.057636 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-sscmz"] Jan 26 13:17:17 crc kubenswrapper[4844]: E0126 13:17:17.061184 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle dispersionconf etc-swift kube-api-access-kdt9f ring-data-devices scripts swiftconf], unattached volumes=[], failed to process volumes=[combined-ca-bundle dispersionconf etc-swift kube-api-access-kdt9f ring-data-devices scripts swiftconf]: context canceled" pod="openstack/swift-ring-rebalance-sscmz" podUID="1925b73f-e610-41a2-a45b-00564b2265b5" Jan 26 13:17:17 crc kubenswrapper[4844]: I0126 13:17:17.072722 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-dh9kj"] Jan 26 13:17:17 crc kubenswrapper[4844]: I0126 13:17:17.073904 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-dh9kj" Jan 26 13:17:17 crc kubenswrapper[4844]: I0126 13:17:17.107949 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-dh9kj"] Jan 26 13:17:17 crc kubenswrapper[4844]: I0126 13:17:17.110913 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"a0913fcd-1ca6-46f8-80a8-0c2ced36fea9","Type":"ContainerStarted","Data":"103447e6662048b1839cb33e654768cf324c8624bdbe3e3104a6e2ad111418c6"} Jan 26 13:17:17 crc kubenswrapper[4844]: I0126 13:17:17.117932 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54585cbbc-jk8bg" event={"ID":"02fbab67-06d4-40b7-a1d0-9bdb9cf91def","Type":"ContainerDied","Data":"59b67f97fc7af67788f5a41574412f5a2f817bac335c7623c829f19b14b7a112"} Jan 26 13:17:17 crc kubenswrapper[4844]: I0126 13:17:17.117965 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-54585cbbc-jk8bg" Jan 26 13:17:17 crc kubenswrapper[4844]: I0126 13:17:17.117991 4844 scope.go:117] "RemoveContainer" containerID="7cc425ca67b3d70d162e8c734005e35e8efc37350e821cdf6a6c3b43326764ba" Jan 26 13:17:17 crc kubenswrapper[4844]: I0126 13:17:17.120464 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-sscmz"] Jan 26 13:17:17 crc kubenswrapper[4844]: I0126 13:17:17.121412 4844 generic.go:334] "Generic (PLEG): container finished" podID="66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11" containerID="812b8a02174bdb9d9317991bd7d045861aa6c7f61eafb34caa41e709bbbe6d17" exitCode=0 Jan 26 13:17:17 crc kubenswrapper[4844]: I0126 13:17:17.121476 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11","Type":"ContainerDied","Data":"812b8a02174bdb9d9317991bd7d045861aa6c7f61eafb34caa41e709bbbe6d17"} Jan 26 13:17:17 crc kubenswrapper[4844]: I0126 13:17:17.127539 4844 generic.go:334] "Generic (PLEG): container finished" podID="8461ccab-6d28-4df1-8fab-49cb84f6bfb9" containerID="ccc61abf034a4abb38fa7032c712fd040deb21601353a65ea423ddc22c6b9661" exitCode=0 Jan 26 13:17:17 crc kubenswrapper[4844]: I0126 13:17:17.128018 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c999dbc67-cvzlp" event={"ID":"8461ccab-6d28-4df1-8fab-49cb84f6bfb9","Type":"ContainerDied","Data":"ccc61abf034a4abb38fa7032c712fd040deb21601353a65ea423ddc22c6b9661"} Jan 26 13:17:17 crc kubenswrapper[4844]: I0126 13:17:17.128115 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c999dbc67-cvzlp" event={"ID":"8461ccab-6d28-4df1-8fab-49cb84f6bfb9","Type":"ContainerStarted","Data":"aa6e91dd658407e99b7f16d5095cf1111319803dedf167db8239c4ef8435e260"} Jan 26 13:17:17 crc kubenswrapper[4844]: I0126 13:17:17.128186 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-sscmz" Jan 26 13:17:17 crc kubenswrapper[4844]: I0126 13:17:17.151720 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1925b73f-e610-41a2-a45b-00564b2265b5-combined-ca-bundle\") pod \"swift-ring-rebalance-sscmz\" (UID: \"1925b73f-e610-41a2-a45b-00564b2265b5\") " pod="openstack/swift-ring-rebalance-sscmz" Jan 26 13:17:17 crc kubenswrapper[4844]: I0126 13:17:17.151856 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdt9f\" (UniqueName: \"kubernetes.io/projected/1925b73f-e610-41a2-a45b-00564b2265b5-kube-api-access-kdt9f\") pod \"swift-ring-rebalance-sscmz\" (UID: \"1925b73f-e610-41a2-a45b-00564b2265b5\") " pod="openstack/swift-ring-rebalance-sscmz" Jan 26 13:17:17 crc kubenswrapper[4844]: I0126 13:17:17.152129 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/1925b73f-e610-41a2-a45b-00564b2265b5-dispersionconf\") pod \"swift-ring-rebalance-sscmz\" (UID: \"1925b73f-e610-41a2-a45b-00564b2265b5\") " pod="openstack/swift-ring-rebalance-sscmz" Jan 26 13:17:17 crc kubenswrapper[4844]: I0126 13:17:17.152219 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/1925b73f-e610-41a2-a45b-00564b2265b5-etc-swift\") pod \"swift-ring-rebalance-sscmz\" (UID: \"1925b73f-e610-41a2-a45b-00564b2265b5\") " pod="openstack/swift-ring-rebalance-sscmz" Jan 26 13:17:17 crc kubenswrapper[4844]: I0126 13:17:17.152273 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/1925b73f-e610-41a2-a45b-00564b2265b5-ring-data-devices\") pod \"swift-ring-rebalance-sscmz\" (UID: \"1925b73f-e610-41a2-a45b-00564b2265b5\") " pod="openstack/swift-ring-rebalance-sscmz" Jan 26 13:17:17 crc kubenswrapper[4844]: I0126 13:17:17.152354 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/1925b73f-e610-41a2-a45b-00564b2265b5-swiftconf\") pod \"swift-ring-rebalance-sscmz\" (UID: \"1925b73f-e610-41a2-a45b-00564b2265b5\") " pod="openstack/swift-ring-rebalance-sscmz" Jan 26 13:17:17 crc kubenswrapper[4844]: I0126 13:17:17.152463 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1925b73f-e610-41a2-a45b-00564b2265b5-scripts\") pod \"swift-ring-rebalance-sscmz\" (UID: \"1925b73f-e610-41a2-a45b-00564b2265b5\") " pod="openstack/swift-ring-rebalance-sscmz" Jan 26 13:17:17 crc kubenswrapper[4844]: I0126 13:17:17.156572 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-sscmz" Jan 26 13:17:17 crc kubenswrapper[4844]: I0126 13:17:17.255481 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/1925b73f-e610-41a2-a45b-00564b2265b5-dispersionconf\") pod \"swift-ring-rebalance-sscmz\" (UID: \"1925b73f-e610-41a2-a45b-00564b2265b5\") " pod="openstack/swift-ring-rebalance-sscmz" Jan 26 13:17:17 crc kubenswrapper[4844]: I0126 13:17:17.255531 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/82fe3a1a-10c2-4378-a36b-b42131a2df4d-etc-swift\") pod \"swift-ring-rebalance-dh9kj\" (UID: \"82fe3a1a-10c2-4378-a36b-b42131a2df4d\") " pod="openstack/swift-ring-rebalance-dh9kj" Jan 26 13:17:17 crc kubenswrapper[4844]: I0126 13:17:17.255588 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/1925b73f-e610-41a2-a45b-00564b2265b5-etc-swift\") pod \"swift-ring-rebalance-sscmz\" (UID: \"1925b73f-e610-41a2-a45b-00564b2265b5\") " pod="openstack/swift-ring-rebalance-sscmz" Jan 26 13:17:17 crc kubenswrapper[4844]: I0126 13:17:17.255630 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/1925b73f-e610-41a2-a45b-00564b2265b5-ring-data-devices\") pod \"swift-ring-rebalance-sscmz\" (UID: \"1925b73f-e610-41a2-a45b-00564b2265b5\") " pod="openstack/swift-ring-rebalance-sscmz" Jan 26 13:17:17 crc kubenswrapper[4844]: I0126 13:17:17.255675 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/1925b73f-e610-41a2-a45b-00564b2265b5-swiftconf\") pod \"swift-ring-rebalance-sscmz\" (UID: \"1925b73f-e610-41a2-a45b-00564b2265b5\") " pod="openstack/swift-ring-rebalance-sscmz" Jan 26 13:17:17 crc kubenswrapper[4844]: I0126 13:17:17.255750 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1925b73f-e610-41a2-a45b-00564b2265b5-scripts\") pod \"swift-ring-rebalance-sscmz\" (UID: \"1925b73f-e610-41a2-a45b-00564b2265b5\") " pod="openstack/swift-ring-rebalance-sscmz" Jan 26 13:17:17 crc kubenswrapper[4844]: I0126 13:17:17.255796 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82fe3a1a-10c2-4378-a36b-b42131a2df4d-combined-ca-bundle\") pod \"swift-ring-rebalance-dh9kj\" (UID: \"82fe3a1a-10c2-4378-a36b-b42131a2df4d\") " pod="openstack/swift-ring-rebalance-dh9kj" Jan 26 13:17:17 crc kubenswrapper[4844]: I0126 13:17:17.255822 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/82fe3a1a-10c2-4378-a36b-b42131a2df4d-ring-data-devices\") pod \"swift-ring-rebalance-dh9kj\" (UID: \"82fe3a1a-10c2-4378-a36b-b42131a2df4d\") " pod="openstack/swift-ring-rebalance-dh9kj" Jan 26 13:17:17 crc kubenswrapper[4844]: I0126 13:17:17.255853 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/82fe3a1a-10c2-4378-a36b-b42131a2df4d-swiftconf\") pod \"swift-ring-rebalance-dh9kj\" (UID: \"82fe3a1a-10c2-4378-a36b-b42131a2df4d\") " 
pod="openstack/swift-ring-rebalance-dh9kj" Jan 26 13:17:17 crc kubenswrapper[4844]: I0126 13:17:17.255965 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/82fe3a1a-10c2-4378-a36b-b42131a2df4d-dispersionconf\") pod \"swift-ring-rebalance-dh9kj\" (UID: \"82fe3a1a-10c2-4378-a36b-b42131a2df4d\") " pod="openstack/swift-ring-rebalance-dh9kj" Jan 26 13:17:17 crc kubenswrapper[4844]: I0126 13:17:17.256143 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/82fe3a1a-10c2-4378-a36b-b42131a2df4d-scripts\") pod \"swift-ring-rebalance-dh9kj\" (UID: \"82fe3a1a-10c2-4378-a36b-b42131a2df4d\") " pod="openstack/swift-ring-rebalance-dh9kj" Jan 26 13:17:17 crc kubenswrapper[4844]: I0126 13:17:17.256175 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zd4dl\" (UniqueName: \"kubernetes.io/projected/82fe3a1a-10c2-4378-a36b-b42131a2df4d-kube-api-access-zd4dl\") pod \"swift-ring-rebalance-dh9kj\" (UID: \"82fe3a1a-10c2-4378-a36b-b42131a2df4d\") " pod="openstack/swift-ring-rebalance-dh9kj" Jan 26 13:17:17 crc kubenswrapper[4844]: I0126 13:17:17.256206 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8606256a-c070-4b18-906b-a4557edd45e7-etc-swift\") pod \"swift-storage-0\" (UID: \"8606256a-c070-4b18-906b-a4557edd45e7\") " pod="openstack/swift-storage-0" Jan 26 13:17:17 crc kubenswrapper[4844]: I0126 13:17:17.256266 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1925b73f-e610-41a2-a45b-00564b2265b5-combined-ca-bundle\") pod \"swift-ring-rebalance-sscmz\" (UID: \"1925b73f-e610-41a2-a45b-00564b2265b5\") " pod="openstack/swift-ring-rebalance-sscmz" Jan 26 13:17:17 crc kubenswrapper[4844]: I0126 13:17:17.256307 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdt9f\" (UniqueName: \"kubernetes.io/projected/1925b73f-e610-41a2-a45b-00564b2265b5-kube-api-access-kdt9f\") pod \"swift-ring-rebalance-sscmz\" (UID: \"1925b73f-e610-41a2-a45b-00564b2265b5\") " pod="openstack/swift-ring-rebalance-sscmz" Jan 26 13:17:17 crc kubenswrapper[4844]: E0126 13:17:17.257432 4844 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 26 13:17:17 crc kubenswrapper[4844]: E0126 13:17:17.257454 4844 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 26 13:17:17 crc kubenswrapper[4844]: E0126 13:17:17.257485 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8606256a-c070-4b18-906b-a4557edd45e7-etc-swift podName:8606256a-c070-4b18-906b-a4557edd45e7 nodeName:}" failed. No retries permitted until 2026-01-26 13:17:18.257474237 +0000 UTC m=+2015.190841849 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/8606256a-c070-4b18-906b-a4557edd45e7-etc-swift") pod "swift-storage-0" (UID: "8606256a-c070-4b18-906b-a4557edd45e7") : configmap "swift-ring-files" not found Jan 26 13:17:17 crc kubenswrapper[4844]: I0126 13:17:17.262004 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/1925b73f-e610-41a2-a45b-00564b2265b5-etc-swift\") pod \"swift-ring-rebalance-sscmz\" (UID: \"1925b73f-e610-41a2-a45b-00564b2265b5\") " pod="openstack/swift-ring-rebalance-sscmz" Jan 26 13:17:17 crc kubenswrapper[4844]: I0126 13:17:17.263976 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/1925b73f-e610-41a2-a45b-00564b2265b5-dispersionconf\") pod \"swift-ring-rebalance-sscmz\" (UID: \"1925b73f-e610-41a2-a45b-00564b2265b5\") " pod="openstack/swift-ring-rebalance-sscmz" Jan 26 13:17:17 crc kubenswrapper[4844]: I0126 13:17:17.264554 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-54585cbbc-jk8bg"] Jan 26 13:17:17 crc kubenswrapper[4844]: I0126 13:17:17.265223 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/1925b73f-e610-41a2-a45b-00564b2265b5-ring-data-devices\") pod \"swift-ring-rebalance-sscmz\" (UID: \"1925b73f-e610-41a2-a45b-00564b2265b5\") " pod="openstack/swift-ring-rebalance-sscmz" Jan 26 13:17:17 crc kubenswrapper[4844]: I0126 13:17:17.265345 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1925b73f-e610-41a2-a45b-00564b2265b5-scripts\") pod \"swift-ring-rebalance-sscmz\" (UID: \"1925b73f-e610-41a2-a45b-00564b2265b5\") " pod="openstack/swift-ring-rebalance-sscmz" Jan 26 13:17:17 crc kubenswrapper[4844]: I0126 13:17:17.271013 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/1925b73f-e610-41a2-a45b-00564b2265b5-swiftconf\") pod \"swift-ring-rebalance-sscmz\" (UID: \"1925b73f-e610-41a2-a45b-00564b2265b5\") " pod="openstack/swift-ring-rebalance-sscmz" Jan 26 13:17:17 crc kubenswrapper[4844]: I0126 13:17:17.274721 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdt9f\" (UniqueName: \"kubernetes.io/projected/1925b73f-e610-41a2-a45b-00564b2265b5-kube-api-access-kdt9f\") pod \"swift-ring-rebalance-sscmz\" (UID: \"1925b73f-e610-41a2-a45b-00564b2265b5\") " pod="openstack/swift-ring-rebalance-sscmz" Jan 26 13:17:17 crc kubenswrapper[4844]: I0126 13:17:17.281070 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1925b73f-e610-41a2-a45b-00564b2265b5-combined-ca-bundle\") pod \"swift-ring-rebalance-sscmz\" (UID: \"1925b73f-e610-41a2-a45b-00564b2265b5\") " pod="openstack/swift-ring-rebalance-sscmz" Jan 26 13:17:17 crc kubenswrapper[4844]: I0126 13:17:17.294767 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-54585cbbc-jk8bg"] Jan 26 13:17:17 crc kubenswrapper[4844]: I0126 13:17:17.336090 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02fbab67-06d4-40b7-a1d0-9bdb9cf91def" path="/var/lib/kubelet/pods/02fbab67-06d4-40b7-a1d0-9bdb9cf91def/volumes" Jan 26 13:17:17 crc kubenswrapper[4844]: I0126 13:17:17.358440 4844 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/1925b73f-e610-41a2-a45b-00564b2265b5-etc-swift\") pod \"1925b73f-e610-41a2-a45b-00564b2265b5\" (UID: \"1925b73f-e610-41a2-a45b-00564b2265b5\") " Jan 26 13:17:17 crc kubenswrapper[4844]: I0126 13:17:17.358487 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/1925b73f-e610-41a2-a45b-00564b2265b5-dispersionconf\") pod \"1925b73f-e610-41a2-a45b-00564b2265b5\" (UID: \"1925b73f-e610-41a2-a45b-00564b2265b5\") " Jan 26 13:17:17 crc kubenswrapper[4844]: I0126 13:17:17.358540 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kdt9f\" (UniqueName: \"kubernetes.io/projected/1925b73f-e610-41a2-a45b-00564b2265b5-kube-api-access-kdt9f\") pod \"1925b73f-e610-41a2-a45b-00564b2265b5\" (UID: \"1925b73f-e610-41a2-a45b-00564b2265b5\") " Jan 26 13:17:17 crc kubenswrapper[4844]: I0126 13:17:17.358635 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1925b73f-e610-41a2-a45b-00564b2265b5-scripts\") pod \"1925b73f-e610-41a2-a45b-00564b2265b5\" (UID: \"1925b73f-e610-41a2-a45b-00564b2265b5\") " Jan 26 13:17:17 crc kubenswrapper[4844]: I0126 13:17:17.358660 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/1925b73f-e610-41a2-a45b-00564b2265b5-ring-data-devices\") pod \"1925b73f-e610-41a2-a45b-00564b2265b5\" (UID: \"1925b73f-e610-41a2-a45b-00564b2265b5\") " Jan 26 13:17:17 crc kubenswrapper[4844]: I0126 13:17:17.358701 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/1925b73f-e610-41a2-a45b-00564b2265b5-swiftconf\") pod \"1925b73f-e610-41a2-a45b-00564b2265b5\" (UID: \"1925b73f-e610-41a2-a45b-00564b2265b5\") " Jan 26 13:17:17 crc kubenswrapper[4844]: I0126 13:17:17.358933 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/82fe3a1a-10c2-4378-a36b-b42131a2df4d-scripts\") pod \"swift-ring-rebalance-dh9kj\" (UID: \"82fe3a1a-10c2-4378-a36b-b42131a2df4d\") " pod="openstack/swift-ring-rebalance-dh9kj" Jan 26 13:17:17 crc kubenswrapper[4844]: I0126 13:17:17.358968 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zd4dl\" (UniqueName: \"kubernetes.io/projected/82fe3a1a-10c2-4378-a36b-b42131a2df4d-kube-api-access-zd4dl\") pod \"swift-ring-rebalance-dh9kj\" (UID: \"82fe3a1a-10c2-4378-a36b-b42131a2df4d\") " pod="openstack/swift-ring-rebalance-dh9kj" Jan 26 13:17:17 crc kubenswrapper[4844]: I0126 13:17:17.359038 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/82fe3a1a-10c2-4378-a36b-b42131a2df4d-etc-swift\") pod \"swift-ring-rebalance-dh9kj\" (UID: \"82fe3a1a-10c2-4378-a36b-b42131a2df4d\") " pod="openstack/swift-ring-rebalance-dh9kj" Jan 26 13:17:17 crc kubenswrapper[4844]: I0126 13:17:17.359151 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82fe3a1a-10c2-4378-a36b-b42131a2df4d-combined-ca-bundle\") pod \"swift-ring-rebalance-dh9kj\" (UID: \"82fe3a1a-10c2-4378-a36b-b42131a2df4d\") " pod="openstack/swift-ring-rebalance-dh9kj" Jan 26 
13:17:17 crc kubenswrapper[4844]: I0126 13:17:17.359186 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/82fe3a1a-10c2-4378-a36b-b42131a2df4d-ring-data-devices\") pod \"swift-ring-rebalance-dh9kj\" (UID: \"82fe3a1a-10c2-4378-a36b-b42131a2df4d\") " pod="openstack/swift-ring-rebalance-dh9kj" Jan 26 13:17:17 crc kubenswrapper[4844]: I0126 13:17:17.359217 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/82fe3a1a-10c2-4378-a36b-b42131a2df4d-swiftconf\") pod \"swift-ring-rebalance-dh9kj\" (UID: \"82fe3a1a-10c2-4378-a36b-b42131a2df4d\") " pod="openstack/swift-ring-rebalance-dh9kj" Jan 26 13:17:17 crc kubenswrapper[4844]: I0126 13:17:17.359237 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/82fe3a1a-10c2-4378-a36b-b42131a2df4d-dispersionconf\") pod \"swift-ring-rebalance-dh9kj\" (UID: \"82fe3a1a-10c2-4378-a36b-b42131a2df4d\") " pod="openstack/swift-ring-rebalance-dh9kj" Jan 26 13:17:17 crc kubenswrapper[4844]: I0126 13:17:17.359293 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1925b73f-e610-41a2-a45b-00564b2265b5-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "1925b73f-e610-41a2-a45b-00564b2265b5" (UID: "1925b73f-e610-41a2-a45b-00564b2265b5"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:17:17 crc kubenswrapper[4844]: I0126 13:17:17.359629 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1925b73f-e610-41a2-a45b-00564b2265b5-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "1925b73f-e610-41a2-a45b-00564b2265b5" (UID: "1925b73f-e610-41a2-a45b-00564b2265b5"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:17:17 crc kubenswrapper[4844]: I0126 13:17:17.360326 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/82fe3a1a-10c2-4378-a36b-b42131a2df4d-scripts\") pod \"swift-ring-rebalance-dh9kj\" (UID: \"82fe3a1a-10c2-4378-a36b-b42131a2df4d\") " pod="openstack/swift-ring-rebalance-dh9kj" Jan 26 13:17:17 crc kubenswrapper[4844]: I0126 13:17:17.360733 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1925b73f-e610-41a2-a45b-00564b2265b5-scripts" (OuterVolumeSpecName: "scripts") pod "1925b73f-e610-41a2-a45b-00564b2265b5" (UID: "1925b73f-e610-41a2-a45b-00564b2265b5"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:17:17 crc kubenswrapper[4844]: I0126 13:17:17.360805 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/82fe3a1a-10c2-4378-a36b-b42131a2df4d-etc-swift\") pod \"swift-ring-rebalance-dh9kj\" (UID: \"82fe3a1a-10c2-4378-a36b-b42131a2df4d\") " pod="openstack/swift-ring-rebalance-dh9kj" Jan 26 13:17:17 crc kubenswrapper[4844]: I0126 13:17:17.360837 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/82fe3a1a-10c2-4378-a36b-b42131a2df4d-ring-data-devices\") pod \"swift-ring-rebalance-dh9kj\" (UID: \"82fe3a1a-10c2-4378-a36b-b42131a2df4d\") " pod="openstack/swift-ring-rebalance-dh9kj" Jan 26 13:17:17 crc kubenswrapper[4844]: I0126 13:17:17.367881 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1925b73f-e610-41a2-a45b-00564b2265b5-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "1925b73f-e610-41a2-a45b-00564b2265b5" (UID: "1925b73f-e610-41a2-a45b-00564b2265b5"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:17:17 crc kubenswrapper[4844]: I0126 13:17:17.371753 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1925b73f-e610-41a2-a45b-00564b2265b5-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "1925b73f-e610-41a2-a45b-00564b2265b5" (UID: "1925b73f-e610-41a2-a45b-00564b2265b5"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:17:17 crc kubenswrapper[4844]: I0126 13:17:17.371959 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/82fe3a1a-10c2-4378-a36b-b42131a2df4d-swiftconf\") pod \"swift-ring-rebalance-dh9kj\" (UID: \"82fe3a1a-10c2-4378-a36b-b42131a2df4d\") " pod="openstack/swift-ring-rebalance-dh9kj" Jan 26 13:17:17 crc kubenswrapper[4844]: I0126 13:17:17.372060 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1925b73f-e610-41a2-a45b-00564b2265b5-kube-api-access-kdt9f" (OuterVolumeSpecName: "kube-api-access-kdt9f") pod "1925b73f-e610-41a2-a45b-00564b2265b5" (UID: "1925b73f-e610-41a2-a45b-00564b2265b5"). InnerVolumeSpecName "kube-api-access-kdt9f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:17:17 crc kubenswrapper[4844]: I0126 13:17:17.372096 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/82fe3a1a-10c2-4378-a36b-b42131a2df4d-dispersionconf\") pod \"swift-ring-rebalance-dh9kj\" (UID: \"82fe3a1a-10c2-4378-a36b-b42131a2df4d\") " pod="openstack/swift-ring-rebalance-dh9kj" Jan 26 13:17:17 crc kubenswrapper[4844]: I0126 13:17:17.377644 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82fe3a1a-10c2-4378-a36b-b42131a2df4d-combined-ca-bundle\") pod \"swift-ring-rebalance-dh9kj\" (UID: \"82fe3a1a-10c2-4378-a36b-b42131a2df4d\") " pod="openstack/swift-ring-rebalance-dh9kj" Jan 26 13:17:17 crc kubenswrapper[4844]: I0126 13:17:17.384364 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zd4dl\" (UniqueName: \"kubernetes.io/projected/82fe3a1a-10c2-4378-a36b-b42131a2df4d-kube-api-access-zd4dl\") pod \"swift-ring-rebalance-dh9kj\" (UID: \"82fe3a1a-10c2-4378-a36b-b42131a2df4d\") " pod="openstack/swift-ring-rebalance-dh9kj" Jan 26 13:17:17 crc kubenswrapper[4844]: I0126 13:17:17.408076 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-dh9kj" Jan 26 13:17:17 crc kubenswrapper[4844]: I0126 13:17:17.460244 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1925b73f-e610-41a2-a45b-00564b2265b5-combined-ca-bundle\") pod \"1925b73f-e610-41a2-a45b-00564b2265b5\" (UID: \"1925b73f-e610-41a2-a45b-00564b2265b5\") " Jan 26 13:17:17 crc kubenswrapper[4844]: I0126 13:17:17.460802 4844 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/1925b73f-e610-41a2-a45b-00564b2265b5-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 26 13:17:17 crc kubenswrapper[4844]: I0126 13:17:17.460829 4844 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/1925b73f-e610-41a2-a45b-00564b2265b5-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 26 13:17:17 crc kubenswrapper[4844]: I0126 13:17:17.460841 4844 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/1925b73f-e610-41a2-a45b-00564b2265b5-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 26 13:17:17 crc kubenswrapper[4844]: I0126 13:17:17.460853 4844 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/1925b73f-e610-41a2-a45b-00564b2265b5-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 26 13:17:17 crc kubenswrapper[4844]: I0126 13:17:17.460865 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kdt9f\" (UniqueName: \"kubernetes.io/projected/1925b73f-e610-41a2-a45b-00564b2265b5-kube-api-access-kdt9f\") on node \"crc\" DevicePath \"\"" Jan 26 13:17:17 crc kubenswrapper[4844]: I0126 13:17:17.460877 4844 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1925b73f-e610-41a2-a45b-00564b2265b5-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 13:17:17 crc kubenswrapper[4844]: I0126 13:17:17.468252 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1925b73f-e610-41a2-a45b-00564b2265b5-combined-ca-bundle" 
(OuterVolumeSpecName: "combined-ca-bundle") pod "1925b73f-e610-41a2-a45b-00564b2265b5" (UID: "1925b73f-e610-41a2-a45b-00564b2265b5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:17:17 crc kubenswrapper[4844]: I0126 13:17:17.562881 4844 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1925b73f-e610-41a2-a45b-00564b2265b5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 13:17:17 crc kubenswrapper[4844]: I0126 13:17:17.903510 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-dh9kj"] Jan 26 13:17:18 crc kubenswrapper[4844]: I0126 13:17:18.140106 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"a0913fcd-1ca6-46f8-80a8-0c2ced36fea9","Type":"ContainerStarted","Data":"c4fe7f5664ea893515c8df333949ba5ccd3292b4a0d35e8b725edf558bd45e7e"} Jan 26 13:17:18 crc kubenswrapper[4844]: I0126 13:17:18.140327 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 26 13:17:18 crc kubenswrapper[4844]: I0126 13:17:18.144820 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c5d5c9f8f-9m69m" event={"ID":"73dd3353-ef91-44cb-8772-fc2c7426c367","Type":"ContainerStarted","Data":"a23b3f8a9caddfe1e0c39a9ce02fc13173e267ed0db8d1b88d1c80207220e014"} Jan 26 13:17:18 crc kubenswrapper[4844]: I0126 13:17:18.145045 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c5d5c9f8f-9m69m" Jan 26 13:17:18 crc kubenswrapper[4844]: I0126 13:17:18.147554 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c999dbc67-cvzlp" event={"ID":"8461ccab-6d28-4df1-8fab-49cb84f6bfb9","Type":"ContainerStarted","Data":"f7e3cc9c08e0881f89f24682031a154c4b9f31edf9d85e7b83810a3951f774d4"} Jan 26 13:17:18 crc kubenswrapper[4844]: I0126 13:17:18.147716 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7c999dbc67-cvzlp" Jan 26 13:17:18 crc kubenswrapper[4844]: I0126 13:17:18.149838 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-dh9kj" event={"ID":"82fe3a1a-10c2-4378-a36b-b42131a2df4d","Type":"ContainerStarted","Data":"f3fe85d89a55bbc1e09fc4468daaa544b669482f4eaa658daf45058340205459"} Jan 26 13:17:18 crc kubenswrapper[4844]: I0126 13:17:18.149839 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-sscmz" Jan 26 13:17:18 crc kubenswrapper[4844]: I0126 13:17:18.160053 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=3.207626442 podStartE2EDuration="4.160032631s" podCreationTimestamp="2026-01-26 13:17:14 +0000 UTC" firstStartedPulling="2026-01-26 13:17:15.645498964 +0000 UTC m=+2012.578866576" lastFinishedPulling="2026-01-26 13:17:16.597905153 +0000 UTC m=+2013.531272765" observedRunningTime="2026-01-26 13:17:18.156569998 +0000 UTC m=+2015.089937600" watchObservedRunningTime="2026-01-26 13:17:18.160032631 +0000 UTC m=+2015.093400273" Jan 26 13:17:18 crc kubenswrapper[4844]: I0126 13:17:18.177406 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7c999dbc67-cvzlp" podStartSLOduration=3.177386827 podStartE2EDuration="3.177386827s" podCreationTimestamp="2026-01-26 13:17:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 13:17:18.176797674 +0000 UTC m=+2015.110165296" watchObservedRunningTime="2026-01-26 13:17:18.177386827 +0000 UTC m=+2015.110754449" Jan 26 13:17:18 crc kubenswrapper[4844]: I0126 13:17:18.219493 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c5d5c9f8f-9m69m" podStartSLOduration=5.219479307 podStartE2EDuration="5.219479307s" podCreationTimestamp="2026-01-26 13:17:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 13:17:18.212026339 +0000 UTC m=+2015.145393951" watchObservedRunningTime="2026-01-26 13:17:18.219479307 +0000 UTC m=+2015.152846919" Jan 26 13:17:18 crc kubenswrapper[4844]: I0126 13:17:18.261852 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-sscmz"] Jan 26 13:17:18 crc kubenswrapper[4844]: I0126 13:17:18.274170 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8606256a-c070-4b18-906b-a4557edd45e7-etc-swift\") pod \"swift-storage-0\" (UID: \"8606256a-c070-4b18-906b-a4557edd45e7\") " pod="openstack/swift-storage-0" Jan 26 13:17:18 crc kubenswrapper[4844]: E0126 13:17:18.274944 4844 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 26 13:17:18 crc kubenswrapper[4844]: E0126 13:17:18.274959 4844 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 26 13:17:18 crc kubenswrapper[4844]: E0126 13:17:18.274995 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8606256a-c070-4b18-906b-a4557edd45e7-etc-swift podName:8606256a-c070-4b18-906b-a4557edd45e7 nodeName:}" failed. No retries permitted until 2026-01-26 13:17:20.274981689 +0000 UTC m=+2017.208349301 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/8606256a-c070-4b18-906b-a4557edd45e7-etc-swift") pod "swift-storage-0" (UID: "8606256a-c070-4b18-906b-a4557edd45e7") : configmap "swift-ring-files" not found Jan 26 13:17:18 crc kubenswrapper[4844]: I0126 13:17:18.278277 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-sscmz"] Jan 26 13:17:19 crc kubenswrapper[4844]: I0126 13:17:19.322543 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1925b73f-e610-41a2-a45b-00564b2265b5" path="/var/lib/kubelet/pods/1925b73f-e610-41a2-a45b-00564b2265b5/volumes" Jan 26 13:17:20 crc kubenswrapper[4844]: I0126 13:17:20.317918 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8606256a-c070-4b18-906b-a4557edd45e7-etc-swift\") pod \"swift-storage-0\" (UID: \"8606256a-c070-4b18-906b-a4557edd45e7\") " pod="openstack/swift-storage-0" Jan 26 13:17:20 crc kubenswrapper[4844]: E0126 13:17:20.318129 4844 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 26 13:17:20 crc kubenswrapper[4844]: E0126 13:17:20.318555 4844 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 26 13:17:20 crc kubenswrapper[4844]: E0126 13:17:20.318663 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8606256a-c070-4b18-906b-a4557edd45e7-etc-swift podName:8606256a-c070-4b18-906b-a4557edd45e7 nodeName:}" failed. No retries permitted until 2026-01-26 13:17:24.31863677 +0000 UTC m=+2021.252004382 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/8606256a-c070-4b18-906b-a4557edd45e7-etc-swift") pod "swift-storage-0" (UID: "8606256a-c070-4b18-906b-a4557edd45e7") : configmap "swift-ring-files" not found Jan 26 13:17:21 crc kubenswrapper[4844]: I0126 13:17:21.219612 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-dh9kj" event={"ID":"82fe3a1a-10c2-4378-a36b-b42131a2df4d","Type":"ContainerStarted","Data":"e86fa50d0fc8f99e0a8d9de22b61a8a6c18b967a05cd35364e008fce1def5aa1"} Jan 26 13:17:21 crc kubenswrapper[4844]: I0126 13:17:21.235195 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-dh9kj" podStartSLOduration=1.5475357889999999 podStartE2EDuration="4.235175009s" podCreationTimestamp="2026-01-26 13:17:17 +0000 UTC" firstStartedPulling="2026-01-26 13:17:17.904609774 +0000 UTC m=+2014.837977386" lastFinishedPulling="2026-01-26 13:17:20.592248964 +0000 UTC m=+2017.525616606" observedRunningTime="2026-01-26 13:17:21.232390552 +0000 UTC m=+2018.165758164" watchObservedRunningTime="2026-01-26 13:17:21.235175009 +0000 UTC m=+2018.168542631" Jan 26 13:17:21 crc kubenswrapper[4844]: I0126 13:17:21.783403 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 26 13:17:21 crc kubenswrapper[4844]: I0126 13:17:21.783455 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 26 13:17:21 crc kubenswrapper[4844]: I0126 13:17:21.894876 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 26 13:17:22 crc kubenswrapper[4844]: I0126 13:17:22.417766 4844 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 26 13:17:22 crc kubenswrapper[4844]: I0126 13:17:22.705879 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-jq7ln"] Jan 26 13:17:22 crc kubenswrapper[4844]: I0126 13:17:22.707053 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-jq7ln" Jan 26 13:17:22 crc kubenswrapper[4844]: I0126 13:17:22.721314 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-jq7ln"] Jan 26 13:17:22 crc kubenswrapper[4844]: I0126 13:17:22.733166 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-eca9-account-create-update-8q2q2"] Jan 26 13:17:22 crc kubenswrapper[4844]: I0126 13:17:22.734289 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-eca9-account-create-update-8q2q2" Jan 26 13:17:22 crc kubenswrapper[4844]: I0126 13:17:22.736978 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 26 13:17:22 crc kubenswrapper[4844]: I0126 13:17:22.746083 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-eca9-account-create-update-8q2q2"] Jan 26 13:17:22 crc kubenswrapper[4844]: I0126 13:17:22.826409 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 26 13:17:22 crc kubenswrapper[4844]: I0126 13:17:22.828577 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 26 13:17:22 crc kubenswrapper[4844]: I0126 13:17:22.876866 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xb5p\" (UniqueName: \"kubernetes.io/projected/babcb55b-51b8-4031-a9e6-49df01680aa5-kube-api-access-9xb5p\") pod \"keystone-db-create-jq7ln\" (UID: \"babcb55b-51b8-4031-a9e6-49df01680aa5\") " pod="openstack/keystone-db-create-jq7ln" Jan 26 13:17:22 crc kubenswrapper[4844]: I0126 13:17:22.876960 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a80cb87d-d461-4f90-8727-d6958eb5dac2-operator-scripts\") pod \"keystone-eca9-account-create-update-8q2q2\" (UID: \"a80cb87d-d461-4f90-8727-d6958eb5dac2\") " pod="openstack/keystone-eca9-account-create-update-8q2q2" Jan 26 13:17:22 crc kubenswrapper[4844]: I0126 13:17:22.877097 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/babcb55b-51b8-4031-a9e6-49df01680aa5-operator-scripts\") pod \"keystone-db-create-jq7ln\" (UID: \"babcb55b-51b8-4031-a9e6-49df01680aa5\") " pod="openstack/keystone-db-create-jq7ln" Jan 26 13:17:22 crc kubenswrapper[4844]: I0126 13:17:22.877155 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdpw9\" (UniqueName: \"kubernetes.io/projected/a80cb87d-d461-4f90-8727-d6958eb5dac2-kube-api-access-qdpw9\") pod \"keystone-eca9-account-create-update-8q2q2\" (UID: \"a80cb87d-d461-4f90-8727-d6958eb5dac2\") " pod="openstack/keystone-eca9-account-create-update-8q2q2" Jan 26 13:17:22 crc kubenswrapper[4844]: I0126 13:17:22.898874 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-hstpj"] Jan 26 13:17:22 crc 
kubenswrapper[4844]: I0126 13:17:22.900119 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-hstpj" Jan 26 13:17:22 crc kubenswrapper[4844]: I0126 13:17:22.901882 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-hstpj"] Jan 26 13:17:22 crc kubenswrapper[4844]: I0126 13:17:22.966155 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 26 13:17:22 crc kubenswrapper[4844]: I0126 13:17:22.980922 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/babcb55b-51b8-4031-a9e6-49df01680aa5-operator-scripts\") pod \"keystone-db-create-jq7ln\" (UID: \"babcb55b-51b8-4031-a9e6-49df01680aa5\") " pod="openstack/keystone-db-create-jq7ln" Jan 26 13:17:22 crc kubenswrapper[4844]: I0126 13:17:22.981022 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdpw9\" (UniqueName: \"kubernetes.io/projected/a80cb87d-d461-4f90-8727-d6958eb5dac2-kube-api-access-qdpw9\") pod \"keystone-eca9-account-create-update-8q2q2\" (UID: \"a80cb87d-d461-4f90-8727-d6958eb5dac2\") " pod="openstack/keystone-eca9-account-create-update-8q2q2" Jan 26 13:17:22 crc kubenswrapper[4844]: I0126 13:17:22.981055 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xb5p\" (UniqueName: \"kubernetes.io/projected/babcb55b-51b8-4031-a9e6-49df01680aa5-kube-api-access-9xb5p\") pod \"keystone-db-create-jq7ln\" (UID: \"babcb55b-51b8-4031-a9e6-49df01680aa5\") " pod="openstack/keystone-db-create-jq7ln" Jan 26 13:17:22 crc kubenswrapper[4844]: I0126 13:17:22.981113 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a80cb87d-d461-4f90-8727-d6958eb5dac2-operator-scripts\") pod \"keystone-eca9-account-create-update-8q2q2\" (UID: \"a80cb87d-d461-4f90-8727-d6958eb5dac2\") " pod="openstack/keystone-eca9-account-create-update-8q2q2" Jan 26 13:17:22 crc kubenswrapper[4844]: I0126 13:17:22.982484 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a80cb87d-d461-4f90-8727-d6958eb5dac2-operator-scripts\") pod \"keystone-eca9-account-create-update-8q2q2\" (UID: \"a80cb87d-d461-4f90-8727-d6958eb5dac2\") " pod="openstack/keystone-eca9-account-create-update-8q2q2" Jan 26 13:17:22 crc kubenswrapper[4844]: I0126 13:17:22.985191 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/babcb55b-51b8-4031-a9e6-49df01680aa5-operator-scripts\") pod \"keystone-db-create-jq7ln\" (UID: \"babcb55b-51b8-4031-a9e6-49df01680aa5\") " pod="openstack/keystone-db-create-jq7ln" Jan 26 13:17:23 crc kubenswrapper[4844]: I0126 13:17:23.007623 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xb5p\" (UniqueName: \"kubernetes.io/projected/babcb55b-51b8-4031-a9e6-49df01680aa5-kube-api-access-9xb5p\") pod \"keystone-db-create-jq7ln\" (UID: \"babcb55b-51b8-4031-a9e6-49df01680aa5\") " pod="openstack/keystone-db-create-jq7ln" Jan 26 13:17:23 crc kubenswrapper[4844]: I0126 13:17:23.011310 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdpw9\" (UniqueName: 
\"kubernetes.io/projected/a80cb87d-d461-4f90-8727-d6958eb5dac2-kube-api-access-qdpw9\") pod \"keystone-eca9-account-create-update-8q2q2\" (UID: \"a80cb87d-d461-4f90-8727-d6958eb5dac2\") " pod="openstack/keystone-eca9-account-create-update-8q2q2" Jan 26 13:17:23 crc kubenswrapper[4844]: I0126 13:17:23.028733 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-561e-account-create-update-9g4xg"] Jan 26 13:17:23 crc kubenswrapper[4844]: I0126 13:17:23.029892 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-561e-account-create-update-9g4xg" Jan 26 13:17:23 crc kubenswrapper[4844]: I0126 13:17:23.032866 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 26 13:17:23 crc kubenswrapper[4844]: I0126 13:17:23.033884 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-561e-account-create-update-9g4xg"] Jan 26 13:17:23 crc kubenswrapper[4844]: I0126 13:17:23.036358 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-jq7ln" Jan 26 13:17:23 crc kubenswrapper[4844]: I0126 13:17:23.050474 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-eca9-account-create-update-8q2q2" Jan 26 13:17:23 crc kubenswrapper[4844]: I0126 13:17:23.082632 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dl2q\" (UniqueName: \"kubernetes.io/projected/bceea47e-5bf5-412a-a8d9-9c50e01d4c76-kube-api-access-5dl2q\") pod \"placement-db-create-hstpj\" (UID: \"bceea47e-5bf5-412a-a8d9-9c50e01d4c76\") " pod="openstack/placement-db-create-hstpj" Jan 26 13:17:23 crc kubenswrapper[4844]: I0126 13:17:23.082710 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bceea47e-5bf5-412a-a8d9-9c50e01d4c76-operator-scripts\") pod \"placement-db-create-hstpj\" (UID: \"bceea47e-5bf5-412a-a8d9-9c50e01d4c76\") " pod="openstack/placement-db-create-hstpj" Jan 26 13:17:23 crc kubenswrapper[4844]: I0126 13:17:23.183828 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pw7nz\" (UniqueName: \"kubernetes.io/projected/8ad24d6d-9838-4344-be0c-777f0c6c6246-kube-api-access-pw7nz\") pod \"placement-561e-account-create-update-9g4xg\" (UID: \"8ad24d6d-9838-4344-be0c-777f0c6c6246\") " pod="openstack/placement-561e-account-create-update-9g4xg" Jan 26 13:17:23 crc kubenswrapper[4844]: I0126 13:17:23.183887 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dl2q\" (UniqueName: \"kubernetes.io/projected/bceea47e-5bf5-412a-a8d9-9c50e01d4c76-kube-api-access-5dl2q\") pod \"placement-db-create-hstpj\" (UID: \"bceea47e-5bf5-412a-a8d9-9c50e01d4c76\") " pod="openstack/placement-db-create-hstpj" Jan 26 13:17:23 crc kubenswrapper[4844]: I0126 13:17:23.183949 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bceea47e-5bf5-412a-a8d9-9c50e01d4c76-operator-scripts\") pod \"placement-db-create-hstpj\" (UID: \"bceea47e-5bf5-412a-a8d9-9c50e01d4c76\") " pod="openstack/placement-db-create-hstpj" Jan 26 13:17:23 crc kubenswrapper[4844]: I0126 13:17:23.183971 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8ad24d6d-9838-4344-be0c-777f0c6c6246-operator-scripts\") pod \"placement-561e-account-create-update-9g4xg\" (UID: \"8ad24d6d-9838-4344-be0c-777f0c6c6246\") " pod="openstack/placement-561e-account-create-update-9g4xg" Jan 26 13:17:23 crc kubenswrapper[4844]: I0126 13:17:23.184679 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bceea47e-5bf5-412a-a8d9-9c50e01d4c76-operator-scripts\") pod \"placement-db-create-hstpj\" (UID: \"bceea47e-5bf5-412a-a8d9-9c50e01d4c76\") " pod="openstack/placement-db-create-hstpj" Jan 26 13:17:23 crc kubenswrapper[4844]: I0126 13:17:23.201023 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dl2q\" (UniqueName: \"kubernetes.io/projected/bceea47e-5bf5-412a-a8d9-9c50e01d4c76-kube-api-access-5dl2q\") pod \"placement-db-create-hstpj\" (UID: \"bceea47e-5bf5-412a-a8d9-9c50e01d4c76\") " pod="openstack/placement-db-create-hstpj" Jan 26 13:17:23 crc kubenswrapper[4844]: I0126 13:17:23.224289 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-hstpj" Jan 26 13:17:23 crc kubenswrapper[4844]: I0126 13:17:23.285026 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pw7nz\" (UniqueName: \"kubernetes.io/projected/8ad24d6d-9838-4344-be0c-777f0c6c6246-kube-api-access-pw7nz\") pod \"placement-561e-account-create-update-9g4xg\" (UID: \"8ad24d6d-9838-4344-be0c-777f0c6c6246\") " pod="openstack/placement-561e-account-create-update-9g4xg" Jan 26 13:17:23 crc kubenswrapper[4844]: I0126 13:17:23.285143 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8ad24d6d-9838-4344-be0c-777f0c6c6246-operator-scripts\") pod \"placement-561e-account-create-update-9g4xg\" (UID: \"8ad24d6d-9838-4344-be0c-777f0c6c6246\") " pod="openstack/placement-561e-account-create-update-9g4xg" Jan 26 13:17:23 crc kubenswrapper[4844]: I0126 13:17:23.285868 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8ad24d6d-9838-4344-be0c-777f0c6c6246-operator-scripts\") pod \"placement-561e-account-create-update-9g4xg\" (UID: \"8ad24d6d-9838-4344-be0c-777f0c6c6246\") " pod="openstack/placement-561e-account-create-update-9g4xg" Jan 26 13:17:23 crc kubenswrapper[4844]: I0126 13:17:23.302109 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pw7nz\" (UniqueName: \"kubernetes.io/projected/8ad24d6d-9838-4344-be0c-777f0c6c6246-kube-api-access-pw7nz\") pod \"placement-561e-account-create-update-9g4xg\" (UID: \"8ad24d6d-9838-4344-be0c-777f0c6c6246\") " pod="openstack/placement-561e-account-create-update-9g4xg" Jan 26 13:17:23 crc kubenswrapper[4844]: I0126 13:17:23.381139 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 26 13:17:23 crc kubenswrapper[4844]: I0126 13:17:23.393051 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-561e-account-create-update-9g4xg" Jan 26 13:17:23 crc kubenswrapper[4844]: I0126 13:17:23.895792 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c5d5c9f8f-9m69m" Jan 26 13:17:24 crc kubenswrapper[4844]: I0126 13:17:24.403652 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8606256a-c070-4b18-906b-a4557edd45e7-etc-swift\") pod \"swift-storage-0\" (UID: \"8606256a-c070-4b18-906b-a4557edd45e7\") " pod="openstack/swift-storage-0" Jan 26 13:17:24 crc kubenswrapper[4844]: E0126 13:17:24.403769 4844 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 26 13:17:24 crc kubenswrapper[4844]: E0126 13:17:24.404094 4844 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 26 13:17:24 crc kubenswrapper[4844]: E0126 13:17:24.404219 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8606256a-c070-4b18-906b-a4557edd45e7-etc-swift podName:8606256a-c070-4b18-906b-a4557edd45e7 nodeName:}" failed. No retries permitted until 2026-01-26 13:17:32.404203659 +0000 UTC m=+2029.337571271 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/8606256a-c070-4b18-906b-a4557edd45e7-etc-swift") pod "swift-storage-0" (UID: "8606256a-c070-4b18-906b-a4557edd45e7") : configmap "swift-ring-files" not found Jan 26 13:17:25 crc kubenswrapper[4844]: I0126 13:17:25.041730 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-db-create-lrffj"] Jan 26 13:17:25 crc kubenswrapper[4844]: I0126 13:17:25.042743 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-lrffj" Jan 26 13:17:25 crc kubenswrapper[4844]: I0126 13:17:25.050257 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-create-lrffj"] Jan 26 13:17:25 crc kubenswrapper[4844]: I0126 13:17:25.116474 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtkpb\" (UniqueName: \"kubernetes.io/projected/6fb93e78-de86-442b-b44d-6b3281ca3618-kube-api-access-wtkpb\") pod \"watcher-db-create-lrffj\" (UID: \"6fb93e78-de86-442b-b44d-6b3281ca3618\") " pod="openstack/watcher-db-create-lrffj" Jan 26 13:17:25 crc kubenswrapper[4844]: I0126 13:17:25.116842 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fb93e78-de86-442b-b44d-6b3281ca3618-operator-scripts\") pod \"watcher-db-create-lrffj\" (UID: \"6fb93e78-de86-442b-b44d-6b3281ca3618\") " pod="openstack/watcher-db-create-lrffj" Jan 26 13:17:25 crc kubenswrapper[4844]: I0126 13:17:25.141020 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-ab2d-account-create-update-fjgzg"] Jan 26 13:17:25 crc kubenswrapper[4844]: I0126 13:17:25.142064 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-ab2d-account-create-update-fjgzg" Jan 26 13:17:25 crc kubenswrapper[4844]: I0126 13:17:25.145875 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-db-secret" Jan 26 13:17:25 crc kubenswrapper[4844]: I0126 13:17:25.221144 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fb93e78-de86-442b-b44d-6b3281ca3618-operator-scripts\") pod \"watcher-db-create-lrffj\" (UID: \"6fb93e78-de86-442b-b44d-6b3281ca3618\") " pod="openstack/watcher-db-create-lrffj" Jan 26 13:17:25 crc kubenswrapper[4844]: I0126 13:17:25.221230 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/185bd916-a6be-4d5f-851b-260ad742e54e-operator-scripts\") pod \"watcher-ab2d-account-create-update-fjgzg\" (UID: \"185bd916-a6be-4d5f-851b-260ad742e54e\") " pod="openstack/watcher-ab2d-account-create-update-fjgzg" Jan 26 13:17:25 crc kubenswrapper[4844]: I0126 13:17:25.221259 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2chwf\" (UniqueName: \"kubernetes.io/projected/185bd916-a6be-4d5f-851b-260ad742e54e-kube-api-access-2chwf\") pod \"watcher-ab2d-account-create-update-fjgzg\" (UID: \"185bd916-a6be-4d5f-851b-260ad742e54e\") " pod="openstack/watcher-ab2d-account-create-update-fjgzg" Jan 26 13:17:25 crc kubenswrapper[4844]: I0126 13:17:25.221286 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wtkpb\" (UniqueName: \"kubernetes.io/projected/6fb93e78-de86-442b-b44d-6b3281ca3618-kube-api-access-wtkpb\") pod \"watcher-db-create-lrffj\" (UID: \"6fb93e78-de86-442b-b44d-6b3281ca3618\") " pod="openstack/watcher-db-create-lrffj" Jan 26 13:17:25 crc kubenswrapper[4844]: I0126 13:17:25.222350 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fb93e78-de86-442b-b44d-6b3281ca3618-operator-scripts\") pod \"watcher-db-create-lrffj\" (UID: \"6fb93e78-de86-442b-b44d-6b3281ca3618\") " pod="openstack/watcher-db-create-lrffj" Jan 26 13:17:25 crc kubenswrapper[4844]: I0126 13:17:25.234180 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-ab2d-account-create-update-fjgzg"] Jan 26 13:17:25 crc kubenswrapper[4844]: I0126 13:17:25.265385 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtkpb\" (UniqueName: \"kubernetes.io/projected/6fb93e78-de86-442b-b44d-6b3281ca3618-kube-api-access-wtkpb\") pod \"watcher-db-create-lrffj\" (UID: \"6fb93e78-de86-442b-b44d-6b3281ca3618\") " pod="openstack/watcher-db-create-lrffj" Jan 26 13:17:25 crc kubenswrapper[4844]: I0126 13:17:25.322732 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/185bd916-a6be-4d5f-851b-260ad742e54e-operator-scripts\") pod \"watcher-ab2d-account-create-update-fjgzg\" (UID: \"185bd916-a6be-4d5f-851b-260ad742e54e\") " pod="openstack/watcher-ab2d-account-create-update-fjgzg" Jan 26 13:17:25 crc kubenswrapper[4844]: I0126 13:17:25.322784 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2chwf\" (UniqueName: \"kubernetes.io/projected/185bd916-a6be-4d5f-851b-260ad742e54e-kube-api-access-2chwf\") pod 
\"watcher-ab2d-account-create-update-fjgzg\" (UID: \"185bd916-a6be-4d5f-851b-260ad742e54e\") " pod="openstack/watcher-ab2d-account-create-update-fjgzg" Jan 26 13:17:25 crc kubenswrapper[4844]: I0126 13:17:25.323617 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/185bd916-a6be-4d5f-851b-260ad742e54e-operator-scripts\") pod \"watcher-ab2d-account-create-update-fjgzg\" (UID: \"185bd916-a6be-4d5f-851b-260ad742e54e\") " pod="openstack/watcher-ab2d-account-create-update-fjgzg" Jan 26 13:17:25 crc kubenswrapper[4844]: I0126 13:17:25.340846 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2chwf\" (UniqueName: \"kubernetes.io/projected/185bd916-a6be-4d5f-851b-260ad742e54e-kube-api-access-2chwf\") pod \"watcher-ab2d-account-create-update-fjgzg\" (UID: \"185bd916-a6be-4d5f-851b-260ad742e54e\") " pod="openstack/watcher-ab2d-account-create-update-fjgzg" Jan 26 13:17:25 crc kubenswrapper[4844]: I0126 13:17:25.362802 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-lrffj" Jan 26 13:17:25 crc kubenswrapper[4844]: I0126 13:17:25.457804 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-ab2d-account-create-update-fjgzg" Jan 26 13:17:25 crc kubenswrapper[4844]: I0126 13:17:25.679891 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7c999dbc67-cvzlp" Jan 26 13:17:25 crc kubenswrapper[4844]: I0126 13:17:25.740689 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c5d5c9f8f-9m69m"] Jan 26 13:17:25 crc kubenswrapper[4844]: I0126 13:17:25.740916 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c5d5c9f8f-9m69m" podUID="73dd3353-ef91-44cb-8772-fc2c7426c367" containerName="dnsmasq-dns" containerID="cri-o://a23b3f8a9caddfe1e0c39a9ce02fc13173e267ed0db8d1b88d1c80207220e014" gracePeriod=10 Jan 26 13:17:28 crc kubenswrapper[4844]: I0126 13:17:28.895643 4844 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5c5d5c9f8f-9m69m" podUID="73dd3353-ef91-44cb-8772-fc2c7426c367" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.118:5353: connect: connection refused" Jan 26 13:17:29 crc kubenswrapper[4844]: I0126 13:17:29.358353 4844 generic.go:334] "Generic (PLEG): container finished" podID="73dd3353-ef91-44cb-8772-fc2c7426c367" containerID="a23b3f8a9caddfe1e0c39a9ce02fc13173e267ed0db8d1b88d1c80207220e014" exitCode=0 Jan 26 13:17:29 crc kubenswrapper[4844]: I0126 13:17:29.358865 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c5d5c9f8f-9m69m" event={"ID":"73dd3353-ef91-44cb-8772-fc2c7426c367","Type":"ContainerDied","Data":"a23b3f8a9caddfe1e0c39a9ce02fc13173e267ed0db8d1b88d1c80207220e014"} Jan 26 13:17:29 crc kubenswrapper[4844]: I0126 13:17:29.361111 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c5d5c9f8f-9m69m" Jan 26 13:17:29 crc kubenswrapper[4844]: I0126 13:17:29.442177 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jr98d\" (UniqueName: \"kubernetes.io/projected/73dd3353-ef91-44cb-8772-fc2c7426c367-kube-api-access-jr98d\") pod \"73dd3353-ef91-44cb-8772-fc2c7426c367\" (UID: \"73dd3353-ef91-44cb-8772-fc2c7426c367\") " Jan 26 13:17:29 crc kubenswrapper[4844]: I0126 13:17:29.442553 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/73dd3353-ef91-44cb-8772-fc2c7426c367-ovsdbserver-nb\") pod \"73dd3353-ef91-44cb-8772-fc2c7426c367\" (UID: \"73dd3353-ef91-44cb-8772-fc2c7426c367\") " Jan 26 13:17:29 crc kubenswrapper[4844]: I0126 13:17:29.442573 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/73dd3353-ef91-44cb-8772-fc2c7426c367-dns-svc\") pod \"73dd3353-ef91-44cb-8772-fc2c7426c367\" (UID: \"73dd3353-ef91-44cb-8772-fc2c7426c367\") " Jan 26 13:17:29 crc kubenswrapper[4844]: I0126 13:17:29.442624 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73dd3353-ef91-44cb-8772-fc2c7426c367-config\") pod \"73dd3353-ef91-44cb-8772-fc2c7426c367\" (UID: \"73dd3353-ef91-44cb-8772-fc2c7426c367\") " Jan 26 13:17:29 crc kubenswrapper[4844]: I0126 13:17:29.453931 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73dd3353-ef91-44cb-8772-fc2c7426c367-kube-api-access-jr98d" (OuterVolumeSpecName: "kube-api-access-jr98d") pod "73dd3353-ef91-44cb-8772-fc2c7426c367" (UID: "73dd3353-ef91-44cb-8772-fc2c7426c367"). InnerVolumeSpecName "kube-api-access-jr98d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:17:29 crc kubenswrapper[4844]: I0126 13:17:29.495638 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73dd3353-ef91-44cb-8772-fc2c7426c367-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "73dd3353-ef91-44cb-8772-fc2c7426c367" (UID: "73dd3353-ef91-44cb-8772-fc2c7426c367"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:17:29 crc kubenswrapper[4844]: I0126 13:17:29.498692 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73dd3353-ef91-44cb-8772-fc2c7426c367-config" (OuterVolumeSpecName: "config") pod "73dd3353-ef91-44cb-8772-fc2c7426c367" (UID: "73dd3353-ef91-44cb-8772-fc2c7426c367"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:17:29 crc kubenswrapper[4844]: I0126 13:17:29.501202 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73dd3353-ef91-44cb-8772-fc2c7426c367-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "73dd3353-ef91-44cb-8772-fc2c7426c367" (UID: "73dd3353-ef91-44cb-8772-fc2c7426c367"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:17:29 crc kubenswrapper[4844]: I0126 13:17:29.544860 4844 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73dd3353-ef91-44cb-8772-fc2c7426c367-config\") on node \"crc\" DevicePath \"\"" Jan 26 13:17:29 crc kubenswrapper[4844]: I0126 13:17:29.544888 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jr98d\" (UniqueName: \"kubernetes.io/projected/73dd3353-ef91-44cb-8772-fc2c7426c367-kube-api-access-jr98d\") on node \"crc\" DevicePath \"\"" Jan 26 13:17:29 crc kubenswrapper[4844]: I0126 13:17:29.544900 4844 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/73dd3353-ef91-44cb-8772-fc2c7426c367-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 13:17:29 crc kubenswrapper[4844]: I0126 13:17:29.544907 4844 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/73dd3353-ef91-44cb-8772-fc2c7426c367-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 13:17:29 crc kubenswrapper[4844]: I0126 13:17:29.706211 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-jq7ln"] Jan 26 13:17:29 crc kubenswrapper[4844]: W0126 13:17:29.709604 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbabcb55b_51b8_4031_a9e6_49df01680aa5.slice/crio-64950a094b9b05350ff7bc209f4c1c6eb5459bad1dee5ad370c4c7edb7338c66 WatchSource:0}: Error finding container 64950a094b9b05350ff7bc209f4c1c6eb5459bad1dee5ad370c4c7edb7338c66: Status 404 returned error can't find the container with id 64950a094b9b05350ff7bc209f4c1c6eb5459bad1dee5ad370c4c7edb7338c66 Jan 26 13:17:29 crc kubenswrapper[4844]: I0126 13:17:29.827796 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-ab2d-account-create-update-fjgzg"] Jan 26 13:17:29 crc kubenswrapper[4844]: I0126 13:17:29.833198 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-create-lrffj"] Jan 26 13:17:29 crc kubenswrapper[4844]: I0126 13:17:29.844385 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-hstpj"] Jan 26 13:17:29 crc kubenswrapper[4844]: I0126 13:17:29.863638 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-eca9-account-create-update-8q2q2"] Jan 26 13:17:29 crc kubenswrapper[4844]: I0126 13:17:29.971101 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-561e-account-create-update-9g4xg"] Jan 26 13:17:30 crc kubenswrapper[4844]: I0126 13:17:30.077865 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-64v5w"] Jan 26 13:17:30 crc kubenswrapper[4844]: E0126 13:17:30.078179 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73dd3353-ef91-44cb-8772-fc2c7426c367" containerName="dnsmasq-dns" Jan 26 13:17:30 crc kubenswrapper[4844]: I0126 13:17:30.078190 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="73dd3353-ef91-44cb-8772-fc2c7426c367" containerName="dnsmasq-dns" Jan 26 13:17:30 crc kubenswrapper[4844]: E0126 13:17:30.078211 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73dd3353-ef91-44cb-8772-fc2c7426c367" containerName="init" Jan 26 13:17:30 crc kubenswrapper[4844]: I0126 13:17:30.078217 4844 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="73dd3353-ef91-44cb-8772-fc2c7426c367" containerName="init" Jan 26 13:17:30 crc kubenswrapper[4844]: I0126 13:17:30.078378 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="73dd3353-ef91-44cb-8772-fc2c7426c367" containerName="dnsmasq-dns" Jan 26 13:17:30 crc kubenswrapper[4844]: I0126 13:17:30.078912 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-64v5w" Jan 26 13:17:30 crc kubenswrapper[4844]: I0126 13:17:30.084047 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 26 13:17:30 crc kubenswrapper[4844]: I0126 13:17:30.091302 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-64v5w"] Jan 26 13:17:30 crc kubenswrapper[4844]: I0126 13:17:30.152207 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h95z4\" (UniqueName: \"kubernetes.io/projected/bd50a54c-8553-4dfe-92bf-47acca1898ac-kube-api-access-h95z4\") pod \"root-account-create-update-64v5w\" (UID: \"bd50a54c-8553-4dfe-92bf-47acca1898ac\") " pod="openstack/root-account-create-update-64v5w" Jan 26 13:17:30 crc kubenswrapper[4844]: I0126 13:17:30.152315 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bd50a54c-8553-4dfe-92bf-47acca1898ac-operator-scripts\") pod \"root-account-create-update-64v5w\" (UID: \"bd50a54c-8553-4dfe-92bf-47acca1898ac\") " pod="openstack/root-account-create-update-64v5w" Jan 26 13:17:30 crc kubenswrapper[4844]: I0126 13:17:30.254145 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h95z4\" (UniqueName: \"kubernetes.io/projected/bd50a54c-8553-4dfe-92bf-47acca1898ac-kube-api-access-h95z4\") pod \"root-account-create-update-64v5w\" (UID: \"bd50a54c-8553-4dfe-92bf-47acca1898ac\") " pod="openstack/root-account-create-update-64v5w" Jan 26 13:17:30 crc kubenswrapper[4844]: I0126 13:17:30.254273 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bd50a54c-8553-4dfe-92bf-47acca1898ac-operator-scripts\") pod \"root-account-create-update-64v5w\" (UID: \"bd50a54c-8553-4dfe-92bf-47acca1898ac\") " pod="openstack/root-account-create-update-64v5w" Jan 26 13:17:30 crc kubenswrapper[4844]: I0126 13:17:30.255402 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bd50a54c-8553-4dfe-92bf-47acca1898ac-operator-scripts\") pod \"root-account-create-update-64v5w\" (UID: \"bd50a54c-8553-4dfe-92bf-47acca1898ac\") " pod="openstack/root-account-create-update-64v5w" Jan 26 13:17:30 crc kubenswrapper[4844]: I0126 13:17:30.273991 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h95z4\" (UniqueName: \"kubernetes.io/projected/bd50a54c-8553-4dfe-92bf-47acca1898ac-kube-api-access-h95z4\") pod \"root-account-create-update-64v5w\" (UID: \"bd50a54c-8553-4dfe-92bf-47acca1898ac\") " pod="openstack/root-account-create-update-64v5w" Jan 26 13:17:30 crc kubenswrapper[4844]: I0126 13:17:30.372412 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c5d5c9f8f-9m69m" 
event={"ID":"73dd3353-ef91-44cb-8772-fc2c7426c367","Type":"ContainerDied","Data":"167cbd9fe279413f2950f81ebdf49e0501179f58bd8dc8e7482e331e9391f5bd"} Jan 26 13:17:30 crc kubenswrapper[4844]: I0126 13:17:30.372494 4844 scope.go:117] "RemoveContainer" containerID="a23b3f8a9caddfe1e0c39a9ce02fc13173e267ed0db8d1b88d1c80207220e014" Jan 26 13:17:30 crc kubenswrapper[4844]: I0126 13:17:30.372693 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c5d5c9f8f-9m69m" Jan 26 13:17:30 crc kubenswrapper[4844]: I0126 13:17:30.374582 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-jq7ln" event={"ID":"babcb55b-51b8-4031-a9e6-49df01680aa5","Type":"ContainerStarted","Data":"64950a094b9b05350ff7bc209f4c1c6eb5459bad1dee5ad370c4c7edb7338c66"} Jan 26 13:17:30 crc kubenswrapper[4844]: I0126 13:17:30.410733 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c5d5c9f8f-9m69m"] Jan 26 13:17:30 crc kubenswrapper[4844]: I0126 13:17:30.418200 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c5d5c9f8f-9m69m"] Jan 26 13:17:30 crc kubenswrapper[4844]: I0126 13:17:30.441089 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-64v5w" Jan 26 13:17:31 crc kubenswrapper[4844]: I0126 13:17:31.325404 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73dd3353-ef91-44cb-8772-fc2c7426c367" path="/var/lib/kubelet/pods/73dd3353-ef91-44cb-8772-fc2c7426c367/volumes" Jan 26 13:17:32 crc kubenswrapper[4844]: W0126 13:17:32.032126 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6fb93e78_de86_442b_b44d_6b3281ca3618.slice/crio-8d0a92af10e8c98c02748827e83adf83d64fd4f12f925dcf18492a20834585f5 WatchSource:0}: Error finding container 8d0a92af10e8c98c02748827e83adf83d64fd4f12f925dcf18492a20834585f5: Status 404 returned error can't find the container with id 8d0a92af10e8c98c02748827e83adf83d64fd4f12f925dcf18492a20834585f5 Jan 26 13:17:32 crc kubenswrapper[4844]: W0126 13:17:32.037330 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod185bd916_a6be_4d5f_851b_260ad742e54e.slice/crio-3950ef57be8ed6a32391c5dd63bbd588b11405e4bef12a56d7787a551103d683 WatchSource:0}: Error finding container 3950ef57be8ed6a32391c5dd63bbd588b11405e4bef12a56d7787a551103d683: Status 404 returned error can't find the container with id 3950ef57be8ed6a32391c5dd63bbd588b11405e4bef12a56d7787a551103d683 Jan 26 13:17:32 crc kubenswrapper[4844]: I0126 13:17:32.051941 4844 scope.go:117] "RemoveContainer" containerID="47fa05eec2fb30998a533567939e95af8f0e0bde972e7b544d9b796c3ecac43d" Jan 26 13:17:32 crc kubenswrapper[4844]: I0126 13:17:32.432353 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-561e-account-create-update-9g4xg" event={"ID":"8ad24d6d-9838-4344-be0c-777f0c6c6246","Type":"ContainerStarted","Data":"da7cb15616444d4ee0702fd77712a57f1becf6f8bcf36d102a99cff082eb4afe"} Jan 26 13:17:32 crc kubenswrapper[4844]: I0126 13:17:32.439233 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-hstpj" event={"ID":"bceea47e-5bf5-412a-a8d9-9c50e01d4c76","Type":"ContainerStarted","Data":"dd48ed6069d91ba43199ac43c338a8bd496996d271da32abb77079d9cc006c9d"} Jan 26 13:17:32 crc kubenswrapper[4844]: I0126 
13:17:32.440211 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-lrffj" event={"ID":"6fb93e78-de86-442b-b44d-6b3281ca3618","Type":"ContainerStarted","Data":"8d0a92af10e8c98c02748827e83adf83d64fd4f12f925dcf18492a20834585f5"} Jan 26 13:17:32 crc kubenswrapper[4844]: I0126 13:17:32.440895 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-eca9-account-create-update-8q2q2" event={"ID":"a80cb87d-d461-4f90-8727-d6958eb5dac2","Type":"ContainerStarted","Data":"3fb28fcb330203ace596325584ae4696805bc64a529208270bb9f4fba891db51"} Jan 26 13:17:32 crc kubenswrapper[4844]: I0126 13:17:32.441544 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-ab2d-account-create-update-fjgzg" event={"ID":"185bd916-a6be-4d5f-851b-260ad742e54e","Type":"ContainerStarted","Data":"3950ef57be8ed6a32391c5dd63bbd588b11405e4bef12a56d7787a551103d683"} Jan 26 13:17:32 crc kubenswrapper[4844]: I0126 13:17:32.491579 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8606256a-c070-4b18-906b-a4557edd45e7-etc-swift\") pod \"swift-storage-0\" (UID: \"8606256a-c070-4b18-906b-a4557edd45e7\") " pod="openstack/swift-storage-0" Jan 26 13:17:32 crc kubenswrapper[4844]: E0126 13:17:32.491753 4844 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 26 13:17:32 crc kubenswrapper[4844]: E0126 13:17:32.491774 4844 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 26 13:17:32 crc kubenswrapper[4844]: E0126 13:17:32.491824 4844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8606256a-c070-4b18-906b-a4557edd45e7-etc-swift podName:8606256a-c070-4b18-906b-a4557edd45e7 nodeName:}" failed. No retries permitted until 2026-01-26 13:17:48.491808083 +0000 UTC m=+2045.425175685 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/8606256a-c070-4b18-906b-a4557edd45e7-etc-swift") pod "swift-storage-0" (UID: "8606256a-c070-4b18-906b-a4557edd45e7") : configmap "swift-ring-files" not found Jan 26 13:17:32 crc kubenswrapper[4844]: I0126 13:17:32.578523 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-64v5w"] Jan 26 13:17:32 crc kubenswrapper[4844]: W0126 13:17:32.586852 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbd50a54c_8553_4dfe_92bf_47acca1898ac.slice/crio-ea9e9b69f2502a9e3dc467c31f2a8e71c3c6a25b7836a5bb9e8049515e6427f1 WatchSource:0}: Error finding container ea9e9b69f2502a9e3dc467c31f2a8e71c3c6a25b7836a5bb9e8049515e6427f1: Status 404 returned error can't find the container with id ea9e9b69f2502a9e3dc467c31f2a8e71c3c6a25b7836a5bb9e8049515e6427f1 Jan 26 13:17:33 crc kubenswrapper[4844]: I0126 13:17:33.455572 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-64v5w" event={"ID":"bd50a54c-8553-4dfe-92bf-47acca1898ac","Type":"ContainerStarted","Data":"48c908f51c718cfc35dcf190e6e8b770e5bf2784368ebc0fa2fc41dd8c86f055"} Jan 26 13:17:33 crc kubenswrapper[4844]: I0126 13:17:33.455958 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-64v5w" event={"ID":"bd50a54c-8553-4dfe-92bf-47acca1898ac","Type":"ContainerStarted","Data":"ea9e9b69f2502a9e3dc467c31f2a8e71c3c6a25b7836a5bb9e8049515e6427f1"} Jan 26 13:17:35 crc kubenswrapper[4844]: I0126 13:17:35.060108 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 26 13:17:35 crc kubenswrapper[4844]: I0126 13:17:35.474501 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-jq7ln" event={"ID":"babcb55b-51b8-4031-a9e6-49df01680aa5","Type":"ContainerStarted","Data":"784225fe1aacf0f914c500b18da5c4ea54167172e85edc992bd755835d16030c"} Jan 26 13:17:35 crc kubenswrapper[4844]: I0126 13:17:35.488079 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-64v5w" podStartSLOduration=5.488059178 podStartE2EDuration="5.488059178s" podCreationTimestamp="2026-01-26 13:17:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 13:17:35.48690323 +0000 UTC m=+2032.420270852" watchObservedRunningTime="2026-01-26 13:17:35.488059178 +0000 UTC m=+2032.421426790" Jan 26 13:17:36 crc kubenswrapper[4844]: I0126 13:17:36.365308 4844 patch_prober.go:28] interesting pod/machine-config-daemon-j7r9j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 13:17:36 crc kubenswrapper[4844]: I0126 13:17:36.365370 4844 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 13:17:36 crc kubenswrapper[4844]: I0126 13:17:36.491214 4844 generic.go:334] "Generic (PLEG): container finished" 
podID="185bd916-a6be-4d5f-851b-260ad742e54e" containerID="8b04fcc51494e0b878c0902ec7083e55dd8b0a00193f973070a361bda6c60a24" exitCode=0 Jan 26 13:17:36 crc kubenswrapper[4844]: I0126 13:17:36.491351 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-ab2d-account-create-update-fjgzg" event={"ID":"185bd916-a6be-4d5f-851b-260ad742e54e","Type":"ContainerDied","Data":"8b04fcc51494e0b878c0902ec7083e55dd8b0a00193f973070a361bda6c60a24"} Jan 26 13:17:36 crc kubenswrapper[4844]: I0126 13:17:36.495422 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11","Type":"ContainerStarted","Data":"efcca5daec10fc6513622ff83f277886b1e5e79028c6b9d797a1139c0e30ac9b"} Jan 26 13:17:36 crc kubenswrapper[4844]: I0126 13:17:36.502699 4844 generic.go:334] "Generic (PLEG): container finished" podID="8ad24d6d-9838-4344-be0c-777f0c6c6246" containerID="bb28fb2eb48fb25eb9f1f034eb6ce340e7baa01fb38d54643c04c2815f25b5b8" exitCode=0 Jan 26 13:17:36 crc kubenswrapper[4844]: I0126 13:17:36.502758 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-561e-account-create-update-9g4xg" event={"ID":"8ad24d6d-9838-4344-be0c-777f0c6c6246","Type":"ContainerDied","Data":"bb28fb2eb48fb25eb9f1f034eb6ce340e7baa01fb38d54643c04c2815f25b5b8"} Jan 26 13:17:36 crc kubenswrapper[4844]: I0126 13:17:36.503886 4844 generic.go:334] "Generic (PLEG): container finished" podID="babcb55b-51b8-4031-a9e6-49df01680aa5" containerID="784225fe1aacf0f914c500b18da5c4ea54167172e85edc992bd755835d16030c" exitCode=0 Jan 26 13:17:36 crc kubenswrapper[4844]: I0126 13:17:36.503947 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-jq7ln" event={"ID":"babcb55b-51b8-4031-a9e6-49df01680aa5","Type":"ContainerDied","Data":"784225fe1aacf0f914c500b18da5c4ea54167172e85edc992bd755835d16030c"} Jan 26 13:17:36 crc kubenswrapper[4844]: I0126 13:17:36.504943 4844 generic.go:334] "Generic (PLEG): container finished" podID="bd50a54c-8553-4dfe-92bf-47acca1898ac" containerID="48c908f51c718cfc35dcf190e6e8b770e5bf2784368ebc0fa2fc41dd8c86f055" exitCode=0 Jan 26 13:17:36 crc kubenswrapper[4844]: I0126 13:17:36.504986 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-64v5w" event={"ID":"bd50a54c-8553-4dfe-92bf-47acca1898ac","Type":"ContainerDied","Data":"48c908f51c718cfc35dcf190e6e8b770e5bf2784368ebc0fa2fc41dd8c86f055"} Jan 26 13:17:36 crc kubenswrapper[4844]: I0126 13:17:36.506060 4844 generic.go:334] "Generic (PLEG): container finished" podID="bceea47e-5bf5-412a-a8d9-9c50e01d4c76" containerID="3cdf1b00e1f4d43c8d3ed116513dd344d1931a3d900971060ca69f659e05ce90" exitCode=0 Jan 26 13:17:36 crc kubenswrapper[4844]: I0126 13:17:36.506107 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-hstpj" event={"ID":"bceea47e-5bf5-412a-a8d9-9c50e01d4c76","Type":"ContainerDied","Data":"3cdf1b00e1f4d43c8d3ed116513dd344d1931a3d900971060ca69f659e05ce90"} Jan 26 13:17:36 crc kubenswrapper[4844]: I0126 13:17:36.507174 4844 generic.go:334] "Generic (PLEG): container finished" podID="6fb93e78-de86-442b-b44d-6b3281ca3618" containerID="8fa46a77ed651b1eb9404c5da7583979d0f8b9cf7c06b27dc98d255698e3464f" exitCode=0 Jan 26 13:17:36 crc kubenswrapper[4844]: I0126 13:17:36.507201 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-lrffj" 
event={"ID":"6fb93e78-de86-442b-b44d-6b3281ca3618","Type":"ContainerDied","Data":"8fa46a77ed651b1eb9404c5da7583979d0f8b9cf7c06b27dc98d255698e3464f"} Jan 26 13:17:36 crc kubenswrapper[4844]: I0126 13:17:36.508809 4844 generic.go:334] "Generic (PLEG): container finished" podID="a80cb87d-d461-4f90-8727-d6958eb5dac2" containerID="930b3b4675cfc68af0f2bc5357fa1c12aea62c99fd40c4fee09bcc2da4fdeb7d" exitCode=0 Jan 26 13:17:36 crc kubenswrapper[4844]: I0126 13:17:36.508843 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-eca9-account-create-update-8q2q2" event={"ID":"a80cb87d-d461-4f90-8727-d6958eb5dac2","Type":"ContainerDied","Data":"930b3b4675cfc68af0f2bc5357fa1c12aea62c99fd40c4fee09bcc2da4fdeb7d"} Jan 26 13:17:37 crc kubenswrapper[4844]: I0126 13:17:37.520003 4844 generic.go:334] "Generic (PLEG): container finished" podID="82fe3a1a-10c2-4378-a36b-b42131a2df4d" containerID="e86fa50d0fc8f99e0a8d9de22b61a8a6c18b967a05cd35364e008fce1def5aa1" exitCode=0 Jan 26 13:17:37 crc kubenswrapper[4844]: I0126 13:17:37.520137 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-dh9kj" event={"ID":"82fe3a1a-10c2-4378-a36b-b42131a2df4d","Type":"ContainerDied","Data":"e86fa50d0fc8f99e0a8d9de22b61a8a6c18b967a05cd35364e008fce1def5aa1"} Jan 26 13:17:37 crc kubenswrapper[4844]: I0126 13:17:37.945040 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-561e-account-create-update-9g4xg" Jan 26 13:17:38 crc kubenswrapper[4844]: I0126 13:17:38.011546 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8ad24d6d-9838-4344-be0c-777f0c6c6246-operator-scripts\") pod \"8ad24d6d-9838-4344-be0c-777f0c6c6246\" (UID: \"8ad24d6d-9838-4344-be0c-777f0c6c6246\") " Jan 26 13:17:38 crc kubenswrapper[4844]: I0126 13:17:38.011740 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pw7nz\" (UniqueName: \"kubernetes.io/projected/8ad24d6d-9838-4344-be0c-777f0c6c6246-kube-api-access-pw7nz\") pod \"8ad24d6d-9838-4344-be0c-777f0c6c6246\" (UID: \"8ad24d6d-9838-4344-be0c-777f0c6c6246\") " Jan 26 13:17:38 crc kubenswrapper[4844]: I0126 13:17:38.059584 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ad24d6d-9838-4344-be0c-777f0c6c6246-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8ad24d6d-9838-4344-be0c-777f0c6c6246" (UID: "8ad24d6d-9838-4344-be0c-777f0c6c6246"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:17:38 crc kubenswrapper[4844]: I0126 13:17:38.113738 4844 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8ad24d6d-9838-4344-be0c-777f0c6c6246-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 13:17:38 crc kubenswrapper[4844]: I0126 13:17:38.157455 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ad24d6d-9838-4344-be0c-777f0c6c6246-kube-api-access-pw7nz" (OuterVolumeSpecName: "kube-api-access-pw7nz") pod "8ad24d6d-9838-4344-be0c-777f0c6c6246" (UID: "8ad24d6d-9838-4344-be0c-777f0c6c6246"). InnerVolumeSpecName "kube-api-access-pw7nz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:17:38 crc kubenswrapper[4844]: I0126 13:17:38.215920 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pw7nz\" (UniqueName: \"kubernetes.io/projected/8ad24d6d-9838-4344-be0c-777f0c6c6246-kube-api-access-pw7nz\") on node \"crc\" DevicePath \"\"" Jan 26 13:17:38 crc kubenswrapper[4844]: I0126 13:17:38.253502 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-eca9-account-create-update-8q2q2" Jan 26 13:17:38 crc kubenswrapper[4844]: I0126 13:17:38.260457 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-jq7ln" Jan 26 13:17:38 crc kubenswrapper[4844]: I0126 13:17:38.264887 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-ab2d-account-create-update-fjgzg" Jan 26 13:17:38 crc kubenswrapper[4844]: I0126 13:17:38.269358 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-lrffj" Jan 26 13:17:38 crc kubenswrapper[4844]: I0126 13:17:38.278989 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-64v5w" Jan 26 13:17:38 crc kubenswrapper[4844]: I0126 13:17:38.284364 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-hstpj" Jan 26 13:17:38 crc kubenswrapper[4844]: I0126 13:17:38.317450 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fb93e78-de86-442b-b44d-6b3281ca3618-operator-scripts\") pod \"6fb93e78-de86-442b-b44d-6b3281ca3618\" (UID: \"6fb93e78-de86-442b-b44d-6b3281ca3618\") " Jan 26 13:17:38 crc kubenswrapper[4844]: I0126 13:17:38.317505 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2chwf\" (UniqueName: \"kubernetes.io/projected/185bd916-a6be-4d5f-851b-260ad742e54e-kube-api-access-2chwf\") pod \"185bd916-a6be-4d5f-851b-260ad742e54e\" (UID: \"185bd916-a6be-4d5f-851b-260ad742e54e\") " Jan 26 13:17:38 crc kubenswrapper[4844]: I0126 13:17:38.317536 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a80cb87d-d461-4f90-8727-d6958eb5dac2-operator-scripts\") pod \"a80cb87d-d461-4f90-8727-d6958eb5dac2\" (UID: \"a80cb87d-d461-4f90-8727-d6958eb5dac2\") " Jan 26 13:17:38 crc kubenswrapper[4844]: I0126 13:17:38.317579 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/babcb55b-51b8-4031-a9e6-49df01680aa5-operator-scripts\") pod \"babcb55b-51b8-4031-a9e6-49df01680aa5\" (UID: \"babcb55b-51b8-4031-a9e6-49df01680aa5\") " Jan 26 13:17:38 crc kubenswrapper[4844]: I0126 13:17:38.317679 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/185bd916-a6be-4d5f-851b-260ad742e54e-operator-scripts\") pod \"185bd916-a6be-4d5f-851b-260ad742e54e\" (UID: \"185bd916-a6be-4d5f-851b-260ad742e54e\") " Jan 26 13:17:38 crc kubenswrapper[4844]: I0126 13:17:38.317709 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/bd50a54c-8553-4dfe-92bf-47acca1898ac-operator-scripts\") pod \"bd50a54c-8553-4dfe-92bf-47acca1898ac\" (UID: \"bd50a54c-8553-4dfe-92bf-47acca1898ac\") " Jan 26 13:17:38 crc kubenswrapper[4844]: I0126 13:17:38.317848 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xb5p\" (UniqueName: \"kubernetes.io/projected/babcb55b-51b8-4031-a9e6-49df01680aa5-kube-api-access-9xb5p\") pod \"babcb55b-51b8-4031-a9e6-49df01680aa5\" (UID: \"babcb55b-51b8-4031-a9e6-49df01680aa5\") " Jan 26 13:17:38 crc kubenswrapper[4844]: I0126 13:17:38.317890 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wtkpb\" (UniqueName: \"kubernetes.io/projected/6fb93e78-de86-442b-b44d-6b3281ca3618-kube-api-access-wtkpb\") pod \"6fb93e78-de86-442b-b44d-6b3281ca3618\" (UID: \"6fb93e78-de86-442b-b44d-6b3281ca3618\") " Jan 26 13:17:38 crc kubenswrapper[4844]: I0126 13:17:38.317937 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bceea47e-5bf5-412a-a8d9-9c50e01d4c76-operator-scripts\") pod \"bceea47e-5bf5-412a-a8d9-9c50e01d4c76\" (UID: \"bceea47e-5bf5-412a-a8d9-9c50e01d4c76\") " Jan 26 13:17:38 crc kubenswrapper[4844]: I0126 13:17:38.317967 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h95z4\" (UniqueName: \"kubernetes.io/projected/bd50a54c-8553-4dfe-92bf-47acca1898ac-kube-api-access-h95z4\") pod \"bd50a54c-8553-4dfe-92bf-47acca1898ac\" (UID: \"bd50a54c-8553-4dfe-92bf-47acca1898ac\") " Jan 26 13:17:38 crc kubenswrapper[4844]: I0126 13:17:38.317972 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a80cb87d-d461-4f90-8727-d6958eb5dac2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a80cb87d-d461-4f90-8727-d6958eb5dac2" (UID: "a80cb87d-d461-4f90-8727-d6958eb5dac2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:17:38 crc kubenswrapper[4844]: I0126 13:17:38.318026 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qdpw9\" (UniqueName: \"kubernetes.io/projected/a80cb87d-d461-4f90-8727-d6958eb5dac2-kube-api-access-qdpw9\") pod \"a80cb87d-d461-4f90-8727-d6958eb5dac2\" (UID: \"a80cb87d-d461-4f90-8727-d6958eb5dac2\") " Jan 26 13:17:38 crc kubenswrapper[4844]: I0126 13:17:38.318069 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5dl2q\" (UniqueName: \"kubernetes.io/projected/bceea47e-5bf5-412a-a8d9-9c50e01d4c76-kube-api-access-5dl2q\") pod \"bceea47e-5bf5-412a-a8d9-9c50e01d4c76\" (UID: \"bceea47e-5bf5-412a-a8d9-9c50e01d4c76\") " Jan 26 13:17:38 crc kubenswrapper[4844]: I0126 13:17:38.318463 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd50a54c-8553-4dfe-92bf-47acca1898ac-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bd50a54c-8553-4dfe-92bf-47acca1898ac" (UID: "bd50a54c-8553-4dfe-92bf-47acca1898ac"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:17:38 crc kubenswrapper[4844]: I0126 13:17:38.318459 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/babcb55b-51b8-4031-a9e6-49df01680aa5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "babcb55b-51b8-4031-a9e6-49df01680aa5" (UID: "babcb55b-51b8-4031-a9e6-49df01680aa5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:17:38 crc kubenswrapper[4844]: I0126 13:17:38.318575 4844 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a80cb87d-d461-4f90-8727-d6958eb5dac2-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 13:17:38 crc kubenswrapper[4844]: I0126 13:17:38.318624 4844 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/babcb55b-51b8-4031-a9e6-49df01680aa5-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 13:17:38 crc kubenswrapper[4844]: I0126 13:17:38.318640 4844 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bd50a54c-8553-4dfe-92bf-47acca1898ac-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 13:17:38 crc kubenswrapper[4844]: I0126 13:17:38.318974 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/185bd916-a6be-4d5f-851b-260ad742e54e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "185bd916-a6be-4d5f-851b-260ad742e54e" (UID: "185bd916-a6be-4d5f-851b-260ad742e54e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:17:38 crc kubenswrapper[4844]: I0126 13:17:38.319066 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bceea47e-5bf5-412a-a8d9-9c50e01d4c76-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bceea47e-5bf5-412a-a8d9-9c50e01d4c76" (UID: "bceea47e-5bf5-412a-a8d9-9c50e01d4c76"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:17:38 crc kubenswrapper[4844]: I0126 13:17:38.319498 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6fb93e78-de86-442b-b44d-6b3281ca3618-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6fb93e78-de86-442b-b44d-6b3281ca3618" (UID: "6fb93e78-de86-442b-b44d-6b3281ca3618"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:17:38 crc kubenswrapper[4844]: I0126 13:17:38.323945 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bceea47e-5bf5-412a-a8d9-9c50e01d4c76-kube-api-access-5dl2q" (OuterVolumeSpecName: "kube-api-access-5dl2q") pod "bceea47e-5bf5-412a-a8d9-9c50e01d4c76" (UID: "bceea47e-5bf5-412a-a8d9-9c50e01d4c76"). InnerVolumeSpecName "kube-api-access-5dl2q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:17:38 crc kubenswrapper[4844]: I0126 13:17:38.324968 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/185bd916-a6be-4d5f-851b-260ad742e54e-kube-api-access-2chwf" (OuterVolumeSpecName: "kube-api-access-2chwf") pod "185bd916-a6be-4d5f-851b-260ad742e54e" (UID: "185bd916-a6be-4d5f-851b-260ad742e54e"). InnerVolumeSpecName "kube-api-access-2chwf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:17:38 crc kubenswrapper[4844]: I0126 13:17:38.326251 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a80cb87d-d461-4f90-8727-d6958eb5dac2-kube-api-access-qdpw9" (OuterVolumeSpecName: "kube-api-access-qdpw9") pod "a80cb87d-d461-4f90-8727-d6958eb5dac2" (UID: "a80cb87d-d461-4f90-8727-d6958eb5dac2"). InnerVolumeSpecName "kube-api-access-qdpw9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:17:38 crc kubenswrapper[4844]: I0126 13:17:38.326486 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fb93e78-de86-442b-b44d-6b3281ca3618-kube-api-access-wtkpb" (OuterVolumeSpecName: "kube-api-access-wtkpb") pod "6fb93e78-de86-442b-b44d-6b3281ca3618" (UID: "6fb93e78-de86-442b-b44d-6b3281ca3618"). InnerVolumeSpecName "kube-api-access-wtkpb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:17:38 crc kubenswrapper[4844]: I0126 13:17:38.326820 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/babcb55b-51b8-4031-a9e6-49df01680aa5-kube-api-access-9xb5p" (OuterVolumeSpecName: "kube-api-access-9xb5p") pod "babcb55b-51b8-4031-a9e6-49df01680aa5" (UID: "babcb55b-51b8-4031-a9e6-49df01680aa5"). InnerVolumeSpecName "kube-api-access-9xb5p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:17:38 crc kubenswrapper[4844]: I0126 13:17:38.326962 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd50a54c-8553-4dfe-92bf-47acca1898ac-kube-api-access-h95z4" (OuterVolumeSpecName: "kube-api-access-h95z4") pod "bd50a54c-8553-4dfe-92bf-47acca1898ac" (UID: "bd50a54c-8553-4dfe-92bf-47acca1898ac"). InnerVolumeSpecName "kube-api-access-h95z4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:17:38 crc kubenswrapper[4844]: I0126 13:17:38.419869 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xb5p\" (UniqueName: \"kubernetes.io/projected/babcb55b-51b8-4031-a9e6-49df01680aa5-kube-api-access-9xb5p\") on node \"crc\" DevicePath \"\"" Jan 26 13:17:38 crc kubenswrapper[4844]: I0126 13:17:38.419909 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wtkpb\" (UniqueName: \"kubernetes.io/projected/6fb93e78-de86-442b-b44d-6b3281ca3618-kube-api-access-wtkpb\") on node \"crc\" DevicePath \"\"" Jan 26 13:17:38 crc kubenswrapper[4844]: I0126 13:17:38.419922 4844 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bceea47e-5bf5-412a-a8d9-9c50e01d4c76-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 13:17:38 crc kubenswrapper[4844]: I0126 13:17:38.419934 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h95z4\" (UniqueName: \"kubernetes.io/projected/bd50a54c-8553-4dfe-92bf-47acca1898ac-kube-api-access-h95z4\") on node \"crc\" DevicePath \"\"" Jan 26 13:17:38 crc kubenswrapper[4844]: I0126 13:17:38.419943 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qdpw9\" (UniqueName: \"kubernetes.io/projected/a80cb87d-d461-4f90-8727-d6958eb5dac2-kube-api-access-qdpw9\") on node \"crc\" DevicePath \"\"" Jan 26 13:17:38 crc kubenswrapper[4844]: I0126 13:17:38.419952 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5dl2q\" (UniqueName: \"kubernetes.io/projected/bceea47e-5bf5-412a-a8d9-9c50e01d4c76-kube-api-access-5dl2q\") on node \"crc\" DevicePath \"\"" Jan 26 13:17:38 crc kubenswrapper[4844]: I0126 13:17:38.419962 4844 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fb93e78-de86-442b-b44d-6b3281ca3618-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 13:17:38 crc kubenswrapper[4844]: I0126 13:17:38.419970 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2chwf\" (UniqueName: \"kubernetes.io/projected/185bd916-a6be-4d5f-851b-260ad742e54e-kube-api-access-2chwf\") on node \"crc\" DevicePath \"\"" Jan 26 13:17:38 crc kubenswrapper[4844]: I0126 13:17:38.419979 4844 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/185bd916-a6be-4d5f-851b-260ad742e54e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 13:17:38 crc kubenswrapper[4844]: I0126 13:17:38.536151 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-ab2d-account-create-update-fjgzg" event={"ID":"185bd916-a6be-4d5f-851b-260ad742e54e","Type":"ContainerDied","Data":"3950ef57be8ed6a32391c5dd63bbd588b11405e4bef12a56d7787a551103d683"} Jan 26 13:17:38 crc kubenswrapper[4844]: I0126 13:17:38.536230 4844 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3950ef57be8ed6a32391c5dd63bbd588b11405e4bef12a56d7787a551103d683" Jan 26 13:17:38 crc kubenswrapper[4844]: I0126 13:17:38.536525 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-ab2d-account-create-update-fjgzg" Jan 26 13:17:38 crc kubenswrapper[4844]: I0126 13:17:38.539751 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-561e-account-create-update-9g4xg" event={"ID":"8ad24d6d-9838-4344-be0c-777f0c6c6246","Type":"ContainerDied","Data":"da7cb15616444d4ee0702fd77712a57f1becf6f8bcf36d102a99cff082eb4afe"} Jan 26 13:17:38 crc kubenswrapper[4844]: I0126 13:17:38.539792 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-561e-account-create-update-9g4xg" Jan 26 13:17:38 crc kubenswrapper[4844]: I0126 13:17:38.539796 4844 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da7cb15616444d4ee0702fd77712a57f1becf6f8bcf36d102a99cff082eb4afe" Jan 26 13:17:38 crc kubenswrapper[4844]: I0126 13:17:38.541564 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-jq7ln" event={"ID":"babcb55b-51b8-4031-a9e6-49df01680aa5","Type":"ContainerDied","Data":"64950a094b9b05350ff7bc209f4c1c6eb5459bad1dee5ad370c4c7edb7338c66"} Jan 26 13:17:38 crc kubenswrapper[4844]: I0126 13:17:38.541587 4844 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64950a094b9b05350ff7bc209f4c1c6eb5459bad1dee5ad370c4c7edb7338c66" Jan 26 13:17:38 crc kubenswrapper[4844]: I0126 13:17:38.541594 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-jq7ln" Jan 26 13:17:38 crc kubenswrapper[4844]: I0126 13:17:38.543201 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-64v5w" Jan 26 13:17:38 crc kubenswrapper[4844]: I0126 13:17:38.543213 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-64v5w" event={"ID":"bd50a54c-8553-4dfe-92bf-47acca1898ac","Type":"ContainerDied","Data":"ea9e9b69f2502a9e3dc467c31f2a8e71c3c6a25b7836a5bb9e8049515e6427f1"} Jan 26 13:17:38 crc kubenswrapper[4844]: I0126 13:17:38.543263 4844 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea9e9b69f2502a9e3dc467c31f2a8e71c3c6a25b7836a5bb9e8049515e6427f1" Jan 26 13:17:38 crc kubenswrapper[4844]: I0126 13:17:38.545139 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-hstpj" event={"ID":"bceea47e-5bf5-412a-a8d9-9c50e01d4c76","Type":"ContainerDied","Data":"dd48ed6069d91ba43199ac43c338a8bd496996d271da32abb77079d9cc006c9d"} Jan 26 13:17:38 crc kubenswrapper[4844]: I0126 13:17:38.545165 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-hstpj" Jan 26 13:17:38 crc kubenswrapper[4844]: I0126 13:17:38.545222 4844 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd48ed6069d91ba43199ac43c338a8bd496996d271da32abb77079d9cc006c9d" Jan 26 13:17:38 crc kubenswrapper[4844]: I0126 13:17:38.546886 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-db-create-lrffj" Jan 26 13:17:38 crc kubenswrapper[4844]: I0126 13:17:38.547066 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-lrffj" event={"ID":"6fb93e78-de86-442b-b44d-6b3281ca3618","Type":"ContainerDied","Data":"8d0a92af10e8c98c02748827e83adf83d64fd4f12f925dcf18492a20834585f5"} Jan 26 13:17:38 crc kubenswrapper[4844]: I0126 13:17:38.547113 4844 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d0a92af10e8c98c02748827e83adf83d64fd4f12f925dcf18492a20834585f5" Jan 26 13:17:38 crc kubenswrapper[4844]: I0126 13:17:38.548881 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-eca9-account-create-update-8q2q2" event={"ID":"a80cb87d-d461-4f90-8727-d6958eb5dac2","Type":"ContainerDied","Data":"3fb28fcb330203ace596325584ae4696805bc64a529208270bb9f4fba891db51"} Jan 26 13:17:38 crc kubenswrapper[4844]: I0126 13:17:38.548899 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-eca9-account-create-update-8q2q2" Jan 26 13:17:38 crc kubenswrapper[4844]: I0126 13:17:38.548907 4844 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3fb28fcb330203ace596325584ae4696805bc64a529208270bb9f4fba891db51" Jan 26 13:17:39 crc kubenswrapper[4844]: I0126 13:17:38.967517 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-dh9kj" Jan 26 13:17:39 crc kubenswrapper[4844]: I0126 13:17:39.132749 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/82fe3a1a-10c2-4378-a36b-b42131a2df4d-swiftconf\") pod \"82fe3a1a-10c2-4378-a36b-b42131a2df4d\" (UID: \"82fe3a1a-10c2-4378-a36b-b42131a2df4d\") " Jan 26 13:17:39 crc kubenswrapper[4844]: I0126 13:17:39.132859 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zd4dl\" (UniqueName: \"kubernetes.io/projected/82fe3a1a-10c2-4378-a36b-b42131a2df4d-kube-api-access-zd4dl\") pod \"82fe3a1a-10c2-4378-a36b-b42131a2df4d\" (UID: \"82fe3a1a-10c2-4378-a36b-b42131a2df4d\") " Jan 26 13:17:39 crc kubenswrapper[4844]: I0126 13:17:39.132922 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82fe3a1a-10c2-4378-a36b-b42131a2df4d-combined-ca-bundle\") pod \"82fe3a1a-10c2-4378-a36b-b42131a2df4d\" (UID: \"82fe3a1a-10c2-4378-a36b-b42131a2df4d\") " Jan 26 13:17:39 crc kubenswrapper[4844]: I0126 13:17:39.132951 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/82fe3a1a-10c2-4378-a36b-b42131a2df4d-scripts\") pod \"82fe3a1a-10c2-4378-a36b-b42131a2df4d\" (UID: \"82fe3a1a-10c2-4378-a36b-b42131a2df4d\") " Jan 26 13:17:39 crc kubenswrapper[4844]: I0126 13:17:39.133093 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/82fe3a1a-10c2-4378-a36b-b42131a2df4d-ring-data-devices\") pod \"82fe3a1a-10c2-4378-a36b-b42131a2df4d\" (UID: \"82fe3a1a-10c2-4378-a36b-b42131a2df4d\") " Jan 26 13:17:39 crc kubenswrapper[4844]: I0126 13:17:39.133151 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/82fe3a1a-10c2-4378-a36b-b42131a2df4d-dispersionconf\") pod 
\"82fe3a1a-10c2-4378-a36b-b42131a2df4d\" (UID: \"82fe3a1a-10c2-4378-a36b-b42131a2df4d\") " Jan 26 13:17:39 crc kubenswrapper[4844]: I0126 13:17:39.133302 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/82fe3a1a-10c2-4378-a36b-b42131a2df4d-etc-swift\") pod \"82fe3a1a-10c2-4378-a36b-b42131a2df4d\" (UID: \"82fe3a1a-10c2-4378-a36b-b42131a2df4d\") " Jan 26 13:17:39 crc kubenswrapper[4844]: I0126 13:17:39.133837 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82fe3a1a-10c2-4378-a36b-b42131a2df4d-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "82fe3a1a-10c2-4378-a36b-b42131a2df4d" (UID: "82fe3a1a-10c2-4378-a36b-b42131a2df4d"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:17:39 crc kubenswrapper[4844]: I0126 13:17:39.134092 4844 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/82fe3a1a-10c2-4378-a36b-b42131a2df4d-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 26 13:17:39 crc kubenswrapper[4844]: I0126 13:17:39.134177 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82fe3a1a-10c2-4378-a36b-b42131a2df4d-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "82fe3a1a-10c2-4378-a36b-b42131a2df4d" (UID: "82fe3a1a-10c2-4378-a36b-b42131a2df4d"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:17:39 crc kubenswrapper[4844]: I0126 13:17:39.136783 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82fe3a1a-10c2-4378-a36b-b42131a2df4d-kube-api-access-zd4dl" (OuterVolumeSpecName: "kube-api-access-zd4dl") pod "82fe3a1a-10c2-4378-a36b-b42131a2df4d" (UID: "82fe3a1a-10c2-4378-a36b-b42131a2df4d"). InnerVolumeSpecName "kube-api-access-zd4dl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:17:39 crc kubenswrapper[4844]: I0126 13:17:39.142286 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82fe3a1a-10c2-4378-a36b-b42131a2df4d-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "82fe3a1a-10c2-4378-a36b-b42131a2df4d" (UID: "82fe3a1a-10c2-4378-a36b-b42131a2df4d"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:17:39 crc kubenswrapper[4844]: I0126 13:17:39.152210 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82fe3a1a-10c2-4378-a36b-b42131a2df4d-scripts" (OuterVolumeSpecName: "scripts") pod "82fe3a1a-10c2-4378-a36b-b42131a2df4d" (UID: "82fe3a1a-10c2-4378-a36b-b42131a2df4d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:17:39 crc kubenswrapper[4844]: I0126 13:17:39.154477 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82fe3a1a-10c2-4378-a36b-b42131a2df4d-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "82fe3a1a-10c2-4378-a36b-b42131a2df4d" (UID: "82fe3a1a-10c2-4378-a36b-b42131a2df4d"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:17:39 crc kubenswrapper[4844]: I0126 13:17:39.160210 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82fe3a1a-10c2-4378-a36b-b42131a2df4d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "82fe3a1a-10c2-4378-a36b-b42131a2df4d" (UID: "82fe3a1a-10c2-4378-a36b-b42131a2df4d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:17:39 crc kubenswrapper[4844]: I0126 13:17:39.236539 4844 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/82fe3a1a-10c2-4378-a36b-b42131a2df4d-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 26 13:17:39 crc kubenswrapper[4844]: I0126 13:17:39.236627 4844 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/82fe3a1a-10c2-4378-a36b-b42131a2df4d-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 26 13:17:39 crc kubenswrapper[4844]: I0126 13:17:39.236648 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zd4dl\" (UniqueName: \"kubernetes.io/projected/82fe3a1a-10c2-4378-a36b-b42131a2df4d-kube-api-access-zd4dl\") on node \"crc\" DevicePath \"\"" Jan 26 13:17:39 crc kubenswrapper[4844]: I0126 13:17:39.236665 4844 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82fe3a1a-10c2-4378-a36b-b42131a2df4d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 13:17:39 crc kubenswrapper[4844]: I0126 13:17:39.236685 4844 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/82fe3a1a-10c2-4378-a36b-b42131a2df4d-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 13:17:39 crc kubenswrapper[4844]: I0126 13:17:39.236706 4844 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/82fe3a1a-10c2-4378-a36b-b42131a2df4d-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 26 13:17:39 crc kubenswrapper[4844]: I0126 13:17:39.472991 4844 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-vnff8" podUID="6696649d-b30c-4ef9-beda-3cec75d656b4" containerName="ovn-controller" probeResult="failure" output=< Jan 26 13:17:39 crc kubenswrapper[4844]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 26 13:17:39 crc kubenswrapper[4844]: > Jan 26 13:17:39 crc kubenswrapper[4844]: I0126 13:17:39.500324 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-bq8zv" Jan 26 13:17:39 crc kubenswrapper[4844]: I0126 13:17:39.501328 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-bq8zv" Jan 26 13:17:39 crc kubenswrapper[4844]: I0126 13:17:39.557560 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11","Type":"ContainerStarted","Data":"23770756bac6e2e27fc3ea29bc8d5120e81ebf282f08a5a72f79803689aad412"} Jan 26 13:17:39 crc kubenswrapper[4844]: I0126 13:17:39.560533 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-dh9kj" Jan 26 13:17:39 crc kubenswrapper[4844]: I0126 13:17:39.560984 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-dh9kj" event={"ID":"82fe3a1a-10c2-4378-a36b-b42131a2df4d","Type":"ContainerDied","Data":"f3fe85d89a55bbc1e09fc4468daaa544b669482f4eaa658daf45058340205459"} Jan 26 13:17:39 crc kubenswrapper[4844]: I0126 13:17:39.560999 4844 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f3fe85d89a55bbc1e09fc4468daaa544b669482f4eaa658daf45058340205459" Jan 26 13:17:39 crc kubenswrapper[4844]: I0126 13:17:39.733988 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-vnff8-config-whcnd"] Jan 26 13:17:39 crc kubenswrapper[4844]: E0126 13:17:39.734388 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82fe3a1a-10c2-4378-a36b-b42131a2df4d" containerName="swift-ring-rebalance" Jan 26 13:17:39 crc kubenswrapper[4844]: I0126 13:17:39.734413 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="82fe3a1a-10c2-4378-a36b-b42131a2df4d" containerName="swift-ring-rebalance" Jan 26 13:17:39 crc kubenswrapper[4844]: E0126 13:17:39.734422 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd50a54c-8553-4dfe-92bf-47acca1898ac" containerName="mariadb-account-create-update" Jan 26 13:17:39 crc kubenswrapper[4844]: I0126 13:17:39.734430 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd50a54c-8553-4dfe-92bf-47acca1898ac" containerName="mariadb-account-create-update" Jan 26 13:17:39 crc kubenswrapper[4844]: E0126 13:17:39.734448 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ad24d6d-9838-4344-be0c-777f0c6c6246" containerName="mariadb-account-create-update" Jan 26 13:17:39 crc kubenswrapper[4844]: I0126 13:17:39.734456 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ad24d6d-9838-4344-be0c-777f0c6c6246" containerName="mariadb-account-create-update" Jan 26 13:17:39 crc kubenswrapper[4844]: E0126 13:17:39.734467 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bceea47e-5bf5-412a-a8d9-9c50e01d4c76" containerName="mariadb-database-create" Jan 26 13:17:39 crc kubenswrapper[4844]: I0126 13:17:39.734474 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="bceea47e-5bf5-412a-a8d9-9c50e01d4c76" containerName="mariadb-database-create" Jan 26 13:17:39 crc kubenswrapper[4844]: E0126 13:17:39.734494 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a80cb87d-d461-4f90-8727-d6958eb5dac2" containerName="mariadb-account-create-update" Jan 26 13:17:39 crc kubenswrapper[4844]: I0126 13:17:39.734501 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="a80cb87d-d461-4f90-8727-d6958eb5dac2" containerName="mariadb-account-create-update" Jan 26 13:17:39 crc kubenswrapper[4844]: E0126 13:17:39.734512 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="babcb55b-51b8-4031-a9e6-49df01680aa5" containerName="mariadb-database-create" Jan 26 13:17:39 crc kubenswrapper[4844]: I0126 13:17:39.734519 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="babcb55b-51b8-4031-a9e6-49df01680aa5" containerName="mariadb-database-create" Jan 26 13:17:39 crc kubenswrapper[4844]: E0126 13:17:39.734529 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="185bd916-a6be-4d5f-851b-260ad742e54e" containerName="mariadb-account-create-update" Jan 26 13:17:39 crc kubenswrapper[4844]: I0126 13:17:39.734535 
4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="185bd916-a6be-4d5f-851b-260ad742e54e" containerName="mariadb-account-create-update" Jan 26 13:17:39 crc kubenswrapper[4844]: E0126 13:17:39.734555 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fb93e78-de86-442b-b44d-6b3281ca3618" containerName="mariadb-database-create" Jan 26 13:17:39 crc kubenswrapper[4844]: I0126 13:17:39.734564 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fb93e78-de86-442b-b44d-6b3281ca3618" containerName="mariadb-database-create" Jan 26 13:17:39 crc kubenswrapper[4844]: I0126 13:17:39.734767 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fb93e78-de86-442b-b44d-6b3281ca3618" containerName="mariadb-database-create" Jan 26 13:17:39 crc kubenswrapper[4844]: I0126 13:17:39.734785 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="a80cb87d-d461-4f90-8727-d6958eb5dac2" containerName="mariadb-account-create-update" Jan 26 13:17:39 crc kubenswrapper[4844]: I0126 13:17:39.734795 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="bceea47e-5bf5-412a-a8d9-9c50e01d4c76" containerName="mariadb-database-create" Jan 26 13:17:39 crc kubenswrapper[4844]: I0126 13:17:39.734808 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd50a54c-8553-4dfe-92bf-47acca1898ac" containerName="mariadb-account-create-update" Jan 26 13:17:39 crc kubenswrapper[4844]: I0126 13:17:39.734819 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ad24d6d-9838-4344-be0c-777f0c6c6246" containerName="mariadb-account-create-update" Jan 26 13:17:39 crc kubenswrapper[4844]: I0126 13:17:39.734833 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="82fe3a1a-10c2-4378-a36b-b42131a2df4d" containerName="swift-ring-rebalance" Jan 26 13:17:39 crc kubenswrapper[4844]: I0126 13:17:39.734844 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="185bd916-a6be-4d5f-851b-260ad742e54e" containerName="mariadb-account-create-update" Jan 26 13:17:39 crc kubenswrapper[4844]: I0126 13:17:39.734855 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="babcb55b-51b8-4031-a9e6-49df01680aa5" containerName="mariadb-database-create" Jan 26 13:17:39 crc kubenswrapper[4844]: I0126 13:17:39.735645 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-vnff8-config-whcnd" Jan 26 13:17:39 crc kubenswrapper[4844]: I0126 13:17:39.738188 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 26 13:17:39 crc kubenswrapper[4844]: I0126 13:17:39.746493 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/acbbd724-f38b-483e-a620-200bc48c1656-var-run\") pod \"ovn-controller-vnff8-config-whcnd\" (UID: \"acbbd724-f38b-483e-a620-200bc48c1656\") " pod="openstack/ovn-controller-vnff8-config-whcnd" Jan 26 13:17:39 crc kubenswrapper[4844]: I0126 13:17:39.746672 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/acbbd724-f38b-483e-a620-200bc48c1656-scripts\") pod \"ovn-controller-vnff8-config-whcnd\" (UID: \"acbbd724-f38b-483e-a620-200bc48c1656\") " pod="openstack/ovn-controller-vnff8-config-whcnd" Jan 26 13:17:39 crc kubenswrapper[4844]: I0126 13:17:39.746727 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/acbbd724-f38b-483e-a620-200bc48c1656-additional-scripts\") pod \"ovn-controller-vnff8-config-whcnd\" (UID: \"acbbd724-f38b-483e-a620-200bc48c1656\") " pod="openstack/ovn-controller-vnff8-config-whcnd" Jan 26 13:17:39 crc kubenswrapper[4844]: I0126 13:17:39.746768 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cp4vx\" (UniqueName: \"kubernetes.io/projected/acbbd724-f38b-483e-a620-200bc48c1656-kube-api-access-cp4vx\") pod \"ovn-controller-vnff8-config-whcnd\" (UID: \"acbbd724-f38b-483e-a620-200bc48c1656\") " pod="openstack/ovn-controller-vnff8-config-whcnd" Jan 26 13:17:39 crc kubenswrapper[4844]: I0126 13:17:39.746791 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/acbbd724-f38b-483e-a620-200bc48c1656-var-log-ovn\") pod \"ovn-controller-vnff8-config-whcnd\" (UID: \"acbbd724-f38b-483e-a620-200bc48c1656\") " pod="openstack/ovn-controller-vnff8-config-whcnd" Jan 26 13:17:39 crc kubenswrapper[4844]: I0126 13:17:39.746821 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/acbbd724-f38b-483e-a620-200bc48c1656-var-run-ovn\") pod \"ovn-controller-vnff8-config-whcnd\" (UID: \"acbbd724-f38b-483e-a620-200bc48c1656\") " pod="openstack/ovn-controller-vnff8-config-whcnd" Jan 26 13:17:39 crc kubenswrapper[4844]: I0126 13:17:39.750162 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-vnff8-config-whcnd"] Jan 26 13:17:39 crc kubenswrapper[4844]: I0126 13:17:39.847812 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/acbbd724-f38b-483e-a620-200bc48c1656-scripts\") pod \"ovn-controller-vnff8-config-whcnd\" (UID: \"acbbd724-f38b-483e-a620-200bc48c1656\") " pod="openstack/ovn-controller-vnff8-config-whcnd" Jan 26 13:17:39 crc kubenswrapper[4844]: I0126 13:17:39.848255 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: 
\"kubernetes.io/configmap/acbbd724-f38b-483e-a620-200bc48c1656-additional-scripts\") pod \"ovn-controller-vnff8-config-whcnd\" (UID: \"acbbd724-f38b-483e-a620-200bc48c1656\") " pod="openstack/ovn-controller-vnff8-config-whcnd" Jan 26 13:17:39 crc kubenswrapper[4844]: I0126 13:17:39.848385 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cp4vx\" (UniqueName: \"kubernetes.io/projected/acbbd724-f38b-483e-a620-200bc48c1656-kube-api-access-cp4vx\") pod \"ovn-controller-vnff8-config-whcnd\" (UID: \"acbbd724-f38b-483e-a620-200bc48c1656\") " pod="openstack/ovn-controller-vnff8-config-whcnd" Jan 26 13:17:39 crc kubenswrapper[4844]: I0126 13:17:39.848434 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/acbbd724-f38b-483e-a620-200bc48c1656-var-log-ovn\") pod \"ovn-controller-vnff8-config-whcnd\" (UID: \"acbbd724-f38b-483e-a620-200bc48c1656\") " pod="openstack/ovn-controller-vnff8-config-whcnd" Jan 26 13:17:39 crc kubenswrapper[4844]: I0126 13:17:39.848484 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/acbbd724-f38b-483e-a620-200bc48c1656-var-run-ovn\") pod \"ovn-controller-vnff8-config-whcnd\" (UID: \"acbbd724-f38b-483e-a620-200bc48c1656\") " pod="openstack/ovn-controller-vnff8-config-whcnd" Jan 26 13:17:39 crc kubenswrapper[4844]: I0126 13:17:39.848718 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/acbbd724-f38b-483e-a620-200bc48c1656-var-run\") pod \"ovn-controller-vnff8-config-whcnd\" (UID: \"acbbd724-f38b-483e-a620-200bc48c1656\") " pod="openstack/ovn-controller-vnff8-config-whcnd" Jan 26 13:17:39 crc kubenswrapper[4844]: I0126 13:17:39.848788 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/acbbd724-f38b-483e-a620-200bc48c1656-var-log-ovn\") pod \"ovn-controller-vnff8-config-whcnd\" (UID: \"acbbd724-f38b-483e-a620-200bc48c1656\") " pod="openstack/ovn-controller-vnff8-config-whcnd" Jan 26 13:17:39 crc kubenswrapper[4844]: I0126 13:17:39.848820 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/acbbd724-f38b-483e-a620-200bc48c1656-var-run\") pod \"ovn-controller-vnff8-config-whcnd\" (UID: \"acbbd724-f38b-483e-a620-200bc48c1656\") " pod="openstack/ovn-controller-vnff8-config-whcnd" Jan 26 13:17:39 crc kubenswrapper[4844]: I0126 13:17:39.848800 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/acbbd724-f38b-483e-a620-200bc48c1656-var-run-ovn\") pod \"ovn-controller-vnff8-config-whcnd\" (UID: \"acbbd724-f38b-483e-a620-200bc48c1656\") " pod="openstack/ovn-controller-vnff8-config-whcnd" Jan 26 13:17:39 crc kubenswrapper[4844]: I0126 13:17:39.849058 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/acbbd724-f38b-483e-a620-200bc48c1656-additional-scripts\") pod \"ovn-controller-vnff8-config-whcnd\" (UID: \"acbbd724-f38b-483e-a620-200bc48c1656\") " pod="openstack/ovn-controller-vnff8-config-whcnd" Jan 26 13:17:39 crc kubenswrapper[4844]: I0126 13:17:39.849827 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/acbbd724-f38b-483e-a620-200bc48c1656-scripts\") pod \"ovn-controller-vnff8-config-whcnd\" (UID: \"acbbd724-f38b-483e-a620-200bc48c1656\") " pod="openstack/ovn-controller-vnff8-config-whcnd" Jan 26 13:17:39 crc kubenswrapper[4844]: I0126 13:17:39.867303 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cp4vx\" (UniqueName: \"kubernetes.io/projected/acbbd724-f38b-483e-a620-200bc48c1656-kube-api-access-cp4vx\") pod \"ovn-controller-vnff8-config-whcnd\" (UID: \"acbbd724-f38b-483e-a620-200bc48c1656\") " pod="openstack/ovn-controller-vnff8-config-whcnd" Jan 26 13:17:40 crc kubenswrapper[4844]: I0126 13:17:40.060923 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-vnff8-config-whcnd" Jan 26 13:17:40 crc kubenswrapper[4844]: I0126 13:17:40.665774 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-vnff8-config-whcnd"] Jan 26 13:17:40 crc kubenswrapper[4844]: W0126 13:17:40.667068 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podacbbd724_f38b_483e_a620_200bc48c1656.slice/crio-5995be1378b717bd92ef2d5995f0ce85949d149d2f30275a3778f7094708bc82 WatchSource:0}: Error finding container 5995be1378b717bd92ef2d5995f0ce85949d149d2f30275a3778f7094708bc82: Status 404 returned error can't find the container with id 5995be1378b717bd92ef2d5995f0ce85949d149d2f30275a3778f7094708bc82 Jan 26 13:17:41 crc kubenswrapper[4844]: I0126 13:17:41.468857 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-64v5w"] Jan 26 13:17:41 crc kubenswrapper[4844]: I0126 13:17:41.476528 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-64v5w"] Jan 26 13:17:41 crc kubenswrapper[4844]: I0126 13:17:41.578164 4844 generic.go:334] "Generic (PLEG): container finished" podID="acbbd724-f38b-483e-a620-200bc48c1656" containerID="bd32517abd4acb8935f148381ac2fddb1286aab021f5e612d6dcb8e9b83e200d" exitCode=0 Jan 26 13:17:41 crc kubenswrapper[4844]: I0126 13:17:41.578216 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-vnff8-config-whcnd" event={"ID":"acbbd724-f38b-483e-a620-200bc48c1656","Type":"ContainerDied","Data":"bd32517abd4acb8935f148381ac2fddb1286aab021f5e612d6dcb8e9b83e200d"} Jan 26 13:17:41 crc kubenswrapper[4844]: I0126 13:17:41.578247 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-vnff8-config-whcnd" event={"ID":"acbbd724-f38b-483e-a620-200bc48c1656","Type":"ContainerStarted","Data":"5995be1378b717bd92ef2d5995f0ce85949d149d2f30275a3778f7094708bc82"} Jan 26 13:17:42 crc kubenswrapper[4844]: I0126 13:17:42.600793 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11","Type":"ContainerStarted","Data":"7ba5f8dab8d1f3f349b996677a532276c226965b2cff2277607d9d52e04dc77e"} Jan 26 13:17:42 crc kubenswrapper[4844]: I0126 13:17:42.632408 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=26.615441619 podStartE2EDuration="1m7.632049423s" podCreationTimestamp="2026-01-26 13:16:35 +0000 UTC" firstStartedPulling="2026-01-26 13:17:01.355121234 +0000 UTC m=+1998.288488846" lastFinishedPulling="2026-01-26 13:17:42.371729038 +0000 UTC m=+2039.305096650" 
observedRunningTime="2026-01-26 13:17:42.620790363 +0000 UTC m=+2039.554158005" watchObservedRunningTime="2026-01-26 13:17:42.632049423 +0000 UTC m=+2039.565417045" Jan 26 13:17:42 crc kubenswrapper[4844]: I0126 13:17:42.948639 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-vnff8-config-whcnd" Jan 26 13:17:43 crc kubenswrapper[4844]: I0126 13:17:43.111066 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cp4vx\" (UniqueName: \"kubernetes.io/projected/acbbd724-f38b-483e-a620-200bc48c1656-kube-api-access-cp4vx\") pod \"acbbd724-f38b-483e-a620-200bc48c1656\" (UID: \"acbbd724-f38b-483e-a620-200bc48c1656\") " Jan 26 13:17:43 crc kubenswrapper[4844]: I0126 13:17:43.111362 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/acbbd724-f38b-483e-a620-200bc48c1656-scripts\") pod \"acbbd724-f38b-483e-a620-200bc48c1656\" (UID: \"acbbd724-f38b-483e-a620-200bc48c1656\") " Jan 26 13:17:43 crc kubenswrapper[4844]: I0126 13:17:43.111410 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/acbbd724-f38b-483e-a620-200bc48c1656-var-log-ovn\") pod \"acbbd724-f38b-483e-a620-200bc48c1656\" (UID: \"acbbd724-f38b-483e-a620-200bc48c1656\") " Jan 26 13:17:43 crc kubenswrapper[4844]: I0126 13:17:43.111457 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/acbbd724-f38b-483e-a620-200bc48c1656-var-run\") pod \"acbbd724-f38b-483e-a620-200bc48c1656\" (UID: \"acbbd724-f38b-483e-a620-200bc48c1656\") " Jan 26 13:17:43 crc kubenswrapper[4844]: I0126 13:17:43.111554 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/acbbd724-f38b-483e-a620-200bc48c1656-var-run-ovn\") pod \"acbbd724-f38b-483e-a620-200bc48c1656\" (UID: \"acbbd724-f38b-483e-a620-200bc48c1656\") " Jan 26 13:17:43 crc kubenswrapper[4844]: I0126 13:17:43.111594 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/acbbd724-f38b-483e-a620-200bc48c1656-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "acbbd724-f38b-483e-a620-200bc48c1656" (UID: "acbbd724-f38b-483e-a620-200bc48c1656"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 13:17:43 crc kubenswrapper[4844]: I0126 13:17:43.111631 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/acbbd724-f38b-483e-a620-200bc48c1656-additional-scripts\") pod \"acbbd724-f38b-483e-a620-200bc48c1656\" (UID: \"acbbd724-f38b-483e-a620-200bc48c1656\") " Jan 26 13:17:43 crc kubenswrapper[4844]: I0126 13:17:43.111672 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/acbbd724-f38b-483e-a620-200bc48c1656-var-run" (OuterVolumeSpecName: "var-run") pod "acbbd724-f38b-483e-a620-200bc48c1656" (UID: "acbbd724-f38b-483e-a620-200bc48c1656"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 13:17:43 crc kubenswrapper[4844]: I0126 13:17:43.111695 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/acbbd724-f38b-483e-a620-200bc48c1656-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "acbbd724-f38b-483e-a620-200bc48c1656" (UID: "acbbd724-f38b-483e-a620-200bc48c1656"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 13:17:43 crc kubenswrapper[4844]: I0126 13:17:43.112231 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/acbbd724-f38b-483e-a620-200bc48c1656-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "acbbd724-f38b-483e-a620-200bc48c1656" (UID: "acbbd724-f38b-483e-a620-200bc48c1656"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:17:43 crc kubenswrapper[4844]: I0126 13:17:43.112456 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/acbbd724-f38b-483e-a620-200bc48c1656-scripts" (OuterVolumeSpecName: "scripts") pod "acbbd724-f38b-483e-a620-200bc48c1656" (UID: "acbbd724-f38b-483e-a620-200bc48c1656"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:17:43 crc kubenswrapper[4844]: I0126 13:17:43.112487 4844 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/acbbd724-f38b-483e-a620-200bc48c1656-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 26 13:17:43 crc kubenswrapper[4844]: I0126 13:17:43.112557 4844 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/acbbd724-f38b-483e-a620-200bc48c1656-var-run\") on node \"crc\" DevicePath \"\"" Jan 26 13:17:43 crc kubenswrapper[4844]: I0126 13:17:43.112573 4844 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/acbbd724-f38b-483e-a620-200bc48c1656-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 26 13:17:43 crc kubenswrapper[4844]: I0126 13:17:43.112635 4844 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/acbbd724-f38b-483e-a620-200bc48c1656-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 13:17:43 crc kubenswrapper[4844]: I0126 13:17:43.116711 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/acbbd724-f38b-483e-a620-200bc48c1656-kube-api-access-cp4vx" (OuterVolumeSpecName: "kube-api-access-cp4vx") pod "acbbd724-f38b-483e-a620-200bc48c1656" (UID: "acbbd724-f38b-483e-a620-200bc48c1656"). InnerVolumeSpecName "kube-api-access-cp4vx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:17:43 crc kubenswrapper[4844]: I0126 13:17:43.214589 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cp4vx\" (UniqueName: \"kubernetes.io/projected/acbbd724-f38b-483e-a620-200bc48c1656-kube-api-access-cp4vx\") on node \"crc\" DevicePath \"\"" Jan 26 13:17:43 crc kubenswrapper[4844]: I0126 13:17:43.214638 4844 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/acbbd724-f38b-483e-a620-200bc48c1656-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 13:17:43 crc kubenswrapper[4844]: I0126 13:17:43.335217 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd50a54c-8553-4dfe-92bf-47acca1898ac" path="/var/lib/kubelet/pods/bd50a54c-8553-4dfe-92bf-47acca1898ac/volumes" Jan 26 13:17:43 crc kubenswrapper[4844]: I0126 13:17:43.609207 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-vnff8-config-whcnd" Jan 26 13:17:43 crc kubenswrapper[4844]: I0126 13:17:43.609203 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-vnff8-config-whcnd" event={"ID":"acbbd724-f38b-483e-a620-200bc48c1656","Type":"ContainerDied","Data":"5995be1378b717bd92ef2d5995f0ce85949d149d2f30275a3778f7094708bc82"} Jan 26 13:17:43 crc kubenswrapper[4844]: I0126 13:17:43.609262 4844 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5995be1378b717bd92ef2d5995f0ce85949d149d2f30275a3778f7094708bc82" Jan 26 13:17:44 crc kubenswrapper[4844]: I0126 13:17:44.052281 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-vnff8-config-whcnd"] Jan 26 13:17:44 crc kubenswrapper[4844]: I0126 13:17:44.064593 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-vnff8-config-whcnd"] Jan 26 13:17:44 crc kubenswrapper[4844]: I0126 13:17:44.469922 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-vnff8" Jan 26 13:17:44 crc kubenswrapper[4844]: I0126 13:17:44.617712 4844 generic.go:334] "Generic (PLEG): container finished" podID="e8e36a62-9367-4c94-9aff-de8e6166af27" containerID="8037333977f59346e11bb0d4d8078b561374ca9115b317429eb3ea0e2a3fc400" exitCode=0 Jan 26 13:17:44 crc kubenswrapper[4844]: I0126 13:17:44.617790 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e8e36a62-9367-4c94-9aff-de8e6166af27","Type":"ContainerDied","Data":"8037333977f59346e11bb0d4d8078b561374ca9115b317429eb3ea0e2a3fc400"} Jan 26 13:17:44 crc kubenswrapper[4844]: I0126 13:17:44.619396 4844 generic.go:334] "Generic (PLEG): container finished" podID="e48f1161-14d0-42c1-b6ac-bdb8bce26985" containerID="438ed061427135c543fb34c1f5a9679a2e6315a4f3935f61296d309523cd31e0" exitCode=0 Jan 26 13:17:44 crc kubenswrapper[4844]: I0126 13:17:44.619420 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e48f1161-14d0-42c1-b6ac-bdb8bce26985","Type":"ContainerDied","Data":"438ed061427135c543fb34c1f5a9679a2e6315a4f3935f61296d309523cd31e0"} Jan 26 13:17:45 crc kubenswrapper[4844]: I0126 13:17:45.328561 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="acbbd724-f38b-483e-a620-200bc48c1656" path="/var/lib/kubelet/pods/acbbd724-f38b-483e-a620-200bc48c1656/volumes" Jan 26 13:17:45 crc kubenswrapper[4844]: I0126 13:17:45.628925 4844 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e8e36a62-9367-4c94-9aff-de8e6166af27","Type":"ContainerStarted","Data":"2758d64ef9dfa428b02a999acaca19c0ab43f356ea26d72de994d5e96fc426e1"} Jan 26 13:17:45 crc kubenswrapper[4844]: I0126 13:17:45.629253 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 26 13:17:45 crc kubenswrapper[4844]: I0126 13:17:45.630628 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e48f1161-14d0-42c1-b6ac-bdb8bce26985","Type":"ContainerStarted","Data":"49224d76c481ef910732446c51b497a3bc7254c88cb8cd2720780911497c6963"} Jan 26 13:17:45 crc kubenswrapper[4844]: I0126 13:17:45.631338 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 26 13:17:45 crc kubenswrapper[4844]: I0126 13:17:45.632425 4844 generic.go:334] "Generic (PLEG): container finished" podID="185637e1-efed-452c-ba52-7688909bad2c" containerID="b9ba7092d058ca611541e96848fae9ae6e472b992eb4b97bdb6a21e93a6ff189" exitCode=0 Jan 26 13:17:45 crc kubenswrapper[4844]: I0126 13:17:45.632460 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-notifications-server-0" event={"ID":"185637e1-efed-452c-ba52-7688909bad2c","Type":"ContainerDied","Data":"b9ba7092d058ca611541e96848fae9ae6e472b992eb4b97bdb6a21e93a6ff189"} Jan 26 13:17:45 crc kubenswrapper[4844]: I0126 13:17:45.654481 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.010547455 podStartE2EDuration="1m17.654467696s" podCreationTimestamp="2026-01-26 13:16:28 +0000 UTC" firstStartedPulling="2026-01-26 13:16:30.211833469 +0000 UTC m=+1967.145201081" lastFinishedPulling="2026-01-26 13:17:10.85575371 +0000 UTC m=+2007.789121322" observedRunningTime="2026-01-26 13:17:45.649363173 +0000 UTC m=+2042.582730785" watchObservedRunningTime="2026-01-26 13:17:45.654467696 +0000 UTC m=+2042.587835308" Jan 26 13:17:45 crc kubenswrapper[4844]: I0126 13:17:45.716300 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.732087474 podStartE2EDuration="1m18.716281479s" podCreationTimestamp="2026-01-26 13:16:27 +0000 UTC" firstStartedPulling="2026-01-26 13:16:29.880736425 +0000 UTC m=+1966.814104037" lastFinishedPulling="2026-01-26 13:17:10.86493041 +0000 UTC m=+2007.798298042" observedRunningTime="2026-01-26 13:17:45.691222268 +0000 UTC m=+2042.624589880" watchObservedRunningTime="2026-01-26 13:17:45.716281479 +0000 UTC m=+2042.649649091" Jan 26 13:17:46 crc kubenswrapper[4844]: I0126 13:17:46.410106 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Jan 26 13:17:46 crc kubenswrapper[4844]: I0126 13:17:46.474339 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-s92kk"] Jan 26 13:17:46 crc kubenswrapper[4844]: E0126 13:17:46.474765 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acbbd724-f38b-483e-a620-200bc48c1656" containerName="ovn-config" Jan 26 13:17:46 crc kubenswrapper[4844]: I0126 13:17:46.474790 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="acbbd724-f38b-483e-a620-200bc48c1656" containerName="ovn-config" Jan 26 13:17:46 crc kubenswrapper[4844]: I0126 13:17:46.475009 4844 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="acbbd724-f38b-483e-a620-200bc48c1656" containerName="ovn-config" Jan 26 13:17:46 crc kubenswrapper[4844]: I0126 13:17:46.475699 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-s92kk" Jan 26 13:17:46 crc kubenswrapper[4844]: I0126 13:17:46.478074 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 26 13:17:46 crc kubenswrapper[4844]: I0126 13:17:46.483749 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-s92kk"] Jan 26 13:17:46 crc kubenswrapper[4844]: I0126 13:17:46.573364 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08a4b367-08ab-438d-867c-dc0752837f18-operator-scripts\") pod \"root-account-create-update-s92kk\" (UID: \"08a4b367-08ab-438d-867c-dc0752837f18\") " pod="openstack/root-account-create-update-s92kk" Jan 26 13:17:46 crc kubenswrapper[4844]: I0126 13:17:46.573564 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpcxm\" (UniqueName: \"kubernetes.io/projected/08a4b367-08ab-438d-867c-dc0752837f18-kube-api-access-lpcxm\") pod \"root-account-create-update-s92kk\" (UID: \"08a4b367-08ab-438d-867c-dc0752837f18\") " pod="openstack/root-account-create-update-s92kk" Jan 26 13:17:46 crc kubenswrapper[4844]: I0126 13:17:46.641339 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-notifications-server-0" event={"ID":"185637e1-efed-452c-ba52-7688909bad2c","Type":"ContainerStarted","Data":"f7f13d01b6bbc75194b2c89ac62b26aca5edcae5cfc46ec6231f93c7cd361428"} Jan 26 13:17:46 crc kubenswrapper[4844]: I0126 13:17:46.675163 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08a4b367-08ab-438d-867c-dc0752837f18-operator-scripts\") pod \"root-account-create-update-s92kk\" (UID: \"08a4b367-08ab-438d-867c-dc0752837f18\") " pod="openstack/root-account-create-update-s92kk" Jan 26 13:17:46 crc kubenswrapper[4844]: I0126 13:17:46.675279 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lpcxm\" (UniqueName: \"kubernetes.io/projected/08a4b367-08ab-438d-867c-dc0752837f18-kube-api-access-lpcxm\") pod \"root-account-create-update-s92kk\" (UID: \"08a4b367-08ab-438d-867c-dc0752837f18\") " pod="openstack/root-account-create-update-s92kk" Jan 26 13:17:46 crc kubenswrapper[4844]: I0126 13:17:46.676396 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08a4b367-08ab-438d-867c-dc0752837f18-operator-scripts\") pod \"root-account-create-update-s92kk\" (UID: \"08a4b367-08ab-438d-867c-dc0752837f18\") " pod="openstack/root-account-create-update-s92kk" Jan 26 13:17:46 crc kubenswrapper[4844]: I0126 13:17:46.702104 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpcxm\" (UniqueName: \"kubernetes.io/projected/08a4b367-08ab-438d-867c-dc0752837f18-kube-api-access-lpcxm\") pod \"root-account-create-update-s92kk\" (UID: \"08a4b367-08ab-438d-867c-dc0752837f18\") " pod="openstack/root-account-create-update-s92kk" Jan 26 13:17:46 crc kubenswrapper[4844]: I0126 13:17:46.721085 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/rabbitmq-notifications-server-0" podStartSLOduration=38.417988812 podStartE2EDuration="1m18.721065356s" podCreationTimestamp="2026-01-26 13:16:28 +0000 UTC" firstStartedPulling="2026-01-26 13:16:30.552727177 +0000 UTC m=+1967.486094789" lastFinishedPulling="2026-01-26 13:17:10.855803691 +0000 UTC m=+2007.789171333" observedRunningTime="2026-01-26 13:17:46.699771564 +0000 UTC m=+2043.633139176" watchObservedRunningTime="2026-01-26 13:17:46.721065356 +0000 UTC m=+2043.654432968" Jan 26 13:17:46 crc kubenswrapper[4844]: I0126 13:17:46.793669 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-s92kk" Jan 26 13:17:47 crc kubenswrapper[4844]: I0126 13:17:47.233998 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-s92kk"] Jan 26 13:17:47 crc kubenswrapper[4844]: I0126 13:17:47.648848 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-s92kk" event={"ID":"08a4b367-08ab-438d-867c-dc0752837f18","Type":"ContainerStarted","Data":"639f15d96db1034f7986815012b0b59bdead7f638e6ae9a7c9744d4565964dff"} Jan 26 13:17:48 crc kubenswrapper[4844]: I0126 13:17:48.505659 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8606256a-c070-4b18-906b-a4557edd45e7-etc-swift\") pod \"swift-storage-0\" (UID: \"8606256a-c070-4b18-906b-a4557edd45e7\") " pod="openstack/swift-storage-0" Jan 26 13:17:48 crc kubenswrapper[4844]: I0126 13:17:48.526661 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8606256a-c070-4b18-906b-a4557edd45e7-etc-swift\") pod \"swift-storage-0\" (UID: \"8606256a-c070-4b18-906b-a4557edd45e7\") " pod="openstack/swift-storage-0" Jan 26 13:17:48 crc kubenswrapper[4844]: I0126 13:17:48.636096 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Jan 26 13:17:49 crc kubenswrapper[4844]: I0126 13:17:49.294084 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 26 13:17:49 crc kubenswrapper[4844]: I0126 13:17:49.675896 4844 generic.go:334] "Generic (PLEG): container finished" podID="08a4b367-08ab-438d-867c-dc0752837f18" containerID="4bb5edb2d0e964fbcd8f310bd7609872e6ca5523f7e8900cb617a0f7b8254f07" exitCode=0 Jan 26 13:17:49 crc kubenswrapper[4844]: I0126 13:17:49.675977 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-s92kk" event={"ID":"08a4b367-08ab-438d-867c-dc0752837f18","Type":"ContainerDied","Data":"4bb5edb2d0e964fbcd8f310bd7609872e6ca5523f7e8900cb617a0f7b8254f07"} Jan 26 13:17:49 crc kubenswrapper[4844]: I0126 13:17:49.679766 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8606256a-c070-4b18-906b-a4557edd45e7","Type":"ContainerStarted","Data":"ce9b6de2b7386f4e723dab98bfb9c95fa9cdd59244ff34cbb0414fb128501797"} Jan 26 13:17:49 crc kubenswrapper[4844]: I0126 13:17:49.971167 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-notifications-server-0" Jan 26 13:17:50 crc kubenswrapper[4844]: I0126 13:17:50.690051 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8606256a-c070-4b18-906b-a4557edd45e7","Type":"ContainerStarted","Data":"3fda4ea9aa4ad31e95df4be840b65c7bb4928fb4d81b938d54fb685d01236ddd"} Jan 26 13:17:50 crc kubenswrapper[4844]: I0126 13:17:50.690388 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8606256a-c070-4b18-906b-a4557edd45e7","Type":"ContainerStarted","Data":"44ab6659e4e99d52f7005cdd3d84b6019ade8c27f807f7f3388f3865c05b2943"} Jan 26 13:17:50 crc kubenswrapper[4844]: I0126 13:17:50.690406 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8606256a-c070-4b18-906b-a4557edd45e7","Type":"ContainerStarted","Data":"24cc0ededbc44bcfa6c41a6f31f0381e19b8d12b64e3f4ba9c73eb75b0d24dd1"} Jan 26 13:17:50 crc kubenswrapper[4844]: I0126 13:17:50.690422 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8606256a-c070-4b18-906b-a4557edd45e7","Type":"ContainerStarted","Data":"f99c34a4140f3bd86ca2cd0034e141e6cc46f1f84770acfcc118491ba9545ced"} Jan 26 13:17:51 crc kubenswrapper[4844]: I0126 13:17:51.077812 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-s92kk" Jan 26 13:17:51 crc kubenswrapper[4844]: I0126 13:17:51.160812 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08a4b367-08ab-438d-867c-dc0752837f18-operator-scripts\") pod \"08a4b367-08ab-438d-867c-dc0752837f18\" (UID: \"08a4b367-08ab-438d-867c-dc0752837f18\") " Jan 26 13:17:51 crc kubenswrapper[4844]: I0126 13:17:51.161143 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lpcxm\" (UniqueName: \"kubernetes.io/projected/08a4b367-08ab-438d-867c-dc0752837f18-kube-api-access-lpcxm\") pod \"08a4b367-08ab-438d-867c-dc0752837f18\" (UID: \"08a4b367-08ab-438d-867c-dc0752837f18\") " Jan 26 13:17:51 crc kubenswrapper[4844]: I0126 13:17:51.168970 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08a4b367-08ab-438d-867c-dc0752837f18-kube-api-access-lpcxm" (OuterVolumeSpecName: "kube-api-access-lpcxm") pod "08a4b367-08ab-438d-867c-dc0752837f18" (UID: "08a4b367-08ab-438d-867c-dc0752837f18"). InnerVolumeSpecName "kube-api-access-lpcxm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:17:51 crc kubenswrapper[4844]: I0126 13:17:51.168980 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08a4b367-08ab-438d-867c-dc0752837f18-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "08a4b367-08ab-438d-867c-dc0752837f18" (UID: "08a4b367-08ab-438d-867c-dc0752837f18"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:17:51 crc kubenswrapper[4844]: I0126 13:17:51.263398 4844 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08a4b367-08ab-438d-867c-dc0752837f18-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 13:17:51 crc kubenswrapper[4844]: I0126 13:17:51.263445 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lpcxm\" (UniqueName: \"kubernetes.io/projected/08a4b367-08ab-438d-867c-dc0752837f18-kube-api-access-lpcxm\") on node \"crc\" DevicePath \"\"" Jan 26 13:17:51 crc kubenswrapper[4844]: I0126 13:17:51.410247 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Jan 26 13:17:51 crc kubenswrapper[4844]: I0126 13:17:51.413316 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Jan 26 13:17:51 crc kubenswrapper[4844]: I0126 13:17:51.701721 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-s92kk" event={"ID":"08a4b367-08ab-438d-867c-dc0752837f18","Type":"ContainerDied","Data":"639f15d96db1034f7986815012b0b59bdead7f638e6ae9a7c9744d4565964dff"} Jan 26 13:17:51 crc kubenswrapper[4844]: I0126 13:17:51.701825 4844 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="639f15d96db1034f7986815012b0b59bdead7f638e6ae9a7c9744d4565964dff" Jan 26 13:17:51 crc kubenswrapper[4844]: I0126 13:17:51.701935 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-s92kk" Jan 26 13:17:51 crc kubenswrapper[4844]: I0126 13:17:51.706521 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8606256a-c070-4b18-906b-a4557edd45e7","Type":"ContainerStarted","Data":"6b31837d1e14726b8ab0acdc6228eef31f19aaec802edb383303d8e2eb20f045"} Jan 26 13:17:51 crc kubenswrapper[4844]: I0126 13:17:51.708546 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Jan 26 13:17:52 crc kubenswrapper[4844]: I0126 13:17:52.727904 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8606256a-c070-4b18-906b-a4557edd45e7","Type":"ContainerStarted","Data":"2f1f38068c1007eb87dfbbc1bdd9f5a8bb044ee8ebd5a0077a2e859f4aca0f1c"} Jan 26 13:17:52 crc kubenswrapper[4844]: I0126 13:17:52.728234 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8606256a-c070-4b18-906b-a4557edd45e7","Type":"ContainerStarted","Data":"cc0b7bcfd2618b184884ef3db5c26bdb38297a4587938efced534b3e97fde884"} Jan 26 13:17:52 crc kubenswrapper[4844]: I0126 13:17:52.728254 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8606256a-c070-4b18-906b-a4557edd45e7","Type":"ContainerStarted","Data":"9bdfc04e8f4841074c737edb384a5bb072cab746474881315f6c8b4bb94094f7"} Jan 26 13:17:53 crc kubenswrapper[4844]: I0126 13:17:53.741282 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8606256a-c070-4b18-906b-a4557edd45e7","Type":"ContainerStarted","Data":"fa348e93af4893bd0834cac0ad741680b50f3d729138b6e17b033cfcbd459e50"} Jan 26 13:17:53 crc kubenswrapper[4844]: I0126 13:17:53.741737 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8606256a-c070-4b18-906b-a4557edd45e7","Type":"ContainerStarted","Data":"f66d8cda31459617facfb607fcdecdab10325cfbe29b4a92b10c3a7686b476a4"} Jan 26 13:17:53 crc kubenswrapper[4844]: I0126 13:17:53.741749 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8606256a-c070-4b18-906b-a4557edd45e7","Type":"ContainerStarted","Data":"065043ca2fdd922d03c797acd1ec56d7920871ab9b282c210094759b7e6c1be9"} Jan 26 13:17:54 crc kubenswrapper[4844]: I0126 13:17:54.298255 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 26 13:17:54 crc kubenswrapper[4844]: I0126 13:17:54.298512 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11" containerName="prometheus" containerID="cri-o://efcca5daec10fc6513622ff83f277886b1e5e79028c6b9d797a1139c0e30ac9b" gracePeriod=600 Jan 26 13:17:54 crc kubenswrapper[4844]: I0126 13:17:54.299047 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11" containerName="thanos-sidecar" containerID="cri-o://7ba5f8dab8d1f3f349b996677a532276c226965b2cff2277607d9d52e04dc77e" gracePeriod=600 Jan 26 13:17:54 crc kubenswrapper[4844]: I0126 13:17:54.299103 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11" containerName="config-reloader" 
containerID="cri-o://23770756bac6e2e27fc3ea29bc8d5120e81ebf282f08a5a72f79803689aad412" gracePeriod=600 Jan 26 13:17:54 crc kubenswrapper[4844]: I0126 13:17:54.750529 4844 generic.go:334] "Generic (PLEG): container finished" podID="66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11" containerID="7ba5f8dab8d1f3f349b996677a532276c226965b2cff2277607d9d52e04dc77e" exitCode=0 Jan 26 13:17:54 crc kubenswrapper[4844]: I0126 13:17:54.750856 4844 generic.go:334] "Generic (PLEG): container finished" podID="66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11" containerID="23770756bac6e2e27fc3ea29bc8d5120e81ebf282f08a5a72f79803689aad412" exitCode=0 Jan 26 13:17:54 crc kubenswrapper[4844]: I0126 13:17:54.750866 4844 generic.go:334] "Generic (PLEG): container finished" podID="66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11" containerID="efcca5daec10fc6513622ff83f277886b1e5e79028c6b9d797a1139c0e30ac9b" exitCode=0 Jan 26 13:17:54 crc kubenswrapper[4844]: I0126 13:17:54.750754 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11","Type":"ContainerDied","Data":"7ba5f8dab8d1f3f349b996677a532276c226965b2cff2277607d9d52e04dc77e"} Jan 26 13:17:54 crc kubenswrapper[4844]: I0126 13:17:54.750925 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11","Type":"ContainerDied","Data":"23770756bac6e2e27fc3ea29bc8d5120e81ebf282f08a5a72f79803689aad412"} Jan 26 13:17:54 crc kubenswrapper[4844]: I0126 13:17:54.750948 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11","Type":"ContainerDied","Data":"efcca5daec10fc6513622ff83f277886b1e5e79028c6b9d797a1139c0e30ac9b"} Jan 26 13:17:54 crc kubenswrapper[4844]: I0126 13:17:54.756874 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8606256a-c070-4b18-906b-a4557edd45e7","Type":"ContainerStarted","Data":"30164c20b1e9409ebe00c6b249da59fd1cb6deaf7c3fa2b324e04a522d19e9e0"} Jan 26 13:17:54 crc kubenswrapper[4844]: I0126 13:17:54.756910 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8606256a-c070-4b18-906b-a4557edd45e7","Type":"ContainerStarted","Data":"d74aea3045d5d8edee51f7dddbd656395c28a0245584e8daf1efd0c69cd5ec07"} Jan 26 13:17:54 crc kubenswrapper[4844]: I0126 13:17:54.756925 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8606256a-c070-4b18-906b-a4557edd45e7","Type":"ContainerStarted","Data":"7307286a719e05f0f23a5888ce87fe845dc88adf2d8ccf984043a82005dc4c2d"} Jan 26 13:17:54 crc kubenswrapper[4844]: I0126 13:17:54.756937 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8606256a-c070-4b18-906b-a4557edd45e7","Type":"ContainerStarted","Data":"e4e2a0d459e23f297310ed1b7a2a4f285e67430fcd37b603385aa04dfa457e6c"} Jan 26 13:17:54 crc kubenswrapper[4844]: I0126 13:17:54.805589 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=36.112706518 podStartE2EDuration="39.805564015s" podCreationTimestamp="2026-01-26 13:17:15 +0000 UTC" firstStartedPulling="2026-01-26 13:17:49.29967552 +0000 UTC m=+2046.233043142" lastFinishedPulling="2026-01-26 13:17:52.992533027 +0000 UTC m=+2049.925900639" observedRunningTime="2026-01-26 13:17:54.799078169 +0000 UTC m=+2051.732445791" 
watchObservedRunningTime="2026-01-26 13:17:54.805564015 +0000 UTC m=+2051.738931627" Jan 26 13:17:55 crc kubenswrapper[4844]: I0126 13:17:55.101122 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-59b54f4c7-pjjl5"] Jan 26 13:17:55 crc kubenswrapper[4844]: E0126 13:17:55.101727 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08a4b367-08ab-438d-867c-dc0752837f18" containerName="mariadb-account-create-update" Jan 26 13:17:55 crc kubenswrapper[4844]: I0126 13:17:55.115960 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="08a4b367-08ab-438d-867c-dc0752837f18" containerName="mariadb-account-create-update" Jan 26 13:17:55 crc kubenswrapper[4844]: I0126 13:17:55.116342 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="08a4b367-08ab-438d-867c-dc0752837f18" containerName="mariadb-account-create-update" Jan 26 13:17:55 crc kubenswrapper[4844]: I0126 13:17:55.117437 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59b54f4c7-pjjl5" Jan 26 13:17:55 crc kubenswrapper[4844]: I0126 13:17:55.120427 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59b54f4c7-pjjl5"] Jan 26 13:17:55 crc kubenswrapper[4844]: I0126 13:17:55.126165 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Jan 26 13:17:55 crc kubenswrapper[4844]: I0126 13:17:55.232254 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/077f8c48-ae97-4f5d-89db-1ed90de5e904-ovsdbserver-nb\") pod \"dnsmasq-dns-59b54f4c7-pjjl5\" (UID: \"077f8c48-ae97-4f5d-89db-1ed90de5e904\") " pod="openstack/dnsmasq-dns-59b54f4c7-pjjl5" Jan 26 13:17:55 crc kubenswrapper[4844]: I0126 13:17:55.232353 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/077f8c48-ae97-4f5d-89db-1ed90de5e904-dns-svc\") pod \"dnsmasq-dns-59b54f4c7-pjjl5\" (UID: \"077f8c48-ae97-4f5d-89db-1ed90de5e904\") " pod="openstack/dnsmasq-dns-59b54f4c7-pjjl5" Jan 26 13:17:55 crc kubenswrapper[4844]: I0126 13:17:55.232469 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/077f8c48-ae97-4f5d-89db-1ed90de5e904-config\") pod \"dnsmasq-dns-59b54f4c7-pjjl5\" (UID: \"077f8c48-ae97-4f5d-89db-1ed90de5e904\") " pod="openstack/dnsmasq-dns-59b54f4c7-pjjl5" Jan 26 13:17:55 crc kubenswrapper[4844]: I0126 13:17:55.232494 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bh5fn\" (UniqueName: \"kubernetes.io/projected/077f8c48-ae97-4f5d-89db-1ed90de5e904-kube-api-access-bh5fn\") pod \"dnsmasq-dns-59b54f4c7-pjjl5\" (UID: \"077f8c48-ae97-4f5d-89db-1ed90de5e904\") " pod="openstack/dnsmasq-dns-59b54f4c7-pjjl5" Jan 26 13:17:55 crc kubenswrapper[4844]: I0126 13:17:55.232513 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/077f8c48-ae97-4f5d-89db-1ed90de5e904-ovsdbserver-sb\") pod \"dnsmasq-dns-59b54f4c7-pjjl5\" (UID: \"077f8c48-ae97-4f5d-89db-1ed90de5e904\") " pod="openstack/dnsmasq-dns-59b54f4c7-pjjl5" Jan 26 13:17:55 crc kubenswrapper[4844]: I0126 13:17:55.232536 4844 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/077f8c48-ae97-4f5d-89db-1ed90de5e904-dns-swift-storage-0\") pod \"dnsmasq-dns-59b54f4c7-pjjl5\" (UID: \"077f8c48-ae97-4f5d-89db-1ed90de5e904\") " pod="openstack/dnsmasq-dns-59b54f4c7-pjjl5" Jan 26 13:17:55 crc kubenswrapper[4844]: I0126 13:17:55.268339 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 26 13:17:55 crc kubenswrapper[4844]: I0126 13:17:55.333029 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11-config\") pod \"66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11\" (UID: \"66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11\") " Jan 26 13:17:55 crc kubenswrapper[4844]: I0126 13:17:55.333091 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11-prometheus-metric-storage-rulefiles-1\") pod \"66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11\" (UID: \"66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11\") " Jan 26 13:17:55 crc kubenswrapper[4844]: I0126 13:17:55.333123 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11-prometheus-metric-storage-rulefiles-2\") pod \"66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11\" (UID: \"66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11\") " Jan 26 13:17:55 crc kubenswrapper[4844]: I0126 13:17:55.333160 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11-config-out\") pod \"66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11\" (UID: \"66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11\") " Jan 26 13:17:55 crc kubenswrapper[4844]: I0126 13:17:55.333223 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11-prometheus-metric-storage-rulefiles-0\") pod \"66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11\" (UID: \"66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11\") " Jan 26 13:17:55 crc kubenswrapper[4844]: I0126 13:17:55.333245 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11-thanos-prometheus-http-client-file\") pod \"66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11\" (UID: \"66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11\") " Jan 26 13:17:55 crc kubenswrapper[4844]: I0126 13:17:55.333818 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11" (UID: "66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:17:55 crc kubenswrapper[4844]: I0126 13:17:55.333909 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11" (UID: "66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:17:55 crc kubenswrapper[4844]: I0126 13:17:55.334150 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11-web-config\") pod \"66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11\" (UID: \"66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11\") " Jan 26 13:17:55 crc kubenswrapper[4844]: I0126 13:17:55.334196 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11-tls-assets\") pod \"66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11\" (UID: \"66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11\") " Jan 26 13:17:55 crc kubenswrapper[4844]: I0126 13:17:55.334272 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-19181cb9-d02a-4fb9-9922-1c51ae1db65f\") pod \"66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11\" (UID: \"66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11\") " Jan 26 13:17:55 crc kubenswrapper[4844]: I0126 13:17:55.334307 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gkstg\" (UniqueName: \"kubernetes.io/projected/66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11-kube-api-access-gkstg\") pod \"66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11\" (UID: \"66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11\") " Jan 26 13:17:55 crc kubenswrapper[4844]: I0126 13:17:55.334401 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11" (UID: "66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:17:55 crc kubenswrapper[4844]: I0126 13:17:55.334470 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/077f8c48-ae97-4f5d-89db-1ed90de5e904-ovsdbserver-sb\") pod \"dnsmasq-dns-59b54f4c7-pjjl5\" (UID: \"077f8c48-ae97-4f5d-89db-1ed90de5e904\") " pod="openstack/dnsmasq-dns-59b54f4c7-pjjl5" Jan 26 13:17:55 crc kubenswrapper[4844]: I0126 13:17:55.334506 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/077f8c48-ae97-4f5d-89db-1ed90de5e904-dns-swift-storage-0\") pod \"dnsmasq-dns-59b54f4c7-pjjl5\" (UID: \"077f8c48-ae97-4f5d-89db-1ed90de5e904\") " pod="openstack/dnsmasq-dns-59b54f4c7-pjjl5" Jan 26 13:17:55 crc kubenswrapper[4844]: I0126 13:17:55.334624 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/077f8c48-ae97-4f5d-89db-1ed90de5e904-ovsdbserver-nb\") pod \"dnsmasq-dns-59b54f4c7-pjjl5\" (UID: \"077f8c48-ae97-4f5d-89db-1ed90de5e904\") " pod="openstack/dnsmasq-dns-59b54f4c7-pjjl5" Jan 26 13:17:55 crc kubenswrapper[4844]: I0126 13:17:55.334658 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/077f8c48-ae97-4f5d-89db-1ed90de5e904-dns-svc\") pod \"dnsmasq-dns-59b54f4c7-pjjl5\" (UID: \"077f8c48-ae97-4f5d-89db-1ed90de5e904\") " pod="openstack/dnsmasq-dns-59b54f4c7-pjjl5" Jan 26 13:17:55 crc kubenswrapper[4844]: I0126 13:17:55.334738 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/077f8c48-ae97-4f5d-89db-1ed90de5e904-config\") pod \"dnsmasq-dns-59b54f4c7-pjjl5\" (UID: \"077f8c48-ae97-4f5d-89db-1ed90de5e904\") " pod="openstack/dnsmasq-dns-59b54f4c7-pjjl5" Jan 26 13:17:55 crc kubenswrapper[4844]: I0126 13:17:55.334763 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bh5fn\" (UniqueName: \"kubernetes.io/projected/077f8c48-ae97-4f5d-89db-1ed90de5e904-kube-api-access-bh5fn\") pod \"dnsmasq-dns-59b54f4c7-pjjl5\" (UID: \"077f8c48-ae97-4f5d-89db-1ed90de5e904\") " pod="openstack/dnsmasq-dns-59b54f4c7-pjjl5" Jan 26 13:17:55 crc kubenswrapper[4844]: I0126 13:17:55.334804 4844 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Jan 26 13:17:55 crc kubenswrapper[4844]: I0126 13:17:55.334817 4844 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Jan 26 13:17:55 crc kubenswrapper[4844]: I0126 13:17:55.334827 4844 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Jan 26 13:17:55 crc kubenswrapper[4844]: I0126 13:17:55.335534 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/077f8c48-ae97-4f5d-89db-1ed90de5e904-dns-swift-storage-0\") pod \"dnsmasq-dns-59b54f4c7-pjjl5\" (UID: \"077f8c48-ae97-4f5d-89db-1ed90de5e904\") " pod="openstack/dnsmasq-dns-59b54f4c7-pjjl5" Jan 26 13:17:55 crc kubenswrapper[4844]: I0126 13:17:55.335702 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/077f8c48-ae97-4f5d-89db-1ed90de5e904-ovsdbserver-nb\") pod \"dnsmasq-dns-59b54f4c7-pjjl5\" (UID: \"077f8c48-ae97-4f5d-89db-1ed90de5e904\") " pod="openstack/dnsmasq-dns-59b54f4c7-pjjl5" Jan 26 13:17:55 crc kubenswrapper[4844]: I0126 13:17:55.336361 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/077f8c48-ae97-4f5d-89db-1ed90de5e904-config\") pod \"dnsmasq-dns-59b54f4c7-pjjl5\" (UID: \"077f8c48-ae97-4f5d-89db-1ed90de5e904\") " pod="openstack/dnsmasq-dns-59b54f4c7-pjjl5" Jan 26 13:17:55 crc kubenswrapper[4844]: I0126 13:17:55.337808 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/077f8c48-ae97-4f5d-89db-1ed90de5e904-dns-svc\") pod \"dnsmasq-dns-59b54f4c7-pjjl5\" (UID: \"077f8c48-ae97-4f5d-89db-1ed90de5e904\") " pod="openstack/dnsmasq-dns-59b54f4c7-pjjl5" Jan 26 13:17:55 crc kubenswrapper[4844]: I0126 13:17:55.338311 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/077f8c48-ae97-4f5d-89db-1ed90de5e904-ovsdbserver-sb\") pod \"dnsmasq-dns-59b54f4c7-pjjl5\" (UID: \"077f8c48-ae97-4f5d-89db-1ed90de5e904\") " pod="openstack/dnsmasq-dns-59b54f4c7-pjjl5" Jan 26 13:17:55 crc kubenswrapper[4844]: I0126 13:17:55.350395 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11" (UID: "66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:17:55 crc kubenswrapper[4844]: I0126 13:17:55.350483 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11-kube-api-access-gkstg" (OuterVolumeSpecName: "kube-api-access-gkstg") pod "66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11" (UID: "66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11"). InnerVolumeSpecName "kube-api-access-gkstg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:17:55 crc kubenswrapper[4844]: I0126 13:17:55.350623 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11-config" (OuterVolumeSpecName: "config") pod "66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11" (UID: "66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:17:55 crc kubenswrapper[4844]: I0126 13:17:55.350667 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11-config-out" (OuterVolumeSpecName: "config-out") pod "66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11" (UID: "66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11"). InnerVolumeSpecName "config-out". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:17:55 crc kubenswrapper[4844]: I0126 13:17:55.350758 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11" (UID: "66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:17:55 crc kubenswrapper[4844]: I0126 13:17:55.357529 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bh5fn\" (UniqueName: \"kubernetes.io/projected/077f8c48-ae97-4f5d-89db-1ed90de5e904-kube-api-access-bh5fn\") pod \"dnsmasq-dns-59b54f4c7-pjjl5\" (UID: \"077f8c48-ae97-4f5d-89db-1ed90de5e904\") " pod="openstack/dnsmasq-dns-59b54f4c7-pjjl5" Jan 26 13:17:55 crc kubenswrapper[4844]: I0126 13:17:55.371415 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-19181cb9-d02a-4fb9-9922-1c51ae1db65f" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11" (UID: "66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11"). InnerVolumeSpecName "pvc-19181cb9-d02a-4fb9-9922-1c51ae1db65f". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 26 13:17:55 crc kubenswrapper[4844]: I0126 13:17:55.373086 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11-web-config" (OuterVolumeSpecName: "web-config") pod "66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11" (UID: "66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:17:55 crc kubenswrapper[4844]: I0126 13:17:55.437790 4844 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Jan 26 13:17:55 crc kubenswrapper[4844]: I0126 13:17:55.437822 4844 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11-web-config\") on node \"crc\" DevicePath \"\"" Jan 26 13:17:55 crc kubenswrapper[4844]: I0126 13:17:55.437832 4844 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11-tls-assets\") on node \"crc\" DevicePath \"\"" Jan 26 13:17:55 crc kubenswrapper[4844]: I0126 13:17:55.437855 4844 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-19181cb9-d02a-4fb9-9922-1c51ae1db65f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-19181cb9-d02a-4fb9-9922-1c51ae1db65f\") on node \"crc\" " Jan 26 13:17:55 crc kubenswrapper[4844]: I0126 13:17:55.437867 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gkstg\" (UniqueName: \"kubernetes.io/projected/66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11-kube-api-access-gkstg\") on node \"crc\" DevicePath \"\"" Jan 26 13:17:55 crc kubenswrapper[4844]: I0126 13:17:55.437877 4844 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11-config\") on node \"crc\" DevicePath \"\"" Jan 26 13:17:55 crc kubenswrapper[4844]: I0126 13:17:55.437885 4844 
reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11-config-out\") on node \"crc\" DevicePath \"\"" Jan 26 13:17:55 crc kubenswrapper[4844]: I0126 13:17:55.447176 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59b54f4c7-pjjl5" Jan 26 13:17:55 crc kubenswrapper[4844]: I0126 13:17:55.463327 4844 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Jan 26 13:17:55 crc kubenswrapper[4844]: I0126 13:17:55.463479 4844 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-19181cb9-d02a-4fb9-9922-1c51ae1db65f" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-19181cb9-d02a-4fb9-9922-1c51ae1db65f") on node "crc" Jan 26 13:17:55 crc kubenswrapper[4844]: I0126 13:17:55.539899 4844 reconciler_common.go:293] "Volume detached for volume \"pvc-19181cb9-d02a-4fb9-9922-1c51ae1db65f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-19181cb9-d02a-4fb9-9922-1c51ae1db65f\") on node \"crc\" DevicePath \"\"" Jan 26 13:17:55 crc kubenswrapper[4844]: I0126 13:17:55.768758 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 26 13:17:55 crc kubenswrapper[4844]: I0126 13:17:55.768768 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11","Type":"ContainerDied","Data":"b446a782f03336c11f7449e8a89ce8fd5473e575977f9e7fc3903436d89c7f9b"} Jan 26 13:17:55 crc kubenswrapper[4844]: I0126 13:17:55.768833 4844 scope.go:117] "RemoveContainer" containerID="7ba5f8dab8d1f3f349b996677a532276c226965b2cff2277607d9d52e04dc77e" Jan 26 13:17:55 crc kubenswrapper[4844]: I0126 13:17:55.801980 4844 scope.go:117] "RemoveContainer" containerID="23770756bac6e2e27fc3ea29bc8d5120e81ebf282f08a5a72f79803689aad412" Jan 26 13:17:55 crc kubenswrapper[4844]: I0126 13:17:55.809303 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 26 13:17:55 crc kubenswrapper[4844]: I0126 13:17:55.817011 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 26 13:17:55 crc kubenswrapper[4844]: I0126 13:17:55.822350 4844 scope.go:117] "RemoveContainer" containerID="efcca5daec10fc6513622ff83f277886b1e5e79028c6b9d797a1139c0e30ac9b" Jan 26 13:17:55 crc kubenswrapper[4844]: I0126 13:17:55.849936 4844 scope.go:117] "RemoveContainer" containerID="812b8a02174bdb9d9317991bd7d045861aa6c7f61eafb34caa41e709bbbe6d17" Jan 26 13:17:55 crc kubenswrapper[4844]: I0126 13:17:55.852452 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 26 13:17:55 crc kubenswrapper[4844]: E0126 13:17:55.852898 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11" containerName="config-reloader" Jan 26 13:17:55 crc kubenswrapper[4844]: I0126 13:17:55.852924 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11" containerName="config-reloader" Jan 26 13:17:55 crc kubenswrapper[4844]: E0126 13:17:55.852942 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11" containerName="init-config-reloader" Jan 26 13:17:55 crc kubenswrapper[4844]: I0126 13:17:55.852951 
4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11" containerName="init-config-reloader" Jan 26 13:17:55 crc kubenswrapper[4844]: E0126 13:17:55.852972 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11" containerName="prometheus" Jan 26 13:17:55 crc kubenswrapper[4844]: I0126 13:17:55.852981 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11" containerName="prometheus" Jan 26 13:17:55 crc kubenswrapper[4844]: E0126 13:17:55.853005 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11" containerName="thanos-sidecar" Jan 26 13:17:55 crc kubenswrapper[4844]: I0126 13:17:55.853014 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11" containerName="thanos-sidecar" Jan 26 13:17:55 crc kubenswrapper[4844]: I0126 13:17:55.853331 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11" containerName="thanos-sidecar" Jan 26 13:17:55 crc kubenswrapper[4844]: I0126 13:17:55.853380 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11" containerName="config-reloader" Jan 26 13:17:55 crc kubenswrapper[4844]: I0126 13:17:55.853425 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11" containerName="prometheus" Jan 26 13:17:55 crc kubenswrapper[4844]: I0126 13:17:55.855440 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 26 13:17:55 crc kubenswrapper[4844]: I0126 13:17:55.864500 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-lh4xm" Jan 26 13:17:55 crc kubenswrapper[4844]: I0126 13:17:55.865023 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Jan 26 13:17:55 crc kubenswrapper[4844]: I0126 13:17:55.865231 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Jan 26 13:17:55 crc kubenswrapper[4844]: I0126 13:17:55.865316 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Jan 26 13:17:55 crc kubenswrapper[4844]: I0126 13:17:55.865026 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Jan 26 13:17:55 crc kubenswrapper[4844]: I0126 13:17:55.865587 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Jan 26 13:17:55 crc kubenswrapper[4844]: I0126 13:17:55.865651 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Jan 26 13:17:55 crc kubenswrapper[4844]: I0126 13:17:55.872221 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 26 13:17:55 crc kubenswrapper[4844]: I0126 13:17:55.874106 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Jan 26 13:17:55 crc kubenswrapper[4844]: I0126 13:17:55.876563 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Jan 26 13:17:55 crc 
kubenswrapper[4844]: I0126 13:17:55.919763 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59b54f4c7-pjjl5"] Jan 26 13:17:55 crc kubenswrapper[4844]: W0126 13:17:55.925961 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod077f8c48_ae97_4f5d_89db_1ed90de5e904.slice/crio-46a4ee6551ee2cfc8afd409d85a32621acd2e8401184f65b64e6dd38f8f1e36c WatchSource:0}: Error finding container 46a4ee6551ee2cfc8afd409d85a32621acd2e8401184f65b64e6dd38f8f1e36c: Status 404 returned error can't find the container with id 46a4ee6551ee2cfc8afd409d85a32621acd2e8401184f65b64e6dd38f8f1e36c Jan 26 13:17:55 crc kubenswrapper[4844]: I0126 13:17:55.945682 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/aefdcbbc-2ac1-43d5-b70c-26e89000ab98-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"aefdcbbc-2ac1-43d5-b70c-26e89000ab98\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:17:55 crc kubenswrapper[4844]: I0126 13:17:55.945950 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/aefdcbbc-2ac1-43d5-b70c-26e89000ab98-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"aefdcbbc-2ac1-43d5-b70c-26e89000ab98\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:17:55 crc kubenswrapper[4844]: I0126 13:17:55.945976 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-19181cb9-d02a-4fb9-9922-1c51ae1db65f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-19181cb9-d02a-4fb9-9922-1c51ae1db65f\") pod \"prometheus-metric-storage-0\" (UID: \"aefdcbbc-2ac1-43d5-b70c-26e89000ab98\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:17:55 crc kubenswrapper[4844]: I0126 13:17:55.946009 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/aefdcbbc-2ac1-43d5-b70c-26e89000ab98-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"aefdcbbc-2ac1-43d5-b70c-26e89000ab98\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:17:55 crc kubenswrapper[4844]: I0126 13:17:55.946033 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/aefdcbbc-2ac1-43d5-b70c-26e89000ab98-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"aefdcbbc-2ac1-43d5-b70c-26e89000ab98\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:17:55 crc kubenswrapper[4844]: I0126 13:17:55.946060 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/aefdcbbc-2ac1-43d5-b70c-26e89000ab98-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"aefdcbbc-2ac1-43d5-b70c-26e89000ab98\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:17:55 crc kubenswrapper[4844]: I0126 13:17:55.946089 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-mpfhz\" (UniqueName: \"kubernetes.io/projected/aefdcbbc-2ac1-43d5-b70c-26e89000ab98-kube-api-access-mpfhz\") pod \"prometheus-metric-storage-0\" (UID: \"aefdcbbc-2ac1-43d5-b70c-26e89000ab98\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:17:55 crc kubenswrapper[4844]: I0126 13:17:55.946120 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aefdcbbc-2ac1-43d5-b70c-26e89000ab98-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"aefdcbbc-2ac1-43d5-b70c-26e89000ab98\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:17:55 crc kubenswrapper[4844]: I0126 13:17:55.946142 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/aefdcbbc-2ac1-43d5-b70c-26e89000ab98-config\") pod \"prometheus-metric-storage-0\" (UID: \"aefdcbbc-2ac1-43d5-b70c-26e89000ab98\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:17:55 crc kubenswrapper[4844]: I0126 13:17:55.946160 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/aefdcbbc-2ac1-43d5-b70c-26e89000ab98-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"aefdcbbc-2ac1-43d5-b70c-26e89000ab98\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:17:55 crc kubenswrapper[4844]: I0126 13:17:55.946180 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/aefdcbbc-2ac1-43d5-b70c-26e89000ab98-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"aefdcbbc-2ac1-43d5-b70c-26e89000ab98\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:17:55 crc kubenswrapper[4844]: I0126 13:17:55.946224 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/aefdcbbc-2ac1-43d5-b70c-26e89000ab98-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"aefdcbbc-2ac1-43d5-b70c-26e89000ab98\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:17:55 crc kubenswrapper[4844]: I0126 13:17:55.946269 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/aefdcbbc-2ac1-43d5-b70c-26e89000ab98-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"aefdcbbc-2ac1-43d5-b70c-26e89000ab98\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:17:56 crc kubenswrapper[4844]: I0126 13:17:56.047489 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/aefdcbbc-2ac1-43d5-b70c-26e89000ab98-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"aefdcbbc-2ac1-43d5-b70c-26e89000ab98\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:17:56 crc kubenswrapper[4844]: I0126 13:17:56.047557 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/aefdcbbc-2ac1-43d5-b70c-26e89000ab98-prometheus-metric-storage-rulefiles-0\") pod 
\"prometheus-metric-storage-0\" (UID: \"aefdcbbc-2ac1-43d5-b70c-26e89000ab98\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:17:56 crc kubenswrapper[4844]: I0126 13:17:56.047642 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/aefdcbbc-2ac1-43d5-b70c-26e89000ab98-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"aefdcbbc-2ac1-43d5-b70c-26e89000ab98\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:17:56 crc kubenswrapper[4844]: I0126 13:17:56.047676 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/aefdcbbc-2ac1-43d5-b70c-26e89000ab98-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"aefdcbbc-2ac1-43d5-b70c-26e89000ab98\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:17:56 crc kubenswrapper[4844]: I0126 13:17:56.047708 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/aefdcbbc-2ac1-43d5-b70c-26e89000ab98-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"aefdcbbc-2ac1-43d5-b70c-26e89000ab98\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:17:56 crc kubenswrapper[4844]: I0126 13:17:56.047738 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-19181cb9-d02a-4fb9-9922-1c51ae1db65f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-19181cb9-d02a-4fb9-9922-1c51ae1db65f\") pod \"prometheus-metric-storage-0\" (UID: \"aefdcbbc-2ac1-43d5-b70c-26e89000ab98\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:17:56 crc kubenswrapper[4844]: I0126 13:17:56.047776 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/aefdcbbc-2ac1-43d5-b70c-26e89000ab98-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"aefdcbbc-2ac1-43d5-b70c-26e89000ab98\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:17:56 crc kubenswrapper[4844]: I0126 13:17:56.047812 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/aefdcbbc-2ac1-43d5-b70c-26e89000ab98-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"aefdcbbc-2ac1-43d5-b70c-26e89000ab98\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:17:56 crc kubenswrapper[4844]: I0126 13:17:56.047845 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/aefdcbbc-2ac1-43d5-b70c-26e89000ab98-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"aefdcbbc-2ac1-43d5-b70c-26e89000ab98\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:17:56 crc kubenswrapper[4844]: I0126 13:17:56.047883 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpfhz\" (UniqueName: \"kubernetes.io/projected/aefdcbbc-2ac1-43d5-b70c-26e89000ab98-kube-api-access-mpfhz\") pod \"prometheus-metric-storage-0\" (UID: \"aefdcbbc-2ac1-43d5-b70c-26e89000ab98\") " pod="openstack/prometheus-metric-storage-0" Jan 26 
13:17:56 crc kubenswrapper[4844]: I0126 13:17:56.047923 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aefdcbbc-2ac1-43d5-b70c-26e89000ab98-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"aefdcbbc-2ac1-43d5-b70c-26e89000ab98\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:17:56 crc kubenswrapper[4844]: I0126 13:17:56.047960 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/aefdcbbc-2ac1-43d5-b70c-26e89000ab98-config\") pod \"prometheus-metric-storage-0\" (UID: \"aefdcbbc-2ac1-43d5-b70c-26e89000ab98\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:17:56 crc kubenswrapper[4844]: I0126 13:17:56.047996 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/aefdcbbc-2ac1-43d5-b70c-26e89000ab98-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"aefdcbbc-2ac1-43d5-b70c-26e89000ab98\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:17:56 crc kubenswrapper[4844]: I0126 13:17:56.049546 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/aefdcbbc-2ac1-43d5-b70c-26e89000ab98-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"aefdcbbc-2ac1-43d5-b70c-26e89000ab98\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:17:56 crc kubenswrapper[4844]: I0126 13:17:56.049644 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/aefdcbbc-2ac1-43d5-b70c-26e89000ab98-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"aefdcbbc-2ac1-43d5-b70c-26e89000ab98\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:17:56 crc kubenswrapper[4844]: I0126 13:17:56.049813 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/aefdcbbc-2ac1-43d5-b70c-26e89000ab98-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"aefdcbbc-2ac1-43d5-b70c-26e89000ab98\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:17:56 crc kubenswrapper[4844]: I0126 13:17:56.053407 4844 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 26 13:17:56 crc kubenswrapper[4844]: I0126 13:17:56.053444 4844 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-19181cb9-d02a-4fb9-9922-1c51ae1db65f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-19181cb9-d02a-4fb9-9922-1c51ae1db65f\") pod \"prometheus-metric-storage-0\" (UID: \"aefdcbbc-2ac1-43d5-b70c-26e89000ab98\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/60456fde86fe7a040b59fc70316475c6486458b501f0e0cd47e77b114ad32f41/globalmount\"" pod="openstack/prometheus-metric-storage-0" Jan 26 13:17:56 crc kubenswrapper[4844]: I0126 13:17:56.055649 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/aefdcbbc-2ac1-43d5-b70c-26e89000ab98-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"aefdcbbc-2ac1-43d5-b70c-26e89000ab98\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:17:56 crc kubenswrapper[4844]: I0126 13:17:56.055652 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/aefdcbbc-2ac1-43d5-b70c-26e89000ab98-config\") pod \"prometheus-metric-storage-0\" (UID: \"aefdcbbc-2ac1-43d5-b70c-26e89000ab98\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:17:56 crc kubenswrapper[4844]: I0126 13:17:56.055675 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aefdcbbc-2ac1-43d5-b70c-26e89000ab98-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"aefdcbbc-2ac1-43d5-b70c-26e89000ab98\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:17:56 crc kubenswrapper[4844]: I0126 13:17:56.056164 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/aefdcbbc-2ac1-43d5-b70c-26e89000ab98-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"aefdcbbc-2ac1-43d5-b70c-26e89000ab98\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:17:56 crc kubenswrapper[4844]: I0126 13:17:56.056427 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/aefdcbbc-2ac1-43d5-b70c-26e89000ab98-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"aefdcbbc-2ac1-43d5-b70c-26e89000ab98\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:17:56 crc kubenswrapper[4844]: I0126 13:17:56.056454 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/aefdcbbc-2ac1-43d5-b70c-26e89000ab98-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"aefdcbbc-2ac1-43d5-b70c-26e89000ab98\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:17:56 crc kubenswrapper[4844]: I0126 13:17:56.058158 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/aefdcbbc-2ac1-43d5-b70c-26e89000ab98-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"aefdcbbc-2ac1-43d5-b70c-26e89000ab98\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:17:56 crc kubenswrapper[4844]: I0126 13:17:56.058220 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/aefdcbbc-2ac1-43d5-b70c-26e89000ab98-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"aefdcbbc-2ac1-43d5-b70c-26e89000ab98\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:17:56 crc kubenswrapper[4844]: I0126 13:17:56.073569 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpfhz\" (UniqueName: \"kubernetes.io/projected/aefdcbbc-2ac1-43d5-b70c-26e89000ab98-kube-api-access-mpfhz\") pod \"prometheus-metric-storage-0\" (UID: \"aefdcbbc-2ac1-43d5-b70c-26e89000ab98\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:17:56 crc kubenswrapper[4844]: I0126 13:17:56.097518 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-19181cb9-d02a-4fb9-9922-1c51ae1db65f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-19181cb9-d02a-4fb9-9922-1c51ae1db65f\") pod \"prometheus-metric-storage-0\" (UID: \"aefdcbbc-2ac1-43d5-b70c-26e89000ab98\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:17:56 crc kubenswrapper[4844]: I0126 13:17:56.177868 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 26 13:17:56 crc kubenswrapper[4844]: I0126 13:17:56.781812 4844 generic.go:334] "Generic (PLEG): container finished" podID="077f8c48-ae97-4f5d-89db-1ed90de5e904" containerID="9d31016880287673cc6d24cb62a2939b55257378c538d6d96ec095337ec487a6" exitCode=0 Jan 26 13:17:56 crc kubenswrapper[4844]: I0126 13:17:56.782155 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59b54f4c7-pjjl5" event={"ID":"077f8c48-ae97-4f5d-89db-1ed90de5e904","Type":"ContainerDied","Data":"9d31016880287673cc6d24cb62a2939b55257378c538d6d96ec095337ec487a6"} Jan 26 13:17:56 crc kubenswrapper[4844]: I0126 13:17:56.782194 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59b54f4c7-pjjl5" event={"ID":"077f8c48-ae97-4f5d-89db-1ed90de5e904","Type":"ContainerStarted","Data":"46a4ee6551ee2cfc8afd409d85a32621acd2e8401184f65b64e6dd38f8f1e36c"} Jan 26 13:17:56 crc kubenswrapper[4844]: I0126 13:17:56.833253 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 26 13:17:57 crc kubenswrapper[4844]: I0126 13:17:57.323264 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11" path="/var/lib/kubelet/pods/66e1ffda-fb8c-4d12-b4aa-bc14e5adcc11/volumes" Jan 26 13:17:57 crc kubenswrapper[4844]: I0126 13:17:57.803306 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"aefdcbbc-2ac1-43d5-b70c-26e89000ab98","Type":"ContainerStarted","Data":"66304c26ce77d93a8a1899a9f7eac51156441026be0ebb6f0d41ce1bc8e22f5a"} Jan 26 13:17:57 crc kubenswrapper[4844]: I0126 13:17:57.820261 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59b54f4c7-pjjl5" event={"ID":"077f8c48-ae97-4f5d-89db-1ed90de5e904","Type":"ContainerStarted","Data":"d0131719df27005692e12b5c8786405ee2c17dc6fbe73ffc93404d227ca982ae"} Jan 26 13:17:57 crc kubenswrapper[4844]: I0126 13:17:57.820459 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-59b54f4c7-pjjl5" Jan 26 13:17:57 crc kubenswrapper[4844]: I0126 13:17:57.842163 4844 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack/dnsmasq-dns-59b54f4c7-pjjl5" podStartSLOduration=2.842142397 podStartE2EDuration="2.842142397s" podCreationTimestamp="2026-01-26 13:17:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 13:17:57.841218805 +0000 UTC m=+2054.774586437" watchObservedRunningTime="2026-01-26 13:17:57.842142397 +0000 UTC m=+2054.775510009" Jan 26 13:17:59 crc kubenswrapper[4844]: I0126 13:17:59.373311 4844 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="e48f1161-14d0-42c1-b6ac-bdb8bce26985" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.105:5671: connect: connection refused" Jan 26 13:17:59 crc kubenswrapper[4844]: I0126 13:17:59.676124 4844 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="e8e36a62-9367-4c94-9aff-de8e6166af27" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.106:5671: connect: connection refused" Jan 26 13:17:59 crc kubenswrapper[4844]: I0126 13:17:59.837239 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"aefdcbbc-2ac1-43d5-b70c-26e89000ab98","Type":"ContainerStarted","Data":"ad4d7ee909f9a18453c4656d4bd6f78bf7e01fcb4dd6c1d698354d192e0704b2"} Jan 26 13:17:59 crc kubenswrapper[4844]: I0126 13:17:59.971714 4844 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-notifications-server-0" podUID="185637e1-efed-452c-ba52-7688909bad2c" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.107:5671: connect: connection refused" Jan 26 13:18:05 crc kubenswrapper[4844]: I0126 13:18:05.449767 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-59b54f4c7-pjjl5" Jan 26 13:18:05 crc kubenswrapper[4844]: I0126 13:18:05.511969 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c999dbc67-cvzlp"] Jan 26 13:18:05 crc kubenswrapper[4844]: I0126 13:18:05.512546 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7c999dbc67-cvzlp" podUID="8461ccab-6d28-4df1-8fab-49cb84f6bfb9" containerName="dnsmasq-dns" containerID="cri-o://f7e3cc9c08e0881f89f24682031a154c4b9f31edf9d85e7b83810a3951f774d4" gracePeriod=10 Jan 26 13:18:05 crc kubenswrapper[4844]: I0126 13:18:05.678527 4844 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7c999dbc67-cvzlp" podUID="8461ccab-6d28-4df1-8fab-49cb84f6bfb9" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.121:5353: connect: connection refused" Jan 26 13:18:05 crc kubenswrapper[4844]: I0126 13:18:05.889139 4844 generic.go:334] "Generic (PLEG): container finished" podID="8461ccab-6d28-4df1-8fab-49cb84f6bfb9" containerID="f7e3cc9c08e0881f89f24682031a154c4b9f31edf9d85e7b83810a3951f774d4" exitCode=0 Jan 26 13:18:05 crc kubenswrapper[4844]: I0126 13:18:05.889177 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c999dbc67-cvzlp" event={"ID":"8461ccab-6d28-4df1-8fab-49cb84f6bfb9","Type":"ContainerDied","Data":"f7e3cc9c08e0881f89f24682031a154c4b9f31edf9d85e7b83810a3951f774d4"} Jan 26 13:18:05 crc kubenswrapper[4844]: I0126 13:18:05.889200 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c999dbc67-cvzlp" 
event={"ID":"8461ccab-6d28-4df1-8fab-49cb84f6bfb9","Type":"ContainerDied","Data":"aa6e91dd658407e99b7f16d5095cf1111319803dedf167db8239c4ef8435e260"} Jan 26 13:18:05 crc kubenswrapper[4844]: I0126 13:18:05.889210 4844 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aa6e91dd658407e99b7f16d5095cf1111319803dedf167db8239c4ef8435e260" Jan 26 13:18:05 crc kubenswrapper[4844]: I0126 13:18:05.952737 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c999dbc67-cvzlp" Jan 26 13:18:06 crc kubenswrapper[4844]: I0126 13:18:06.021625 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vn8rr\" (UniqueName: \"kubernetes.io/projected/8461ccab-6d28-4df1-8fab-49cb84f6bfb9-kube-api-access-vn8rr\") pod \"8461ccab-6d28-4df1-8fab-49cb84f6bfb9\" (UID: \"8461ccab-6d28-4df1-8fab-49cb84f6bfb9\") " Jan 26 13:18:06 crc kubenswrapper[4844]: I0126 13:18:06.021741 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8461ccab-6d28-4df1-8fab-49cb84f6bfb9-ovsdbserver-sb\") pod \"8461ccab-6d28-4df1-8fab-49cb84f6bfb9\" (UID: \"8461ccab-6d28-4df1-8fab-49cb84f6bfb9\") " Jan 26 13:18:06 crc kubenswrapper[4844]: I0126 13:18:06.021818 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8461ccab-6d28-4df1-8fab-49cb84f6bfb9-config\") pod \"8461ccab-6d28-4df1-8fab-49cb84f6bfb9\" (UID: \"8461ccab-6d28-4df1-8fab-49cb84f6bfb9\") " Jan 26 13:18:06 crc kubenswrapper[4844]: I0126 13:18:06.021846 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8461ccab-6d28-4df1-8fab-49cb84f6bfb9-dns-svc\") pod \"8461ccab-6d28-4df1-8fab-49cb84f6bfb9\" (UID: \"8461ccab-6d28-4df1-8fab-49cb84f6bfb9\") " Jan 26 13:18:06 crc kubenswrapper[4844]: I0126 13:18:06.021910 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8461ccab-6d28-4df1-8fab-49cb84f6bfb9-ovsdbserver-nb\") pod \"8461ccab-6d28-4df1-8fab-49cb84f6bfb9\" (UID: \"8461ccab-6d28-4df1-8fab-49cb84f6bfb9\") " Jan 26 13:18:06 crc kubenswrapper[4844]: I0126 13:18:06.029953 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8461ccab-6d28-4df1-8fab-49cb84f6bfb9-kube-api-access-vn8rr" (OuterVolumeSpecName: "kube-api-access-vn8rr") pod "8461ccab-6d28-4df1-8fab-49cb84f6bfb9" (UID: "8461ccab-6d28-4df1-8fab-49cb84f6bfb9"). InnerVolumeSpecName "kube-api-access-vn8rr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:18:06 crc kubenswrapper[4844]: I0126 13:18:06.063269 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8461ccab-6d28-4df1-8fab-49cb84f6bfb9-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8461ccab-6d28-4df1-8fab-49cb84f6bfb9" (UID: "8461ccab-6d28-4df1-8fab-49cb84f6bfb9"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:18:06 crc kubenswrapper[4844]: I0126 13:18:06.064863 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8461ccab-6d28-4df1-8fab-49cb84f6bfb9-config" (OuterVolumeSpecName: "config") pod "8461ccab-6d28-4df1-8fab-49cb84f6bfb9" (UID: "8461ccab-6d28-4df1-8fab-49cb84f6bfb9"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:18:06 crc kubenswrapper[4844]: I0126 13:18:06.067374 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8461ccab-6d28-4df1-8fab-49cb84f6bfb9-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8461ccab-6d28-4df1-8fab-49cb84f6bfb9" (UID: "8461ccab-6d28-4df1-8fab-49cb84f6bfb9"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:18:06 crc kubenswrapper[4844]: I0126 13:18:06.070460 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8461ccab-6d28-4df1-8fab-49cb84f6bfb9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8461ccab-6d28-4df1-8fab-49cb84f6bfb9" (UID: "8461ccab-6d28-4df1-8fab-49cb84f6bfb9"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:18:06 crc kubenswrapper[4844]: I0126 13:18:06.123848 4844 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8461ccab-6d28-4df1-8fab-49cb84f6bfb9-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 13:18:06 crc kubenswrapper[4844]: I0126 13:18:06.123938 4844 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8461ccab-6d28-4df1-8fab-49cb84f6bfb9-config\") on node \"crc\" DevicePath \"\"" Jan 26 13:18:06 crc kubenswrapper[4844]: I0126 13:18:06.123981 4844 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8461ccab-6d28-4df1-8fab-49cb84f6bfb9-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 13:18:06 crc kubenswrapper[4844]: I0126 13:18:06.123995 4844 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8461ccab-6d28-4df1-8fab-49cb84f6bfb9-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 13:18:06 crc kubenswrapper[4844]: I0126 13:18:06.124008 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vn8rr\" (UniqueName: \"kubernetes.io/projected/8461ccab-6d28-4df1-8fab-49cb84f6bfb9-kube-api-access-vn8rr\") on node \"crc\" DevicePath \"\"" Jan 26 13:18:06 crc kubenswrapper[4844]: I0126 13:18:06.373404 4844 patch_prober.go:28] interesting pod/machine-config-daemon-j7r9j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 13:18:06 crc kubenswrapper[4844]: I0126 13:18:06.373724 4844 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 13:18:06 crc kubenswrapper[4844]: I0126 13:18:06.373765 4844 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" Jan 26 13:18:06 crc kubenswrapper[4844]: I0126 13:18:06.374391 4844 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f8d2dd6bfcc6d48828fccc89734d561f1977038b1d62b9cafb05ed3131eb3a4b"} 
pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 13:18:06 crc kubenswrapper[4844]: I0126 13:18:06.374445 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" containerName="machine-config-daemon" containerID="cri-o://f8d2dd6bfcc6d48828fccc89734d561f1977038b1d62b9cafb05ed3131eb3a4b" gracePeriod=600 Jan 26 13:18:06 crc kubenswrapper[4844]: I0126 13:18:06.899280 4844 generic.go:334] "Generic (PLEG): container finished" podID="e3602fc7-397b-4d73-ab0c-45acc047397b" containerID="f8d2dd6bfcc6d48828fccc89734d561f1977038b1d62b9cafb05ed3131eb3a4b" exitCode=0 Jan 26 13:18:06 crc kubenswrapper[4844]: I0126 13:18:06.899329 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" event={"ID":"e3602fc7-397b-4d73-ab0c-45acc047397b","Type":"ContainerDied","Data":"f8d2dd6bfcc6d48828fccc89734d561f1977038b1d62b9cafb05ed3131eb3a4b"} Jan 26 13:18:06 crc kubenswrapper[4844]: I0126 13:18:06.899645 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" event={"ID":"e3602fc7-397b-4d73-ab0c-45acc047397b","Type":"ContainerStarted","Data":"003e8a783231d0610c61a79899be5429104525b8053b82bc16d011d8b1eff87d"} Jan 26 13:18:06 crc kubenswrapper[4844]: I0126 13:18:06.899675 4844 scope.go:117] "RemoveContainer" containerID="8bd0e29f4f3396a7e270924b961a9b78ffe005995a8558400b22ba50617d8a7e" Jan 26 13:18:06 crc kubenswrapper[4844]: I0126 13:18:06.901250 4844 generic.go:334] "Generic (PLEG): container finished" podID="aefdcbbc-2ac1-43d5-b70c-26e89000ab98" containerID="ad4d7ee909f9a18453c4656d4bd6f78bf7e01fcb4dd6c1d698354d192e0704b2" exitCode=0 Jan 26 13:18:06 crc kubenswrapper[4844]: I0126 13:18:06.901315 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"aefdcbbc-2ac1-43d5-b70c-26e89000ab98","Type":"ContainerDied","Data":"ad4d7ee909f9a18453c4656d4bd6f78bf7e01fcb4dd6c1d698354d192e0704b2"} Jan 26 13:18:06 crc kubenswrapper[4844]: I0126 13:18:06.901331 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7c999dbc67-cvzlp" Jan 26 13:18:06 crc kubenswrapper[4844]: I0126 13:18:06.988432 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c999dbc67-cvzlp"] Jan 26 13:18:06 crc kubenswrapper[4844]: I0126 13:18:06.995428 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7c999dbc67-cvzlp"] Jan 26 13:18:07 crc kubenswrapper[4844]: I0126 13:18:07.323951 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8461ccab-6d28-4df1-8fab-49cb84f6bfb9" path="/var/lib/kubelet/pods/8461ccab-6d28-4df1-8fab-49cb84f6bfb9/volumes" Jan 26 13:18:07 crc kubenswrapper[4844]: I0126 13:18:07.939945 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"aefdcbbc-2ac1-43d5-b70c-26e89000ab98","Type":"ContainerStarted","Data":"5706603a7bdd9fb5cd16976e4ca7aca5c36f785505f27a1d5f949b08e7241b62"} Jan 26 13:18:09 crc kubenswrapper[4844]: I0126 13:18:09.373148 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 26 13:18:09 crc kubenswrapper[4844]: I0126 13:18:09.668421 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-467jd"] Jan 26 13:18:09 crc kubenswrapper[4844]: E0126 13:18:09.668750 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8461ccab-6d28-4df1-8fab-49cb84f6bfb9" containerName="init" Jan 26 13:18:09 crc kubenswrapper[4844]: I0126 13:18:09.668761 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="8461ccab-6d28-4df1-8fab-49cb84f6bfb9" containerName="init" Jan 26 13:18:09 crc kubenswrapper[4844]: E0126 13:18:09.668778 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8461ccab-6d28-4df1-8fab-49cb84f6bfb9" containerName="dnsmasq-dns" Jan 26 13:18:09 crc kubenswrapper[4844]: I0126 13:18:09.668784 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="8461ccab-6d28-4df1-8fab-49cb84f6bfb9" containerName="dnsmasq-dns" Jan 26 13:18:09 crc kubenswrapper[4844]: I0126 13:18:09.668930 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="8461ccab-6d28-4df1-8fab-49cb84f6bfb9" containerName="dnsmasq-dns" Jan 26 13:18:09 crc kubenswrapper[4844]: I0126 13:18:09.669459 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-467jd" Jan 26 13:18:09 crc kubenswrapper[4844]: I0126 13:18:09.677016 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 26 13:18:09 crc kubenswrapper[4844]: I0126 13:18:09.680276 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-467jd"] Jan 26 13:18:09 crc kubenswrapper[4844]: I0126 13:18:09.791097 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnc6h\" (UniqueName: \"kubernetes.io/projected/c21381d1-dcd0-4f0e-a4e1-ef54e9c3cc55-kube-api-access-mnc6h\") pod \"barbican-db-create-467jd\" (UID: \"c21381d1-dcd0-4f0e-a4e1-ef54e9c3cc55\") " pod="openstack/barbican-db-create-467jd" Jan 26 13:18:09 crc kubenswrapper[4844]: I0126 13:18:09.791353 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c21381d1-dcd0-4f0e-a4e1-ef54e9c3cc55-operator-scripts\") pod \"barbican-db-create-467jd\" (UID: \"c21381d1-dcd0-4f0e-a4e1-ef54e9c3cc55\") " pod="openstack/barbican-db-create-467jd" Jan 26 13:18:09 crc kubenswrapper[4844]: I0126 13:18:09.800105 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-81e2-account-create-update-8bfjh"] Jan 26 13:18:09 crc kubenswrapper[4844]: I0126 13:18:09.801348 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-81e2-account-create-update-8bfjh" Jan 26 13:18:09 crc kubenswrapper[4844]: I0126 13:18:09.803306 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 26 13:18:09 crc kubenswrapper[4844]: I0126 13:18:09.807818 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-81e2-account-create-update-8bfjh"] Jan 26 13:18:09 crc kubenswrapper[4844]: I0126 13:18:09.857765 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-pgdkm"] Jan 26 13:18:09 crc kubenswrapper[4844]: I0126 13:18:09.859543 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-pgdkm" Jan 26 13:18:09 crc kubenswrapper[4844]: I0126 13:18:09.871495 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-pgdkm"] Jan 26 13:18:09 crc kubenswrapper[4844]: I0126 13:18:09.892631 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clssl\" (UniqueName: \"kubernetes.io/projected/c0033ca5-7b7d-464e-ba26-a59ca8f226fe-kube-api-access-clssl\") pod \"barbican-81e2-account-create-update-8bfjh\" (UID: \"c0033ca5-7b7d-464e-ba26-a59ca8f226fe\") " pod="openstack/barbican-81e2-account-create-update-8bfjh" Jan 26 13:18:09 crc kubenswrapper[4844]: I0126 13:18:09.892743 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c21381d1-dcd0-4f0e-a4e1-ef54e9c3cc55-operator-scripts\") pod \"barbican-db-create-467jd\" (UID: \"c21381d1-dcd0-4f0e-a4e1-ef54e9c3cc55\") " pod="openstack/barbican-db-create-467jd" Jan 26 13:18:09 crc kubenswrapper[4844]: I0126 13:18:09.892788 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnc6h\" (UniqueName: \"kubernetes.io/projected/c21381d1-dcd0-4f0e-a4e1-ef54e9c3cc55-kube-api-access-mnc6h\") pod \"barbican-db-create-467jd\" (UID: \"c21381d1-dcd0-4f0e-a4e1-ef54e9c3cc55\") " pod="openstack/barbican-db-create-467jd" Jan 26 13:18:09 crc kubenswrapper[4844]: I0126 13:18:09.892824 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c0033ca5-7b7d-464e-ba26-a59ca8f226fe-operator-scripts\") pod \"barbican-81e2-account-create-update-8bfjh\" (UID: \"c0033ca5-7b7d-464e-ba26-a59ca8f226fe\") " pod="openstack/barbican-81e2-account-create-update-8bfjh" Jan 26 13:18:09 crc kubenswrapper[4844]: I0126 13:18:09.893545 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c21381d1-dcd0-4f0e-a4e1-ef54e9c3cc55-operator-scripts\") pod \"barbican-db-create-467jd\" (UID: \"c21381d1-dcd0-4f0e-a4e1-ef54e9c3cc55\") " pod="openstack/barbican-db-create-467jd" Jan 26 13:18:09 crc kubenswrapper[4844]: I0126 13:18:09.923394 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnc6h\" (UniqueName: \"kubernetes.io/projected/c21381d1-dcd0-4f0e-a4e1-ef54e9c3cc55-kube-api-access-mnc6h\") pod \"barbican-db-create-467jd\" (UID: \"c21381d1-dcd0-4f0e-a4e1-ef54e9c3cc55\") " pod="openstack/barbican-db-create-467jd" Jan 26 13:18:09 crc kubenswrapper[4844]: I0126 13:18:09.972800 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-notifications-server-0" Jan 26 13:18:09 crc kubenswrapper[4844]: I0126 13:18:09.992080 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-467jd" Jan 26 13:18:09 crc kubenswrapper[4844]: I0126 13:18:09.993640 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e08e4d13-48d4-434c-a816-b64d161f09be-operator-scripts\") pod \"cinder-db-create-pgdkm\" (UID: \"e08e4d13-48d4-434c-a816-b64d161f09be\") " pod="openstack/cinder-db-create-pgdkm" Jan 26 13:18:09 crc kubenswrapper[4844]: I0126 13:18:09.993714 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ln45\" (UniqueName: \"kubernetes.io/projected/e08e4d13-48d4-434c-a816-b64d161f09be-kube-api-access-8ln45\") pod \"cinder-db-create-pgdkm\" (UID: \"e08e4d13-48d4-434c-a816-b64d161f09be\") " pod="openstack/cinder-db-create-pgdkm" Jan 26 13:18:09 crc kubenswrapper[4844]: I0126 13:18:09.993770 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c0033ca5-7b7d-464e-ba26-a59ca8f226fe-operator-scripts\") pod \"barbican-81e2-account-create-update-8bfjh\" (UID: \"c0033ca5-7b7d-464e-ba26-a59ca8f226fe\") " pod="openstack/barbican-81e2-account-create-update-8bfjh" Jan 26 13:18:09 crc kubenswrapper[4844]: I0126 13:18:09.993794 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clssl\" (UniqueName: \"kubernetes.io/projected/c0033ca5-7b7d-464e-ba26-a59ca8f226fe-kube-api-access-clssl\") pod \"barbican-81e2-account-create-update-8bfjh\" (UID: \"c0033ca5-7b7d-464e-ba26-a59ca8f226fe\") " pod="openstack/barbican-81e2-account-create-update-8bfjh" Jan 26 13:18:09 crc kubenswrapper[4844]: I0126 13:18:09.994685 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c0033ca5-7b7d-464e-ba26-a59ca8f226fe-operator-scripts\") pod \"barbican-81e2-account-create-update-8bfjh\" (UID: \"c0033ca5-7b7d-464e-ba26-a59ca8f226fe\") " pod="openstack/barbican-81e2-account-create-update-8bfjh" Jan 26 13:18:10 crc kubenswrapper[4844]: I0126 13:18:10.004914 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-91b1-account-create-update-5b86b"] Jan 26 13:18:10 crc kubenswrapper[4844]: I0126 13:18:10.006034 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-91b1-account-create-update-5b86b" Jan 26 13:18:10 crc kubenswrapper[4844]: I0126 13:18:10.013042 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 26 13:18:10 crc kubenswrapper[4844]: I0126 13:18:10.015200 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clssl\" (UniqueName: \"kubernetes.io/projected/c0033ca5-7b7d-464e-ba26-a59ca8f226fe-kube-api-access-clssl\") pod \"barbican-81e2-account-create-update-8bfjh\" (UID: \"c0033ca5-7b7d-464e-ba26-a59ca8f226fe\") " pod="openstack/barbican-81e2-account-create-update-8bfjh" Jan 26 13:18:10 crc kubenswrapper[4844]: I0126 13:18:10.042811 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-91b1-account-create-update-5b86b"] Jan 26 13:18:10 crc kubenswrapper[4844]: I0126 13:18:10.095381 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fbe5f771-2b02-4d1d-93bb-9e59aa3723ad-operator-scripts\") pod \"cinder-91b1-account-create-update-5b86b\" (UID: \"fbe5f771-2b02-4d1d-93bb-9e59aa3723ad\") " pod="openstack/cinder-91b1-account-create-update-5b86b" Jan 26 13:18:10 crc kubenswrapper[4844]: I0126 13:18:10.095438 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ln45\" (UniqueName: \"kubernetes.io/projected/e08e4d13-48d4-434c-a816-b64d161f09be-kube-api-access-8ln45\") pod \"cinder-db-create-pgdkm\" (UID: \"e08e4d13-48d4-434c-a816-b64d161f09be\") " pod="openstack/cinder-db-create-pgdkm" Jan 26 13:18:10 crc kubenswrapper[4844]: I0126 13:18:10.095587 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nm8m9\" (UniqueName: \"kubernetes.io/projected/fbe5f771-2b02-4d1d-93bb-9e59aa3723ad-kube-api-access-nm8m9\") pod \"cinder-91b1-account-create-update-5b86b\" (UID: \"fbe5f771-2b02-4d1d-93bb-9e59aa3723ad\") " pod="openstack/cinder-91b1-account-create-update-5b86b" Jan 26 13:18:10 crc kubenswrapper[4844]: I0126 13:18:10.095829 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e08e4d13-48d4-434c-a816-b64d161f09be-operator-scripts\") pod \"cinder-db-create-pgdkm\" (UID: \"e08e4d13-48d4-434c-a816-b64d161f09be\") " pod="openstack/cinder-db-create-pgdkm" Jan 26 13:18:10 crc kubenswrapper[4844]: I0126 13:18:10.097106 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e08e4d13-48d4-434c-a816-b64d161f09be-operator-scripts\") pod \"cinder-db-create-pgdkm\" (UID: \"e08e4d13-48d4-434c-a816-b64d161f09be\") " pod="openstack/cinder-db-create-pgdkm" Jan 26 13:18:10 crc kubenswrapper[4844]: I0126 13:18:10.115804 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-81e2-account-create-update-8bfjh" Jan 26 13:18:10 crc kubenswrapper[4844]: I0126 13:18:10.119664 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ln45\" (UniqueName: \"kubernetes.io/projected/e08e4d13-48d4-434c-a816-b64d161f09be-kube-api-access-8ln45\") pod \"cinder-db-create-pgdkm\" (UID: \"e08e4d13-48d4-434c-a816-b64d161f09be\") " pod="openstack/cinder-db-create-pgdkm" Jan 26 13:18:10 crc kubenswrapper[4844]: I0126 13:18:10.161309 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-td22t"] Jan 26 13:18:10 crc kubenswrapper[4844]: I0126 13:18:10.162456 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-td22t" Jan 26 13:18:10 crc kubenswrapper[4844]: I0126 13:18:10.164743 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-l6kd4" Jan 26 13:18:10 crc kubenswrapper[4844]: I0126 13:18:10.165190 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 26 13:18:10 crc kubenswrapper[4844]: I0126 13:18:10.166117 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 26 13:18:10 crc kubenswrapper[4844]: I0126 13:18:10.166227 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 26 13:18:10 crc kubenswrapper[4844]: I0126 13:18:10.174667 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-td22t"] Jan 26 13:18:10 crc kubenswrapper[4844]: I0126 13:18:10.175012 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-pgdkm" Jan 26 13:18:10 crc kubenswrapper[4844]: I0126 13:18:10.197351 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fbe5f771-2b02-4d1d-93bb-9e59aa3723ad-operator-scripts\") pod \"cinder-91b1-account-create-update-5b86b\" (UID: \"fbe5f771-2b02-4d1d-93bb-9e59aa3723ad\") " pod="openstack/cinder-91b1-account-create-update-5b86b" Jan 26 13:18:10 crc kubenswrapper[4844]: I0126 13:18:10.197809 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nm8m9\" (UniqueName: \"kubernetes.io/projected/fbe5f771-2b02-4d1d-93bb-9e59aa3723ad-kube-api-access-nm8m9\") pod \"cinder-91b1-account-create-update-5b86b\" (UID: \"fbe5f771-2b02-4d1d-93bb-9e59aa3723ad\") " pod="openstack/cinder-91b1-account-create-update-5b86b" Jan 26 13:18:10 crc kubenswrapper[4844]: I0126 13:18:10.198255 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fbe5f771-2b02-4d1d-93bb-9e59aa3723ad-operator-scripts\") pod \"cinder-91b1-account-create-update-5b86b\" (UID: \"fbe5f771-2b02-4d1d-93bb-9e59aa3723ad\") " pod="openstack/cinder-91b1-account-create-update-5b86b" Jan 26 13:18:10 crc kubenswrapper[4844]: I0126 13:18:10.219194 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nm8m9\" (UniqueName: \"kubernetes.io/projected/fbe5f771-2b02-4d1d-93bb-9e59aa3723ad-kube-api-access-nm8m9\") pod \"cinder-91b1-account-create-update-5b86b\" (UID: \"fbe5f771-2b02-4d1d-93bb-9e59aa3723ad\") " pod="openstack/cinder-91b1-account-create-update-5b86b" Jan 26 13:18:10 crc kubenswrapper[4844]: I0126 13:18:10.298987 4844 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ca9f483-dabf-40a9-be25-312db82ffd23-combined-ca-bundle\") pod \"keystone-db-sync-td22t\" (UID: \"0ca9f483-dabf-40a9-be25-312db82ffd23\") " pod="openstack/keystone-db-sync-td22t" Jan 26 13:18:10 crc kubenswrapper[4844]: I0126 13:18:10.299094 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6k75\" (UniqueName: \"kubernetes.io/projected/0ca9f483-dabf-40a9-be25-312db82ffd23-kube-api-access-z6k75\") pod \"keystone-db-sync-td22t\" (UID: \"0ca9f483-dabf-40a9-be25-312db82ffd23\") " pod="openstack/keystone-db-sync-td22t" Jan 26 13:18:10 crc kubenswrapper[4844]: I0126 13:18:10.299128 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ca9f483-dabf-40a9-be25-312db82ffd23-config-data\") pod \"keystone-db-sync-td22t\" (UID: \"0ca9f483-dabf-40a9-be25-312db82ffd23\") " pod="openstack/keystone-db-sync-td22t" Jan 26 13:18:10 crc kubenswrapper[4844]: I0126 13:18:10.379518 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-467jd"] Jan 26 13:18:10 crc kubenswrapper[4844]: I0126 13:18:10.394010 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-91b1-account-create-update-5b86b" Jan 26 13:18:10 crc kubenswrapper[4844]: I0126 13:18:10.402673 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6k75\" (UniqueName: \"kubernetes.io/projected/0ca9f483-dabf-40a9-be25-312db82ffd23-kube-api-access-z6k75\") pod \"keystone-db-sync-td22t\" (UID: \"0ca9f483-dabf-40a9-be25-312db82ffd23\") " pod="openstack/keystone-db-sync-td22t" Jan 26 13:18:10 crc kubenswrapper[4844]: I0126 13:18:10.402736 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ca9f483-dabf-40a9-be25-312db82ffd23-config-data\") pod \"keystone-db-sync-td22t\" (UID: \"0ca9f483-dabf-40a9-be25-312db82ffd23\") " pod="openstack/keystone-db-sync-td22t" Jan 26 13:18:10 crc kubenswrapper[4844]: I0126 13:18:10.402810 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ca9f483-dabf-40a9-be25-312db82ffd23-combined-ca-bundle\") pod \"keystone-db-sync-td22t\" (UID: \"0ca9f483-dabf-40a9-be25-312db82ffd23\") " pod="openstack/keystone-db-sync-td22t" Jan 26 13:18:10 crc kubenswrapper[4844]: I0126 13:18:10.407756 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ca9f483-dabf-40a9-be25-312db82ffd23-combined-ca-bundle\") pod \"keystone-db-sync-td22t\" (UID: \"0ca9f483-dabf-40a9-be25-312db82ffd23\") " pod="openstack/keystone-db-sync-td22t" Jan 26 13:18:10 crc kubenswrapper[4844]: I0126 13:18:10.408186 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ca9f483-dabf-40a9-be25-312db82ffd23-config-data\") pod \"keystone-db-sync-td22t\" (UID: \"0ca9f483-dabf-40a9-be25-312db82ffd23\") " pod="openstack/keystone-db-sync-td22t" Jan 26 13:18:10 crc kubenswrapper[4844]: I0126 13:18:10.426252 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6k75\" (UniqueName: 
\"kubernetes.io/projected/0ca9f483-dabf-40a9-be25-312db82ffd23-kube-api-access-z6k75\") pod \"keystone-db-sync-td22t\" (UID: \"0ca9f483-dabf-40a9-be25-312db82ffd23\") " pod="openstack/keystone-db-sync-td22t" Jan 26 13:18:10 crc kubenswrapper[4844]: I0126 13:18:10.503079 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-td22t" Jan 26 13:18:10 crc kubenswrapper[4844]: I0126 13:18:10.605210 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-81e2-account-create-update-8bfjh"] Jan 26 13:18:10 crc kubenswrapper[4844]: I0126 13:18:10.692296 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-pgdkm"] Jan 26 13:18:10 crc kubenswrapper[4844]: I0126 13:18:10.804676 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-91b1-account-create-update-5b86b"] Jan 26 13:18:10 crc kubenswrapper[4844]: I0126 13:18:10.967793 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-91b1-account-create-update-5b86b" event={"ID":"fbe5f771-2b02-4d1d-93bb-9e59aa3723ad","Type":"ContainerStarted","Data":"1635e04ff0de1da29a47680f6f4b2d0b54ade1fbc330e2726d6486a458f62e8c"} Jan 26 13:18:10 crc kubenswrapper[4844]: I0126 13:18:10.971379 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"aefdcbbc-2ac1-43d5-b70c-26e89000ab98","Type":"ContainerStarted","Data":"16de62b26afafaaee1f6a069b2507522c4143d9fed128422110635385872593f"} Jan 26 13:18:10 crc kubenswrapper[4844]: I0126 13:18:10.971441 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"aefdcbbc-2ac1-43d5-b70c-26e89000ab98","Type":"ContainerStarted","Data":"757e7ac121a5ac2d5117a9e4d706d94a8c98cc3ab12ddb64dec2c8c4d9e729fb"} Jan 26 13:18:10 crc kubenswrapper[4844]: I0126 13:18:10.972896 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-pgdkm" event={"ID":"e08e4d13-48d4-434c-a816-b64d161f09be","Type":"ContainerStarted","Data":"244af32b07ad4fc4cc3711e459a4ad0aa81e0d19ecf24581873f867194ce37f3"} Jan 26 13:18:10 crc kubenswrapper[4844]: I0126 13:18:10.974714 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-467jd" event={"ID":"c21381d1-dcd0-4f0e-a4e1-ef54e9c3cc55","Type":"ContainerStarted","Data":"93948a365f063de896fea97ba0e5d8a70050a44cced46c3ffc82e7d4a783412d"} Jan 26 13:18:10 crc kubenswrapper[4844]: I0126 13:18:10.974759 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-467jd" event={"ID":"c21381d1-dcd0-4f0e-a4e1-ef54e9c3cc55","Type":"ContainerStarted","Data":"c7d442738091b778ba50c6a3eb26b7a2db893c63733b9f3633ae9ba11cf06ced"} Jan 26 13:18:10 crc kubenswrapper[4844]: I0126 13:18:10.976139 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-81e2-account-create-update-8bfjh" event={"ID":"c0033ca5-7b7d-464e-ba26-a59ca8f226fe","Type":"ContainerStarted","Data":"32dcbb0c5d0ec630a857a852e8c41f505f3ffbfb3033a0261aec48207394718c"} Jan 26 13:18:10 crc kubenswrapper[4844]: I0126 13:18:10.976168 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-81e2-account-create-update-8bfjh" event={"ID":"c0033ca5-7b7d-464e-ba26-a59ca8f226fe","Type":"ContainerStarted","Data":"29e057467872adbb24a3226ba8058f7afebf8adabb9dde754e31a071521c0f93"} Jan 26 13:18:11 crc kubenswrapper[4844]: I0126 13:18:11.013246 4844 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=16.013228351 podStartE2EDuration="16.013228351s" podCreationTimestamp="2026-01-26 13:17:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 13:18:11.008526789 +0000 UTC m=+2067.941894401" watchObservedRunningTime="2026-01-26 13:18:11.013228351 +0000 UTC m=+2067.946595963" Jan 26 13:18:11 crc kubenswrapper[4844]: I0126 13:18:11.035991 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-467jd" podStartSLOduration=2.035970038 podStartE2EDuration="2.035970038s" podCreationTimestamp="2026-01-26 13:18:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 13:18:11.025127797 +0000 UTC m=+2067.958495409" watchObservedRunningTime="2026-01-26 13:18:11.035970038 +0000 UTC m=+2067.969337650" Jan 26 13:18:11 crc kubenswrapper[4844]: I0126 13:18:11.049155 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-81e2-account-create-update-8bfjh" podStartSLOduration=2.049136073 podStartE2EDuration="2.049136073s" podCreationTimestamp="2026-01-26 13:18:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 13:18:11.04233283 +0000 UTC m=+2067.975700442" watchObservedRunningTime="2026-01-26 13:18:11.049136073 +0000 UTC m=+2067.982503685" Jan 26 13:18:11 crc kubenswrapper[4844]: I0126 13:18:11.108906 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-td22t"] Jan 26 13:18:11 crc kubenswrapper[4844]: W0126 13:18:11.111792 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0ca9f483_dabf_40a9_be25_312db82ffd23.slice/crio-b7a3c93966632e1570f2c1effbe096e1861f78871879ad6fa694256db0228608 WatchSource:0}: Error finding container b7a3c93966632e1570f2c1effbe096e1861f78871879ad6fa694256db0228608: Status 404 returned error can't find the container with id b7a3c93966632e1570f2c1effbe096e1861f78871879ad6fa694256db0228608 Jan 26 13:18:11 crc kubenswrapper[4844]: I0126 13:18:11.179318 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Jan 26 13:18:11 crc kubenswrapper[4844]: I0126 13:18:11.179366 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Jan 26 13:18:11 crc kubenswrapper[4844]: I0126 13:18:11.187467 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Jan 26 13:18:12 crc kubenswrapper[4844]: I0126 13:18:11.985405 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-91b1-account-create-update-5b86b" event={"ID":"fbe5f771-2b02-4d1d-93bb-9e59aa3723ad","Type":"ContainerStarted","Data":"6edfe5ab404bbe2c7e6e6c5bf1ae4235bf4c7059fc21e85b47e5e94d611ba096"} Jan 26 13:18:12 crc kubenswrapper[4844]: I0126 13:18:11.987924 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-td22t" event={"ID":"0ca9f483-dabf-40a9-be25-312db82ffd23","Type":"ContainerStarted","Data":"b7a3c93966632e1570f2c1effbe096e1861f78871879ad6fa694256db0228608"} Jan 26 13:18:12 crc kubenswrapper[4844]: I0126 13:18:11.990717 4844 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-pgdkm" event={"ID":"e08e4d13-48d4-434c-a816-b64d161f09be","Type":"ContainerStarted","Data":"f8a1ef6b46b0ad8c3cee0ccb59b771e7bce23387e86d33395e4ff38a1b5c67aa"} Jan 26 13:18:12 crc kubenswrapper[4844]: I0126 13:18:12.003656 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Jan 26 13:18:12 crc kubenswrapper[4844]: I0126 13:18:12.036774 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-pgdkm" podStartSLOduration=3.036760207 podStartE2EDuration="3.036760207s" podCreationTimestamp="2026-01-26 13:18:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 13:18:12.036631444 +0000 UTC m=+2068.969999056" watchObservedRunningTime="2026-01-26 13:18:12.036760207 +0000 UTC m=+2068.970127819" Jan 26 13:18:12 crc kubenswrapper[4844]: I0126 13:18:12.036849 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-91b1-account-create-update-5b86b" podStartSLOduration=3.03684449 podStartE2EDuration="3.03684449s" podCreationTimestamp="2026-01-26 13:18:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 13:18:12.016539803 +0000 UTC m=+2068.949907415" watchObservedRunningTime="2026-01-26 13:18:12.03684449 +0000 UTC m=+2068.970212102" Jan 26 13:18:12 crc kubenswrapper[4844]: I0126 13:18:12.770345 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-7nthx"] Jan 26 13:18:12 crc kubenswrapper[4844]: I0126 13:18:12.771426 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-7nthx" Jan 26 13:18:12 crc kubenswrapper[4844]: I0126 13:18:12.781866 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-7nthx"] Jan 26 13:18:12 crc kubenswrapper[4844]: I0126 13:18:12.830106 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-db-sync-5w9q7"] Jan 26 13:18:12 crc kubenswrapper[4844]: I0126 13:18:12.832318 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-sync-5w9q7" Jan 26 13:18:12 crc kubenswrapper[4844]: I0126 13:18:12.835902 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-config-data" Jan 26 13:18:12 crc kubenswrapper[4844]: I0126 13:18:12.836081 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-watcher-dockercfg-gbbb6" Jan 26 13:18:12 crc kubenswrapper[4844]: I0126 13:18:12.854282 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-sync-5w9q7"] Jan 26 13:18:12 crc kubenswrapper[4844]: I0126 13:18:12.891892 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-e0ab-account-create-update-d7qtp"] Jan 26 13:18:12 crc kubenswrapper[4844]: I0126 13:18:12.893124 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-e0ab-account-create-update-d7qtp" Jan 26 13:18:12 crc kubenswrapper[4844]: I0126 13:18:12.910036 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 26 13:18:12 crc kubenswrapper[4844]: I0126 13:18:12.911447 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26c50a55-5ec7-41d8-a69a-607f0331039a-operator-scripts\") pod \"glance-db-create-7nthx\" (UID: \"26c50a55-5ec7-41d8-a69a-607f0331039a\") " pod="openstack/glance-db-create-7nthx" Jan 26 13:18:12 crc kubenswrapper[4844]: I0126 13:18:12.911568 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mg9zw\" (UniqueName: \"kubernetes.io/projected/26c50a55-5ec7-41d8-a69a-607f0331039a-kube-api-access-mg9zw\") pod \"glance-db-create-7nthx\" (UID: \"26c50a55-5ec7-41d8-a69a-607f0331039a\") " pod="openstack/glance-db-create-7nthx" Jan 26 13:18:12 crc kubenswrapper[4844]: I0126 13:18:12.935713 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-e0ab-account-create-update-d7qtp"] Jan 26 13:18:12 crc kubenswrapper[4844]: I0126 13:18:12.976183 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-v22sn"] Jan 26 13:18:12 crc kubenswrapper[4844]: I0126 13:18:12.977564 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-v22sn" Jan 26 13:18:12 crc kubenswrapper[4844]: I0126 13:18:12.994366 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-v22sn"] Jan 26 13:18:13 crc kubenswrapper[4844]: I0126 13:18:13.005518 4844 generic.go:334] "Generic (PLEG): container finished" podID="c21381d1-dcd0-4f0e-a4e1-ef54e9c3cc55" containerID="93948a365f063de896fea97ba0e5d8a70050a44cced46c3ffc82e7d4a783412d" exitCode=0 Jan 26 13:18:13 crc kubenswrapper[4844]: I0126 13:18:13.006045 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-467jd" event={"ID":"c21381d1-dcd0-4f0e-a4e1-ef54e9c3cc55","Type":"ContainerDied","Data":"93948a365f063de896fea97ba0e5d8a70050a44cced46c3ffc82e7d4a783412d"} Jan 26 13:18:13 crc kubenswrapper[4844]: I0126 13:18:13.017571 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2654c2cc-3479-4c0c-89e3-26ecfeedb613-operator-scripts\") pod \"glance-e0ab-account-create-update-d7qtp\" (UID: \"2654c2cc-3479-4c0c-89e3-26ecfeedb613\") " pod="openstack/glance-e0ab-account-create-update-d7qtp" Jan 26 13:18:13 crc kubenswrapper[4844]: I0126 13:18:13.017640 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db436f05-9b6d-4342-82d0-524c18fe6079-combined-ca-bundle\") pod \"watcher-db-sync-5w9q7\" (UID: \"db436f05-9b6d-4342-82d0-524c18fe6079\") " pod="openstack/watcher-db-sync-5w9q7" Jan 26 13:18:13 crc kubenswrapper[4844]: I0126 13:18:13.017673 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkdsv\" (UniqueName: \"kubernetes.io/projected/db436f05-9b6d-4342-82d0-524c18fe6079-kube-api-access-nkdsv\") pod \"watcher-db-sync-5w9q7\" (UID: \"db436f05-9b6d-4342-82d0-524c18fe6079\") " pod="openstack/watcher-db-sync-5w9q7" Jan 26 13:18:13 crc 
kubenswrapper[4844]: I0126 13:18:13.017735 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db436f05-9b6d-4342-82d0-524c18fe6079-config-data\") pod \"watcher-db-sync-5w9q7\" (UID: \"db436f05-9b6d-4342-82d0-524c18fe6079\") " pod="openstack/watcher-db-sync-5w9q7" Jan 26 13:18:13 crc kubenswrapper[4844]: I0126 13:18:13.017765 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26c50a55-5ec7-41d8-a69a-607f0331039a-operator-scripts\") pod \"glance-db-create-7nthx\" (UID: \"26c50a55-5ec7-41d8-a69a-607f0331039a\") " pod="openstack/glance-db-create-7nthx" Jan 26 13:18:13 crc kubenswrapper[4844]: I0126 13:18:13.017812 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/db436f05-9b6d-4342-82d0-524c18fe6079-db-sync-config-data\") pod \"watcher-db-sync-5w9q7\" (UID: \"db436f05-9b6d-4342-82d0-524c18fe6079\") " pod="openstack/watcher-db-sync-5w9q7" Jan 26 13:18:13 crc kubenswrapper[4844]: I0126 13:18:13.017887 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mg9zw\" (UniqueName: \"kubernetes.io/projected/26c50a55-5ec7-41d8-a69a-607f0331039a-kube-api-access-mg9zw\") pod \"glance-db-create-7nthx\" (UID: \"26c50a55-5ec7-41d8-a69a-607f0331039a\") " pod="openstack/glance-db-create-7nthx" Jan 26 13:18:13 crc kubenswrapper[4844]: I0126 13:18:13.017916 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4n485\" (UniqueName: \"kubernetes.io/projected/2654c2cc-3479-4c0c-89e3-26ecfeedb613-kube-api-access-4n485\") pod \"glance-e0ab-account-create-update-d7qtp\" (UID: \"2654c2cc-3479-4c0c-89e3-26ecfeedb613\") " pod="openstack/glance-e0ab-account-create-update-d7qtp" Jan 26 13:18:13 crc kubenswrapper[4844]: I0126 13:18:13.018726 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26c50a55-5ec7-41d8-a69a-607f0331039a-operator-scripts\") pod \"glance-db-create-7nthx\" (UID: \"26c50a55-5ec7-41d8-a69a-607f0331039a\") " pod="openstack/glance-db-create-7nthx" Jan 26 13:18:13 crc kubenswrapper[4844]: I0126 13:18:13.044453 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mg9zw\" (UniqueName: \"kubernetes.io/projected/26c50a55-5ec7-41d8-a69a-607f0331039a-kube-api-access-mg9zw\") pod \"glance-db-create-7nthx\" (UID: \"26c50a55-5ec7-41d8-a69a-607f0331039a\") " pod="openstack/glance-db-create-7nthx" Jan 26 13:18:13 crc kubenswrapper[4844]: I0126 13:18:13.086579 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-4339-account-create-update-lgkll"] Jan 26 13:18:13 crc kubenswrapper[4844]: I0126 13:18:13.087752 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-4339-account-create-update-lgkll" Jan 26 13:18:13 crc kubenswrapper[4844]: I0126 13:18:13.088679 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-7nthx" Jan 26 13:18:13 crc kubenswrapper[4844]: I0126 13:18:13.089345 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 26 13:18:13 crc kubenswrapper[4844]: I0126 13:18:13.094925 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-4339-account-create-update-lgkll"] Jan 26 13:18:13 crc kubenswrapper[4844]: I0126 13:18:13.119106 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nkdsv\" (UniqueName: \"kubernetes.io/projected/db436f05-9b6d-4342-82d0-524c18fe6079-kube-api-access-nkdsv\") pod \"watcher-db-sync-5w9q7\" (UID: \"db436f05-9b6d-4342-82d0-524c18fe6079\") " pod="openstack/watcher-db-sync-5w9q7" Jan 26 13:18:13 crc kubenswrapper[4844]: I0126 13:18:13.119192 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db436f05-9b6d-4342-82d0-524c18fe6079-config-data\") pod \"watcher-db-sync-5w9q7\" (UID: \"db436f05-9b6d-4342-82d0-524c18fe6079\") " pod="openstack/watcher-db-sync-5w9q7" Jan 26 13:18:13 crc kubenswrapper[4844]: I0126 13:18:13.119217 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7ksj\" (UniqueName: \"kubernetes.io/projected/e2eae26a-a2cb-4a25-b77c-021951cf33b3-kube-api-access-k7ksj\") pod \"neutron-db-create-v22sn\" (UID: \"e2eae26a-a2cb-4a25-b77c-021951cf33b3\") " pod="openstack/neutron-db-create-v22sn" Jan 26 13:18:13 crc kubenswrapper[4844]: I0126 13:18:13.119312 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/db436f05-9b6d-4342-82d0-524c18fe6079-db-sync-config-data\") pod \"watcher-db-sync-5w9q7\" (UID: \"db436f05-9b6d-4342-82d0-524c18fe6079\") " pod="openstack/watcher-db-sync-5w9q7" Jan 26 13:18:13 crc kubenswrapper[4844]: I0126 13:18:13.119387 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2eae26a-a2cb-4a25-b77c-021951cf33b3-operator-scripts\") pod \"neutron-db-create-v22sn\" (UID: \"e2eae26a-a2cb-4a25-b77c-021951cf33b3\") " pod="openstack/neutron-db-create-v22sn" Jan 26 13:18:13 crc kubenswrapper[4844]: I0126 13:18:13.119448 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4n485\" (UniqueName: \"kubernetes.io/projected/2654c2cc-3479-4c0c-89e3-26ecfeedb613-kube-api-access-4n485\") pod \"glance-e0ab-account-create-update-d7qtp\" (UID: \"2654c2cc-3479-4c0c-89e3-26ecfeedb613\") " pod="openstack/glance-e0ab-account-create-update-d7qtp" Jan 26 13:18:13 crc kubenswrapper[4844]: I0126 13:18:13.119495 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2654c2cc-3479-4c0c-89e3-26ecfeedb613-operator-scripts\") pod \"glance-e0ab-account-create-update-d7qtp\" (UID: \"2654c2cc-3479-4c0c-89e3-26ecfeedb613\") " pod="openstack/glance-e0ab-account-create-update-d7qtp" Jan 26 13:18:13 crc kubenswrapper[4844]: I0126 13:18:13.119513 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db436f05-9b6d-4342-82d0-524c18fe6079-combined-ca-bundle\") pod \"watcher-db-sync-5w9q7\" (UID: \"db436f05-9b6d-4342-82d0-524c18fe6079\") " 
pod="openstack/watcher-db-sync-5w9q7" Jan 26 13:18:13 crc kubenswrapper[4844]: I0126 13:18:13.123900 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db436f05-9b6d-4342-82d0-524c18fe6079-config-data\") pod \"watcher-db-sync-5w9q7\" (UID: \"db436f05-9b6d-4342-82d0-524c18fe6079\") " pod="openstack/watcher-db-sync-5w9q7" Jan 26 13:18:13 crc kubenswrapper[4844]: I0126 13:18:13.123940 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db436f05-9b6d-4342-82d0-524c18fe6079-combined-ca-bundle\") pod \"watcher-db-sync-5w9q7\" (UID: \"db436f05-9b6d-4342-82d0-524c18fe6079\") " pod="openstack/watcher-db-sync-5w9q7" Jan 26 13:18:13 crc kubenswrapper[4844]: I0126 13:18:13.125100 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2654c2cc-3479-4c0c-89e3-26ecfeedb613-operator-scripts\") pod \"glance-e0ab-account-create-update-d7qtp\" (UID: \"2654c2cc-3479-4c0c-89e3-26ecfeedb613\") " pod="openstack/glance-e0ab-account-create-update-d7qtp" Jan 26 13:18:13 crc kubenswrapper[4844]: I0126 13:18:13.135706 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/db436f05-9b6d-4342-82d0-524c18fe6079-db-sync-config-data\") pod \"watcher-db-sync-5w9q7\" (UID: \"db436f05-9b6d-4342-82d0-524c18fe6079\") " pod="openstack/watcher-db-sync-5w9q7" Jan 26 13:18:13 crc kubenswrapper[4844]: I0126 13:18:13.141466 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nkdsv\" (UniqueName: \"kubernetes.io/projected/db436f05-9b6d-4342-82d0-524c18fe6079-kube-api-access-nkdsv\") pod \"watcher-db-sync-5w9q7\" (UID: \"db436f05-9b6d-4342-82d0-524c18fe6079\") " pod="openstack/watcher-db-sync-5w9q7" Jan 26 13:18:13 crc kubenswrapper[4844]: I0126 13:18:13.143685 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4n485\" (UniqueName: \"kubernetes.io/projected/2654c2cc-3479-4c0c-89e3-26ecfeedb613-kube-api-access-4n485\") pod \"glance-e0ab-account-create-update-d7qtp\" (UID: \"2654c2cc-3479-4c0c-89e3-26ecfeedb613\") " pod="openstack/glance-e0ab-account-create-update-d7qtp" Jan 26 13:18:13 crc kubenswrapper[4844]: I0126 13:18:13.175261 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-db-sync-5w9q7" Jan 26 13:18:13 crc kubenswrapper[4844]: I0126 13:18:13.223226 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2eae26a-a2cb-4a25-b77c-021951cf33b3-operator-scripts\") pod \"neutron-db-create-v22sn\" (UID: \"e2eae26a-a2cb-4a25-b77c-021951cf33b3\") " pod="openstack/neutron-db-create-v22sn" Jan 26 13:18:13 crc kubenswrapper[4844]: I0126 13:18:13.223558 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lc5m\" (UniqueName: \"kubernetes.io/projected/021fb8fd-810b-4042-adfd-6ce50bcacbf0-kube-api-access-5lc5m\") pod \"neutron-4339-account-create-update-lgkll\" (UID: \"021fb8fd-810b-4042-adfd-6ce50bcacbf0\") " pod="openstack/neutron-4339-account-create-update-lgkll" Jan 26 13:18:13 crc kubenswrapper[4844]: I0126 13:18:13.223612 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7ksj\" (UniqueName: \"kubernetes.io/projected/e2eae26a-a2cb-4a25-b77c-021951cf33b3-kube-api-access-k7ksj\") pod \"neutron-db-create-v22sn\" (UID: \"e2eae26a-a2cb-4a25-b77c-021951cf33b3\") " pod="openstack/neutron-db-create-v22sn" Jan 26 13:18:13 crc kubenswrapper[4844]: I0126 13:18:13.223635 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/021fb8fd-810b-4042-adfd-6ce50bcacbf0-operator-scripts\") pod \"neutron-4339-account-create-update-lgkll\" (UID: \"021fb8fd-810b-4042-adfd-6ce50bcacbf0\") " pod="openstack/neutron-4339-account-create-update-lgkll" Jan 26 13:18:13 crc kubenswrapper[4844]: I0126 13:18:13.224167 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2eae26a-a2cb-4a25-b77c-021951cf33b3-operator-scripts\") pod \"neutron-db-create-v22sn\" (UID: \"e2eae26a-a2cb-4a25-b77c-021951cf33b3\") " pod="openstack/neutron-db-create-v22sn" Jan 26 13:18:13 crc kubenswrapper[4844]: I0126 13:18:13.228443 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-e0ab-account-create-update-d7qtp" Jan 26 13:18:13 crc kubenswrapper[4844]: I0126 13:18:13.240045 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7ksj\" (UniqueName: \"kubernetes.io/projected/e2eae26a-a2cb-4a25-b77c-021951cf33b3-kube-api-access-k7ksj\") pod \"neutron-db-create-v22sn\" (UID: \"e2eae26a-a2cb-4a25-b77c-021951cf33b3\") " pod="openstack/neutron-db-create-v22sn" Jan 26 13:18:13 crc kubenswrapper[4844]: I0126 13:18:13.299709 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-v22sn" Jan 26 13:18:13 crc kubenswrapper[4844]: I0126 13:18:13.327097 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/021fb8fd-810b-4042-adfd-6ce50bcacbf0-operator-scripts\") pod \"neutron-4339-account-create-update-lgkll\" (UID: \"021fb8fd-810b-4042-adfd-6ce50bcacbf0\") " pod="openstack/neutron-4339-account-create-update-lgkll" Jan 26 13:18:13 crc kubenswrapper[4844]: I0126 13:18:13.327244 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5lc5m\" (UniqueName: \"kubernetes.io/projected/021fb8fd-810b-4042-adfd-6ce50bcacbf0-kube-api-access-5lc5m\") pod \"neutron-4339-account-create-update-lgkll\" (UID: \"021fb8fd-810b-4042-adfd-6ce50bcacbf0\") " pod="openstack/neutron-4339-account-create-update-lgkll" Jan 26 13:18:13 crc kubenswrapper[4844]: I0126 13:18:13.328172 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/021fb8fd-810b-4042-adfd-6ce50bcacbf0-operator-scripts\") pod \"neutron-4339-account-create-update-lgkll\" (UID: \"021fb8fd-810b-4042-adfd-6ce50bcacbf0\") " pod="openstack/neutron-4339-account-create-update-lgkll" Jan 26 13:18:13 crc kubenswrapper[4844]: I0126 13:18:13.350867 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5lc5m\" (UniqueName: \"kubernetes.io/projected/021fb8fd-810b-4042-adfd-6ce50bcacbf0-kube-api-access-5lc5m\") pod \"neutron-4339-account-create-update-lgkll\" (UID: \"021fb8fd-810b-4042-adfd-6ce50bcacbf0\") " pod="openstack/neutron-4339-account-create-update-lgkll" Jan 26 13:18:13 crc kubenswrapper[4844]: I0126 13:18:13.384269 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-7nthx"] Jan 26 13:18:13 crc kubenswrapper[4844]: I0126 13:18:13.634811 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-4339-account-create-update-lgkll" Jan 26 13:18:13 crc kubenswrapper[4844]: I0126 13:18:13.685505 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-e0ab-account-create-update-d7qtp"] Jan 26 13:18:13 crc kubenswrapper[4844]: I0126 13:18:13.769871 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-sync-5w9q7"] Jan 26 13:18:13 crc kubenswrapper[4844]: I0126 13:18:13.923201 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-v22sn"] Jan 26 13:18:14 crc kubenswrapper[4844]: E0126 13:18:14.014770 4844 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.142:49650->38.102.83.142:35401: write tcp 38.102.83.142:49650->38.102.83.142:35401: write: broken pipe Jan 26 13:18:14 crc kubenswrapper[4844]: I0126 13:18:14.019897 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-5w9q7" event={"ID":"db436f05-9b6d-4342-82d0-524c18fe6079","Type":"ContainerStarted","Data":"708cb1ab377da8806c4a729c7906a563bc846e0c5169aed8f6891cec2ccaada2"} Jan 26 13:18:14 crc kubenswrapper[4844]: I0126 13:18:14.027926 4844 generic.go:334] "Generic (PLEG): container finished" podID="e08e4d13-48d4-434c-a816-b64d161f09be" containerID="f8a1ef6b46b0ad8c3cee0ccb59b771e7bce23387e86d33395e4ff38a1b5c67aa" exitCode=0 Jan 26 13:18:14 crc kubenswrapper[4844]: I0126 13:18:14.027985 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-pgdkm" event={"ID":"e08e4d13-48d4-434c-a816-b64d161f09be","Type":"ContainerDied","Data":"f8a1ef6b46b0ad8c3cee0ccb59b771e7bce23387e86d33395e4ff38a1b5c67aa"} Jan 26 13:18:14 crc kubenswrapper[4844]: I0126 13:18:14.036331 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-v22sn" event={"ID":"e2eae26a-a2cb-4a25-b77c-021951cf33b3","Type":"ContainerStarted","Data":"2d1b5b28b37859846b8ebf1131fa208d2ffc855438322d9dd6bb9c6149d316b6"} Jan 26 13:18:14 crc kubenswrapper[4844]: I0126 13:18:14.058946 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-7nthx" event={"ID":"26c50a55-5ec7-41d8-a69a-607f0331039a","Type":"ContainerStarted","Data":"6017853340d8abcef8840a5f2b6a4e39e10b2e8269431cf428633fe0ebeb52f6"} Jan 26 13:18:14 crc kubenswrapper[4844]: I0126 13:18:14.063118 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-7nthx" event={"ID":"26c50a55-5ec7-41d8-a69a-607f0331039a","Type":"ContainerStarted","Data":"05342b1c2c055bcf00623fb55fab4d71cb16e9fd8ccb099d0682fb3e455e4742"} Jan 26 13:18:14 crc kubenswrapper[4844]: I0126 13:18:14.073337 4844 generic.go:334] "Generic (PLEG): container finished" podID="c0033ca5-7b7d-464e-ba26-a59ca8f226fe" containerID="32dcbb0c5d0ec630a857a852e8c41f505f3ffbfb3033a0261aec48207394718c" exitCode=0 Jan 26 13:18:14 crc kubenswrapper[4844]: I0126 13:18:14.073416 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-81e2-account-create-update-8bfjh" event={"ID":"c0033ca5-7b7d-464e-ba26-a59ca8f226fe","Type":"ContainerDied","Data":"32dcbb0c5d0ec630a857a852e8c41f505f3ffbfb3033a0261aec48207394718c"} Jan 26 13:18:14 crc kubenswrapper[4844]: I0126 13:18:14.076222 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-e0ab-account-create-update-d7qtp" event={"ID":"2654c2cc-3479-4c0c-89e3-26ecfeedb613","Type":"ContainerStarted","Data":"284d132795c468d96b362fb5e87efe1c64b1f7c4020b2ee50a2f4545f7862208"} Jan 26 13:18:14 
crc kubenswrapper[4844]: I0126 13:18:14.076258 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-e0ab-account-create-update-d7qtp" event={"ID":"2654c2cc-3479-4c0c-89e3-26ecfeedb613","Type":"ContainerStarted","Data":"a5abfe1a987c6d46b3ca742ce5491558b14375cc3ff1d2dcec7b12592b4e5d9f"} Jan 26 13:18:14 crc kubenswrapper[4844]: I0126 13:18:14.080412 4844 generic.go:334] "Generic (PLEG): container finished" podID="fbe5f771-2b02-4d1d-93bb-9e59aa3723ad" containerID="6edfe5ab404bbe2c7e6e6c5bf1ae4235bf4c7059fc21e85b47e5e94d611ba096" exitCode=0 Jan 26 13:18:14 crc kubenswrapper[4844]: I0126 13:18:14.080464 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-91b1-account-create-update-5b86b" event={"ID":"fbe5f771-2b02-4d1d-93bb-9e59aa3723ad","Type":"ContainerDied","Data":"6edfe5ab404bbe2c7e6e6c5bf1ae4235bf4c7059fc21e85b47e5e94d611ba096"} Jan 26 13:18:14 crc kubenswrapper[4844]: I0126 13:18:14.109208 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-7nthx" podStartSLOduration=2.109183269 podStartE2EDuration="2.109183269s" podCreationTimestamp="2026-01-26 13:18:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 13:18:14.073017341 +0000 UTC m=+2071.006384953" watchObservedRunningTime="2026-01-26 13:18:14.109183269 +0000 UTC m=+2071.042550901" Jan 26 13:18:14 crc kubenswrapper[4844]: I0126 13:18:14.126907 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-e0ab-account-create-update-d7qtp" podStartSLOduration=2.126889013 podStartE2EDuration="2.126889013s" podCreationTimestamp="2026-01-26 13:18:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 13:18:14.108790479 +0000 UTC m=+2071.042158091" watchObservedRunningTime="2026-01-26 13:18:14.126889013 +0000 UTC m=+2071.060256615" Jan 26 13:18:14 crc kubenswrapper[4844]: I0126 13:18:14.361113 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-4339-account-create-update-lgkll"] Jan 26 13:18:14 crc kubenswrapper[4844]: W0126 13:18:14.369317 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod021fb8fd_810b_4042_adfd_6ce50bcacbf0.slice/crio-d40d52842b68712b7bbbbf8e106e9ded2cd099278ccc00374fb7337dfc00b061 WatchSource:0}: Error finding container d40d52842b68712b7bbbbf8e106e9ded2cd099278ccc00374fb7337dfc00b061: Status 404 returned error can't find the container with id d40d52842b68712b7bbbbf8e106e9ded2cd099278ccc00374fb7337dfc00b061 Jan 26 13:18:14 crc kubenswrapper[4844]: I0126 13:18:14.584915 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-467jd" Jan 26 13:18:14 crc kubenswrapper[4844]: I0126 13:18:14.714535 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnc6h\" (UniqueName: \"kubernetes.io/projected/c21381d1-dcd0-4f0e-a4e1-ef54e9c3cc55-kube-api-access-mnc6h\") pod \"c21381d1-dcd0-4f0e-a4e1-ef54e9c3cc55\" (UID: \"c21381d1-dcd0-4f0e-a4e1-ef54e9c3cc55\") " Jan 26 13:18:14 crc kubenswrapper[4844]: I0126 13:18:14.714584 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c21381d1-dcd0-4f0e-a4e1-ef54e9c3cc55-operator-scripts\") pod \"c21381d1-dcd0-4f0e-a4e1-ef54e9c3cc55\" (UID: \"c21381d1-dcd0-4f0e-a4e1-ef54e9c3cc55\") " Jan 26 13:18:14 crc kubenswrapper[4844]: I0126 13:18:14.715705 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c21381d1-dcd0-4f0e-a4e1-ef54e9c3cc55-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c21381d1-dcd0-4f0e-a4e1-ef54e9c3cc55" (UID: "c21381d1-dcd0-4f0e-a4e1-ef54e9c3cc55"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:18:14 crc kubenswrapper[4844]: I0126 13:18:14.734845 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c21381d1-dcd0-4f0e-a4e1-ef54e9c3cc55-kube-api-access-mnc6h" (OuterVolumeSpecName: "kube-api-access-mnc6h") pod "c21381d1-dcd0-4f0e-a4e1-ef54e9c3cc55" (UID: "c21381d1-dcd0-4f0e-a4e1-ef54e9c3cc55"). InnerVolumeSpecName "kube-api-access-mnc6h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:18:14 crc kubenswrapper[4844]: I0126 13:18:14.817229 4844 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c21381d1-dcd0-4f0e-a4e1-ef54e9c3cc55-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 13:18:14 crc kubenswrapper[4844]: I0126 13:18:14.817272 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnc6h\" (UniqueName: \"kubernetes.io/projected/c21381d1-dcd0-4f0e-a4e1-ef54e9c3cc55-kube-api-access-mnc6h\") on node \"crc\" DevicePath \"\"" Jan 26 13:18:15 crc kubenswrapper[4844]: I0126 13:18:15.116457 4844 generic.go:334] "Generic (PLEG): container finished" podID="2654c2cc-3479-4c0c-89e3-26ecfeedb613" containerID="284d132795c468d96b362fb5e87efe1c64b1f7c4020b2ee50a2f4545f7862208" exitCode=0 Jan 26 13:18:15 crc kubenswrapper[4844]: I0126 13:18:15.116521 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-e0ab-account-create-update-d7qtp" event={"ID":"2654c2cc-3479-4c0c-89e3-26ecfeedb613","Type":"ContainerDied","Data":"284d132795c468d96b362fb5e87efe1c64b1f7c4020b2ee50a2f4545f7862208"} Jan 26 13:18:15 crc kubenswrapper[4844]: I0126 13:18:15.122330 4844 generic.go:334] "Generic (PLEG): container finished" podID="021fb8fd-810b-4042-adfd-6ce50bcacbf0" containerID="de8d0d169b4d697da01169da87bb3a5a63f75c2051cadb542947fec8e02cbfa5" exitCode=0 Jan 26 13:18:15 crc kubenswrapper[4844]: I0126 13:18:15.122421 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-4339-account-create-update-lgkll" event={"ID":"021fb8fd-810b-4042-adfd-6ce50bcacbf0","Type":"ContainerDied","Data":"de8d0d169b4d697da01169da87bb3a5a63f75c2051cadb542947fec8e02cbfa5"} Jan 26 13:18:15 crc kubenswrapper[4844]: I0126 13:18:15.122449 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-4339-account-create-update-lgkll" event={"ID":"021fb8fd-810b-4042-adfd-6ce50bcacbf0","Type":"ContainerStarted","Data":"d40d52842b68712b7bbbbf8e106e9ded2cd099278ccc00374fb7337dfc00b061"} Jan 26 13:18:15 crc kubenswrapper[4844]: I0126 13:18:15.124197 4844 generic.go:334] "Generic (PLEG): container finished" podID="e2eae26a-a2cb-4a25-b77c-021951cf33b3" containerID="0e7de2ceafa9c3a048c8d86a9129c054789c053516a3573e6315e0e7e971482e" exitCode=0 Jan 26 13:18:15 crc kubenswrapper[4844]: I0126 13:18:15.124356 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-v22sn" event={"ID":"e2eae26a-a2cb-4a25-b77c-021951cf33b3","Type":"ContainerDied","Data":"0e7de2ceafa9c3a048c8d86a9129c054789c053516a3573e6315e0e7e971482e"} Jan 26 13:18:15 crc kubenswrapper[4844]: I0126 13:18:15.126493 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-467jd" event={"ID":"c21381d1-dcd0-4f0e-a4e1-ef54e9c3cc55","Type":"ContainerDied","Data":"c7d442738091b778ba50c6a3eb26b7a2db893c63733b9f3633ae9ba11cf06ced"} Jan 26 13:18:15 crc kubenswrapper[4844]: I0126 13:18:15.126533 4844 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c7d442738091b778ba50c6a3eb26b7a2db893c63733b9f3633ae9ba11cf06ced" Jan 26 13:18:15 crc kubenswrapper[4844]: I0126 13:18:15.126510 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-467jd" Jan 26 13:18:15 crc kubenswrapper[4844]: I0126 13:18:15.136305 4844 generic.go:334] "Generic (PLEG): container finished" podID="26c50a55-5ec7-41d8-a69a-607f0331039a" containerID="6017853340d8abcef8840a5f2b6a4e39e10b2e8269431cf428633fe0ebeb52f6" exitCode=0 Jan 26 13:18:15 crc kubenswrapper[4844]: I0126 13:18:15.136433 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-7nthx" event={"ID":"26c50a55-5ec7-41d8-a69a-607f0331039a","Type":"ContainerDied","Data":"6017853340d8abcef8840a5f2b6a4e39e10b2e8269431cf428633fe0ebeb52f6"} Jan 26 13:18:17 crc kubenswrapper[4844]: I0126 13:18:17.956395 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-7nthx" Jan 26 13:18:17 crc kubenswrapper[4844]: I0126 13:18:17.963274 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-91b1-account-create-update-5b86b" Jan 26 13:18:17 crc kubenswrapper[4844]: I0126 13:18:17.969356 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-81e2-account-create-update-8bfjh" Jan 26 13:18:17 crc kubenswrapper[4844]: I0126 13:18:17.988850 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-pgdkm" Jan 26 13:18:18 crc kubenswrapper[4844]: I0126 13:18:18.008859 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-4339-account-create-update-lgkll" Jan 26 13:18:18 crc kubenswrapper[4844]: I0126 13:18:18.022720 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-e0ab-account-create-update-d7qtp" Jan 26 13:18:18 crc kubenswrapper[4844]: I0126 13:18:18.045108 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-v22sn" Jan 26 13:18:18 crc kubenswrapper[4844]: I0126 13:18:18.072752 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8ln45\" (UniqueName: \"kubernetes.io/projected/e08e4d13-48d4-434c-a816-b64d161f09be-kube-api-access-8ln45\") pod \"e08e4d13-48d4-434c-a816-b64d161f09be\" (UID: \"e08e4d13-48d4-434c-a816-b64d161f09be\") " Jan 26 13:18:18 crc kubenswrapper[4844]: I0126 13:18:18.073872 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg9zw\" (UniqueName: \"kubernetes.io/projected/26c50a55-5ec7-41d8-a69a-607f0331039a-kube-api-access-mg9zw\") pod \"26c50a55-5ec7-41d8-a69a-607f0331039a\" (UID: \"26c50a55-5ec7-41d8-a69a-607f0331039a\") " Jan 26 13:18:18 crc kubenswrapper[4844]: I0126 13:18:18.073920 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c0033ca5-7b7d-464e-ba26-a59ca8f226fe-operator-scripts\") pod \"c0033ca5-7b7d-464e-ba26-a59ca8f226fe\" (UID: \"c0033ca5-7b7d-464e-ba26-a59ca8f226fe\") " Jan 26 13:18:18 crc kubenswrapper[4844]: I0126 13:18:18.073948 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e08e4d13-48d4-434c-a816-b64d161f09be-operator-scripts\") pod \"e08e4d13-48d4-434c-a816-b64d161f09be\" (UID: \"e08e4d13-48d4-434c-a816-b64d161f09be\") " Jan 26 13:18:18 crc kubenswrapper[4844]: I0126 13:18:18.073988 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nm8m9\" (UniqueName: \"kubernetes.io/projected/fbe5f771-2b02-4d1d-93bb-9e59aa3723ad-kube-api-access-nm8m9\") pod \"fbe5f771-2b02-4d1d-93bb-9e59aa3723ad\" (UID: \"fbe5f771-2b02-4d1d-93bb-9e59aa3723ad\") " Jan 26 13:18:18 crc kubenswrapper[4844]: I0126 13:18:18.074103 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-clssl\" (UniqueName: \"kubernetes.io/projected/c0033ca5-7b7d-464e-ba26-a59ca8f226fe-kube-api-access-clssl\") pod \"c0033ca5-7b7d-464e-ba26-a59ca8f226fe\" (UID: \"c0033ca5-7b7d-464e-ba26-a59ca8f226fe\") " Jan 26 13:18:18 crc kubenswrapper[4844]: I0126 13:18:18.074139 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fbe5f771-2b02-4d1d-93bb-9e59aa3723ad-operator-scripts\") pod \"fbe5f771-2b02-4d1d-93bb-9e59aa3723ad\" (UID: \"fbe5f771-2b02-4d1d-93bb-9e59aa3723ad\") " Jan 26 13:18:18 crc kubenswrapper[4844]: I0126 13:18:18.074166 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26c50a55-5ec7-41d8-a69a-607f0331039a-operator-scripts\") pod \"26c50a55-5ec7-41d8-a69a-607f0331039a\" (UID: \"26c50a55-5ec7-41d8-a69a-607f0331039a\") " Jan 26 13:18:18 crc kubenswrapper[4844]: I0126 13:18:18.081063 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e08e4d13-48d4-434c-a816-b64d161f09be-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e08e4d13-48d4-434c-a816-b64d161f09be" (UID: "e08e4d13-48d4-434c-a816-b64d161f09be"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:18:18 crc kubenswrapper[4844]: I0126 13:18:18.081426 4844 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e08e4d13-48d4-434c-a816-b64d161f09be-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 13:18:18 crc kubenswrapper[4844]: I0126 13:18:18.081867 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0033ca5-7b7d-464e-ba26-a59ca8f226fe-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c0033ca5-7b7d-464e-ba26-a59ca8f226fe" (UID: "c0033ca5-7b7d-464e-ba26-a59ca8f226fe"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:18:18 crc kubenswrapper[4844]: I0126 13:18:18.083251 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fbe5f771-2b02-4d1d-93bb-9e59aa3723ad-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fbe5f771-2b02-4d1d-93bb-9e59aa3723ad" (UID: "fbe5f771-2b02-4d1d-93bb-9e59aa3723ad"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:18:18 crc kubenswrapper[4844]: I0126 13:18:18.083921 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26c50a55-5ec7-41d8-a69a-607f0331039a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "26c50a55-5ec7-41d8-a69a-607f0331039a" (UID: "26c50a55-5ec7-41d8-a69a-607f0331039a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:18:18 crc kubenswrapper[4844]: I0126 13:18:18.084235 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e08e4d13-48d4-434c-a816-b64d161f09be-kube-api-access-8ln45" (OuterVolumeSpecName: "kube-api-access-8ln45") pod "e08e4d13-48d4-434c-a816-b64d161f09be" (UID: "e08e4d13-48d4-434c-a816-b64d161f09be"). InnerVolumeSpecName "kube-api-access-8ln45". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:18:18 crc kubenswrapper[4844]: I0126 13:18:18.085624 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fbe5f771-2b02-4d1d-93bb-9e59aa3723ad-kube-api-access-nm8m9" (OuterVolumeSpecName: "kube-api-access-nm8m9") pod "fbe5f771-2b02-4d1d-93bb-9e59aa3723ad" (UID: "fbe5f771-2b02-4d1d-93bb-9e59aa3723ad"). InnerVolumeSpecName "kube-api-access-nm8m9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:18:18 crc kubenswrapper[4844]: I0126 13:18:18.086665 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0033ca5-7b7d-464e-ba26-a59ca8f226fe-kube-api-access-clssl" (OuterVolumeSpecName: "kube-api-access-clssl") pod "c0033ca5-7b7d-464e-ba26-a59ca8f226fe" (UID: "c0033ca5-7b7d-464e-ba26-a59ca8f226fe"). InnerVolumeSpecName "kube-api-access-clssl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:18:18 crc kubenswrapper[4844]: I0126 13:18:18.089004 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26c50a55-5ec7-41d8-a69a-607f0331039a-kube-api-access-mg9zw" (OuterVolumeSpecName: "kube-api-access-mg9zw") pod "26c50a55-5ec7-41d8-a69a-607f0331039a" (UID: "26c50a55-5ec7-41d8-a69a-607f0331039a"). InnerVolumeSpecName "kube-api-access-mg9zw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:18:18 crc kubenswrapper[4844]: I0126 13:18:18.166280 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-81e2-account-create-update-8bfjh" event={"ID":"c0033ca5-7b7d-464e-ba26-a59ca8f226fe","Type":"ContainerDied","Data":"29e057467872adbb24a3226ba8058f7afebf8adabb9dde754e31a071521c0f93"} Jan 26 13:18:18 crc kubenswrapper[4844]: I0126 13:18:18.166366 4844 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="29e057467872adbb24a3226ba8058f7afebf8adabb9dde754e31a071521c0f93" Jan 26 13:18:18 crc kubenswrapper[4844]: I0126 13:18:18.166327 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-81e2-account-create-update-8bfjh" Jan 26 13:18:18 crc kubenswrapper[4844]: I0126 13:18:18.167884 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-e0ab-account-create-update-d7qtp" Jan 26 13:18:18 crc kubenswrapper[4844]: I0126 13:18:18.167909 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-e0ab-account-create-update-d7qtp" event={"ID":"2654c2cc-3479-4c0c-89e3-26ecfeedb613","Type":"ContainerDied","Data":"a5abfe1a987c6d46b3ca742ce5491558b14375cc3ff1d2dcec7b12592b4e5d9f"} Jan 26 13:18:18 crc kubenswrapper[4844]: I0126 13:18:18.167935 4844 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a5abfe1a987c6d46b3ca742ce5491558b14375cc3ff1d2dcec7b12592b4e5d9f" Jan 26 13:18:18 crc kubenswrapper[4844]: I0126 13:18:18.169623 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-91b1-account-create-update-5b86b" event={"ID":"fbe5f771-2b02-4d1d-93bb-9e59aa3723ad","Type":"ContainerDied","Data":"1635e04ff0de1da29a47680f6f4b2d0b54ade1fbc330e2726d6486a458f62e8c"} Jan 26 13:18:18 crc kubenswrapper[4844]: I0126 13:18:18.169661 4844 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1635e04ff0de1da29a47680f6f4b2d0b54ade1fbc330e2726d6486a458f62e8c" Jan 26 13:18:18 crc kubenswrapper[4844]: I0126 13:18:18.169683 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-91b1-account-create-update-5b86b" Jan 26 13:18:18 crc kubenswrapper[4844]: I0126 13:18:18.171022 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-4339-account-create-update-lgkll" event={"ID":"021fb8fd-810b-4042-adfd-6ce50bcacbf0","Type":"ContainerDied","Data":"d40d52842b68712b7bbbbf8e106e9ded2cd099278ccc00374fb7337dfc00b061"} Jan 26 13:18:18 crc kubenswrapper[4844]: I0126 13:18:18.171053 4844 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d40d52842b68712b7bbbbf8e106e9ded2cd099278ccc00374fb7337dfc00b061" Jan 26 13:18:18 crc kubenswrapper[4844]: I0126 13:18:18.171028 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-4339-account-create-update-lgkll" Jan 26 13:18:18 crc kubenswrapper[4844]: I0126 13:18:18.173388 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-pgdkm" Jan 26 13:18:18 crc kubenswrapper[4844]: I0126 13:18:18.173460 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-pgdkm" event={"ID":"e08e4d13-48d4-434c-a816-b64d161f09be","Type":"ContainerDied","Data":"244af32b07ad4fc4cc3711e459a4ad0aa81e0d19ecf24581873f867194ce37f3"} Jan 26 13:18:18 crc kubenswrapper[4844]: I0126 13:18:18.173515 4844 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="244af32b07ad4fc4cc3711e459a4ad0aa81e0d19ecf24581873f867194ce37f3" Jan 26 13:18:18 crc kubenswrapper[4844]: I0126 13:18:18.175116 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-v22sn" event={"ID":"e2eae26a-a2cb-4a25-b77c-021951cf33b3","Type":"ContainerDied","Data":"2d1b5b28b37859846b8ebf1131fa208d2ffc855438322d9dd6bb9c6149d316b6"} Jan 26 13:18:18 crc kubenswrapper[4844]: I0126 13:18:18.175144 4844 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2d1b5b28b37859846b8ebf1131fa208d2ffc855438322d9dd6bb9c6149d316b6" Jan 26 13:18:18 crc kubenswrapper[4844]: I0126 13:18:18.175199 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-v22sn" Jan 26 13:18:18 crc kubenswrapper[4844]: I0126 13:18:18.176804 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-7nthx" event={"ID":"26c50a55-5ec7-41d8-a69a-607f0331039a","Type":"ContainerDied","Data":"05342b1c2c055bcf00623fb55fab4d71cb16e9fd8ccb099d0682fb3e455e4742"} Jan 26 13:18:18 crc kubenswrapper[4844]: I0126 13:18:18.176835 4844 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="05342b1c2c055bcf00623fb55fab4d71cb16e9fd8ccb099d0682fb3e455e4742" Jan 26 13:18:18 crc kubenswrapper[4844]: I0126 13:18:18.176887 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-7nthx" Jan 26 13:18:18 crc kubenswrapper[4844]: I0126 13:18:18.182006 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k7ksj\" (UniqueName: \"kubernetes.io/projected/e2eae26a-a2cb-4a25-b77c-021951cf33b3-kube-api-access-k7ksj\") pod \"e2eae26a-a2cb-4a25-b77c-021951cf33b3\" (UID: \"e2eae26a-a2cb-4a25-b77c-021951cf33b3\") " Jan 26 13:18:18 crc kubenswrapper[4844]: I0126 13:18:18.182130 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2eae26a-a2cb-4a25-b77c-021951cf33b3-operator-scripts\") pod \"e2eae26a-a2cb-4a25-b77c-021951cf33b3\" (UID: \"e2eae26a-a2cb-4a25-b77c-021951cf33b3\") " Jan 26 13:18:18 crc kubenswrapper[4844]: I0126 13:18:18.182161 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4n485\" (UniqueName: \"kubernetes.io/projected/2654c2cc-3479-4c0c-89e3-26ecfeedb613-kube-api-access-4n485\") pod \"2654c2cc-3479-4c0c-89e3-26ecfeedb613\" (UID: \"2654c2cc-3479-4c0c-89e3-26ecfeedb613\") " Jan 26 13:18:18 crc kubenswrapper[4844]: I0126 13:18:18.182217 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/021fb8fd-810b-4042-adfd-6ce50bcacbf0-operator-scripts\") pod \"021fb8fd-810b-4042-adfd-6ce50bcacbf0\" (UID: \"021fb8fd-810b-4042-adfd-6ce50bcacbf0\") " Jan 26 13:18:18 crc kubenswrapper[4844]: I0126 13:18:18.182280 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2654c2cc-3479-4c0c-89e3-26ecfeedb613-operator-scripts\") pod \"2654c2cc-3479-4c0c-89e3-26ecfeedb613\" (UID: \"2654c2cc-3479-4c0c-89e3-26ecfeedb613\") " Jan 26 13:18:18 crc kubenswrapper[4844]: I0126 13:18:18.182324 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5lc5m\" (UniqueName: \"kubernetes.io/projected/021fb8fd-810b-4042-adfd-6ce50bcacbf0-kube-api-access-5lc5m\") pod \"021fb8fd-810b-4042-adfd-6ce50bcacbf0\" (UID: \"021fb8fd-810b-4042-adfd-6ce50bcacbf0\") " Jan 26 13:18:18 crc kubenswrapper[4844]: I0126 13:18:18.182692 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-clssl\" (UniqueName: \"kubernetes.io/projected/c0033ca5-7b7d-464e-ba26-a59ca8f226fe-kube-api-access-clssl\") on node \"crc\" DevicePath \"\"" Jan 26 13:18:18 crc kubenswrapper[4844]: I0126 13:18:18.182712 4844 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fbe5f771-2b02-4d1d-93bb-9e59aa3723ad-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 13:18:18 crc kubenswrapper[4844]: I0126 13:18:18.182721 4844 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26c50a55-5ec7-41d8-a69a-607f0331039a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 13:18:18 crc kubenswrapper[4844]: I0126 13:18:18.182731 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8ln45\" (UniqueName: \"kubernetes.io/projected/e08e4d13-48d4-434c-a816-b64d161f09be-kube-api-access-8ln45\") on node \"crc\" DevicePath \"\"" Jan 26 13:18:18 crc kubenswrapper[4844]: I0126 13:18:18.182740 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg9zw\" (UniqueName: 
\"kubernetes.io/projected/26c50a55-5ec7-41d8-a69a-607f0331039a-kube-api-access-mg9zw\") on node \"crc\" DevicePath \"\"" Jan 26 13:18:18 crc kubenswrapper[4844]: I0126 13:18:18.182750 4844 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c0033ca5-7b7d-464e-ba26-a59ca8f226fe-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 13:18:18 crc kubenswrapper[4844]: I0126 13:18:18.182758 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nm8m9\" (UniqueName: \"kubernetes.io/projected/fbe5f771-2b02-4d1d-93bb-9e59aa3723ad-kube-api-access-nm8m9\") on node \"crc\" DevicePath \"\"" Jan 26 13:18:18 crc kubenswrapper[4844]: I0126 13:18:18.183107 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/021fb8fd-810b-4042-adfd-6ce50bcacbf0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "021fb8fd-810b-4042-adfd-6ce50bcacbf0" (UID: "021fb8fd-810b-4042-adfd-6ce50bcacbf0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:18:18 crc kubenswrapper[4844]: I0126 13:18:18.183448 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2654c2cc-3479-4c0c-89e3-26ecfeedb613-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2654c2cc-3479-4c0c-89e3-26ecfeedb613" (UID: "2654c2cc-3479-4c0c-89e3-26ecfeedb613"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:18:18 crc kubenswrapper[4844]: I0126 13:18:18.183926 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2eae26a-a2cb-4a25-b77c-021951cf33b3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e2eae26a-a2cb-4a25-b77c-021951cf33b3" (UID: "e2eae26a-a2cb-4a25-b77c-021951cf33b3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:18:18 crc kubenswrapper[4844]: I0126 13:18:18.186813 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2654c2cc-3479-4c0c-89e3-26ecfeedb613-kube-api-access-4n485" (OuterVolumeSpecName: "kube-api-access-4n485") pod "2654c2cc-3479-4c0c-89e3-26ecfeedb613" (UID: "2654c2cc-3479-4c0c-89e3-26ecfeedb613"). InnerVolumeSpecName "kube-api-access-4n485". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:18:18 crc kubenswrapper[4844]: I0126 13:18:18.186888 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2eae26a-a2cb-4a25-b77c-021951cf33b3-kube-api-access-k7ksj" (OuterVolumeSpecName: "kube-api-access-k7ksj") pod "e2eae26a-a2cb-4a25-b77c-021951cf33b3" (UID: "e2eae26a-a2cb-4a25-b77c-021951cf33b3"). InnerVolumeSpecName "kube-api-access-k7ksj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:18:18 crc kubenswrapper[4844]: I0126 13:18:18.188201 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/021fb8fd-810b-4042-adfd-6ce50bcacbf0-kube-api-access-5lc5m" (OuterVolumeSpecName: "kube-api-access-5lc5m") pod "021fb8fd-810b-4042-adfd-6ce50bcacbf0" (UID: "021fb8fd-810b-4042-adfd-6ce50bcacbf0"). InnerVolumeSpecName "kube-api-access-5lc5m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:18:18 crc kubenswrapper[4844]: I0126 13:18:18.284800 4844 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/021fb8fd-810b-4042-adfd-6ce50bcacbf0-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 13:18:18 crc kubenswrapper[4844]: I0126 13:18:18.284832 4844 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2654c2cc-3479-4c0c-89e3-26ecfeedb613-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 13:18:18 crc kubenswrapper[4844]: I0126 13:18:18.284842 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5lc5m\" (UniqueName: \"kubernetes.io/projected/021fb8fd-810b-4042-adfd-6ce50bcacbf0-kube-api-access-5lc5m\") on node \"crc\" DevicePath \"\"" Jan 26 13:18:18 crc kubenswrapper[4844]: I0126 13:18:18.284854 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k7ksj\" (UniqueName: \"kubernetes.io/projected/e2eae26a-a2cb-4a25-b77c-021951cf33b3-kube-api-access-k7ksj\") on node \"crc\" DevicePath \"\"" Jan 26 13:18:18 crc kubenswrapper[4844]: I0126 13:18:18.284862 4844 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2eae26a-a2cb-4a25-b77c-021951cf33b3-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 13:18:18 crc kubenswrapper[4844]: I0126 13:18:18.284870 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4n485\" (UniqueName: \"kubernetes.io/projected/2654c2cc-3479-4c0c-89e3-26ecfeedb613-kube-api-access-4n485\") on node \"crc\" DevicePath \"\"" Jan 26 13:18:23 crc kubenswrapper[4844]: I0126 13:18:23.127516 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-9jq8s"] Jan 26 13:18:23 crc kubenswrapper[4844]: E0126 13:18:23.128482 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26c50a55-5ec7-41d8-a69a-607f0331039a" containerName="mariadb-database-create" Jan 26 13:18:23 crc kubenswrapper[4844]: I0126 13:18:23.128498 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="26c50a55-5ec7-41d8-a69a-607f0331039a" containerName="mariadb-database-create" Jan 26 13:18:23 crc kubenswrapper[4844]: E0126 13:18:23.128510 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c21381d1-dcd0-4f0e-a4e1-ef54e9c3cc55" containerName="mariadb-database-create" Jan 26 13:18:23 crc kubenswrapper[4844]: I0126 13:18:23.128519 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="c21381d1-dcd0-4f0e-a4e1-ef54e9c3cc55" containerName="mariadb-database-create" Jan 26 13:18:23 crc kubenswrapper[4844]: E0126 13:18:23.128532 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2eae26a-a2cb-4a25-b77c-021951cf33b3" containerName="mariadb-database-create" Jan 26 13:18:23 crc kubenswrapper[4844]: I0126 13:18:23.128540 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2eae26a-a2cb-4a25-b77c-021951cf33b3" containerName="mariadb-database-create" Jan 26 13:18:23 crc kubenswrapper[4844]: E0126 13:18:23.128554 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbe5f771-2b02-4d1d-93bb-9e59aa3723ad" containerName="mariadb-account-create-update" Jan 26 13:18:23 crc kubenswrapper[4844]: I0126 13:18:23.128563 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbe5f771-2b02-4d1d-93bb-9e59aa3723ad" containerName="mariadb-account-create-update" Jan 26 13:18:23 crc 
kubenswrapper[4844]: E0126 13:18:23.128609 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2654c2cc-3479-4c0c-89e3-26ecfeedb613" containerName="mariadb-account-create-update" Jan 26 13:18:23 crc kubenswrapper[4844]: I0126 13:18:23.128618 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="2654c2cc-3479-4c0c-89e3-26ecfeedb613" containerName="mariadb-account-create-update" Jan 26 13:18:23 crc kubenswrapper[4844]: E0126 13:18:23.128630 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="021fb8fd-810b-4042-adfd-6ce50bcacbf0" containerName="mariadb-account-create-update" Jan 26 13:18:23 crc kubenswrapper[4844]: I0126 13:18:23.128639 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="021fb8fd-810b-4042-adfd-6ce50bcacbf0" containerName="mariadb-account-create-update" Jan 26 13:18:23 crc kubenswrapper[4844]: E0126 13:18:23.128648 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e08e4d13-48d4-434c-a816-b64d161f09be" containerName="mariadb-database-create" Jan 26 13:18:23 crc kubenswrapper[4844]: I0126 13:18:23.128656 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="e08e4d13-48d4-434c-a816-b64d161f09be" containerName="mariadb-database-create" Jan 26 13:18:23 crc kubenswrapper[4844]: E0126 13:18:23.128672 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0033ca5-7b7d-464e-ba26-a59ca8f226fe" containerName="mariadb-account-create-update" Jan 26 13:18:23 crc kubenswrapper[4844]: I0126 13:18:23.128681 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0033ca5-7b7d-464e-ba26-a59ca8f226fe" containerName="mariadb-account-create-update" Jan 26 13:18:23 crc kubenswrapper[4844]: I0126 13:18:23.128945 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2eae26a-a2cb-4a25-b77c-021951cf33b3" containerName="mariadb-database-create" Jan 26 13:18:23 crc kubenswrapper[4844]: I0126 13:18:23.128984 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="021fb8fd-810b-4042-adfd-6ce50bcacbf0" containerName="mariadb-account-create-update" Jan 26 13:18:23 crc kubenswrapper[4844]: I0126 13:18:23.129016 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="26c50a55-5ec7-41d8-a69a-607f0331039a" containerName="mariadb-database-create" Jan 26 13:18:23 crc kubenswrapper[4844]: I0126 13:18:23.129040 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="c21381d1-dcd0-4f0e-a4e1-ef54e9c3cc55" containerName="mariadb-database-create" Jan 26 13:18:23 crc kubenswrapper[4844]: I0126 13:18:23.129062 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="2654c2cc-3479-4c0c-89e3-26ecfeedb613" containerName="mariadb-account-create-update" Jan 26 13:18:23 crc kubenswrapper[4844]: I0126 13:18:23.129080 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="e08e4d13-48d4-434c-a816-b64d161f09be" containerName="mariadb-database-create" Jan 26 13:18:23 crc kubenswrapper[4844]: I0126 13:18:23.129112 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbe5f771-2b02-4d1d-93bb-9e59aa3723ad" containerName="mariadb-account-create-update" Jan 26 13:18:23 crc kubenswrapper[4844]: I0126 13:18:23.129150 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0033ca5-7b7d-464e-ba26-a59ca8f226fe" containerName="mariadb-account-create-update" Jan 26 13:18:23 crc kubenswrapper[4844]: I0126 13:18:23.129910 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-9jq8s" Jan 26 13:18:23 crc kubenswrapper[4844]: I0126 13:18:23.133136 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-5tdcs" Jan 26 13:18:23 crc kubenswrapper[4844]: I0126 13:18:23.133181 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 26 13:18:23 crc kubenswrapper[4844]: I0126 13:18:23.139507 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-9jq8s"] Jan 26 13:18:23 crc kubenswrapper[4844]: I0126 13:18:23.275225 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce0ed764-c6f0-4580-89dd-4f6826df258d-combined-ca-bundle\") pod \"glance-db-sync-9jq8s\" (UID: \"ce0ed764-c6f0-4580-89dd-4f6826df258d\") " pod="openstack/glance-db-sync-9jq8s" Jan 26 13:18:23 crc kubenswrapper[4844]: I0126 13:18:23.275395 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xg54t\" (UniqueName: \"kubernetes.io/projected/ce0ed764-c6f0-4580-89dd-4f6826df258d-kube-api-access-xg54t\") pod \"glance-db-sync-9jq8s\" (UID: \"ce0ed764-c6f0-4580-89dd-4f6826df258d\") " pod="openstack/glance-db-sync-9jq8s" Jan 26 13:18:23 crc kubenswrapper[4844]: I0126 13:18:23.275480 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ce0ed764-c6f0-4580-89dd-4f6826df258d-db-sync-config-data\") pod \"glance-db-sync-9jq8s\" (UID: \"ce0ed764-c6f0-4580-89dd-4f6826df258d\") " pod="openstack/glance-db-sync-9jq8s" Jan 26 13:18:23 crc kubenswrapper[4844]: I0126 13:18:23.275546 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce0ed764-c6f0-4580-89dd-4f6826df258d-config-data\") pod \"glance-db-sync-9jq8s\" (UID: \"ce0ed764-c6f0-4580-89dd-4f6826df258d\") " pod="openstack/glance-db-sync-9jq8s" Jan 26 13:18:23 crc kubenswrapper[4844]: I0126 13:18:23.377879 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce0ed764-c6f0-4580-89dd-4f6826df258d-combined-ca-bundle\") pod \"glance-db-sync-9jq8s\" (UID: \"ce0ed764-c6f0-4580-89dd-4f6826df258d\") " pod="openstack/glance-db-sync-9jq8s" Jan 26 13:18:23 crc kubenswrapper[4844]: I0126 13:18:23.377950 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xg54t\" (UniqueName: \"kubernetes.io/projected/ce0ed764-c6f0-4580-89dd-4f6826df258d-kube-api-access-xg54t\") pod \"glance-db-sync-9jq8s\" (UID: \"ce0ed764-c6f0-4580-89dd-4f6826df258d\") " pod="openstack/glance-db-sync-9jq8s" Jan 26 13:18:23 crc kubenswrapper[4844]: I0126 13:18:23.377980 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ce0ed764-c6f0-4580-89dd-4f6826df258d-db-sync-config-data\") pod \"glance-db-sync-9jq8s\" (UID: \"ce0ed764-c6f0-4580-89dd-4f6826df258d\") " pod="openstack/glance-db-sync-9jq8s" Jan 26 13:18:23 crc kubenswrapper[4844]: I0126 13:18:23.378010 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce0ed764-c6f0-4580-89dd-4f6826df258d-config-data\") pod 
\"glance-db-sync-9jq8s\" (UID: \"ce0ed764-c6f0-4580-89dd-4f6826df258d\") " pod="openstack/glance-db-sync-9jq8s" Jan 26 13:18:23 crc kubenswrapper[4844]: I0126 13:18:23.384447 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ce0ed764-c6f0-4580-89dd-4f6826df258d-db-sync-config-data\") pod \"glance-db-sync-9jq8s\" (UID: \"ce0ed764-c6f0-4580-89dd-4f6826df258d\") " pod="openstack/glance-db-sync-9jq8s" Jan 26 13:18:23 crc kubenswrapper[4844]: I0126 13:18:23.385396 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce0ed764-c6f0-4580-89dd-4f6826df258d-config-data\") pod \"glance-db-sync-9jq8s\" (UID: \"ce0ed764-c6f0-4580-89dd-4f6826df258d\") " pod="openstack/glance-db-sync-9jq8s" Jan 26 13:18:23 crc kubenswrapper[4844]: I0126 13:18:23.386684 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce0ed764-c6f0-4580-89dd-4f6826df258d-combined-ca-bundle\") pod \"glance-db-sync-9jq8s\" (UID: \"ce0ed764-c6f0-4580-89dd-4f6826df258d\") " pod="openstack/glance-db-sync-9jq8s" Jan 26 13:18:23 crc kubenswrapper[4844]: I0126 13:18:23.399443 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xg54t\" (UniqueName: \"kubernetes.io/projected/ce0ed764-c6f0-4580-89dd-4f6826df258d-kube-api-access-xg54t\") pod \"glance-db-sync-9jq8s\" (UID: \"ce0ed764-c6f0-4580-89dd-4f6826df258d\") " pod="openstack/glance-db-sync-9jq8s" Jan 26 13:18:23 crc kubenswrapper[4844]: I0126 13:18:23.450529 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-9jq8s" Jan 26 13:18:27 crc kubenswrapper[4844]: E0126 13:18:27.993271 4844 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.9:5001/podified-master-centos10/openstack-watcher-api:watcher_latest" Jan 26 13:18:27 crc kubenswrapper[4844]: E0126 13:18:27.993650 4844 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.9:5001/podified-master-centos10/openstack-watcher-api:watcher_latest" Jan 26 13:18:27 crc kubenswrapper[4844]: E0126 13:18:27.993777 4844 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:watcher-db-sync,Image:38.102.83.9:5001/podified-master-centos10/openstack-watcher-api:watcher_latest,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/watcher/watcher.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:watcher-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nkdsv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-db-sync-5w9q7_openstack(db436f05-9b6d-4342-82d0-524c18fe6079): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 13:18:27 crc kubenswrapper[4844]: E0126 13:18:27.996139 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/watcher-db-sync-5w9q7" podUID="db436f05-9b6d-4342-82d0-524c18fe6079" Jan 26 13:18:28 crc kubenswrapper[4844]: E0126 13:18:28.267040 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.9:5001/podified-master-centos10/openstack-watcher-api:watcher_latest\\\"\"" pod="openstack/watcher-db-sync-5w9q7" podUID="db436f05-9b6d-4342-82d0-524c18fe6079" Jan 26 13:18:28 crc kubenswrapper[4844]: I0126 13:18:28.547871 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-9jq8s"] Jan 26 13:18:28 crc kubenswrapper[4844]: W0126 13:18:28.548520 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podce0ed764_c6f0_4580_89dd_4f6826df258d.slice/crio-c71c1d5dd7cc2a0189d7a738b3f5cf92ab74c0e569b5ae8130fd66cef0e77048 WatchSource:0}: Error finding container c71c1d5dd7cc2a0189d7a738b3f5cf92ab74c0e569b5ae8130fd66cef0e77048: Status 404 returned error can't find the container with id 
c71c1d5dd7cc2a0189d7a738b3f5cf92ab74c0e569b5ae8130fd66cef0e77048 Jan 26 13:18:29 crc kubenswrapper[4844]: I0126 13:18:29.274974 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-td22t" event={"ID":"0ca9f483-dabf-40a9-be25-312db82ffd23","Type":"ContainerStarted","Data":"435a540d0a169e47db4e9ee371b75ef04e541d1c3989937dc65c7d0d5c99f2fb"} Jan 26 13:18:29 crc kubenswrapper[4844]: I0126 13:18:29.277515 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-9jq8s" event={"ID":"ce0ed764-c6f0-4580-89dd-4f6826df258d","Type":"ContainerStarted","Data":"c71c1d5dd7cc2a0189d7a738b3f5cf92ab74c0e569b5ae8130fd66cef0e77048"} Jan 26 13:18:29 crc kubenswrapper[4844]: I0126 13:18:29.291229 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-td22t" podStartSLOduration=2.433341465 podStartE2EDuration="19.29120751s" podCreationTimestamp="2026-01-26 13:18:10 +0000 UTC" firstStartedPulling="2026-01-26 13:18:11.115389102 +0000 UTC m=+2068.048756714" lastFinishedPulling="2026-01-26 13:18:27.973255147 +0000 UTC m=+2084.906622759" observedRunningTime="2026-01-26 13:18:29.289338434 +0000 UTC m=+2086.222706066" watchObservedRunningTime="2026-01-26 13:18:29.29120751 +0000 UTC m=+2086.224575122" Jan 26 13:18:37 crc kubenswrapper[4844]: I0126 13:18:37.349081 4844 generic.go:334] "Generic (PLEG): container finished" podID="0ca9f483-dabf-40a9-be25-312db82ffd23" containerID="435a540d0a169e47db4e9ee371b75ef04e541d1c3989937dc65c7d0d5c99f2fb" exitCode=0 Jan 26 13:18:37 crc kubenswrapper[4844]: I0126 13:18:37.349159 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-td22t" event={"ID":"0ca9f483-dabf-40a9-be25-312db82ffd23","Type":"ContainerDied","Data":"435a540d0a169e47db4e9ee371b75ef04e541d1c3989937dc65c7d0d5c99f2fb"} Jan 26 13:18:45 crc kubenswrapper[4844]: E0126 13:18:45.152448 4844 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.9:5001/podified-master-centos10/openstack-glance-api:watcher_latest" Jan 26 13:18:45 crc kubenswrapper[4844]: E0126 13:18:45.152886 4844 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.9:5001/podified-master-centos10/openstack-glance-api:watcher_latest" Jan 26 13:18:45 crc kubenswrapper[4844]: E0126 13:18:45.152998 4844 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:38.102.83.9:5001/podified-master-centos10/openstack-glance-api:watcher_latest,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xg54t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-9jq8s_openstack(ce0ed764-c6f0-4580-89dd-4f6826df258d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 13:18:45 crc kubenswrapper[4844]: E0126 13:18:45.154478 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-9jq8s" podUID="ce0ed764-c6f0-4580-89dd-4f6826df258d" Jan 26 13:18:45 crc kubenswrapper[4844]: I0126 13:18:45.185806 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-td22t" Jan 26 13:18:45 crc kubenswrapper[4844]: I0126 13:18:45.267696 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ca9f483-dabf-40a9-be25-312db82ffd23-config-data\") pod \"0ca9f483-dabf-40a9-be25-312db82ffd23\" (UID: \"0ca9f483-dabf-40a9-be25-312db82ffd23\") " Jan 26 13:18:45 crc kubenswrapper[4844]: I0126 13:18:45.267798 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ca9f483-dabf-40a9-be25-312db82ffd23-combined-ca-bundle\") pod \"0ca9f483-dabf-40a9-be25-312db82ffd23\" (UID: \"0ca9f483-dabf-40a9-be25-312db82ffd23\") " Jan 26 13:18:45 crc kubenswrapper[4844]: I0126 13:18:45.267945 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z6k75\" (UniqueName: \"kubernetes.io/projected/0ca9f483-dabf-40a9-be25-312db82ffd23-kube-api-access-z6k75\") pod \"0ca9f483-dabf-40a9-be25-312db82ffd23\" (UID: \"0ca9f483-dabf-40a9-be25-312db82ffd23\") " Jan 26 13:18:45 crc kubenswrapper[4844]: I0126 13:18:45.275622 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ca9f483-dabf-40a9-be25-312db82ffd23-kube-api-access-z6k75" (OuterVolumeSpecName: "kube-api-access-z6k75") pod "0ca9f483-dabf-40a9-be25-312db82ffd23" (UID: "0ca9f483-dabf-40a9-be25-312db82ffd23"). InnerVolumeSpecName "kube-api-access-z6k75". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:18:45 crc kubenswrapper[4844]: I0126 13:18:45.297005 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ca9f483-dabf-40a9-be25-312db82ffd23-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0ca9f483-dabf-40a9-be25-312db82ffd23" (UID: "0ca9f483-dabf-40a9-be25-312db82ffd23"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:18:45 crc kubenswrapper[4844]: I0126 13:18:45.327443 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ca9f483-dabf-40a9-be25-312db82ffd23-config-data" (OuterVolumeSpecName: "config-data") pod "0ca9f483-dabf-40a9-be25-312db82ffd23" (UID: "0ca9f483-dabf-40a9-be25-312db82ffd23"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:18:45 crc kubenswrapper[4844]: I0126 13:18:45.370084 4844 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ca9f483-dabf-40a9-be25-312db82ffd23-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 13:18:45 crc kubenswrapper[4844]: I0126 13:18:45.370134 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z6k75\" (UniqueName: \"kubernetes.io/projected/0ca9f483-dabf-40a9-be25-312db82ffd23-kube-api-access-z6k75\") on node \"crc\" DevicePath \"\"" Jan 26 13:18:45 crc kubenswrapper[4844]: I0126 13:18:45.370150 4844 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ca9f483-dabf-40a9-be25-312db82ffd23-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 13:18:45 crc kubenswrapper[4844]: I0126 13:18:45.434540 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-td22t" event={"ID":"0ca9f483-dabf-40a9-be25-312db82ffd23","Type":"ContainerDied","Data":"b7a3c93966632e1570f2c1effbe096e1861f78871879ad6fa694256db0228608"} Jan 26 13:18:45 crc kubenswrapper[4844]: I0126 13:18:45.434632 4844 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b7a3c93966632e1570f2c1effbe096e1861f78871879ad6fa694256db0228608" Jan 26 13:18:45 crc kubenswrapper[4844]: I0126 13:18:45.434562 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-td22t" Jan 26 13:18:45 crc kubenswrapper[4844]: E0126 13:18:45.879091 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.9:5001/podified-master-centos10/openstack-glance-api:watcher_latest\\\"\"" pod="openstack/glance-db-sync-9jq8s" podUID="ce0ed764-c6f0-4580-89dd-4f6826df258d" Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.485445 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-7j972"] Jan 26 13:18:46 crc kubenswrapper[4844]: E0126 13:18:46.486278 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ca9f483-dabf-40a9-be25-312db82ffd23" containerName="keystone-db-sync" Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.486296 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ca9f483-dabf-40a9-be25-312db82ffd23" containerName="keystone-db-sync" Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.486551 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ca9f483-dabf-40a9-be25-312db82ffd23" containerName="keystone-db-sync" Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.487974 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-7j972" Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.492709 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.492928 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.493077 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-l6kd4" Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.493274 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-5w9q7" event={"ID":"db436f05-9b6d-4342-82d0-524c18fe6079","Type":"ContainerStarted","Data":"b01bde1b77e6b4012bd36c236ff5cf164902b763ff25a61357efefa4c71f214c"} Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.494295 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.494463 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.504035 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-58c6955b5f-f26sc"] Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.505923 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58c6955b5f-f26sc" Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.529111 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-7j972"] Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.544204 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58c6955b5f-f26sc"] Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.578577 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-db-sync-5w9q7" podStartSLOduration=2.508544339 podStartE2EDuration="34.578560823s" podCreationTimestamp="2026-01-26 13:18:12 +0000 UTC" firstStartedPulling="2026-01-26 13:18:13.811798574 +0000 UTC m=+2070.745166186" lastFinishedPulling="2026-01-26 13:18:45.881815048 +0000 UTC m=+2102.815182670" observedRunningTime="2026-01-26 13:18:46.544063129 +0000 UTC m=+2103.477430761" watchObservedRunningTime="2026-01-26 13:18:46.578560823 +0000 UTC m=+2103.511928435" Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.597895 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78gmn\" (UniqueName: \"kubernetes.io/projected/cbd86931-9c64-42e8-911a-f0a8044098c4-kube-api-access-78gmn\") pod \"keystone-bootstrap-7j972\" (UID: \"cbd86931-9c64-42e8-911a-f0a8044098c4\") " pod="openstack/keystone-bootstrap-7j972" Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.597944 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cbd86931-9c64-42e8-911a-f0a8044098c4-scripts\") pod \"keystone-bootstrap-7j972\" (UID: \"cbd86931-9c64-42e8-911a-f0a8044098c4\") " pod="openstack/keystone-bootstrap-7j972" Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.597970 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/cbd86931-9c64-42e8-911a-f0a8044098c4-credential-keys\") pod 
\"keystone-bootstrap-7j972\" (UID: \"cbd86931-9c64-42e8-911a-f0a8044098c4\") " pod="openstack/keystone-bootstrap-7j972" Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.597989 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbd86931-9c64-42e8-911a-f0a8044098c4-combined-ca-bundle\") pod \"keystone-bootstrap-7j972\" (UID: \"cbd86931-9c64-42e8-911a-f0a8044098c4\") " pod="openstack/keystone-bootstrap-7j972" Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.598013 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d18e836f-e7f3-4fb2-b0a7-9b4811172675-dns-swift-storage-0\") pod \"dnsmasq-dns-58c6955b5f-f26sc\" (UID: \"d18e836f-e7f3-4fb2-b0a7-9b4811172675\") " pod="openstack/dnsmasq-dns-58c6955b5f-f26sc" Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.598047 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/cbd86931-9c64-42e8-911a-f0a8044098c4-fernet-keys\") pod \"keystone-bootstrap-7j972\" (UID: \"cbd86931-9c64-42e8-911a-f0a8044098c4\") " pod="openstack/keystone-bootstrap-7j972" Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.598064 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d18e836f-e7f3-4fb2-b0a7-9b4811172675-ovsdbserver-nb\") pod \"dnsmasq-dns-58c6955b5f-f26sc\" (UID: \"d18e836f-e7f3-4fb2-b0a7-9b4811172675\") " pod="openstack/dnsmasq-dns-58c6955b5f-f26sc" Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.598097 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d18e836f-e7f3-4fb2-b0a7-9b4811172675-config\") pod \"dnsmasq-dns-58c6955b5f-f26sc\" (UID: \"d18e836f-e7f3-4fb2-b0a7-9b4811172675\") " pod="openstack/dnsmasq-dns-58c6955b5f-f26sc" Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.598124 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d18e836f-e7f3-4fb2-b0a7-9b4811172675-dns-svc\") pod \"dnsmasq-dns-58c6955b5f-f26sc\" (UID: \"d18e836f-e7f3-4fb2-b0a7-9b4811172675\") " pod="openstack/dnsmasq-dns-58c6955b5f-f26sc" Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.598142 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d18e836f-e7f3-4fb2-b0a7-9b4811172675-ovsdbserver-sb\") pod \"dnsmasq-dns-58c6955b5f-f26sc\" (UID: \"d18e836f-e7f3-4fb2-b0a7-9b4811172675\") " pod="openstack/dnsmasq-dns-58c6955b5f-f26sc" Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.598215 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxnh5\" (UniqueName: \"kubernetes.io/projected/d18e836f-e7f3-4fb2-b0a7-9b4811172675-kube-api-access-fxnh5\") pod \"dnsmasq-dns-58c6955b5f-f26sc\" (UID: \"d18e836f-e7f3-4fb2-b0a7-9b4811172675\") " pod="openstack/dnsmasq-dns-58c6955b5f-f26sc" Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.598233 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/cbd86931-9c64-42e8-911a-f0a8044098c4-config-data\") pod \"keystone-bootstrap-7j972\" (UID: \"cbd86931-9c64-42e8-911a-f0a8044098c4\") " pod="openstack/keystone-bootstrap-7j972" Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.638219 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7969695f59-rzz64"] Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.644685 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7969695f59-rzz64" Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.648105 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.648302 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.648421 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.648545 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-jspjr" Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.661088 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-dcfgm"] Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.662462 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-dcfgm" Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.675215 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.675625 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.675855 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-swxz2" Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.677319 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7969695f59-rzz64"] Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.684242 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-dcfgm"] Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.700090 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4w9f\" (UniqueName: \"kubernetes.io/projected/1979816f-0e1c-427a-b6aa-97b147a4c622-kube-api-access-s4w9f\") pod \"horizon-7969695f59-rzz64\" (UID: \"1979816f-0e1c-427a-b6aa-97b147a4c622\") " pod="openstack/horizon-7969695f59-rzz64" Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.700354 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78gmn\" (UniqueName: \"kubernetes.io/projected/cbd86931-9c64-42e8-911a-f0a8044098c4-kube-api-access-78gmn\") pod \"keystone-bootstrap-7j972\" (UID: \"cbd86931-9c64-42e8-911a-f0a8044098c4\") " pod="openstack/keystone-bootstrap-7j972" Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.700433 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1979816f-0e1c-427a-b6aa-97b147a4c622-logs\") pod \"horizon-7969695f59-rzz64\" (UID: \"1979816f-0e1c-427a-b6aa-97b147a4c622\") " pod="openstack/horizon-7969695f59-rzz64" Jan 26 13:18:46 crc 
kubenswrapper[4844]: I0126 13:18:46.700508 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cbd86931-9c64-42e8-911a-f0a8044098c4-scripts\") pod \"keystone-bootstrap-7j972\" (UID: \"cbd86931-9c64-42e8-911a-f0a8044098c4\") " pod="openstack/keystone-bootstrap-7j972" Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.700631 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/cbd86931-9c64-42e8-911a-f0a8044098c4-credential-keys\") pod \"keystone-bootstrap-7j972\" (UID: \"cbd86931-9c64-42e8-911a-f0a8044098c4\") " pod="openstack/keystone-bootstrap-7j972" Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.700709 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbd86931-9c64-42e8-911a-f0a8044098c4-combined-ca-bundle\") pod \"keystone-bootstrap-7j972\" (UID: \"cbd86931-9c64-42e8-911a-f0a8044098c4\") " pod="openstack/keystone-bootstrap-7j972" Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.700777 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1979816f-0e1c-427a-b6aa-97b147a4c622-config-data\") pod \"horizon-7969695f59-rzz64\" (UID: \"1979816f-0e1c-427a-b6aa-97b147a4c622\") " pod="openstack/horizon-7969695f59-rzz64" Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.700847 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d18e836f-e7f3-4fb2-b0a7-9b4811172675-dns-swift-storage-0\") pod \"dnsmasq-dns-58c6955b5f-f26sc\" (UID: \"d18e836f-e7f3-4fb2-b0a7-9b4811172675\") " pod="openstack/dnsmasq-dns-58c6955b5f-f26sc" Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.700925 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f82260f-cde4-4197-8718-d7adebadeddb-scripts\") pod \"cinder-db-sync-dcfgm\" (UID: \"5f82260f-cde4-4197-8718-d7adebadeddb\") " pod="openstack/cinder-db-sync-dcfgm" Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.701065 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5f82260f-cde4-4197-8718-d7adebadeddb-db-sync-config-data\") pod \"cinder-db-sync-dcfgm\" (UID: \"5f82260f-cde4-4197-8718-d7adebadeddb\") " pod="openstack/cinder-db-sync-dcfgm" Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.701135 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f82260f-cde4-4197-8718-d7adebadeddb-config-data\") pod \"cinder-db-sync-dcfgm\" (UID: \"5f82260f-cde4-4197-8718-d7adebadeddb\") " pod="openstack/cinder-db-sync-dcfgm" Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.701204 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/cbd86931-9c64-42e8-911a-f0a8044098c4-fernet-keys\") pod \"keystone-bootstrap-7j972\" (UID: \"cbd86931-9c64-42e8-911a-f0a8044098c4\") " pod="openstack/keystone-bootstrap-7j972" Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.701274 4844 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d18e836f-e7f3-4fb2-b0a7-9b4811172675-ovsdbserver-nb\") pod \"dnsmasq-dns-58c6955b5f-f26sc\" (UID: \"d18e836f-e7f3-4fb2-b0a7-9b4811172675\") " pod="openstack/dnsmasq-dns-58c6955b5f-f26sc" Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.701361 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1979816f-0e1c-427a-b6aa-97b147a4c622-scripts\") pod \"horizon-7969695f59-rzz64\" (UID: \"1979816f-0e1c-427a-b6aa-97b147a4c622\") " pod="openstack/horizon-7969695f59-rzz64" Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.701433 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f82260f-cde4-4197-8718-d7adebadeddb-combined-ca-bundle\") pod \"cinder-db-sync-dcfgm\" (UID: \"5f82260f-cde4-4197-8718-d7adebadeddb\") " pod="openstack/cinder-db-sync-dcfgm" Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.701503 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4pp4\" (UniqueName: \"kubernetes.io/projected/5f82260f-cde4-4197-8718-d7adebadeddb-kube-api-access-l4pp4\") pod \"cinder-db-sync-dcfgm\" (UID: \"5f82260f-cde4-4197-8718-d7adebadeddb\") " pod="openstack/cinder-db-sync-dcfgm" Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.701579 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d18e836f-e7f3-4fb2-b0a7-9b4811172675-config\") pod \"dnsmasq-dns-58c6955b5f-f26sc\" (UID: \"d18e836f-e7f3-4fb2-b0a7-9b4811172675\") " pod="openstack/dnsmasq-dns-58c6955b5f-f26sc" Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.701678 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d18e836f-e7f3-4fb2-b0a7-9b4811172675-dns-svc\") pod \"dnsmasq-dns-58c6955b5f-f26sc\" (UID: \"d18e836f-e7f3-4fb2-b0a7-9b4811172675\") " pod="openstack/dnsmasq-dns-58c6955b5f-f26sc" Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.701751 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d18e836f-e7f3-4fb2-b0a7-9b4811172675-ovsdbserver-sb\") pod \"dnsmasq-dns-58c6955b5f-f26sc\" (UID: \"d18e836f-e7f3-4fb2-b0a7-9b4811172675\") " pod="openstack/dnsmasq-dns-58c6955b5f-f26sc" Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.701816 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5f82260f-cde4-4197-8718-d7adebadeddb-etc-machine-id\") pod \"cinder-db-sync-dcfgm\" (UID: \"5f82260f-cde4-4197-8718-d7adebadeddb\") " pod="openstack/cinder-db-sync-dcfgm" Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.701912 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1979816f-0e1c-427a-b6aa-97b147a4c622-horizon-secret-key\") pod \"horizon-7969695f59-rzz64\" (UID: \"1979816f-0e1c-427a-b6aa-97b147a4c622\") " pod="openstack/horizon-7969695f59-rzz64" Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.702000 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-fxnh5\" (UniqueName: \"kubernetes.io/projected/d18e836f-e7f3-4fb2-b0a7-9b4811172675-kube-api-access-fxnh5\") pod \"dnsmasq-dns-58c6955b5f-f26sc\" (UID: \"d18e836f-e7f3-4fb2-b0a7-9b4811172675\") " pod="openstack/dnsmasq-dns-58c6955b5f-f26sc" Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.702068 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbd86931-9c64-42e8-911a-f0a8044098c4-config-data\") pod \"keystone-bootstrap-7j972\" (UID: \"cbd86931-9c64-42e8-911a-f0a8044098c4\") " pod="openstack/keystone-bootstrap-7j972" Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.707078 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbd86931-9c64-42e8-911a-f0a8044098c4-combined-ca-bundle\") pod \"keystone-bootstrap-7j972\" (UID: \"cbd86931-9c64-42e8-911a-f0a8044098c4\") " pod="openstack/keystone-bootstrap-7j972" Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.708711 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d18e836f-e7f3-4fb2-b0a7-9b4811172675-ovsdbserver-nb\") pod \"dnsmasq-dns-58c6955b5f-f26sc\" (UID: \"d18e836f-e7f3-4fb2-b0a7-9b4811172675\") " pod="openstack/dnsmasq-dns-58c6955b5f-f26sc" Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.710339 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/cbd86931-9c64-42e8-911a-f0a8044098c4-fernet-keys\") pod \"keystone-bootstrap-7j972\" (UID: \"cbd86931-9c64-42e8-911a-f0a8044098c4\") " pod="openstack/keystone-bootstrap-7j972" Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.711794 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d18e836f-e7f3-4fb2-b0a7-9b4811172675-dns-swift-storage-0\") pod \"dnsmasq-dns-58c6955b5f-f26sc\" (UID: \"d18e836f-e7f3-4fb2-b0a7-9b4811172675\") " pod="openstack/dnsmasq-dns-58c6955b5f-f26sc" Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.712413 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d18e836f-e7f3-4fb2-b0a7-9b4811172675-config\") pod \"dnsmasq-dns-58c6955b5f-f26sc\" (UID: \"d18e836f-e7f3-4fb2-b0a7-9b4811172675\") " pod="openstack/dnsmasq-dns-58c6955b5f-f26sc" Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.713063 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d18e836f-e7f3-4fb2-b0a7-9b4811172675-dns-svc\") pod \"dnsmasq-dns-58c6955b5f-f26sc\" (UID: \"d18e836f-e7f3-4fb2-b0a7-9b4811172675\") " pod="openstack/dnsmasq-dns-58c6955b5f-f26sc" Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.713882 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cbd86931-9c64-42e8-911a-f0a8044098c4-scripts\") pod \"keystone-bootstrap-7j972\" (UID: \"cbd86931-9c64-42e8-911a-f0a8044098c4\") " pod="openstack/keystone-bootstrap-7j972" Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.714076 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/cbd86931-9c64-42e8-911a-f0a8044098c4-credential-keys\") pod \"keystone-bootstrap-7j972\" (UID: 
\"cbd86931-9c64-42e8-911a-f0a8044098c4\") " pod="openstack/keystone-bootstrap-7j972" Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.720027 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d18e836f-e7f3-4fb2-b0a7-9b4811172675-ovsdbserver-sb\") pod \"dnsmasq-dns-58c6955b5f-f26sc\" (UID: \"d18e836f-e7f3-4fb2-b0a7-9b4811172675\") " pod="openstack/dnsmasq-dns-58c6955b5f-f26sc" Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.730172 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbd86931-9c64-42e8-911a-f0a8044098c4-config-data\") pod \"keystone-bootstrap-7j972\" (UID: \"cbd86931-9c64-42e8-911a-f0a8044098c4\") " pod="openstack/keystone-bootstrap-7j972" Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.731271 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78gmn\" (UniqueName: \"kubernetes.io/projected/cbd86931-9c64-42e8-911a-f0a8044098c4-kube-api-access-78gmn\") pod \"keystone-bootstrap-7j972\" (UID: \"cbd86931-9c64-42e8-911a-f0a8044098c4\") " pod="openstack/keystone-bootstrap-7j972" Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.752344 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxnh5\" (UniqueName: \"kubernetes.io/projected/d18e836f-e7f3-4fb2-b0a7-9b4811172675-kube-api-access-fxnh5\") pod \"dnsmasq-dns-58c6955b5f-f26sc\" (UID: \"d18e836f-e7f3-4fb2-b0a7-9b4811172675\") " pod="openstack/dnsmasq-dns-58c6955b5f-f26sc" Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.777938 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.780052 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.794488 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.794842 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.805429 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f82260f-cde4-4197-8718-d7adebadeddb-scripts\") pod \"cinder-db-sync-dcfgm\" (UID: \"5f82260f-cde4-4197-8718-d7adebadeddb\") " pod="openstack/cinder-db-sync-dcfgm" Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.805577 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5f82260f-cde4-4197-8718-d7adebadeddb-db-sync-config-data\") pod \"cinder-db-sync-dcfgm\" (UID: \"5f82260f-cde4-4197-8718-d7adebadeddb\") " pod="openstack/cinder-db-sync-dcfgm" Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.805665 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f82260f-cde4-4197-8718-d7adebadeddb-config-data\") pod \"cinder-db-sync-dcfgm\" (UID: \"5f82260f-cde4-4197-8718-d7adebadeddb\") " pod="openstack/cinder-db-sync-dcfgm" Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.805746 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1979816f-0e1c-427a-b6aa-97b147a4c622-scripts\") pod \"horizon-7969695f59-rzz64\" (UID: \"1979816f-0e1c-427a-b6aa-97b147a4c622\") " pod="openstack/horizon-7969695f59-rzz64" Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.805816 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f82260f-cde4-4197-8718-d7adebadeddb-combined-ca-bundle\") pod \"cinder-db-sync-dcfgm\" (UID: \"5f82260f-cde4-4197-8718-d7adebadeddb\") " pod="openstack/cinder-db-sync-dcfgm" Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.805877 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4pp4\" (UniqueName: \"kubernetes.io/projected/5f82260f-cde4-4197-8718-d7adebadeddb-kube-api-access-l4pp4\") pod \"cinder-db-sync-dcfgm\" (UID: \"5f82260f-cde4-4197-8718-d7adebadeddb\") " pod="openstack/cinder-db-sync-dcfgm" Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.805944 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad438e4d-9282-48b8-88c1-1f974bb26b5e-config-data\") pod \"ceilometer-0\" (UID: \"ad438e4d-9282-48b8-88c1-1f974bb26b5e\") " pod="openstack/ceilometer-0" Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.806029 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5f82260f-cde4-4197-8718-d7adebadeddb-etc-machine-id\") pod \"cinder-db-sync-dcfgm\" (UID: \"5f82260f-cde4-4197-8718-d7adebadeddb\") " pod="openstack/cinder-db-sync-dcfgm" Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.806120 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jjpz\" 
(UniqueName: \"kubernetes.io/projected/ad438e4d-9282-48b8-88c1-1f974bb26b5e-kube-api-access-5jjpz\") pod \"ceilometer-0\" (UID: \"ad438e4d-9282-48b8-88c1-1f974bb26b5e\") " pod="openstack/ceilometer-0" Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.806207 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1979816f-0e1c-427a-b6aa-97b147a4c622-horizon-secret-key\") pod \"horizon-7969695f59-rzz64\" (UID: \"1979816f-0e1c-427a-b6aa-97b147a4c622\") " pod="openstack/horizon-7969695f59-rzz64" Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.806305 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ad438e4d-9282-48b8-88c1-1f974bb26b5e-log-httpd\") pod \"ceilometer-0\" (UID: \"ad438e4d-9282-48b8-88c1-1f974bb26b5e\") " pod="openstack/ceilometer-0" Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.806372 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ad438e4d-9282-48b8-88c1-1f974bb26b5e-run-httpd\") pod \"ceilometer-0\" (UID: \"ad438e4d-9282-48b8-88c1-1f974bb26b5e\") " pod="openstack/ceilometer-0" Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.806439 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4w9f\" (UniqueName: \"kubernetes.io/projected/1979816f-0e1c-427a-b6aa-97b147a4c622-kube-api-access-s4w9f\") pod \"horizon-7969695f59-rzz64\" (UID: \"1979816f-0e1c-427a-b6aa-97b147a4c622\") " pod="openstack/horizon-7969695f59-rzz64" Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.806506 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ad438e4d-9282-48b8-88c1-1f974bb26b5e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ad438e4d-9282-48b8-88c1-1f974bb26b5e\") " pod="openstack/ceilometer-0" Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.806585 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1979816f-0e1c-427a-b6aa-97b147a4c622-logs\") pod \"horizon-7969695f59-rzz64\" (UID: \"1979816f-0e1c-427a-b6aa-97b147a4c622\") " pod="openstack/horizon-7969695f59-rzz64" Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.806691 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad438e4d-9282-48b8-88c1-1f974bb26b5e-scripts\") pod \"ceilometer-0\" (UID: \"ad438e4d-9282-48b8-88c1-1f974bb26b5e\") " pod="openstack/ceilometer-0" Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.806757 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad438e4d-9282-48b8-88c1-1f974bb26b5e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ad438e4d-9282-48b8-88c1-1f974bb26b5e\") " pod="openstack/ceilometer-0" Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.806822 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1979816f-0e1c-427a-b6aa-97b147a4c622-config-data\") pod \"horizon-7969695f59-rzz64\" (UID: \"1979816f-0e1c-427a-b6aa-97b147a4c622\") " 
pod="openstack/horizon-7969695f59-rzz64" Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.808005 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1979816f-0e1c-427a-b6aa-97b147a4c622-config-data\") pod \"horizon-7969695f59-rzz64\" (UID: \"1979816f-0e1c-427a-b6aa-97b147a4c622\") " pod="openstack/horizon-7969695f59-rzz64" Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.810924 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1979816f-0e1c-427a-b6aa-97b147a4c622-logs\") pod \"horizon-7969695f59-rzz64\" (UID: \"1979816f-0e1c-427a-b6aa-97b147a4c622\") " pod="openstack/horizon-7969695f59-rzz64" Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.814385 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f82260f-cde4-4197-8718-d7adebadeddb-combined-ca-bundle\") pod \"cinder-db-sync-dcfgm\" (UID: \"5f82260f-cde4-4197-8718-d7adebadeddb\") " pod="openstack/cinder-db-sync-dcfgm" Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.814828 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1979816f-0e1c-427a-b6aa-97b147a4c622-scripts\") pod \"horizon-7969695f59-rzz64\" (UID: \"1979816f-0e1c-427a-b6aa-97b147a4c622\") " pod="openstack/horizon-7969695f59-rzz64" Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.819524 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-7j972" Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.816234 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5f82260f-cde4-4197-8718-d7adebadeddb-etc-machine-id\") pod \"cinder-db-sync-dcfgm\" (UID: \"5f82260f-cde4-4197-8718-d7adebadeddb\") " pod="openstack/cinder-db-sync-dcfgm" Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.820108 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f82260f-cde4-4197-8718-d7adebadeddb-scripts\") pod \"cinder-db-sync-dcfgm\" (UID: \"5f82260f-cde4-4197-8718-d7adebadeddb\") " pod="openstack/cinder-db-sync-dcfgm" Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.824040 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5f82260f-cde4-4197-8718-d7adebadeddb-db-sync-config-data\") pod \"cinder-db-sync-dcfgm\" (UID: \"5f82260f-cde4-4197-8718-d7adebadeddb\") " pod="openstack/cinder-db-sync-dcfgm" Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.834873 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58c6955b5f-f26sc" Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.835470 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1979816f-0e1c-427a-b6aa-97b147a4c622-horizon-secret-key\") pod \"horizon-7969695f59-rzz64\" (UID: \"1979816f-0e1c-427a-b6aa-97b147a4c622\") " pod="openstack/horizon-7969695f59-rzz64" Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.848183 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f82260f-cde4-4197-8718-d7adebadeddb-config-data\") pod \"cinder-db-sync-dcfgm\" (UID: \"5f82260f-cde4-4197-8718-d7adebadeddb\") " pod="openstack/cinder-db-sync-dcfgm" Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.849534 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.849897 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4pp4\" (UniqueName: \"kubernetes.io/projected/5f82260f-cde4-4197-8718-d7adebadeddb-kube-api-access-l4pp4\") pod \"cinder-db-sync-dcfgm\" (UID: \"5f82260f-cde4-4197-8718-d7adebadeddb\") " pod="openstack/cinder-db-sync-dcfgm" Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.868237 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4w9f\" (UniqueName: \"kubernetes.io/projected/1979816f-0e1c-427a-b6aa-97b147a4c622-kube-api-access-s4w9f\") pod \"horizon-7969695f59-rzz64\" (UID: \"1979816f-0e1c-427a-b6aa-97b147a4c622\") " pod="openstack/horizon-7969695f59-rzz64" Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.881689 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-q74n8"] Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.882747 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-q74n8" Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.889240 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.889483 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.889617 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-zt6jg" Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.909583 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-2xnzf"] Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.910710 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-2xnzf" Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.913535 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-dzsvq" Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.915086 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.918555 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ad438e4d-9282-48b8-88c1-1f974bb26b5e-log-httpd\") pod \"ceilometer-0\" (UID: \"ad438e4d-9282-48b8-88c1-1f974bb26b5e\") " pod="openstack/ceilometer-0" Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.918906 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ad438e4d-9282-48b8-88c1-1f974bb26b5e-run-httpd\") pod \"ceilometer-0\" (UID: \"ad438e4d-9282-48b8-88c1-1f974bb26b5e\") " pod="openstack/ceilometer-0" Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.918933 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ad438e4d-9282-48b8-88c1-1f974bb26b5e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ad438e4d-9282-48b8-88c1-1f974bb26b5e\") " pod="openstack/ceilometer-0" Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.918967 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bdef7de-9499-45b9-b41e-a59882aa4423-combined-ca-bundle\") pod \"neutron-db-sync-q74n8\" (UID: \"4bdef7de-9499-45b9-b41e-a59882aa4423\") " pod="openstack/neutron-db-sync-q74n8" Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.918976 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ad438e4d-9282-48b8-88c1-1f974bb26b5e-log-httpd\") pod \"ceilometer-0\" (UID: \"ad438e4d-9282-48b8-88c1-1f974bb26b5e\") " pod="openstack/ceilometer-0" Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.919032 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad438e4d-9282-48b8-88c1-1f974bb26b5e-scripts\") pod \"ceilometer-0\" (UID: \"ad438e4d-9282-48b8-88c1-1f974bb26b5e\") " pod="openstack/ceilometer-0" Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.919059 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad438e4d-9282-48b8-88c1-1f974bb26b5e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ad438e4d-9282-48b8-88c1-1f974bb26b5e\") " pod="openstack/ceilometer-0" Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.919127 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-622zd\" (UniqueName: \"kubernetes.io/projected/4bdef7de-9499-45b9-b41e-a59882aa4423-kube-api-access-622zd\") pod \"neutron-db-sync-q74n8\" (UID: \"4bdef7de-9499-45b9-b41e-a59882aa4423\") " pod="openstack/neutron-db-sync-q74n8" Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.919176 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/ad438e4d-9282-48b8-88c1-1f974bb26b5e-config-data\") pod \"ceilometer-0\" (UID: \"ad438e4d-9282-48b8-88c1-1f974bb26b5e\") " pod="openstack/ceilometer-0" Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.919189 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ad438e4d-9282-48b8-88c1-1f974bb26b5e-run-httpd\") pod \"ceilometer-0\" (UID: \"ad438e4d-9282-48b8-88c1-1f974bb26b5e\") " pod="openstack/ceilometer-0" Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.919228 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jjpz\" (UniqueName: \"kubernetes.io/projected/ad438e4d-9282-48b8-88c1-1f974bb26b5e-kube-api-access-5jjpz\") pod \"ceilometer-0\" (UID: \"ad438e4d-9282-48b8-88c1-1f974bb26b5e\") " pod="openstack/ceilometer-0" Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.919260 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4bdef7de-9499-45b9-b41e-a59882aa4423-config\") pod \"neutron-db-sync-q74n8\" (UID: \"4bdef7de-9499-45b9-b41e-a59882aa4423\") " pod="openstack/neutron-db-sync-q74n8" Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.922348 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-2xnzf"] Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.924677 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad438e4d-9282-48b8-88c1-1f974bb26b5e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ad438e4d-9282-48b8-88c1-1f974bb26b5e\") " pod="openstack/ceilometer-0" Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.927322 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ad438e4d-9282-48b8-88c1-1f974bb26b5e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ad438e4d-9282-48b8-88c1-1f974bb26b5e\") " pod="openstack/ceilometer-0" Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.931300 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad438e4d-9282-48b8-88c1-1f974bb26b5e-config-data\") pod \"ceilometer-0\" (UID: \"ad438e4d-9282-48b8-88c1-1f974bb26b5e\") " pod="openstack/ceilometer-0" Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.935639 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad438e4d-9282-48b8-88c1-1f974bb26b5e-scripts\") pod \"ceilometer-0\" (UID: \"ad438e4d-9282-48b8-88c1-1f974bb26b5e\") " pod="openstack/ceilometer-0" Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.945750 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jjpz\" (UniqueName: \"kubernetes.io/projected/ad438e4d-9282-48b8-88c1-1f974bb26b5e-kube-api-access-5jjpz\") pod \"ceilometer-0\" (UID: \"ad438e4d-9282-48b8-88c1-1f974bb26b5e\") " pod="openstack/ceilometer-0" Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.961921 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-q74n8"] Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.989740 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6775fbb8bf-p89r6"] Jan 26 13:18:46 crc kubenswrapper[4844]: I0126 13:18:46.990163 4844 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7969695f59-rzz64" Jan 26 13:18:47 crc kubenswrapper[4844]: I0126 13:18:47.014711 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6775fbb8bf-p89r6" Jan 26 13:18:47 crc kubenswrapper[4844]: I0126 13:18:47.021800 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjs6c\" (UniqueName: \"kubernetes.io/projected/43fe5130-0714-4f40-9d6a-9384eb72fa0a-kube-api-access-pjs6c\") pod \"barbican-db-sync-2xnzf\" (UID: \"43fe5130-0714-4f40-9d6a-9384eb72fa0a\") " pod="openstack/barbican-db-sync-2xnzf" Jan 26 13:18:47 crc kubenswrapper[4844]: I0126 13:18:47.021850 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43fe5130-0714-4f40-9d6a-9384eb72fa0a-combined-ca-bundle\") pod \"barbican-db-sync-2xnzf\" (UID: \"43fe5130-0714-4f40-9d6a-9384eb72fa0a\") " pod="openstack/barbican-db-sync-2xnzf" Jan 26 13:18:47 crc kubenswrapper[4844]: I0126 13:18:47.021885 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-622zd\" (UniqueName: \"kubernetes.io/projected/4bdef7de-9499-45b9-b41e-a59882aa4423-kube-api-access-622zd\") pod \"neutron-db-sync-q74n8\" (UID: \"4bdef7de-9499-45b9-b41e-a59882aa4423\") " pod="openstack/neutron-db-sync-q74n8" Jan 26 13:18:47 crc kubenswrapper[4844]: I0126 13:18:47.021954 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/be9958f1-c7db-4c90-9f58-7dee7e86e728-horizon-secret-key\") pod \"horizon-6775fbb8bf-p89r6\" (UID: \"be9958f1-c7db-4c90-9f58-7dee7e86e728\") " pod="openstack/horizon-6775fbb8bf-p89r6" Jan 26 13:18:47 crc kubenswrapper[4844]: I0126 13:18:47.022019 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/be9958f1-c7db-4c90-9f58-7dee7e86e728-scripts\") pod \"horizon-6775fbb8bf-p89r6\" (UID: \"be9958f1-c7db-4c90-9f58-7dee7e86e728\") " pod="openstack/horizon-6775fbb8bf-p89r6" Jan 26 13:18:47 crc kubenswrapper[4844]: I0126 13:18:47.022076 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4bdef7de-9499-45b9-b41e-a59882aa4423-config\") pod \"neutron-db-sync-q74n8\" (UID: \"4bdef7de-9499-45b9-b41e-a59882aa4423\") " pod="openstack/neutron-db-sync-q74n8" Jan 26 13:18:47 crc kubenswrapper[4844]: I0126 13:18:47.022115 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdkf2\" (UniqueName: \"kubernetes.io/projected/be9958f1-c7db-4c90-9f58-7dee7e86e728-kube-api-access-kdkf2\") pod \"horizon-6775fbb8bf-p89r6\" (UID: \"be9958f1-c7db-4c90-9f58-7dee7e86e728\") " pod="openstack/horizon-6775fbb8bf-p89r6" Jan 26 13:18:47 crc kubenswrapper[4844]: I0126 13:18:47.022167 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/be9958f1-c7db-4c90-9f58-7dee7e86e728-config-data\") pod \"horizon-6775fbb8bf-p89r6\" (UID: \"be9958f1-c7db-4c90-9f58-7dee7e86e728\") " pod="openstack/horizon-6775fbb8bf-p89r6" Jan 26 13:18:47 crc kubenswrapper[4844]: I0126 13:18:47.022867 4844 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bdef7de-9499-45b9-b41e-a59882aa4423-combined-ca-bundle\") pod \"neutron-db-sync-q74n8\" (UID: \"4bdef7de-9499-45b9-b41e-a59882aa4423\") " pod="openstack/neutron-db-sync-q74n8" Jan 26 13:18:47 crc kubenswrapper[4844]: I0126 13:18:47.022918 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be9958f1-c7db-4c90-9f58-7dee7e86e728-logs\") pod \"horizon-6775fbb8bf-p89r6\" (UID: \"be9958f1-c7db-4c90-9f58-7dee7e86e728\") " pod="openstack/horizon-6775fbb8bf-p89r6" Jan 26 13:18:47 crc kubenswrapper[4844]: I0126 13:18:47.022970 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/43fe5130-0714-4f40-9d6a-9384eb72fa0a-db-sync-config-data\") pod \"barbican-db-sync-2xnzf\" (UID: \"43fe5130-0714-4f40-9d6a-9384eb72fa0a\") " pod="openstack/barbican-db-sync-2xnzf" Jan 26 13:18:47 crc kubenswrapper[4844]: I0126 13:18:47.032332 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/4bdef7de-9499-45b9-b41e-a59882aa4423-config\") pod \"neutron-db-sync-q74n8\" (UID: \"4bdef7de-9499-45b9-b41e-a59882aa4423\") " pod="openstack/neutron-db-sync-q74n8" Jan 26 13:18:47 crc kubenswrapper[4844]: I0126 13:18:47.041350 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6775fbb8bf-p89r6"] Jan 26 13:18:47 crc kubenswrapper[4844]: I0126 13:18:47.062840 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-622zd\" (UniqueName: \"kubernetes.io/projected/4bdef7de-9499-45b9-b41e-a59882aa4423-kube-api-access-622zd\") pod \"neutron-db-sync-q74n8\" (UID: \"4bdef7de-9499-45b9-b41e-a59882aa4423\") " pod="openstack/neutron-db-sync-q74n8" Jan 26 13:18:47 crc kubenswrapper[4844]: I0126 13:18:47.064229 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bdef7de-9499-45b9-b41e-a59882aa4423-combined-ca-bundle\") pod \"neutron-db-sync-q74n8\" (UID: \"4bdef7de-9499-45b9-b41e-a59882aa4423\") " pod="openstack/neutron-db-sync-q74n8" Jan 26 13:18:47 crc kubenswrapper[4844]: I0126 13:18:47.128218 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-dcfgm" Jan 26 13:18:47 crc kubenswrapper[4844]: I0126 13:18:47.139423 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdkf2\" (UniqueName: \"kubernetes.io/projected/be9958f1-c7db-4c90-9f58-7dee7e86e728-kube-api-access-kdkf2\") pod \"horizon-6775fbb8bf-p89r6\" (UID: \"be9958f1-c7db-4c90-9f58-7dee7e86e728\") " pod="openstack/horizon-6775fbb8bf-p89r6" Jan 26 13:18:47 crc kubenswrapper[4844]: I0126 13:18:47.139493 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/be9958f1-c7db-4c90-9f58-7dee7e86e728-config-data\") pod \"horizon-6775fbb8bf-p89r6\" (UID: \"be9958f1-c7db-4c90-9f58-7dee7e86e728\") " pod="openstack/horizon-6775fbb8bf-p89r6" Jan 26 13:18:47 crc kubenswrapper[4844]: I0126 13:18:47.139526 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be9958f1-c7db-4c90-9f58-7dee7e86e728-logs\") pod \"horizon-6775fbb8bf-p89r6\" (UID: \"be9958f1-c7db-4c90-9f58-7dee7e86e728\") " pod="openstack/horizon-6775fbb8bf-p89r6" Jan 26 13:18:47 crc kubenswrapper[4844]: I0126 13:18:47.139557 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/43fe5130-0714-4f40-9d6a-9384eb72fa0a-db-sync-config-data\") pod \"barbican-db-sync-2xnzf\" (UID: \"43fe5130-0714-4f40-9d6a-9384eb72fa0a\") " pod="openstack/barbican-db-sync-2xnzf" Jan 26 13:18:47 crc kubenswrapper[4844]: I0126 13:18:47.139968 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjs6c\" (UniqueName: \"kubernetes.io/projected/43fe5130-0714-4f40-9d6a-9384eb72fa0a-kube-api-access-pjs6c\") pod \"barbican-db-sync-2xnzf\" (UID: \"43fe5130-0714-4f40-9d6a-9384eb72fa0a\") " pod="openstack/barbican-db-sync-2xnzf" Jan 26 13:18:47 crc kubenswrapper[4844]: I0126 13:18:47.139998 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43fe5130-0714-4f40-9d6a-9384eb72fa0a-combined-ca-bundle\") pod \"barbican-db-sync-2xnzf\" (UID: \"43fe5130-0714-4f40-9d6a-9384eb72fa0a\") " pod="openstack/barbican-db-sync-2xnzf" Jan 26 13:18:47 crc kubenswrapper[4844]: I0126 13:18:47.140054 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/be9958f1-c7db-4c90-9f58-7dee7e86e728-scripts\") pod \"horizon-6775fbb8bf-p89r6\" (UID: \"be9958f1-c7db-4c90-9f58-7dee7e86e728\") " pod="openstack/horizon-6775fbb8bf-p89r6" Jan 26 13:18:47 crc kubenswrapper[4844]: I0126 13:18:47.140077 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/be9958f1-c7db-4c90-9f58-7dee7e86e728-horizon-secret-key\") pod \"horizon-6775fbb8bf-p89r6\" (UID: \"be9958f1-c7db-4c90-9f58-7dee7e86e728\") " pod="openstack/horizon-6775fbb8bf-p89r6" Jan 26 13:18:47 crc kubenswrapper[4844]: I0126 13:18:47.142299 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be9958f1-c7db-4c90-9f58-7dee7e86e728-logs\") pod \"horizon-6775fbb8bf-p89r6\" (UID: \"be9958f1-c7db-4c90-9f58-7dee7e86e728\") " pod="openstack/horizon-6775fbb8bf-p89r6" Jan 26 13:18:47 crc kubenswrapper[4844]: I0126 13:18:47.142515 4844 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/be9958f1-c7db-4c90-9f58-7dee7e86e728-config-data\") pod \"horizon-6775fbb8bf-p89r6\" (UID: \"be9958f1-c7db-4c90-9f58-7dee7e86e728\") " pod="openstack/horizon-6775fbb8bf-p89r6" Jan 26 13:18:47 crc kubenswrapper[4844]: I0126 13:18:47.144000 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/be9958f1-c7db-4c90-9f58-7dee7e86e728-scripts\") pod \"horizon-6775fbb8bf-p89r6\" (UID: \"be9958f1-c7db-4c90-9f58-7dee7e86e728\") " pod="openstack/horizon-6775fbb8bf-p89r6" Jan 26 13:18:47 crc kubenswrapper[4844]: I0126 13:18:47.221353 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43fe5130-0714-4f40-9d6a-9384eb72fa0a-combined-ca-bundle\") pod \"barbican-db-sync-2xnzf\" (UID: \"43fe5130-0714-4f40-9d6a-9384eb72fa0a\") " pod="openstack/barbican-db-sync-2xnzf" Jan 26 13:18:47 crc kubenswrapper[4844]: I0126 13:18:47.221855 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/43fe5130-0714-4f40-9d6a-9384eb72fa0a-db-sync-config-data\") pod \"barbican-db-sync-2xnzf\" (UID: \"43fe5130-0714-4f40-9d6a-9384eb72fa0a\") " pod="openstack/barbican-db-sync-2xnzf" Jan 26 13:18:47 crc kubenswrapper[4844]: I0126 13:18:47.222532 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-q74n8" Jan 26 13:18:47 crc kubenswrapper[4844]: I0126 13:18:47.222611 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 13:18:47 crc kubenswrapper[4844]: I0126 13:18:47.227204 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjs6c\" (UniqueName: \"kubernetes.io/projected/43fe5130-0714-4f40-9d6a-9384eb72fa0a-kube-api-access-pjs6c\") pod \"barbican-db-sync-2xnzf\" (UID: \"43fe5130-0714-4f40-9d6a-9384eb72fa0a\") " pod="openstack/barbican-db-sync-2xnzf" Jan 26 13:18:47 crc kubenswrapper[4844]: I0126 13:18:47.233324 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/be9958f1-c7db-4c90-9f58-7dee7e86e728-horizon-secret-key\") pod \"horizon-6775fbb8bf-p89r6\" (UID: \"be9958f1-c7db-4c90-9f58-7dee7e86e728\") " pod="openstack/horizon-6775fbb8bf-p89r6" Jan 26 13:18:47 crc kubenswrapper[4844]: I0126 13:18:47.235331 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdkf2\" (UniqueName: \"kubernetes.io/projected/be9958f1-c7db-4c90-9f58-7dee7e86e728-kube-api-access-kdkf2\") pod \"horizon-6775fbb8bf-p89r6\" (UID: \"be9958f1-c7db-4c90-9f58-7dee7e86e728\") " pod="openstack/horizon-6775fbb8bf-p89r6" Jan 26 13:18:47 crc kubenswrapper[4844]: I0126 13:18:47.249178 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-2xnzf" Jan 26 13:18:47 crc kubenswrapper[4844]: I0126 13:18:47.361439 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58c6955b5f-f26sc"] Jan 26 13:18:47 crc kubenswrapper[4844]: I0126 13:18:47.361479 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-bt68v"] Jan 26 13:18:47 crc kubenswrapper[4844]: I0126 13:18:47.362578 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-bt68v"] Jan 26 13:18:47 crc kubenswrapper[4844]: I0126 13:18:47.362615 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c7c497879-k82c9"] Jan 26 13:18:47 crc kubenswrapper[4844]: I0126 13:18:47.363833 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c7c497879-k82c9" Jan 26 13:18:47 crc kubenswrapper[4844]: I0126 13:18:47.364236 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-bt68v" Jan 26 13:18:47 crc kubenswrapper[4844]: I0126 13:18:47.364941 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c7c497879-k82c9"] Jan 26 13:18:47 crc kubenswrapper[4844]: I0126 13:18:47.367022 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 26 13:18:47 crc kubenswrapper[4844]: I0126 13:18:47.367702 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 26 13:18:47 crc kubenswrapper[4844]: I0126 13:18:47.367769 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-jwq7d" Jan 26 13:18:47 crc kubenswrapper[4844]: I0126 13:18:47.384284 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6775fbb8bf-p89r6" Jan 26 13:18:47 crc kubenswrapper[4844]: I0126 13:18:47.446229 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/847c2c6b-16a5-4c1d-9122-81accf513fb4-config-data\") pod \"placement-db-sync-bt68v\" (UID: \"847c2c6b-16a5-4c1d-9122-81accf513fb4\") " pod="openstack/placement-db-sync-bt68v" Jan 26 13:18:47 crc kubenswrapper[4844]: I0126 13:18:47.446312 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/188e9259-51a6-4775-a1a5-ccf2f736513c-ovsdbserver-sb\") pod \"dnsmasq-dns-5c7c497879-k82c9\" (UID: \"188e9259-51a6-4775-a1a5-ccf2f736513c\") " pod="openstack/dnsmasq-dns-5c7c497879-k82c9" Jan 26 13:18:47 crc kubenswrapper[4844]: I0126 13:18:47.446357 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/847c2c6b-16a5-4c1d-9122-81accf513fb4-logs\") pod \"placement-db-sync-bt68v\" (UID: \"847c2c6b-16a5-4c1d-9122-81accf513fb4\") " pod="openstack/placement-db-sync-bt68v" Jan 26 13:18:47 crc kubenswrapper[4844]: I0126 13:18:47.446386 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/847c2c6b-16a5-4c1d-9122-81accf513fb4-combined-ca-bundle\") pod \"placement-db-sync-bt68v\" (UID: \"847c2c6b-16a5-4c1d-9122-81accf513fb4\") " pod="openstack/placement-db-sync-bt68v" Jan 26 13:18:47 crc kubenswrapper[4844]: I0126 13:18:47.446426 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/847c2c6b-16a5-4c1d-9122-81accf513fb4-scripts\") pod \"placement-db-sync-bt68v\" (UID: \"847c2c6b-16a5-4c1d-9122-81accf513fb4\") " pod="openstack/placement-db-sync-bt68v" Jan 26 13:18:47 crc kubenswrapper[4844]: I0126 13:18:47.446457 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/188e9259-51a6-4775-a1a5-ccf2f736513c-dns-swift-storage-0\") pod \"dnsmasq-dns-5c7c497879-k82c9\" (UID: \"188e9259-51a6-4775-a1a5-ccf2f736513c\") " pod="openstack/dnsmasq-dns-5c7c497879-k82c9" Jan 26 13:18:47 crc kubenswrapper[4844]: I0126 13:18:47.446940 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/188e9259-51a6-4775-a1a5-ccf2f736513c-ovsdbserver-nb\") pod \"dnsmasq-dns-5c7c497879-k82c9\" (UID: \"188e9259-51a6-4775-a1a5-ccf2f736513c\") " pod="openstack/dnsmasq-dns-5c7c497879-k82c9" Jan 26 13:18:47 crc kubenswrapper[4844]: I0126 13:18:47.447036 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-659hs\" (UniqueName: \"kubernetes.io/projected/188e9259-51a6-4775-a1a5-ccf2f736513c-kube-api-access-659hs\") pod \"dnsmasq-dns-5c7c497879-k82c9\" (UID: \"188e9259-51a6-4775-a1a5-ccf2f736513c\") " pod="openstack/dnsmasq-dns-5c7c497879-k82c9" Jan 26 13:18:47 crc kubenswrapper[4844]: I0126 13:18:47.447058 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rppll\" (UniqueName: 
\"kubernetes.io/projected/847c2c6b-16a5-4c1d-9122-81accf513fb4-kube-api-access-rppll\") pod \"placement-db-sync-bt68v\" (UID: \"847c2c6b-16a5-4c1d-9122-81accf513fb4\") " pod="openstack/placement-db-sync-bt68v" Jan 26 13:18:47 crc kubenswrapper[4844]: I0126 13:18:47.447099 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/188e9259-51a6-4775-a1a5-ccf2f736513c-dns-svc\") pod \"dnsmasq-dns-5c7c497879-k82c9\" (UID: \"188e9259-51a6-4775-a1a5-ccf2f736513c\") " pod="openstack/dnsmasq-dns-5c7c497879-k82c9" Jan 26 13:18:47 crc kubenswrapper[4844]: I0126 13:18:47.447133 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/188e9259-51a6-4775-a1a5-ccf2f736513c-config\") pod \"dnsmasq-dns-5c7c497879-k82c9\" (UID: \"188e9259-51a6-4775-a1a5-ccf2f736513c\") " pod="openstack/dnsmasq-dns-5c7c497879-k82c9" Jan 26 13:18:47 crc kubenswrapper[4844]: I0126 13:18:47.549507 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/847c2c6b-16a5-4c1d-9122-81accf513fb4-config-data\") pod \"placement-db-sync-bt68v\" (UID: \"847c2c6b-16a5-4c1d-9122-81accf513fb4\") " pod="openstack/placement-db-sync-bt68v" Jan 26 13:18:47 crc kubenswrapper[4844]: I0126 13:18:47.549564 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/188e9259-51a6-4775-a1a5-ccf2f736513c-ovsdbserver-sb\") pod \"dnsmasq-dns-5c7c497879-k82c9\" (UID: \"188e9259-51a6-4775-a1a5-ccf2f736513c\") " pod="openstack/dnsmasq-dns-5c7c497879-k82c9" Jan 26 13:18:47 crc kubenswrapper[4844]: I0126 13:18:47.549617 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/847c2c6b-16a5-4c1d-9122-81accf513fb4-logs\") pod \"placement-db-sync-bt68v\" (UID: \"847c2c6b-16a5-4c1d-9122-81accf513fb4\") " pod="openstack/placement-db-sync-bt68v" Jan 26 13:18:47 crc kubenswrapper[4844]: I0126 13:18:47.549643 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/847c2c6b-16a5-4c1d-9122-81accf513fb4-combined-ca-bundle\") pod \"placement-db-sync-bt68v\" (UID: \"847c2c6b-16a5-4c1d-9122-81accf513fb4\") " pod="openstack/placement-db-sync-bt68v" Jan 26 13:18:47 crc kubenswrapper[4844]: I0126 13:18:47.549670 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/847c2c6b-16a5-4c1d-9122-81accf513fb4-scripts\") pod \"placement-db-sync-bt68v\" (UID: \"847c2c6b-16a5-4c1d-9122-81accf513fb4\") " pod="openstack/placement-db-sync-bt68v" Jan 26 13:18:47 crc kubenswrapper[4844]: I0126 13:18:47.549698 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/188e9259-51a6-4775-a1a5-ccf2f736513c-dns-swift-storage-0\") pod \"dnsmasq-dns-5c7c497879-k82c9\" (UID: \"188e9259-51a6-4775-a1a5-ccf2f736513c\") " pod="openstack/dnsmasq-dns-5c7c497879-k82c9" Jan 26 13:18:47 crc kubenswrapper[4844]: I0126 13:18:47.549734 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/188e9259-51a6-4775-a1a5-ccf2f736513c-ovsdbserver-nb\") pod \"dnsmasq-dns-5c7c497879-k82c9\" 
(UID: \"188e9259-51a6-4775-a1a5-ccf2f736513c\") " pod="openstack/dnsmasq-dns-5c7c497879-k82c9" Jan 26 13:18:47 crc kubenswrapper[4844]: I0126 13:18:47.549780 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-659hs\" (UniqueName: \"kubernetes.io/projected/188e9259-51a6-4775-a1a5-ccf2f736513c-kube-api-access-659hs\") pod \"dnsmasq-dns-5c7c497879-k82c9\" (UID: \"188e9259-51a6-4775-a1a5-ccf2f736513c\") " pod="openstack/dnsmasq-dns-5c7c497879-k82c9" Jan 26 13:18:47 crc kubenswrapper[4844]: I0126 13:18:47.549798 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rppll\" (UniqueName: \"kubernetes.io/projected/847c2c6b-16a5-4c1d-9122-81accf513fb4-kube-api-access-rppll\") pod \"placement-db-sync-bt68v\" (UID: \"847c2c6b-16a5-4c1d-9122-81accf513fb4\") " pod="openstack/placement-db-sync-bt68v" Jan 26 13:18:47 crc kubenswrapper[4844]: I0126 13:18:47.549820 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/188e9259-51a6-4775-a1a5-ccf2f736513c-dns-svc\") pod \"dnsmasq-dns-5c7c497879-k82c9\" (UID: \"188e9259-51a6-4775-a1a5-ccf2f736513c\") " pod="openstack/dnsmasq-dns-5c7c497879-k82c9" Jan 26 13:18:47 crc kubenswrapper[4844]: I0126 13:18:47.549847 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/188e9259-51a6-4775-a1a5-ccf2f736513c-config\") pod \"dnsmasq-dns-5c7c497879-k82c9\" (UID: \"188e9259-51a6-4775-a1a5-ccf2f736513c\") " pod="openstack/dnsmasq-dns-5c7c497879-k82c9" Jan 26 13:18:47 crc kubenswrapper[4844]: I0126 13:18:47.550641 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/188e9259-51a6-4775-a1a5-ccf2f736513c-config\") pod \"dnsmasq-dns-5c7c497879-k82c9\" (UID: \"188e9259-51a6-4775-a1a5-ccf2f736513c\") " pod="openstack/dnsmasq-dns-5c7c497879-k82c9" Jan 26 13:18:47 crc kubenswrapper[4844]: I0126 13:18:47.552052 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/847c2c6b-16a5-4c1d-9122-81accf513fb4-logs\") pod \"placement-db-sync-bt68v\" (UID: \"847c2c6b-16a5-4c1d-9122-81accf513fb4\") " pod="openstack/placement-db-sync-bt68v" Jan 26 13:18:47 crc kubenswrapper[4844]: I0126 13:18:47.552908 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/188e9259-51a6-4775-a1a5-ccf2f736513c-ovsdbserver-sb\") pod \"dnsmasq-dns-5c7c497879-k82c9\" (UID: \"188e9259-51a6-4775-a1a5-ccf2f736513c\") " pod="openstack/dnsmasq-dns-5c7c497879-k82c9" Jan 26 13:18:47 crc kubenswrapper[4844]: I0126 13:18:47.553807 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/188e9259-51a6-4775-a1a5-ccf2f736513c-ovsdbserver-nb\") pod \"dnsmasq-dns-5c7c497879-k82c9\" (UID: \"188e9259-51a6-4775-a1a5-ccf2f736513c\") " pod="openstack/dnsmasq-dns-5c7c497879-k82c9" Jan 26 13:18:47 crc kubenswrapper[4844]: I0126 13:18:47.555904 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/188e9259-51a6-4775-a1a5-ccf2f736513c-dns-svc\") pod \"dnsmasq-dns-5c7c497879-k82c9\" (UID: \"188e9259-51a6-4775-a1a5-ccf2f736513c\") " pod="openstack/dnsmasq-dns-5c7c497879-k82c9" Jan 26 13:18:47 crc kubenswrapper[4844]: I0126 13:18:47.556869 4844 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/847c2c6b-16a5-4c1d-9122-81accf513fb4-config-data\") pod \"placement-db-sync-bt68v\" (UID: \"847c2c6b-16a5-4c1d-9122-81accf513fb4\") " pod="openstack/placement-db-sync-bt68v" Jan 26 13:18:47 crc kubenswrapper[4844]: I0126 13:18:47.556937 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/188e9259-51a6-4775-a1a5-ccf2f736513c-dns-swift-storage-0\") pod \"dnsmasq-dns-5c7c497879-k82c9\" (UID: \"188e9259-51a6-4775-a1a5-ccf2f736513c\") " pod="openstack/dnsmasq-dns-5c7c497879-k82c9" Jan 26 13:18:47 crc kubenswrapper[4844]: I0126 13:18:47.564755 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/847c2c6b-16a5-4c1d-9122-81accf513fb4-scripts\") pod \"placement-db-sync-bt68v\" (UID: \"847c2c6b-16a5-4c1d-9122-81accf513fb4\") " pod="openstack/placement-db-sync-bt68v" Jan 26 13:18:47 crc kubenswrapper[4844]: I0126 13:18:47.564911 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/847c2c6b-16a5-4c1d-9122-81accf513fb4-combined-ca-bundle\") pod \"placement-db-sync-bt68v\" (UID: \"847c2c6b-16a5-4c1d-9122-81accf513fb4\") " pod="openstack/placement-db-sync-bt68v" Jan 26 13:18:47 crc kubenswrapper[4844]: I0126 13:18:47.569231 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-659hs\" (UniqueName: \"kubernetes.io/projected/188e9259-51a6-4775-a1a5-ccf2f736513c-kube-api-access-659hs\") pod \"dnsmasq-dns-5c7c497879-k82c9\" (UID: \"188e9259-51a6-4775-a1a5-ccf2f736513c\") " pod="openstack/dnsmasq-dns-5c7c497879-k82c9" Jan 26 13:18:47 crc kubenswrapper[4844]: I0126 13:18:47.575570 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rppll\" (UniqueName: \"kubernetes.io/projected/847c2c6b-16a5-4c1d-9122-81accf513fb4-kube-api-access-rppll\") pod \"placement-db-sync-bt68v\" (UID: \"847c2c6b-16a5-4c1d-9122-81accf513fb4\") " pod="openstack/placement-db-sync-bt68v" Jan 26 13:18:47 crc kubenswrapper[4844]: I0126 13:18:47.641942 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58c6955b5f-f26sc"] Jan 26 13:18:47 crc kubenswrapper[4844]: I0126 13:18:47.705582 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c7c497879-k82c9" Jan 26 13:18:47 crc kubenswrapper[4844]: I0126 13:18:47.717839 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-bt68v" Jan 26 13:18:47 crc kubenswrapper[4844]: I0126 13:18:47.777130 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-7j972"] Jan 26 13:18:47 crc kubenswrapper[4844]: I0126 13:18:47.910248 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7969695f59-rzz64"] Jan 26 13:18:47 crc kubenswrapper[4844]: I0126 13:18:47.949960 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-2xnzf"] Jan 26 13:18:47 crc kubenswrapper[4844]: W0126 13:18:47.958513 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podad438e4d_9282_48b8_88c1_1f974bb26b5e.slice/crio-7aabdc5d49ef87406650e65bcacb949345daafa854c88fa8e3e3622a43829aa8 WatchSource:0}: Error finding container 7aabdc5d49ef87406650e65bcacb949345daafa854c88fa8e3e3622a43829aa8: Status 404 returned error can't find the container with id 7aabdc5d49ef87406650e65bcacb949345daafa854c88fa8e3e3622a43829aa8 Jan 26 13:18:47 crc kubenswrapper[4844]: I0126 13:18:47.963252 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 13:18:47 crc kubenswrapper[4844]: W0126 13:18:47.968361 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43fe5130_0714_4f40_9d6a_9384eb72fa0a.slice/crio-17d0e08fa3d49eb72b7bb19d2d9180f46f5752d37fdfe0559f596dc57f039192 WatchSource:0}: Error finding container 17d0e08fa3d49eb72b7bb19d2d9180f46f5752d37fdfe0559f596dc57f039192: Status 404 returned error can't find the container with id 17d0e08fa3d49eb72b7bb19d2d9180f46f5752d37fdfe0559f596dc57f039192 Jan 26 13:18:47 crc kubenswrapper[4844]: I0126 13:18:47.978040 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-dcfgm"] Jan 26 13:18:48 crc kubenswrapper[4844]: I0126 13:18:48.067233 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-q74n8"] Jan 26 13:18:48 crc kubenswrapper[4844]: I0126 13:18:48.089919 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6775fbb8bf-p89r6"] Jan 26 13:18:48 crc kubenswrapper[4844]: W0126 13:18:48.112686 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbe9958f1_c7db_4c90_9f58_7dee7e86e728.slice/crio-2e5d5a38f9514185a659ea55fd2065500346f4aed01a27c51c919cd93e76608b WatchSource:0}: Error finding container 2e5d5a38f9514185a659ea55fd2065500346f4aed01a27c51c919cd93e76608b: Status 404 returned error can't find the container with id 2e5d5a38f9514185a659ea55fd2065500346f4aed01a27c51c919cd93e76608b Jan 26 13:18:48 crc kubenswrapper[4844]: I0126 13:18:48.230136 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-bt68v"] Jan 26 13:18:48 crc kubenswrapper[4844]: I0126 13:18:48.352157 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c7c497879-k82c9"] Jan 26 13:18:48 crc kubenswrapper[4844]: W0126 13:18:48.394483 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod188e9259_51a6_4775_a1a5_ccf2f736513c.slice/crio-c2e6ac6ed2a15df6482bed47e0194e18748e40725ed08e4d7662a28b16bcb4cb WatchSource:0}: Error finding container c2e6ac6ed2a15df6482bed47e0194e18748e40725ed08e4d7662a28b16bcb4cb: Status 404 returned error 
can't find the container with id c2e6ac6ed2a15df6482bed47e0194e18748e40725ed08e4d7662a28b16bcb4cb Jan 26 13:18:48 crc kubenswrapper[4844]: I0126 13:18:48.535403 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-bt68v" event={"ID":"847c2c6b-16a5-4c1d-9122-81accf513fb4","Type":"ContainerStarted","Data":"5f7f3e5c6f941c49552d4bc5d5794b79e5b879305f9a61b25d70cf5ebffcd088"} Jan 26 13:18:48 crc kubenswrapper[4844]: I0126 13:18:48.537328 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-2xnzf" event={"ID":"43fe5130-0714-4f40-9d6a-9384eb72fa0a","Type":"ContainerStarted","Data":"17d0e08fa3d49eb72b7bb19d2d9180f46f5752d37fdfe0559f596dc57f039192"} Jan 26 13:18:48 crc kubenswrapper[4844]: I0126 13:18:48.539860 4844 generic.go:334] "Generic (PLEG): container finished" podID="d18e836f-e7f3-4fb2-b0a7-9b4811172675" containerID="549fa7f254ecf4b70a47154e7745caecd55204a7d2e813cb8ed002b273dec5eb" exitCode=0 Jan 26 13:18:48 crc kubenswrapper[4844]: I0126 13:18:48.539920 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58c6955b5f-f26sc" event={"ID":"d18e836f-e7f3-4fb2-b0a7-9b4811172675","Type":"ContainerDied","Data":"549fa7f254ecf4b70a47154e7745caecd55204a7d2e813cb8ed002b273dec5eb"} Jan 26 13:18:48 crc kubenswrapper[4844]: I0126 13:18:48.539936 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58c6955b5f-f26sc" event={"ID":"d18e836f-e7f3-4fb2-b0a7-9b4811172675","Type":"ContainerStarted","Data":"7b656a799ac8314f5c2f4c7b689e7267669963a9fca01a2152aa50d408539318"} Jan 26 13:18:48 crc kubenswrapper[4844]: I0126 13:18:48.542210 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c7c497879-k82c9" event={"ID":"188e9259-51a6-4775-a1a5-ccf2f736513c","Type":"ContainerStarted","Data":"c2e6ac6ed2a15df6482bed47e0194e18748e40725ed08e4d7662a28b16bcb4cb"} Jan 26 13:18:48 crc kubenswrapper[4844]: I0126 13:18:48.544049 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-7j972" event={"ID":"cbd86931-9c64-42e8-911a-f0a8044098c4","Type":"ContainerStarted","Data":"2e9a84ce2b53137dcc0b605e1c8934f3ec81c8d1af469de9901b79a1914dbeb8"} Jan 26 13:18:48 crc kubenswrapper[4844]: I0126 13:18:48.544101 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-7j972" event={"ID":"cbd86931-9c64-42e8-911a-f0a8044098c4","Type":"ContainerStarted","Data":"b51ef72111288b2931958657b96e8f5164e3df4f2535eb6aa293108deb84e3f3"} Jan 26 13:18:48 crc kubenswrapper[4844]: I0126 13:18:48.572580 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6775fbb8bf-p89r6" event={"ID":"be9958f1-c7db-4c90-9f58-7dee7e86e728","Type":"ContainerStarted","Data":"2e5d5a38f9514185a659ea55fd2065500346f4aed01a27c51c919cd93e76608b"} Jan 26 13:18:48 crc kubenswrapper[4844]: I0126 13:18:48.577371 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-q74n8" event={"ID":"4bdef7de-9499-45b9-b41e-a59882aa4423","Type":"ContainerStarted","Data":"e46349bcce0b54334384e3d03bad2749ab306c1b6ca6446909a73481cb61b1fe"} Jan 26 13:18:48 crc kubenswrapper[4844]: I0126 13:18:48.577425 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-q74n8" event={"ID":"4bdef7de-9499-45b9-b41e-a59882aa4423","Type":"ContainerStarted","Data":"9c6dc9b7f0467e4777f9265925fd4cfabe838b26c5879cc8d43ae8f0a5d4a2ac"} Jan 26 13:18:48 crc kubenswrapper[4844]: I0126 13:18:48.581208 4844 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-dcfgm" event={"ID":"5f82260f-cde4-4197-8718-d7adebadeddb","Type":"ContainerStarted","Data":"4295f350c902c1c377adffaff456a405eaa6f667d3beb33a96f63233a55ae5d6"} Jan 26 13:18:48 crc kubenswrapper[4844]: I0126 13:18:48.584736 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7969695f59-rzz64" event={"ID":"1979816f-0e1c-427a-b6aa-97b147a4c622","Type":"ContainerStarted","Data":"7cdfffd9d1334cf918aa5aa012b23c0848139fe82c289cd9481197f9ab151f3f"} Jan 26 13:18:48 crc kubenswrapper[4844]: I0126 13:18:48.586373 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ad438e4d-9282-48b8-88c1-1f974bb26b5e","Type":"ContainerStarted","Data":"7aabdc5d49ef87406650e65bcacb949345daafa854c88fa8e3e3622a43829aa8"} Jan 26 13:18:48 crc kubenswrapper[4844]: I0126 13:18:48.599971 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-7j972" podStartSLOduration=2.599950121 podStartE2EDuration="2.599950121s" podCreationTimestamp="2026-01-26 13:18:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 13:18:48.593538457 +0000 UTC m=+2105.526906069" watchObservedRunningTime="2026-01-26 13:18:48.599950121 +0000 UTC m=+2105.533317733" Jan 26 13:18:48 crc kubenswrapper[4844]: I0126 13:18:48.614418 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-q74n8" podStartSLOduration=2.614401041 podStartE2EDuration="2.614401041s" podCreationTimestamp="2026-01-26 13:18:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 13:18:48.611695656 +0000 UTC m=+2105.545063278" watchObservedRunningTime="2026-01-26 13:18:48.614401041 +0000 UTC m=+2105.547768653" Jan 26 13:18:48 crc kubenswrapper[4844]: I0126 13:18:48.889881 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6775fbb8bf-p89r6"] Jan 26 13:18:48 crc kubenswrapper[4844]: I0126 13:18:48.927465 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-558494644f-wks4g"] Jan 26 13:18:48 crc kubenswrapper[4844]: I0126 13:18:48.929144 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-558494644f-wks4g" Jan 26 13:18:48 crc kubenswrapper[4844]: I0126 13:18:48.943157 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-558494644f-wks4g"] Jan 26 13:18:48 crc kubenswrapper[4844]: I0126 13:18:48.964059 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 13:18:49 crc kubenswrapper[4844]: I0126 13:18:49.008187 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58c6955b5f-f26sc" Jan 26 13:18:49 crc kubenswrapper[4844]: I0126 13:18:49.090077 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8cd1a1b9-d504-45dc-a3d1-5b1c3bed85a4-logs\") pod \"horizon-558494644f-wks4g\" (UID: \"8cd1a1b9-d504-45dc-a3d1-5b1c3bed85a4\") " pod="openstack/horizon-558494644f-wks4g" Jan 26 13:18:49 crc kubenswrapper[4844]: I0126 13:18:49.090239 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8cd1a1b9-d504-45dc-a3d1-5b1c3bed85a4-horizon-secret-key\") pod \"horizon-558494644f-wks4g\" (UID: \"8cd1a1b9-d504-45dc-a3d1-5b1c3bed85a4\") " pod="openstack/horizon-558494644f-wks4g" Jan 26 13:18:49 crc kubenswrapper[4844]: I0126 13:18:49.091681 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8cd1a1b9-d504-45dc-a3d1-5b1c3bed85a4-config-data\") pod \"horizon-558494644f-wks4g\" (UID: \"8cd1a1b9-d504-45dc-a3d1-5b1c3bed85a4\") " pod="openstack/horizon-558494644f-wks4g" Jan 26 13:18:49 crc kubenswrapper[4844]: I0126 13:18:49.091824 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f66g9\" (UniqueName: \"kubernetes.io/projected/8cd1a1b9-d504-45dc-a3d1-5b1c3bed85a4-kube-api-access-f66g9\") pod \"horizon-558494644f-wks4g\" (UID: \"8cd1a1b9-d504-45dc-a3d1-5b1c3bed85a4\") " pod="openstack/horizon-558494644f-wks4g" Jan 26 13:18:49 crc kubenswrapper[4844]: I0126 13:18:49.091896 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8cd1a1b9-d504-45dc-a3d1-5b1c3bed85a4-scripts\") pod \"horizon-558494644f-wks4g\" (UID: \"8cd1a1b9-d504-45dc-a3d1-5b1c3bed85a4\") " pod="openstack/horizon-558494644f-wks4g" Jan 26 13:18:49 crc kubenswrapper[4844]: I0126 13:18:49.193192 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d18e836f-e7f3-4fb2-b0a7-9b4811172675-ovsdbserver-nb\") pod \"d18e836f-e7f3-4fb2-b0a7-9b4811172675\" (UID: \"d18e836f-e7f3-4fb2-b0a7-9b4811172675\") " Jan 26 13:18:49 crc kubenswrapper[4844]: I0126 13:18:49.193242 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d18e836f-e7f3-4fb2-b0a7-9b4811172675-config\") pod \"d18e836f-e7f3-4fb2-b0a7-9b4811172675\" (UID: \"d18e836f-e7f3-4fb2-b0a7-9b4811172675\") " Jan 26 13:18:49 crc kubenswrapper[4844]: I0126 13:18:49.193267 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fxnh5\" (UniqueName: \"kubernetes.io/projected/d18e836f-e7f3-4fb2-b0a7-9b4811172675-kube-api-access-fxnh5\") pod \"d18e836f-e7f3-4fb2-b0a7-9b4811172675\" (UID: \"d18e836f-e7f3-4fb2-b0a7-9b4811172675\") " Jan 26 13:18:49 crc kubenswrapper[4844]: I0126 13:18:49.193295 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d18e836f-e7f3-4fb2-b0a7-9b4811172675-dns-svc\") pod \"d18e836f-e7f3-4fb2-b0a7-9b4811172675\" (UID: \"d18e836f-e7f3-4fb2-b0a7-9b4811172675\") " Jan 26 13:18:49 crc kubenswrapper[4844]: I0126 13:18:49.193378 4844 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d18e836f-e7f3-4fb2-b0a7-9b4811172675-ovsdbserver-sb\") pod \"d18e836f-e7f3-4fb2-b0a7-9b4811172675\" (UID: \"d18e836f-e7f3-4fb2-b0a7-9b4811172675\") " Jan 26 13:18:49 crc kubenswrapper[4844]: I0126 13:18:49.193422 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d18e836f-e7f3-4fb2-b0a7-9b4811172675-dns-swift-storage-0\") pod \"d18e836f-e7f3-4fb2-b0a7-9b4811172675\" (UID: \"d18e836f-e7f3-4fb2-b0a7-9b4811172675\") " Jan 26 13:18:49 crc kubenswrapper[4844]: I0126 13:18:49.194234 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8cd1a1b9-d504-45dc-a3d1-5b1c3bed85a4-logs\") pod \"horizon-558494644f-wks4g\" (UID: \"8cd1a1b9-d504-45dc-a3d1-5b1c3bed85a4\") " pod="openstack/horizon-558494644f-wks4g" Jan 26 13:18:49 crc kubenswrapper[4844]: I0126 13:18:49.194338 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8cd1a1b9-d504-45dc-a3d1-5b1c3bed85a4-horizon-secret-key\") pod \"horizon-558494644f-wks4g\" (UID: \"8cd1a1b9-d504-45dc-a3d1-5b1c3bed85a4\") " pod="openstack/horizon-558494644f-wks4g" Jan 26 13:18:49 crc kubenswrapper[4844]: I0126 13:18:49.194393 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8cd1a1b9-d504-45dc-a3d1-5b1c3bed85a4-config-data\") pod \"horizon-558494644f-wks4g\" (UID: \"8cd1a1b9-d504-45dc-a3d1-5b1c3bed85a4\") " pod="openstack/horizon-558494644f-wks4g" Jan 26 13:18:49 crc kubenswrapper[4844]: I0126 13:18:49.194432 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f66g9\" (UniqueName: \"kubernetes.io/projected/8cd1a1b9-d504-45dc-a3d1-5b1c3bed85a4-kube-api-access-f66g9\") pod \"horizon-558494644f-wks4g\" (UID: \"8cd1a1b9-d504-45dc-a3d1-5b1c3bed85a4\") " pod="openstack/horizon-558494644f-wks4g" Jan 26 13:18:49 crc kubenswrapper[4844]: I0126 13:18:49.194459 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8cd1a1b9-d504-45dc-a3d1-5b1c3bed85a4-scripts\") pod \"horizon-558494644f-wks4g\" (UID: \"8cd1a1b9-d504-45dc-a3d1-5b1c3bed85a4\") " pod="openstack/horizon-558494644f-wks4g" Jan 26 13:18:49 crc kubenswrapper[4844]: I0126 13:18:49.194885 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8cd1a1b9-d504-45dc-a3d1-5b1c3bed85a4-logs\") pod \"horizon-558494644f-wks4g\" (UID: \"8cd1a1b9-d504-45dc-a3d1-5b1c3bed85a4\") " pod="openstack/horizon-558494644f-wks4g" Jan 26 13:18:49 crc kubenswrapper[4844]: I0126 13:18:49.195356 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8cd1a1b9-d504-45dc-a3d1-5b1c3bed85a4-scripts\") pod \"horizon-558494644f-wks4g\" (UID: \"8cd1a1b9-d504-45dc-a3d1-5b1c3bed85a4\") " pod="openstack/horizon-558494644f-wks4g" Jan 26 13:18:49 crc kubenswrapper[4844]: I0126 13:18:49.198177 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8cd1a1b9-d504-45dc-a3d1-5b1c3bed85a4-config-data\") pod \"horizon-558494644f-wks4g\" (UID: \"8cd1a1b9-d504-45dc-a3d1-5b1c3bed85a4\") " 
pod="openstack/horizon-558494644f-wks4g" Jan 26 13:18:49 crc kubenswrapper[4844]: I0126 13:18:49.205429 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d18e836f-e7f3-4fb2-b0a7-9b4811172675-kube-api-access-fxnh5" (OuterVolumeSpecName: "kube-api-access-fxnh5") pod "d18e836f-e7f3-4fb2-b0a7-9b4811172675" (UID: "d18e836f-e7f3-4fb2-b0a7-9b4811172675"). InnerVolumeSpecName "kube-api-access-fxnh5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:18:49 crc kubenswrapper[4844]: I0126 13:18:49.216699 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f66g9\" (UniqueName: \"kubernetes.io/projected/8cd1a1b9-d504-45dc-a3d1-5b1c3bed85a4-kube-api-access-f66g9\") pod \"horizon-558494644f-wks4g\" (UID: \"8cd1a1b9-d504-45dc-a3d1-5b1c3bed85a4\") " pod="openstack/horizon-558494644f-wks4g" Jan 26 13:18:49 crc kubenswrapper[4844]: I0126 13:18:49.220391 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8cd1a1b9-d504-45dc-a3d1-5b1c3bed85a4-horizon-secret-key\") pod \"horizon-558494644f-wks4g\" (UID: \"8cd1a1b9-d504-45dc-a3d1-5b1c3bed85a4\") " pod="openstack/horizon-558494644f-wks4g" Jan 26 13:18:49 crc kubenswrapper[4844]: I0126 13:18:49.227106 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d18e836f-e7f3-4fb2-b0a7-9b4811172675-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d18e836f-e7f3-4fb2-b0a7-9b4811172675" (UID: "d18e836f-e7f3-4fb2-b0a7-9b4811172675"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:18:49 crc kubenswrapper[4844]: I0126 13:18:49.232161 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d18e836f-e7f3-4fb2-b0a7-9b4811172675-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d18e836f-e7f3-4fb2-b0a7-9b4811172675" (UID: "d18e836f-e7f3-4fb2-b0a7-9b4811172675"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:18:49 crc kubenswrapper[4844]: I0126 13:18:49.235872 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d18e836f-e7f3-4fb2-b0a7-9b4811172675-config" (OuterVolumeSpecName: "config") pod "d18e836f-e7f3-4fb2-b0a7-9b4811172675" (UID: "d18e836f-e7f3-4fb2-b0a7-9b4811172675"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:18:49 crc kubenswrapper[4844]: I0126 13:18:49.239110 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d18e836f-e7f3-4fb2-b0a7-9b4811172675-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d18e836f-e7f3-4fb2-b0a7-9b4811172675" (UID: "d18e836f-e7f3-4fb2-b0a7-9b4811172675"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:18:49 crc kubenswrapper[4844]: I0126 13:18:49.247395 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d18e836f-e7f3-4fb2-b0a7-9b4811172675-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d18e836f-e7f3-4fb2-b0a7-9b4811172675" (UID: "d18e836f-e7f3-4fb2-b0a7-9b4811172675"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:18:49 crc kubenswrapper[4844]: I0126 13:18:49.291287 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-558494644f-wks4g" Jan 26 13:18:49 crc kubenswrapper[4844]: I0126 13:18:49.295981 4844 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d18e836f-e7f3-4fb2-b0a7-9b4811172675-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 13:18:49 crc kubenswrapper[4844]: I0126 13:18:49.296001 4844 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d18e836f-e7f3-4fb2-b0a7-9b4811172675-config\") on node \"crc\" DevicePath \"\"" Jan 26 13:18:49 crc kubenswrapper[4844]: I0126 13:18:49.296011 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fxnh5\" (UniqueName: \"kubernetes.io/projected/d18e836f-e7f3-4fb2-b0a7-9b4811172675-kube-api-access-fxnh5\") on node \"crc\" DevicePath \"\"" Jan 26 13:18:49 crc kubenswrapper[4844]: I0126 13:18:49.296021 4844 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d18e836f-e7f3-4fb2-b0a7-9b4811172675-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 13:18:49 crc kubenswrapper[4844]: I0126 13:18:49.296031 4844 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d18e836f-e7f3-4fb2-b0a7-9b4811172675-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 13:18:49 crc kubenswrapper[4844]: I0126 13:18:49.296039 4844 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d18e836f-e7f3-4fb2-b0a7-9b4811172675-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 26 13:18:49 crc kubenswrapper[4844]: I0126 13:18:49.596714 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58c6955b5f-f26sc" event={"ID":"d18e836f-e7f3-4fb2-b0a7-9b4811172675","Type":"ContainerDied","Data":"7b656a799ac8314f5c2f4c7b689e7267669963a9fca01a2152aa50d408539318"} Jan 26 13:18:49 crc kubenswrapper[4844]: I0126 13:18:49.596762 4844 scope.go:117] "RemoveContainer" containerID="549fa7f254ecf4b70a47154e7745caecd55204a7d2e813cb8ed002b273dec5eb" Jan 26 13:18:49 crc kubenswrapper[4844]: I0126 13:18:49.596875 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58c6955b5f-f26sc" Jan 26 13:18:49 crc kubenswrapper[4844]: I0126 13:18:49.601523 4844 generic.go:334] "Generic (PLEG): container finished" podID="188e9259-51a6-4775-a1a5-ccf2f736513c" containerID="af21c2810e4044591f086410d0124cdae8e8a36091592c3abcf685476f14e128" exitCode=0 Jan 26 13:18:49 crc kubenswrapper[4844]: I0126 13:18:49.601625 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c7c497879-k82c9" event={"ID":"188e9259-51a6-4775-a1a5-ccf2f736513c","Type":"ContainerDied","Data":"af21c2810e4044591f086410d0124cdae8e8a36091592c3abcf685476f14e128"} Jan 26 13:18:49 crc kubenswrapper[4844]: I0126 13:18:49.669191 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58c6955b5f-f26sc"] Jan 26 13:18:49 crc kubenswrapper[4844]: I0126 13:18:49.678578 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-58c6955b5f-f26sc"] Jan 26 13:18:49 crc kubenswrapper[4844]: I0126 13:18:49.751857 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-558494644f-wks4g"] Jan 26 13:18:49 crc kubenswrapper[4844]: W0126 13:18:49.843998 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8cd1a1b9_d504_45dc_a3d1_5b1c3bed85a4.slice/crio-24bf12647a19506ff75a37e962e9e6faa13331152b5b622c8f521a1bbe901549 WatchSource:0}: Error finding container 24bf12647a19506ff75a37e962e9e6faa13331152b5b622c8f521a1bbe901549: Status 404 returned error can't find the container with id 24bf12647a19506ff75a37e962e9e6faa13331152b5b622c8f521a1bbe901549 Jan 26 13:18:50 crc kubenswrapper[4844]: I0126 13:18:50.617011 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-558494644f-wks4g" event={"ID":"8cd1a1b9-d504-45dc-a3d1-5b1c3bed85a4","Type":"ContainerStarted","Data":"24bf12647a19506ff75a37e962e9e6faa13331152b5b622c8f521a1bbe901549"} Jan 26 13:18:50 crc kubenswrapper[4844]: I0126 13:18:50.620165 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c7c497879-k82c9" event={"ID":"188e9259-51a6-4775-a1a5-ccf2f736513c","Type":"ContainerStarted","Data":"4d8eab6c984410c439f9b97b7a03a8145b09a746e9069a5e1302b5095013402c"} Jan 26 13:18:51 crc kubenswrapper[4844]: I0126 13:18:51.330009 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d18e836f-e7f3-4fb2-b0a7-9b4811172675" path="/var/lib/kubelet/pods/d18e836f-e7f3-4fb2-b0a7-9b4811172675/volumes" Jan 26 13:18:52 crc kubenswrapper[4844]: I0126 13:18:52.641072 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c7c497879-k82c9" Jan 26 13:18:52 crc kubenswrapper[4844]: I0126 13:18:52.667989 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c7c497879-k82c9" podStartSLOduration=5.667966079 podStartE2EDuration="5.667966079s" podCreationTimestamp="2026-01-26 13:18:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 13:18:52.658469079 +0000 UTC m=+2109.591836701" watchObservedRunningTime="2026-01-26 13:18:52.667966079 +0000 UTC m=+2109.601333731" Jan 26 13:18:55 crc kubenswrapper[4844]: I0126 13:18:55.515100 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7969695f59-rzz64"] Jan 26 13:18:55 crc kubenswrapper[4844]: I0126 13:18:55.571002 4844 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/horizon-f984df9c6-m8lct"] Jan 26 13:18:55 crc kubenswrapper[4844]: E0126 13:18:55.571706 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d18e836f-e7f3-4fb2-b0a7-9b4811172675" containerName="init" Jan 26 13:18:55 crc kubenswrapper[4844]: I0126 13:18:55.571722 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="d18e836f-e7f3-4fb2-b0a7-9b4811172675" containerName="init" Jan 26 13:18:55 crc kubenswrapper[4844]: I0126 13:18:55.571976 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="d18e836f-e7f3-4fb2-b0a7-9b4811172675" containerName="init" Jan 26 13:18:55 crc kubenswrapper[4844]: I0126 13:18:55.573575 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-f984df9c6-m8lct" Jan 26 13:18:55 crc kubenswrapper[4844]: I0126 13:18:55.575364 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Jan 26 13:18:55 crc kubenswrapper[4844]: I0126 13:18:55.593745 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-f984df9c6-m8lct"] Jan 26 13:18:55 crc kubenswrapper[4844]: I0126 13:18:55.608411 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-558494644f-wks4g"] Jan 26 13:18:55 crc kubenswrapper[4844]: I0126 13:18:55.633818 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-77c8bf8786-w82f7"] Jan 26 13:18:55 crc kubenswrapper[4844]: I0126 13:18:55.635375 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-77c8bf8786-w82f7" Jan 26 13:18:55 crc kubenswrapper[4844]: I0126 13:18:55.640897 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-77c8bf8786-w82f7"] Jan 26 13:18:55 crc kubenswrapper[4844]: I0126 13:18:55.645973 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qz8sp\" (UniqueName: \"kubernetes.io/projected/2f336c66-c9c1-4764-8f55-a6fd70f01790-kube-api-access-qz8sp\") pod \"horizon-f984df9c6-m8lct\" (UID: \"2f336c66-c9c1-4764-8f55-a6fd70f01790\") " pod="openstack/horizon-f984df9c6-m8lct" Jan 26 13:18:55 crc kubenswrapper[4844]: I0126 13:18:55.646056 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a0edac82-6db3-481f-8c9e-8826b5aac863-scripts\") pod \"horizon-77c8bf8786-w82f7\" (UID: \"a0edac82-6db3-481f-8c9e-8826b5aac863\") " pod="openstack/horizon-77c8bf8786-w82f7" Jan 26 13:18:55 crc kubenswrapper[4844]: I0126 13:18:55.646162 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0edac82-6db3-481f-8c9e-8826b5aac863-horizon-tls-certs\") pod \"horizon-77c8bf8786-w82f7\" (UID: \"a0edac82-6db3-481f-8c9e-8826b5aac863\") " pod="openstack/horizon-77c8bf8786-w82f7" Jan 26 13:18:55 crc kubenswrapper[4844]: I0126 13:18:55.646195 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f336c66-c9c1-4764-8f55-a6fd70f01790-combined-ca-bundle\") pod \"horizon-f984df9c6-m8lct\" (UID: \"2f336c66-c9c1-4764-8f55-a6fd70f01790\") " pod="openstack/horizon-f984df9c6-m8lct" Jan 26 13:18:55 crc kubenswrapper[4844]: I0126 13:18:55.646235 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/a0edac82-6db3-481f-8c9e-8826b5aac863-logs\") pod \"horizon-77c8bf8786-w82f7\" (UID: \"a0edac82-6db3-481f-8c9e-8826b5aac863\") " pod="openstack/horizon-77c8bf8786-w82f7" Jan 26 13:18:55 crc kubenswrapper[4844]: I0126 13:18:55.646342 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2f336c66-c9c1-4764-8f55-a6fd70f01790-scripts\") pod \"horizon-f984df9c6-m8lct\" (UID: \"2f336c66-c9c1-4764-8f55-a6fd70f01790\") " pod="openstack/horizon-f984df9c6-m8lct" Jan 26 13:18:55 crc kubenswrapper[4844]: I0126 13:18:55.646396 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a0edac82-6db3-481f-8c9e-8826b5aac863-config-data\") pod \"horizon-77c8bf8786-w82f7\" (UID: \"a0edac82-6db3-481f-8c9e-8826b5aac863\") " pod="openstack/horizon-77c8bf8786-w82f7" Jan 26 13:18:55 crc kubenswrapper[4844]: I0126 13:18:55.646419 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a0edac82-6db3-481f-8c9e-8826b5aac863-horizon-secret-key\") pod \"horizon-77c8bf8786-w82f7\" (UID: \"a0edac82-6db3-481f-8c9e-8826b5aac863\") " pod="openstack/horizon-77c8bf8786-w82f7" Jan 26 13:18:55 crc kubenswrapper[4844]: I0126 13:18:55.646481 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xtxl\" (UniqueName: \"kubernetes.io/projected/a0edac82-6db3-481f-8c9e-8826b5aac863-kube-api-access-4xtxl\") pod \"horizon-77c8bf8786-w82f7\" (UID: \"a0edac82-6db3-481f-8c9e-8826b5aac863\") " pod="openstack/horizon-77c8bf8786-w82f7" Jan 26 13:18:55 crc kubenswrapper[4844]: I0126 13:18:55.646616 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2f336c66-c9c1-4764-8f55-a6fd70f01790-logs\") pod \"horizon-f984df9c6-m8lct\" (UID: \"2f336c66-c9c1-4764-8f55-a6fd70f01790\") " pod="openstack/horizon-f984df9c6-m8lct" Jan 26 13:18:55 crc kubenswrapper[4844]: I0126 13:18:55.646669 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f336c66-c9c1-4764-8f55-a6fd70f01790-horizon-tls-certs\") pod \"horizon-f984df9c6-m8lct\" (UID: \"2f336c66-c9c1-4764-8f55-a6fd70f01790\") " pod="openstack/horizon-f984df9c6-m8lct" Jan 26 13:18:55 crc kubenswrapper[4844]: I0126 13:18:55.646753 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0edac82-6db3-481f-8c9e-8826b5aac863-combined-ca-bundle\") pod \"horizon-77c8bf8786-w82f7\" (UID: \"a0edac82-6db3-481f-8c9e-8826b5aac863\") " pod="openstack/horizon-77c8bf8786-w82f7" Jan 26 13:18:55 crc kubenswrapper[4844]: I0126 13:18:55.646803 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2f336c66-c9c1-4764-8f55-a6fd70f01790-horizon-secret-key\") pod \"horizon-f984df9c6-m8lct\" (UID: \"2f336c66-c9c1-4764-8f55-a6fd70f01790\") " pod="openstack/horizon-f984df9c6-m8lct" Jan 26 13:18:55 crc kubenswrapper[4844]: I0126 13:18:55.646821 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/configmap/2f336c66-c9c1-4764-8f55-a6fd70f01790-config-data\") pod \"horizon-f984df9c6-m8lct\" (UID: \"2f336c66-c9c1-4764-8f55-a6fd70f01790\") " pod="openstack/horizon-f984df9c6-m8lct" Jan 26 13:18:55 crc kubenswrapper[4844]: I0126 13:18:55.747398 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2f336c66-c9c1-4764-8f55-a6fd70f01790-logs\") pod \"horizon-f984df9c6-m8lct\" (UID: \"2f336c66-c9c1-4764-8f55-a6fd70f01790\") " pod="openstack/horizon-f984df9c6-m8lct" Jan 26 13:18:55 crc kubenswrapper[4844]: I0126 13:18:55.747444 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f336c66-c9c1-4764-8f55-a6fd70f01790-horizon-tls-certs\") pod \"horizon-f984df9c6-m8lct\" (UID: \"2f336c66-c9c1-4764-8f55-a6fd70f01790\") " pod="openstack/horizon-f984df9c6-m8lct" Jan 26 13:18:55 crc kubenswrapper[4844]: I0126 13:18:55.747476 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0edac82-6db3-481f-8c9e-8826b5aac863-combined-ca-bundle\") pod \"horizon-77c8bf8786-w82f7\" (UID: \"a0edac82-6db3-481f-8c9e-8826b5aac863\") " pod="openstack/horizon-77c8bf8786-w82f7" Jan 26 13:18:55 crc kubenswrapper[4844]: I0126 13:18:55.747498 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2f336c66-c9c1-4764-8f55-a6fd70f01790-horizon-secret-key\") pod \"horizon-f984df9c6-m8lct\" (UID: \"2f336c66-c9c1-4764-8f55-a6fd70f01790\") " pod="openstack/horizon-f984df9c6-m8lct" Jan 26 13:18:55 crc kubenswrapper[4844]: I0126 13:18:55.747512 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2f336c66-c9c1-4764-8f55-a6fd70f01790-config-data\") pod \"horizon-f984df9c6-m8lct\" (UID: \"2f336c66-c9c1-4764-8f55-a6fd70f01790\") " pod="openstack/horizon-f984df9c6-m8lct" Jan 26 13:18:55 crc kubenswrapper[4844]: I0126 13:18:55.747548 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qz8sp\" (UniqueName: \"kubernetes.io/projected/2f336c66-c9c1-4764-8f55-a6fd70f01790-kube-api-access-qz8sp\") pod \"horizon-f984df9c6-m8lct\" (UID: \"2f336c66-c9c1-4764-8f55-a6fd70f01790\") " pod="openstack/horizon-f984df9c6-m8lct" Jan 26 13:18:55 crc kubenswrapper[4844]: I0126 13:18:55.747591 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a0edac82-6db3-481f-8c9e-8826b5aac863-scripts\") pod \"horizon-77c8bf8786-w82f7\" (UID: \"a0edac82-6db3-481f-8c9e-8826b5aac863\") " pod="openstack/horizon-77c8bf8786-w82f7" Jan 26 13:18:55 crc kubenswrapper[4844]: I0126 13:18:55.747630 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0edac82-6db3-481f-8c9e-8826b5aac863-horizon-tls-certs\") pod \"horizon-77c8bf8786-w82f7\" (UID: \"a0edac82-6db3-481f-8c9e-8826b5aac863\") " pod="openstack/horizon-77c8bf8786-w82f7" Jan 26 13:18:55 crc kubenswrapper[4844]: I0126 13:18:55.747646 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f336c66-c9c1-4764-8f55-a6fd70f01790-combined-ca-bundle\") pod \"horizon-f984df9c6-m8lct\" 
(UID: \"2f336c66-c9c1-4764-8f55-a6fd70f01790\") " pod="openstack/horizon-f984df9c6-m8lct" Jan 26 13:18:55 crc kubenswrapper[4844]: I0126 13:18:55.747664 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a0edac82-6db3-481f-8c9e-8826b5aac863-logs\") pod \"horizon-77c8bf8786-w82f7\" (UID: \"a0edac82-6db3-481f-8c9e-8826b5aac863\") " pod="openstack/horizon-77c8bf8786-w82f7" Jan 26 13:18:55 crc kubenswrapper[4844]: I0126 13:18:55.747697 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2f336c66-c9c1-4764-8f55-a6fd70f01790-scripts\") pod \"horizon-f984df9c6-m8lct\" (UID: \"2f336c66-c9c1-4764-8f55-a6fd70f01790\") " pod="openstack/horizon-f984df9c6-m8lct" Jan 26 13:18:55 crc kubenswrapper[4844]: I0126 13:18:55.747721 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a0edac82-6db3-481f-8c9e-8826b5aac863-config-data\") pod \"horizon-77c8bf8786-w82f7\" (UID: \"a0edac82-6db3-481f-8c9e-8826b5aac863\") " pod="openstack/horizon-77c8bf8786-w82f7" Jan 26 13:18:55 crc kubenswrapper[4844]: I0126 13:18:55.747736 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a0edac82-6db3-481f-8c9e-8826b5aac863-horizon-secret-key\") pod \"horizon-77c8bf8786-w82f7\" (UID: \"a0edac82-6db3-481f-8c9e-8826b5aac863\") " pod="openstack/horizon-77c8bf8786-w82f7" Jan 26 13:18:55 crc kubenswrapper[4844]: I0126 13:18:55.747759 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xtxl\" (UniqueName: \"kubernetes.io/projected/a0edac82-6db3-481f-8c9e-8826b5aac863-kube-api-access-4xtxl\") pod \"horizon-77c8bf8786-w82f7\" (UID: \"a0edac82-6db3-481f-8c9e-8826b5aac863\") " pod="openstack/horizon-77c8bf8786-w82f7" Jan 26 13:18:55 crc kubenswrapper[4844]: I0126 13:18:55.749176 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2f336c66-c9c1-4764-8f55-a6fd70f01790-logs\") pod \"horizon-f984df9c6-m8lct\" (UID: \"2f336c66-c9c1-4764-8f55-a6fd70f01790\") " pod="openstack/horizon-f984df9c6-m8lct" Jan 26 13:18:55 crc kubenswrapper[4844]: I0126 13:18:55.749280 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2f336c66-c9c1-4764-8f55-a6fd70f01790-scripts\") pod \"horizon-f984df9c6-m8lct\" (UID: \"2f336c66-c9c1-4764-8f55-a6fd70f01790\") " pod="openstack/horizon-f984df9c6-m8lct" Jan 26 13:18:55 crc kubenswrapper[4844]: I0126 13:18:55.749502 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a0edac82-6db3-481f-8c9e-8826b5aac863-config-data\") pod \"horizon-77c8bf8786-w82f7\" (UID: \"a0edac82-6db3-481f-8c9e-8826b5aac863\") " pod="openstack/horizon-77c8bf8786-w82f7" Jan 26 13:18:55 crc kubenswrapper[4844]: I0126 13:18:55.749912 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a0edac82-6db3-481f-8c9e-8826b5aac863-scripts\") pod \"horizon-77c8bf8786-w82f7\" (UID: \"a0edac82-6db3-481f-8c9e-8826b5aac863\") " pod="openstack/horizon-77c8bf8786-w82f7" Jan 26 13:18:55 crc kubenswrapper[4844]: I0126 13:18:55.750240 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/configmap/2f336c66-c9c1-4764-8f55-a6fd70f01790-config-data\") pod \"horizon-f984df9c6-m8lct\" (UID: \"2f336c66-c9c1-4764-8f55-a6fd70f01790\") " pod="openstack/horizon-f984df9c6-m8lct" Jan 26 13:18:55 crc kubenswrapper[4844]: I0126 13:18:55.752751 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a0edac82-6db3-481f-8c9e-8826b5aac863-logs\") pod \"horizon-77c8bf8786-w82f7\" (UID: \"a0edac82-6db3-481f-8c9e-8826b5aac863\") " pod="openstack/horizon-77c8bf8786-w82f7" Jan 26 13:18:55 crc kubenswrapper[4844]: I0126 13:18:55.754695 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a0edac82-6db3-481f-8c9e-8826b5aac863-horizon-secret-key\") pod \"horizon-77c8bf8786-w82f7\" (UID: \"a0edac82-6db3-481f-8c9e-8826b5aac863\") " pod="openstack/horizon-77c8bf8786-w82f7" Jan 26 13:18:55 crc kubenswrapper[4844]: I0126 13:18:55.754893 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f336c66-c9c1-4764-8f55-a6fd70f01790-horizon-tls-certs\") pod \"horizon-f984df9c6-m8lct\" (UID: \"2f336c66-c9c1-4764-8f55-a6fd70f01790\") " pod="openstack/horizon-f984df9c6-m8lct" Jan 26 13:18:55 crc kubenswrapper[4844]: I0126 13:18:55.755125 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0edac82-6db3-481f-8c9e-8826b5aac863-horizon-tls-certs\") pod \"horizon-77c8bf8786-w82f7\" (UID: \"a0edac82-6db3-481f-8c9e-8826b5aac863\") " pod="openstack/horizon-77c8bf8786-w82f7" Jan 26 13:18:55 crc kubenswrapper[4844]: I0126 13:18:55.755152 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0edac82-6db3-481f-8c9e-8826b5aac863-combined-ca-bundle\") pod \"horizon-77c8bf8786-w82f7\" (UID: \"a0edac82-6db3-481f-8c9e-8826b5aac863\") " pod="openstack/horizon-77c8bf8786-w82f7" Jan 26 13:18:55 crc kubenswrapper[4844]: I0126 13:18:55.769724 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2f336c66-c9c1-4764-8f55-a6fd70f01790-horizon-secret-key\") pod \"horizon-f984df9c6-m8lct\" (UID: \"2f336c66-c9c1-4764-8f55-a6fd70f01790\") " pod="openstack/horizon-f984df9c6-m8lct" Jan 26 13:18:55 crc kubenswrapper[4844]: I0126 13:18:55.771319 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f336c66-c9c1-4764-8f55-a6fd70f01790-combined-ca-bundle\") pod \"horizon-f984df9c6-m8lct\" (UID: \"2f336c66-c9c1-4764-8f55-a6fd70f01790\") " pod="openstack/horizon-f984df9c6-m8lct" Jan 26 13:18:55 crc kubenswrapper[4844]: I0126 13:18:55.774579 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xtxl\" (UniqueName: \"kubernetes.io/projected/a0edac82-6db3-481f-8c9e-8826b5aac863-kube-api-access-4xtxl\") pod \"horizon-77c8bf8786-w82f7\" (UID: \"a0edac82-6db3-481f-8c9e-8826b5aac863\") " pod="openstack/horizon-77c8bf8786-w82f7" Jan 26 13:18:55 crc kubenswrapper[4844]: I0126 13:18:55.774994 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qz8sp\" (UniqueName: \"kubernetes.io/projected/2f336c66-c9c1-4764-8f55-a6fd70f01790-kube-api-access-qz8sp\") pod \"horizon-f984df9c6-m8lct\" (UID: 
\"2f336c66-c9c1-4764-8f55-a6fd70f01790\") " pod="openstack/horizon-f984df9c6-m8lct" Jan 26 13:18:55 crc kubenswrapper[4844]: I0126 13:18:55.912070 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-f984df9c6-m8lct" Jan 26 13:18:55 crc kubenswrapper[4844]: I0126 13:18:55.966114 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-77c8bf8786-w82f7" Jan 26 13:18:57 crc kubenswrapper[4844]: I0126 13:18:57.709800 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c7c497879-k82c9" Jan 26 13:18:57 crc kubenswrapper[4844]: I0126 13:18:57.783504 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59b54f4c7-pjjl5"] Jan 26 13:18:57 crc kubenswrapper[4844]: I0126 13:18:57.783802 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-59b54f4c7-pjjl5" podUID="077f8c48-ae97-4f5d-89db-1ed90de5e904" containerName="dnsmasq-dns" containerID="cri-o://d0131719df27005692e12b5c8786405ee2c17dc6fbe73ffc93404d227ca982ae" gracePeriod=10 Jan 26 13:18:58 crc kubenswrapper[4844]: I0126 13:18:58.717373 4844 generic.go:334] "Generic (PLEG): container finished" podID="077f8c48-ae97-4f5d-89db-1ed90de5e904" containerID="d0131719df27005692e12b5c8786405ee2c17dc6fbe73ffc93404d227ca982ae" exitCode=0 Jan 26 13:18:58 crc kubenswrapper[4844]: I0126 13:18:58.717438 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59b54f4c7-pjjl5" event={"ID":"077f8c48-ae97-4f5d-89db-1ed90de5e904","Type":"ContainerDied","Data":"d0131719df27005692e12b5c8786405ee2c17dc6fbe73ffc93404d227ca982ae"} Jan 26 13:19:05 crc kubenswrapper[4844]: I0126 13:19:05.449124 4844 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-59b54f4c7-pjjl5" podUID="077f8c48-ae97-4f5d-89db-1ed90de5e904" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.135:5353: i/o timeout" Jan 26 13:19:10 crc kubenswrapper[4844]: I0126 13:19:10.450882 4844 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-59b54f4c7-pjjl5" podUID="077f8c48-ae97-4f5d-89db-1ed90de5e904" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.135:5353: i/o timeout" Jan 26 13:19:15 crc kubenswrapper[4844]: I0126 13:19:15.452518 4844 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-59b54f4c7-pjjl5" podUID="077f8c48-ae97-4f5d-89db-1ed90de5e904" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.135:5353: i/o timeout" Jan 26 13:19:15 crc kubenswrapper[4844]: I0126 13:19:15.454336 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-59b54f4c7-pjjl5" Jan 26 13:19:15 crc kubenswrapper[4844]: E0126 13:19:15.988138 4844 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.9:5001/podified-master-centos10/openstack-horizon:watcher_latest" Jan 26 13:19:15 crc kubenswrapper[4844]: E0126 13:19:15.988174 4844 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.9:5001/podified-master-centos10/openstack-horizon:watcher_latest" Jan 26 13:19:15 crc kubenswrapper[4844]: E0126 13:19:15.988323 4844 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:horizon-log,Image:38.102.83.9:5001/podified-master-centos10/openstack-horizon:watcher_latest,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nc9h89h688h5fbh569h658h664h668h5ffh6h549h668h5c6h75h68dh68bh55ch7ch64ch66ch64bh5d5hbdh5bch66h55h76h64h74h5b4h564h64dq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:yes,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kdkf2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-6775fbb8bf-p89r6_openstack(be9958f1-c7db-4c90-9f58-7dee7e86e728): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 13:19:16 crc kubenswrapper[4844]: E0126 13:19:16.004456 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.9:5001/podified-master-centos10/openstack-horizon:watcher_latest\\\"\"]" pod="openstack/horizon-6775fbb8bf-p89r6" podUID="be9958f1-c7db-4c90-9f58-7dee7e86e728" Jan 26 13:19:16 crc kubenswrapper[4844]: E0126 13:19:16.006951 4844 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.9:5001/podified-master-centos10/openstack-horizon:watcher_latest" Jan 26 13:19:16 crc kubenswrapper[4844]: E0126 13:19:16.006996 4844 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.9:5001/podified-master-centos10/openstack-horizon:watcher_latest" Jan 26 13:19:16 crc kubenswrapper[4844]: E0126 13:19:16.007098 4844 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:38.102.83.9:5001/podified-master-centos10/openstack-horizon:watcher_latest,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n59h687h699h5c5hd8h585h664h64fh595h68dh5bdhf8h549h699hf7h64dh65bhfch669h648h5cfh5cbh594hc7h58h54dh576h5c6h547h8h649h78q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:yes,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s4w9f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-7969695f59-rzz64_openstack(1979816f-0e1c-427a-b6aa-97b147a4c622): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 13:19:16 crc kubenswrapper[4844]: E0126 13:19:16.009323 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.9:5001/podified-master-centos10/openstack-horizon:watcher_latest\\\"\"]" pod="openstack/horizon-7969695f59-rzz64" podUID="1979816f-0e1c-427a-b6aa-97b147a4c622" Jan 26 13:19:16 crc kubenswrapper[4844]: E0126 13:19:16.030690 4844 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.9:5001/podified-master-centos10/openstack-horizon:watcher_latest" Jan 26 13:19:16 crc kubenswrapper[4844]: E0126 13:19:16.030735 4844 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.9:5001/podified-master-centos10/openstack-horizon:watcher_latest" Jan 26 13:19:16 crc kubenswrapper[4844]: E0126 13:19:16.030843 4844 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:38.102.83.9:5001/podified-master-centos10/openstack-horizon:watcher_latest,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n668h67ch5c4h565hdh57ch5fch8fhfbh554h668h56fh649h666h64bhb6h688h544h5c8h665h59fh66bhfh65hc9h595h66fh647h5b8hc8hd5h689q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:yes,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f66g9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-558494644f-wks4g_openstack(8cd1a1b9-d504-45dc-a3d1-5b1c3bed85a4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 13:19:16 crc kubenswrapper[4844]: E0126 13:19:16.033882 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.9:5001/podified-master-centos10/openstack-horizon:watcher_latest\\\"\"]" pod="openstack/horizon-558494644f-wks4g" podUID="8cd1a1b9-d504-45dc-a3d1-5b1c3bed85a4" Jan 26 13:19:16 crc kubenswrapper[4844]: I0126 13:19:16.078680 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-59b54f4c7-pjjl5" Jan 26 13:19:16 crc kubenswrapper[4844]: I0126 13:19:16.151500 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/077f8c48-ae97-4f5d-89db-1ed90de5e904-ovsdbserver-nb\") pod \"077f8c48-ae97-4f5d-89db-1ed90de5e904\" (UID: \"077f8c48-ae97-4f5d-89db-1ed90de5e904\") " Jan 26 13:19:16 crc kubenswrapper[4844]: I0126 13:19:16.151636 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/077f8c48-ae97-4f5d-89db-1ed90de5e904-dns-svc\") pod \"077f8c48-ae97-4f5d-89db-1ed90de5e904\" (UID: \"077f8c48-ae97-4f5d-89db-1ed90de5e904\") " Jan 26 13:19:16 crc kubenswrapper[4844]: I0126 13:19:16.151698 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/077f8c48-ae97-4f5d-89db-1ed90de5e904-dns-swift-storage-0\") pod \"077f8c48-ae97-4f5d-89db-1ed90de5e904\" (UID: \"077f8c48-ae97-4f5d-89db-1ed90de5e904\") " Jan 26 13:19:16 crc kubenswrapper[4844]: I0126 13:19:16.151908 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/077f8c48-ae97-4f5d-89db-1ed90de5e904-config\") pod \"077f8c48-ae97-4f5d-89db-1ed90de5e904\" (UID: \"077f8c48-ae97-4f5d-89db-1ed90de5e904\") " Jan 26 13:19:16 crc kubenswrapper[4844]: I0126 13:19:16.151986 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bh5fn\" (UniqueName: \"kubernetes.io/projected/077f8c48-ae97-4f5d-89db-1ed90de5e904-kube-api-access-bh5fn\") pod \"077f8c48-ae97-4f5d-89db-1ed90de5e904\" (UID: \"077f8c48-ae97-4f5d-89db-1ed90de5e904\") " Jan 26 13:19:16 crc kubenswrapper[4844]: I0126 13:19:16.152114 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/077f8c48-ae97-4f5d-89db-1ed90de5e904-ovsdbserver-sb\") pod \"077f8c48-ae97-4f5d-89db-1ed90de5e904\" (UID: \"077f8c48-ae97-4f5d-89db-1ed90de5e904\") " Jan 26 13:19:16 crc kubenswrapper[4844]: I0126 13:19:16.159011 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/077f8c48-ae97-4f5d-89db-1ed90de5e904-kube-api-access-bh5fn" (OuterVolumeSpecName: "kube-api-access-bh5fn") pod "077f8c48-ae97-4f5d-89db-1ed90de5e904" (UID: "077f8c48-ae97-4f5d-89db-1ed90de5e904"). InnerVolumeSpecName "kube-api-access-bh5fn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:19:16 crc kubenswrapper[4844]: I0126 13:19:16.199485 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/077f8c48-ae97-4f5d-89db-1ed90de5e904-config" (OuterVolumeSpecName: "config") pod "077f8c48-ae97-4f5d-89db-1ed90de5e904" (UID: "077f8c48-ae97-4f5d-89db-1ed90de5e904"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:19:16 crc kubenswrapper[4844]: I0126 13:19:16.203488 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/077f8c48-ae97-4f5d-89db-1ed90de5e904-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "077f8c48-ae97-4f5d-89db-1ed90de5e904" (UID: "077f8c48-ae97-4f5d-89db-1ed90de5e904"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:19:16 crc kubenswrapper[4844]: I0126 13:19:16.205166 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/077f8c48-ae97-4f5d-89db-1ed90de5e904-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "077f8c48-ae97-4f5d-89db-1ed90de5e904" (UID: "077f8c48-ae97-4f5d-89db-1ed90de5e904"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:19:16 crc kubenswrapper[4844]: I0126 13:19:16.207719 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/077f8c48-ae97-4f5d-89db-1ed90de5e904-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "077f8c48-ae97-4f5d-89db-1ed90de5e904" (UID: "077f8c48-ae97-4f5d-89db-1ed90de5e904"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:19:16 crc kubenswrapper[4844]: I0126 13:19:16.224532 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/077f8c48-ae97-4f5d-89db-1ed90de5e904-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "077f8c48-ae97-4f5d-89db-1ed90de5e904" (UID: "077f8c48-ae97-4f5d-89db-1ed90de5e904"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:19:16 crc kubenswrapper[4844]: I0126 13:19:16.255944 4844 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/077f8c48-ae97-4f5d-89db-1ed90de5e904-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 13:19:16 crc kubenswrapper[4844]: I0126 13:19:16.255976 4844 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/077f8c48-ae97-4f5d-89db-1ed90de5e904-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 13:19:16 crc kubenswrapper[4844]: I0126 13:19:16.255991 4844 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/077f8c48-ae97-4f5d-89db-1ed90de5e904-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 13:19:16 crc kubenswrapper[4844]: I0126 13:19:16.256003 4844 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/077f8c48-ae97-4f5d-89db-1ed90de5e904-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 26 13:19:16 crc kubenswrapper[4844]: I0126 13:19:16.256015 4844 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/077f8c48-ae97-4f5d-89db-1ed90de5e904-config\") on node \"crc\" DevicePath \"\"" Jan 26 13:19:16 crc kubenswrapper[4844]: I0126 13:19:16.256027 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bh5fn\" (UniqueName: \"kubernetes.io/projected/077f8c48-ae97-4f5d-89db-1ed90de5e904-kube-api-access-bh5fn\") on node \"crc\" DevicePath \"\"" Jan 26 13:19:16 crc kubenswrapper[4844]: I0126 13:19:16.920723 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59b54f4c7-pjjl5" event={"ID":"077f8c48-ae97-4f5d-89db-1ed90de5e904","Type":"ContainerDied","Data":"46a4ee6551ee2cfc8afd409d85a32621acd2e8401184f65b64e6dd38f8f1e36c"} Jan 26 13:19:16 crc kubenswrapper[4844]: I0126 13:19:16.920740 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-59b54f4c7-pjjl5" Jan 26 13:19:16 crc kubenswrapper[4844]: I0126 13:19:16.920805 4844 scope.go:117] "RemoveContainer" containerID="d0131719df27005692e12b5c8786405ee2c17dc6fbe73ffc93404d227ca982ae" Jan 26 13:19:16 crc kubenswrapper[4844]: I0126 13:19:16.934518 4844 generic.go:334] "Generic (PLEG): container finished" podID="cbd86931-9c64-42e8-911a-f0a8044098c4" containerID="2e9a84ce2b53137dcc0b605e1c8934f3ec81c8d1af469de9901b79a1914dbeb8" exitCode=0 Jan 26 13:19:16 crc kubenswrapper[4844]: I0126 13:19:16.934568 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-7j972" event={"ID":"cbd86931-9c64-42e8-911a-f0a8044098c4","Type":"ContainerDied","Data":"2e9a84ce2b53137dcc0b605e1c8934f3ec81c8d1af469de9901b79a1914dbeb8"} Jan 26 13:19:17 crc kubenswrapper[4844]: I0126 13:19:17.089315 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59b54f4c7-pjjl5"] Jan 26 13:19:17 crc kubenswrapper[4844]: I0126 13:19:17.097730 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-59b54f4c7-pjjl5"] Jan 26 13:19:17 crc kubenswrapper[4844]: I0126 13:19:17.332126 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="077f8c48-ae97-4f5d-89db-1ed90de5e904" path="/var/lib/kubelet/pods/077f8c48-ae97-4f5d-89db-1ed90de5e904/volumes" Jan 26 13:19:17 crc kubenswrapper[4844]: I0126 13:19:17.945357 4844 generic.go:334] "Generic (PLEG): container finished" podID="db436f05-9b6d-4342-82d0-524c18fe6079" containerID="b01bde1b77e6b4012bd36c236ff5cf164902b763ff25a61357efefa4c71f214c" exitCode=0 Jan 26 13:19:17 crc kubenswrapper[4844]: I0126 13:19:17.945403 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-5w9q7" event={"ID":"db436f05-9b6d-4342-82d0-524c18fe6079","Type":"ContainerDied","Data":"b01bde1b77e6b4012bd36c236ff5cf164902b763ff25a61357efefa4c71f214c"} Jan 26 13:19:18 crc kubenswrapper[4844]: E0126 13:19:18.238168 4844 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.9:5001/podified-master-centos10/openstack-barbican-api:watcher_latest" Jan 26 13:19:18 crc kubenswrapper[4844]: E0126 13:19:18.238232 4844 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.9:5001/podified-master-centos10/openstack-barbican-api:watcher_latest" Jan 26 13:19:18 crc kubenswrapper[4844]: E0126 13:19:18.238342 4844 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:38.102.83.9:5001/podified-master-centos10/openstack-barbican-api:watcher_latest,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pjs6c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-2xnzf_openstack(43fe5130-0714-4f40-9d6a-9384eb72fa0a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 13:19:18 crc kubenswrapper[4844]: E0126 13:19:18.239772 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-2xnzf" podUID="43fe5130-0714-4f40-9d6a-9384eb72fa0a" Jan 26 13:19:18 crc kubenswrapper[4844]: E0126 13:19:18.957053 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.9:5001/podified-master-centos10/openstack-barbican-api:watcher_latest\\\"\"" pod="openstack/barbican-db-sync-2xnzf" podUID="43fe5130-0714-4f40-9d6a-9384eb72fa0a" Jan 26 13:19:20 crc kubenswrapper[4844]: I0126 13:19:20.453789 4844 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-59b54f4c7-pjjl5" podUID="077f8c48-ae97-4f5d-89db-1ed90de5e904" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.135:5353: i/o timeout" Jan 26 13:19:29 crc kubenswrapper[4844]: I0126 13:19:29.247298 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7969695f59-rzz64" Jan 26 13:19:29 crc kubenswrapper[4844]: I0126 13:19:29.254030 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-558494644f-wks4g" Jan 26 13:19:29 crc kubenswrapper[4844]: I0126 13:19:29.290652 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-7j972" Jan 26 13:19:29 crc kubenswrapper[4844]: I0126 13:19:29.291730 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8cd1a1b9-d504-45dc-a3d1-5b1c3bed85a4-scripts\") pod \"8cd1a1b9-d504-45dc-a3d1-5b1c3bed85a4\" (UID: \"8cd1a1b9-d504-45dc-a3d1-5b1c3bed85a4\") " Jan 26 13:19:29 crc kubenswrapper[4844]: I0126 13:19:29.291886 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8cd1a1b9-d504-45dc-a3d1-5b1c3bed85a4-horizon-secret-key\") pod \"8cd1a1b9-d504-45dc-a3d1-5b1c3bed85a4\" (UID: \"8cd1a1b9-d504-45dc-a3d1-5b1c3bed85a4\") " Jan 26 13:19:29 crc kubenswrapper[4844]: I0126 13:19:29.291920 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1979816f-0e1c-427a-b6aa-97b147a4c622-scripts\") pod \"1979816f-0e1c-427a-b6aa-97b147a4c622\" (UID: \"1979816f-0e1c-427a-b6aa-97b147a4c622\") " Jan 26 13:19:29 crc kubenswrapper[4844]: I0126 13:19:29.291940 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4w9f\" (UniqueName: \"kubernetes.io/projected/1979816f-0e1c-427a-b6aa-97b147a4c622-kube-api-access-s4w9f\") pod \"1979816f-0e1c-427a-b6aa-97b147a4c622\" (UID: \"1979816f-0e1c-427a-b6aa-97b147a4c622\") " Jan 26 13:19:29 crc kubenswrapper[4844]: I0126 13:19:29.292021 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8cd1a1b9-d504-45dc-a3d1-5b1c3bed85a4-logs\") pod \"8cd1a1b9-d504-45dc-a3d1-5b1c3bed85a4\" (UID: \"8cd1a1b9-d504-45dc-a3d1-5b1c3bed85a4\") " Jan 26 13:19:29 crc kubenswrapper[4844]: I0126 13:19:29.292306 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cd1a1b9-d504-45dc-a3d1-5b1c3bed85a4-scripts" (OuterVolumeSpecName: "scripts") pod "8cd1a1b9-d504-45dc-a3d1-5b1c3bed85a4" (UID: "8cd1a1b9-d504-45dc-a3d1-5b1c3bed85a4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:19:29 crc kubenswrapper[4844]: I0126 13:19:29.292669 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1979816f-0e1c-427a-b6aa-97b147a4c622-scripts" (OuterVolumeSpecName: "scripts") pod "1979816f-0e1c-427a-b6aa-97b147a4c622" (UID: "1979816f-0e1c-427a-b6aa-97b147a4c622"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:19:29 crc kubenswrapper[4844]: I0126 13:19:29.292735 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1979816f-0e1c-427a-b6aa-97b147a4c622-config-data\") pod \"1979816f-0e1c-427a-b6aa-97b147a4c622\" (UID: \"1979816f-0e1c-427a-b6aa-97b147a4c622\") " Jan 26 13:19:29 crc kubenswrapper[4844]: I0126 13:19:29.292710 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8cd1a1b9-d504-45dc-a3d1-5b1c3bed85a4-logs" (OuterVolumeSpecName: "logs") pod "8cd1a1b9-d504-45dc-a3d1-5b1c3bed85a4" (UID: "8cd1a1b9-d504-45dc-a3d1-5b1c3bed85a4"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:19:29 crc kubenswrapper[4844]: I0126 13:19:29.292791 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1979816f-0e1c-427a-b6aa-97b147a4c622-logs\") pod \"1979816f-0e1c-427a-b6aa-97b147a4c622\" (UID: \"1979816f-0e1c-427a-b6aa-97b147a4c622\") " Jan 26 13:19:29 crc kubenswrapper[4844]: I0126 13:19:29.292810 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8cd1a1b9-d504-45dc-a3d1-5b1c3bed85a4-config-data\") pod \"8cd1a1b9-d504-45dc-a3d1-5b1c3bed85a4\" (UID: \"8cd1a1b9-d504-45dc-a3d1-5b1c3bed85a4\") " Jan 26 13:19:29 crc kubenswrapper[4844]: I0126 13:19:29.292849 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f66g9\" (UniqueName: \"kubernetes.io/projected/8cd1a1b9-d504-45dc-a3d1-5b1c3bed85a4-kube-api-access-f66g9\") pod \"8cd1a1b9-d504-45dc-a3d1-5b1c3bed85a4\" (UID: \"8cd1a1b9-d504-45dc-a3d1-5b1c3bed85a4\") " Jan 26 13:19:29 crc kubenswrapper[4844]: I0126 13:19:29.292918 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1979816f-0e1c-427a-b6aa-97b147a4c622-horizon-secret-key\") pod \"1979816f-0e1c-427a-b6aa-97b147a4c622\" (UID: \"1979816f-0e1c-427a-b6aa-97b147a4c622\") " Jan 26 13:19:29 crc kubenswrapper[4844]: I0126 13:19:29.293132 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1979816f-0e1c-427a-b6aa-97b147a4c622-logs" (OuterVolumeSpecName: "logs") pod "1979816f-0e1c-427a-b6aa-97b147a4c622" (UID: "1979816f-0e1c-427a-b6aa-97b147a4c622"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:19:29 crc kubenswrapper[4844]: I0126 13:19:29.293320 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1979816f-0e1c-427a-b6aa-97b147a4c622-config-data" (OuterVolumeSpecName: "config-data") pod "1979816f-0e1c-427a-b6aa-97b147a4c622" (UID: "1979816f-0e1c-427a-b6aa-97b147a4c622"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:19:29 crc kubenswrapper[4844]: I0126 13:19:29.293521 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cd1a1b9-d504-45dc-a3d1-5b1c3bed85a4-config-data" (OuterVolumeSpecName: "config-data") pod "8cd1a1b9-d504-45dc-a3d1-5b1c3bed85a4" (UID: "8cd1a1b9-d504-45dc-a3d1-5b1c3bed85a4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:19:29 crc kubenswrapper[4844]: I0126 13:19:29.293912 4844 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1979816f-0e1c-427a-b6aa-97b147a4c622-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 13:19:29 crc kubenswrapper[4844]: I0126 13:19:29.293931 4844 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8cd1a1b9-d504-45dc-a3d1-5b1c3bed85a4-logs\") on node \"crc\" DevicePath \"\"" Jan 26 13:19:29 crc kubenswrapper[4844]: I0126 13:19:29.293940 4844 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1979816f-0e1c-427a-b6aa-97b147a4c622-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 13:19:29 crc kubenswrapper[4844]: I0126 13:19:29.293949 4844 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1979816f-0e1c-427a-b6aa-97b147a4c622-logs\") on node \"crc\" DevicePath \"\"" Jan 26 13:19:29 crc kubenswrapper[4844]: I0126 13:19:29.293957 4844 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8cd1a1b9-d504-45dc-a3d1-5b1c3bed85a4-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 13:19:29 crc kubenswrapper[4844]: I0126 13:19:29.293965 4844 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8cd1a1b9-d504-45dc-a3d1-5b1c3bed85a4-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 13:19:29 crc kubenswrapper[4844]: I0126 13:19:29.294560 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6775fbb8bf-p89r6" Jan 26 13:19:29 crc kubenswrapper[4844]: I0126 13:19:29.300158 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cd1a1b9-d504-45dc-a3d1-5b1c3bed85a4-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "8cd1a1b9-d504-45dc-a3d1-5b1c3bed85a4" (UID: "8cd1a1b9-d504-45dc-a3d1-5b1c3bed85a4"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:19:29 crc kubenswrapper[4844]: I0126 13:19:29.301290 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1979816f-0e1c-427a-b6aa-97b147a4c622-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "1979816f-0e1c-427a-b6aa-97b147a4c622" (UID: "1979816f-0e1c-427a-b6aa-97b147a4c622"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:19:29 crc kubenswrapper[4844]: I0126 13:19:29.301316 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cd1a1b9-d504-45dc-a3d1-5b1c3bed85a4-kube-api-access-f66g9" (OuterVolumeSpecName: "kube-api-access-f66g9") pod "8cd1a1b9-d504-45dc-a3d1-5b1c3bed85a4" (UID: "8cd1a1b9-d504-45dc-a3d1-5b1c3bed85a4"). InnerVolumeSpecName "kube-api-access-f66g9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:19:29 crc kubenswrapper[4844]: I0126 13:19:29.317989 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1979816f-0e1c-427a-b6aa-97b147a4c622-kube-api-access-s4w9f" (OuterVolumeSpecName: "kube-api-access-s4w9f") pod "1979816f-0e1c-427a-b6aa-97b147a4c622" (UID: "1979816f-0e1c-427a-b6aa-97b147a4c622"). InnerVolumeSpecName "kube-api-access-s4w9f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:19:29 crc kubenswrapper[4844]: I0126 13:19:29.394695 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-78gmn\" (UniqueName: \"kubernetes.io/projected/cbd86931-9c64-42e8-911a-f0a8044098c4-kube-api-access-78gmn\") pod \"cbd86931-9c64-42e8-911a-f0a8044098c4\" (UID: \"cbd86931-9c64-42e8-911a-f0a8044098c4\") " Jan 26 13:19:29 crc kubenswrapper[4844]: I0126 13:19:29.394769 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/be9958f1-c7db-4c90-9f58-7dee7e86e728-scripts\") pod \"be9958f1-c7db-4c90-9f58-7dee7e86e728\" (UID: \"be9958f1-c7db-4c90-9f58-7dee7e86e728\") " Jan 26 13:19:29 crc kubenswrapper[4844]: I0126 13:19:29.394817 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kdkf2\" (UniqueName: \"kubernetes.io/projected/be9958f1-c7db-4c90-9f58-7dee7e86e728-kube-api-access-kdkf2\") pod \"be9958f1-c7db-4c90-9f58-7dee7e86e728\" (UID: \"be9958f1-c7db-4c90-9f58-7dee7e86e728\") " Jan 26 13:19:29 crc kubenswrapper[4844]: I0126 13:19:29.394843 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbd86931-9c64-42e8-911a-f0a8044098c4-config-data\") pod \"cbd86931-9c64-42e8-911a-f0a8044098c4\" (UID: \"cbd86931-9c64-42e8-911a-f0a8044098c4\") " Jan 26 13:19:29 crc kubenswrapper[4844]: I0126 13:19:29.394889 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be9958f1-c7db-4c90-9f58-7dee7e86e728-logs\") pod \"be9958f1-c7db-4c90-9f58-7dee7e86e728\" (UID: \"be9958f1-c7db-4c90-9f58-7dee7e86e728\") " Jan 26 13:19:29 crc kubenswrapper[4844]: I0126 13:19:29.394941 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/be9958f1-c7db-4c90-9f58-7dee7e86e728-horizon-secret-key\") pod \"be9958f1-c7db-4c90-9f58-7dee7e86e728\" (UID: \"be9958f1-c7db-4c90-9f58-7dee7e86e728\") " Jan 26 13:19:29 crc kubenswrapper[4844]: I0126 13:19:29.395026 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/cbd86931-9c64-42e8-911a-f0a8044098c4-fernet-keys\") pod \"cbd86931-9c64-42e8-911a-f0a8044098c4\" (UID: \"cbd86931-9c64-42e8-911a-f0a8044098c4\") " Jan 26 13:19:29 crc kubenswrapper[4844]: I0126 13:19:29.395049 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/cbd86931-9c64-42e8-911a-f0a8044098c4-credential-keys\") pod \"cbd86931-9c64-42e8-911a-f0a8044098c4\" (UID: \"cbd86931-9c64-42e8-911a-f0a8044098c4\") " Jan 26 13:19:29 crc kubenswrapper[4844]: I0126 13:19:29.395075 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cbd86931-9c64-42e8-911a-f0a8044098c4-scripts\") pod \"cbd86931-9c64-42e8-911a-f0a8044098c4\" (UID: \"cbd86931-9c64-42e8-911a-f0a8044098c4\") " Jan 26 13:19:29 crc kubenswrapper[4844]: I0126 13:19:29.395098 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbd86931-9c64-42e8-911a-f0a8044098c4-combined-ca-bundle\") pod \"cbd86931-9c64-42e8-911a-f0a8044098c4\" (UID: 
\"cbd86931-9c64-42e8-911a-f0a8044098c4\") " Jan 26 13:19:29 crc kubenswrapper[4844]: I0126 13:19:29.395134 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/be9958f1-c7db-4c90-9f58-7dee7e86e728-config-data\") pod \"be9958f1-c7db-4c90-9f58-7dee7e86e728\" (UID: \"be9958f1-c7db-4c90-9f58-7dee7e86e728\") " Jan 26 13:19:29 crc kubenswrapper[4844]: I0126 13:19:29.395510 4844 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1979816f-0e1c-427a-b6aa-97b147a4c622-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 26 13:19:29 crc kubenswrapper[4844]: I0126 13:19:29.395526 4844 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8cd1a1b9-d504-45dc-a3d1-5b1c3bed85a4-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 26 13:19:29 crc kubenswrapper[4844]: I0126 13:19:29.395535 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4w9f\" (UniqueName: \"kubernetes.io/projected/1979816f-0e1c-427a-b6aa-97b147a4c622-kube-api-access-s4w9f\") on node \"crc\" DevicePath \"\"" Jan 26 13:19:29 crc kubenswrapper[4844]: I0126 13:19:29.395545 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f66g9\" (UniqueName: \"kubernetes.io/projected/8cd1a1b9-d504-45dc-a3d1-5b1c3bed85a4-kube-api-access-f66g9\") on node \"crc\" DevicePath \"\"" Jan 26 13:19:29 crc kubenswrapper[4844]: I0126 13:19:29.396822 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be9958f1-c7db-4c90-9f58-7dee7e86e728-scripts" (OuterVolumeSpecName: "scripts") pod "be9958f1-c7db-4c90-9f58-7dee7e86e728" (UID: "be9958f1-c7db-4c90-9f58-7dee7e86e728"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:19:29 crc kubenswrapper[4844]: I0126 13:19:29.397043 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be9958f1-c7db-4c90-9f58-7dee7e86e728-config-data" (OuterVolumeSpecName: "config-data") pod "be9958f1-c7db-4c90-9f58-7dee7e86e728" (UID: "be9958f1-c7db-4c90-9f58-7dee7e86e728"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:19:29 crc kubenswrapper[4844]: I0126 13:19:29.397477 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be9958f1-c7db-4c90-9f58-7dee7e86e728-logs" (OuterVolumeSpecName: "logs") pod "be9958f1-c7db-4c90-9f58-7dee7e86e728" (UID: "be9958f1-c7db-4c90-9f58-7dee7e86e728"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:19:29 crc kubenswrapper[4844]: I0126 13:19:29.400286 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbd86931-9c64-42e8-911a-f0a8044098c4-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "cbd86931-9c64-42e8-911a-f0a8044098c4" (UID: "cbd86931-9c64-42e8-911a-f0a8044098c4"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:19:29 crc kubenswrapper[4844]: I0126 13:19:29.401077 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbd86931-9c64-42e8-911a-f0a8044098c4-kube-api-access-78gmn" (OuterVolumeSpecName: "kube-api-access-78gmn") pod "cbd86931-9c64-42e8-911a-f0a8044098c4" (UID: "cbd86931-9c64-42e8-911a-f0a8044098c4"). InnerVolumeSpecName "kube-api-access-78gmn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:19:29 crc kubenswrapper[4844]: I0126 13:19:29.401367 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbd86931-9c64-42e8-911a-f0a8044098c4-scripts" (OuterVolumeSpecName: "scripts") pod "cbd86931-9c64-42e8-911a-f0a8044098c4" (UID: "cbd86931-9c64-42e8-911a-f0a8044098c4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:19:29 crc kubenswrapper[4844]: I0126 13:19:29.401588 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be9958f1-c7db-4c90-9f58-7dee7e86e728-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "be9958f1-c7db-4c90-9f58-7dee7e86e728" (UID: "be9958f1-c7db-4c90-9f58-7dee7e86e728"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:19:29 crc kubenswrapper[4844]: I0126 13:19:29.402512 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be9958f1-c7db-4c90-9f58-7dee7e86e728-kube-api-access-kdkf2" (OuterVolumeSpecName: "kube-api-access-kdkf2") pod "be9958f1-c7db-4c90-9f58-7dee7e86e728" (UID: "be9958f1-c7db-4c90-9f58-7dee7e86e728"). InnerVolumeSpecName "kube-api-access-kdkf2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:19:29 crc kubenswrapper[4844]: I0126 13:19:29.403077 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbd86931-9c64-42e8-911a-f0a8044098c4-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "cbd86931-9c64-42e8-911a-f0a8044098c4" (UID: "cbd86931-9c64-42e8-911a-f0a8044098c4"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:19:29 crc kubenswrapper[4844]: I0126 13:19:29.430278 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbd86931-9c64-42e8-911a-f0a8044098c4-config-data" (OuterVolumeSpecName: "config-data") pod "cbd86931-9c64-42e8-911a-f0a8044098c4" (UID: "cbd86931-9c64-42e8-911a-f0a8044098c4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:19:29 crc kubenswrapper[4844]: I0126 13:19:29.431225 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbd86931-9c64-42e8-911a-f0a8044098c4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cbd86931-9c64-42e8-911a-f0a8044098c4" (UID: "cbd86931-9c64-42e8-911a-f0a8044098c4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:19:29 crc kubenswrapper[4844]: I0126 13:19:29.498386 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kdkf2\" (UniqueName: \"kubernetes.io/projected/be9958f1-c7db-4c90-9f58-7dee7e86e728-kube-api-access-kdkf2\") on node \"crc\" DevicePath \"\"" Jan 26 13:19:29 crc kubenswrapper[4844]: I0126 13:19:29.498803 4844 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbd86931-9c64-42e8-911a-f0a8044098c4-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 13:19:29 crc kubenswrapper[4844]: I0126 13:19:29.498818 4844 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be9958f1-c7db-4c90-9f58-7dee7e86e728-logs\") on node \"crc\" DevicePath \"\"" Jan 26 13:19:29 crc kubenswrapper[4844]: I0126 13:19:29.498829 4844 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/be9958f1-c7db-4c90-9f58-7dee7e86e728-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 26 13:19:29 crc kubenswrapper[4844]: I0126 13:19:29.498839 4844 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/cbd86931-9c64-42e8-911a-f0a8044098c4-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 26 13:19:29 crc kubenswrapper[4844]: I0126 13:19:29.498851 4844 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/cbd86931-9c64-42e8-911a-f0a8044098c4-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 26 13:19:29 crc kubenswrapper[4844]: I0126 13:19:29.498861 4844 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cbd86931-9c64-42e8-911a-f0a8044098c4-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 13:19:29 crc kubenswrapper[4844]: I0126 13:19:29.498871 4844 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbd86931-9c64-42e8-911a-f0a8044098c4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 13:19:29 crc kubenswrapper[4844]: I0126 13:19:29.498880 4844 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/be9958f1-c7db-4c90-9f58-7dee7e86e728-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 13:19:29 crc kubenswrapper[4844]: I0126 13:19:29.498889 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-78gmn\" (UniqueName: \"kubernetes.io/projected/cbd86931-9c64-42e8-911a-f0a8044098c4-kube-api-access-78gmn\") on node \"crc\" DevicePath \"\"" Jan 26 13:19:29 crc kubenswrapper[4844]: I0126 13:19:29.498899 4844 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/be9958f1-c7db-4c90-9f58-7dee7e86e728-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 13:19:29 crc kubenswrapper[4844]: I0126 13:19:29.616287 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-f984df9c6-m8lct"] Jan 26 13:19:30 crc kubenswrapper[4844]: I0126 13:19:30.068813 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-558494644f-wks4g" event={"ID":"8cd1a1b9-d504-45dc-a3d1-5b1c3bed85a4","Type":"ContainerDied","Data":"24bf12647a19506ff75a37e962e9e6faa13331152b5b622c8f521a1bbe901549"} Jan 26 13:19:30 crc kubenswrapper[4844]: I0126 13:19:30.068879 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-558494644f-wks4g" Jan 26 13:19:30 crc kubenswrapper[4844]: I0126 13:19:30.072207 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-7j972" Jan 26 13:19:30 crc kubenswrapper[4844]: I0126 13:19:30.072198 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-7j972" event={"ID":"cbd86931-9c64-42e8-911a-f0a8044098c4","Type":"ContainerDied","Data":"b51ef72111288b2931958657b96e8f5164e3df4f2535eb6aa293108deb84e3f3"} Jan 26 13:19:30 crc kubenswrapper[4844]: I0126 13:19:30.072417 4844 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b51ef72111288b2931958657b96e8f5164e3df4f2535eb6aa293108deb84e3f3" Jan 26 13:19:30 crc kubenswrapper[4844]: I0126 13:19:30.073792 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6775fbb8bf-p89r6" event={"ID":"be9958f1-c7db-4c90-9f58-7dee7e86e728","Type":"ContainerDied","Data":"2e5d5a38f9514185a659ea55fd2065500346f4aed01a27c51c919cd93e76608b"} Jan 26 13:19:30 crc kubenswrapper[4844]: I0126 13:19:30.073887 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6775fbb8bf-p89r6" Jan 26 13:19:30 crc kubenswrapper[4844]: I0126 13:19:30.075165 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7969695f59-rzz64" event={"ID":"1979816f-0e1c-427a-b6aa-97b147a4c622","Type":"ContainerDied","Data":"7cdfffd9d1334cf918aa5aa012b23c0848139fe82c289cd9481197f9ab151f3f"} Jan 26 13:19:30 crc kubenswrapper[4844]: I0126 13:19:30.075276 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7969695f59-rzz64" Jan 26 13:19:30 crc kubenswrapper[4844]: I0126 13:19:30.131868 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-558494644f-wks4g"] Jan 26 13:19:30 crc kubenswrapper[4844]: I0126 13:19:30.144145 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-558494644f-wks4g"] Jan 26 13:19:30 crc kubenswrapper[4844]: E0126 13:19:30.186325 4844 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.9:5001/podified-master-centos10/openstack-ceilometer-central:watcher_latest" Jan 26 13:19:30 crc kubenswrapper[4844]: E0126 13:19:30.186387 4844 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.9:5001/podified-master-centos10/openstack-ceilometer-central:watcher_latest" Jan 26 13:19:30 crc kubenswrapper[4844]: E0126 13:19:30.186520 4844 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:38.102.83.9:5001/podified-master-centos10/openstack-ceilometer-central:watcher_latest,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n66fh658h8dh6dh8chb4h666h6dh58fhddh5bfhc4h549h5d7h65bh554h66dh58bhfchf6h57h5fh65hf8h658h57fhcch689h549h9hc8h67cq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5jjpz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(ad438e4d-9282-48b8-88c1-1f974bb26b5e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 13:19:30 crc kubenswrapper[4844]: I0126 13:19:30.186991 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6775fbb8bf-p89r6"] Jan 26 13:19:30 crc kubenswrapper[4844]: I0126 13:19:30.194264 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-6775fbb8bf-p89r6"] Jan 26 13:19:30 crc kubenswrapper[4844]: I0126 13:19:30.225203 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7969695f59-rzz64"] Jan 26 13:19:30 crc kubenswrapper[4844]: I0126 13:19:30.231913 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-7969695f59-rzz64"] Jan 26 13:19:30 crc kubenswrapper[4844]: I0126 13:19:30.277165 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-db-sync-5w9q7" Jan 26 13:19:30 crc kubenswrapper[4844]: I0126 13:19:30.312720 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db436f05-9b6d-4342-82d0-524c18fe6079-config-data\") pod \"db436f05-9b6d-4342-82d0-524c18fe6079\" (UID: \"db436f05-9b6d-4342-82d0-524c18fe6079\") " Jan 26 13:19:30 crc kubenswrapper[4844]: I0126 13:19:30.312842 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db436f05-9b6d-4342-82d0-524c18fe6079-combined-ca-bundle\") pod \"db436f05-9b6d-4342-82d0-524c18fe6079\" (UID: \"db436f05-9b6d-4342-82d0-524c18fe6079\") " Jan 26 13:19:30 crc kubenswrapper[4844]: I0126 13:19:30.312917 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/db436f05-9b6d-4342-82d0-524c18fe6079-db-sync-config-data\") pod \"db436f05-9b6d-4342-82d0-524c18fe6079\" (UID: \"db436f05-9b6d-4342-82d0-524c18fe6079\") " Jan 26 13:19:30 crc kubenswrapper[4844]: I0126 13:19:30.313014 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nkdsv\" (UniqueName: \"kubernetes.io/projected/db436f05-9b6d-4342-82d0-524c18fe6079-kube-api-access-nkdsv\") pod \"db436f05-9b6d-4342-82d0-524c18fe6079\" (UID: \"db436f05-9b6d-4342-82d0-524c18fe6079\") " Jan 26 13:19:30 crc kubenswrapper[4844]: I0126 13:19:30.318360 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db436f05-9b6d-4342-82d0-524c18fe6079-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "db436f05-9b6d-4342-82d0-524c18fe6079" (UID: "db436f05-9b6d-4342-82d0-524c18fe6079"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:19:30 crc kubenswrapper[4844]: I0126 13:19:30.333783 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db436f05-9b6d-4342-82d0-524c18fe6079-kube-api-access-nkdsv" (OuterVolumeSpecName: "kube-api-access-nkdsv") pod "db436f05-9b6d-4342-82d0-524c18fe6079" (UID: "db436f05-9b6d-4342-82d0-524c18fe6079"). InnerVolumeSpecName "kube-api-access-nkdsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:19:30 crc kubenswrapper[4844]: I0126 13:19:30.342176 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db436f05-9b6d-4342-82d0-524c18fe6079-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "db436f05-9b6d-4342-82d0-524c18fe6079" (UID: "db436f05-9b6d-4342-82d0-524c18fe6079"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:19:30 crc kubenswrapper[4844]: I0126 13:19:30.387306 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db436f05-9b6d-4342-82d0-524c18fe6079-config-data" (OuterVolumeSpecName: "config-data") pod "db436f05-9b6d-4342-82d0-524c18fe6079" (UID: "db436f05-9b6d-4342-82d0-524c18fe6079"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:19:30 crc kubenswrapper[4844]: I0126 13:19:30.414276 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-7j972"] Jan 26 13:19:30 crc kubenswrapper[4844]: I0126 13:19:30.415176 4844 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db436f05-9b6d-4342-82d0-524c18fe6079-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 13:19:30 crc kubenswrapper[4844]: I0126 13:19:30.415213 4844 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/db436f05-9b6d-4342-82d0-524c18fe6079-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 13:19:30 crc kubenswrapper[4844]: I0126 13:19:30.415224 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nkdsv\" (UniqueName: \"kubernetes.io/projected/db436f05-9b6d-4342-82d0-524c18fe6079-kube-api-access-nkdsv\") on node \"crc\" DevicePath \"\"" Jan 26 13:19:30 crc kubenswrapper[4844]: I0126 13:19:30.415237 4844 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db436f05-9b6d-4342-82d0-524c18fe6079-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 13:19:30 crc kubenswrapper[4844]: I0126 13:19:30.423824 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-7j972"] Jan 26 13:19:30 crc kubenswrapper[4844]: I0126 13:19:30.536335 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-ln4pq"] Jan 26 13:19:30 crc kubenswrapper[4844]: E0126 13:19:30.536760 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db436f05-9b6d-4342-82d0-524c18fe6079" containerName="watcher-db-sync" Jan 26 13:19:30 crc kubenswrapper[4844]: I0126 13:19:30.536777 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="db436f05-9b6d-4342-82d0-524c18fe6079" containerName="watcher-db-sync" Jan 26 13:19:30 crc kubenswrapper[4844]: E0126 13:19:30.536788 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbd86931-9c64-42e8-911a-f0a8044098c4" containerName="keystone-bootstrap" Jan 26 13:19:30 crc kubenswrapper[4844]: I0126 13:19:30.536795 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbd86931-9c64-42e8-911a-f0a8044098c4" containerName="keystone-bootstrap" Jan 26 13:19:30 crc kubenswrapper[4844]: E0126 13:19:30.536805 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="077f8c48-ae97-4f5d-89db-1ed90de5e904" containerName="dnsmasq-dns" Jan 26 13:19:30 crc kubenswrapper[4844]: I0126 13:19:30.536813 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="077f8c48-ae97-4f5d-89db-1ed90de5e904" containerName="dnsmasq-dns" Jan 26 13:19:30 crc kubenswrapper[4844]: E0126 13:19:30.536829 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="077f8c48-ae97-4f5d-89db-1ed90de5e904" containerName="init" Jan 26 13:19:30 crc kubenswrapper[4844]: I0126 13:19:30.536836 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="077f8c48-ae97-4f5d-89db-1ed90de5e904" containerName="init" Jan 26 13:19:30 crc kubenswrapper[4844]: I0126 13:19:30.536988 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="db436f05-9b6d-4342-82d0-524c18fe6079" containerName="watcher-db-sync" Jan 26 13:19:30 crc kubenswrapper[4844]: I0126 13:19:30.537013 4844 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="077f8c48-ae97-4f5d-89db-1ed90de5e904" containerName="dnsmasq-dns" Jan 26 13:19:30 crc kubenswrapper[4844]: I0126 13:19:30.537028 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbd86931-9c64-42e8-911a-f0a8044098c4" containerName="keystone-bootstrap" Jan 26 13:19:30 crc kubenswrapper[4844]: I0126 13:19:30.537623 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-ln4pq" Jan 26 13:19:30 crc kubenswrapper[4844]: I0126 13:19:30.541004 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 26 13:19:30 crc kubenswrapper[4844]: I0126 13:19:30.541008 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 26 13:19:30 crc kubenswrapper[4844]: I0126 13:19:30.541420 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-l6kd4" Jan 26 13:19:30 crc kubenswrapper[4844]: I0126 13:19:30.541715 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 26 13:19:30 crc kubenswrapper[4844]: I0126 13:19:30.542009 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 26 13:19:30 crc kubenswrapper[4844]: I0126 13:19:30.556286 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-ln4pq"] Jan 26 13:19:30 crc kubenswrapper[4844]: I0126 13:19:30.617621 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef403703-395e-4db1-a9f5-a8e011e39ff2-combined-ca-bundle\") pod \"keystone-bootstrap-ln4pq\" (UID: \"ef403703-395e-4db1-a9f5-a8e011e39ff2\") " pod="openstack/keystone-bootstrap-ln4pq" Jan 26 13:19:30 crc kubenswrapper[4844]: I0126 13:19:30.617670 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef403703-395e-4db1-a9f5-a8e011e39ff2-config-data\") pod \"keystone-bootstrap-ln4pq\" (UID: \"ef403703-395e-4db1-a9f5-a8e011e39ff2\") " pod="openstack/keystone-bootstrap-ln4pq" Jan 26 13:19:30 crc kubenswrapper[4844]: I0126 13:19:30.617690 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ef403703-395e-4db1-a9f5-a8e011e39ff2-fernet-keys\") pod \"keystone-bootstrap-ln4pq\" (UID: \"ef403703-395e-4db1-a9f5-a8e011e39ff2\") " pod="openstack/keystone-bootstrap-ln4pq" Jan 26 13:19:30 crc kubenswrapper[4844]: I0126 13:19:30.617766 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dzz9\" (UniqueName: \"kubernetes.io/projected/ef403703-395e-4db1-a9f5-a8e011e39ff2-kube-api-access-6dzz9\") pod \"keystone-bootstrap-ln4pq\" (UID: \"ef403703-395e-4db1-a9f5-a8e011e39ff2\") " pod="openstack/keystone-bootstrap-ln4pq" Jan 26 13:19:30 crc kubenswrapper[4844]: I0126 13:19:30.618104 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef403703-395e-4db1-a9f5-a8e011e39ff2-scripts\") pod \"keystone-bootstrap-ln4pq\" (UID: \"ef403703-395e-4db1-a9f5-a8e011e39ff2\") " pod="openstack/keystone-bootstrap-ln4pq" Jan 26 13:19:30 crc kubenswrapper[4844]: I0126 13:19:30.618174 4844 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ef403703-395e-4db1-a9f5-a8e011e39ff2-credential-keys\") pod \"keystone-bootstrap-ln4pq\" (UID: \"ef403703-395e-4db1-a9f5-a8e011e39ff2\") " pod="openstack/keystone-bootstrap-ln4pq" Jan 26 13:19:30 crc kubenswrapper[4844]: I0126 13:19:30.720463 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef403703-395e-4db1-a9f5-a8e011e39ff2-combined-ca-bundle\") pod \"keystone-bootstrap-ln4pq\" (UID: \"ef403703-395e-4db1-a9f5-a8e011e39ff2\") " pod="openstack/keystone-bootstrap-ln4pq" Jan 26 13:19:30 crc kubenswrapper[4844]: I0126 13:19:30.720842 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef403703-395e-4db1-a9f5-a8e011e39ff2-config-data\") pod \"keystone-bootstrap-ln4pq\" (UID: \"ef403703-395e-4db1-a9f5-a8e011e39ff2\") " pod="openstack/keystone-bootstrap-ln4pq" Jan 26 13:19:30 crc kubenswrapper[4844]: I0126 13:19:30.720872 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ef403703-395e-4db1-a9f5-a8e011e39ff2-fernet-keys\") pod \"keystone-bootstrap-ln4pq\" (UID: \"ef403703-395e-4db1-a9f5-a8e011e39ff2\") " pod="openstack/keystone-bootstrap-ln4pq" Jan 26 13:19:30 crc kubenswrapper[4844]: I0126 13:19:30.720981 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6dzz9\" (UniqueName: \"kubernetes.io/projected/ef403703-395e-4db1-a9f5-a8e011e39ff2-kube-api-access-6dzz9\") pod \"keystone-bootstrap-ln4pq\" (UID: \"ef403703-395e-4db1-a9f5-a8e011e39ff2\") " pod="openstack/keystone-bootstrap-ln4pq" Jan 26 13:19:30 crc kubenswrapper[4844]: I0126 13:19:30.721055 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef403703-395e-4db1-a9f5-a8e011e39ff2-scripts\") pod \"keystone-bootstrap-ln4pq\" (UID: \"ef403703-395e-4db1-a9f5-a8e011e39ff2\") " pod="openstack/keystone-bootstrap-ln4pq" Jan 26 13:19:30 crc kubenswrapper[4844]: I0126 13:19:30.721087 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ef403703-395e-4db1-a9f5-a8e011e39ff2-credential-keys\") pod \"keystone-bootstrap-ln4pq\" (UID: \"ef403703-395e-4db1-a9f5-a8e011e39ff2\") " pod="openstack/keystone-bootstrap-ln4pq" Jan 26 13:19:30 crc kubenswrapper[4844]: I0126 13:19:30.725880 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef403703-395e-4db1-a9f5-a8e011e39ff2-config-data\") pod \"keystone-bootstrap-ln4pq\" (UID: \"ef403703-395e-4db1-a9f5-a8e011e39ff2\") " pod="openstack/keystone-bootstrap-ln4pq" Jan 26 13:19:30 crc kubenswrapper[4844]: I0126 13:19:30.726420 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ef403703-395e-4db1-a9f5-a8e011e39ff2-credential-keys\") pod \"keystone-bootstrap-ln4pq\" (UID: \"ef403703-395e-4db1-a9f5-a8e011e39ff2\") " pod="openstack/keystone-bootstrap-ln4pq" Jan 26 13:19:30 crc kubenswrapper[4844]: I0126 13:19:30.727318 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ef403703-395e-4db1-a9f5-a8e011e39ff2-fernet-keys\") pod 
\"keystone-bootstrap-ln4pq\" (UID: \"ef403703-395e-4db1-a9f5-a8e011e39ff2\") " pod="openstack/keystone-bootstrap-ln4pq" Jan 26 13:19:30 crc kubenswrapper[4844]: I0126 13:19:30.728254 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef403703-395e-4db1-a9f5-a8e011e39ff2-scripts\") pod \"keystone-bootstrap-ln4pq\" (UID: \"ef403703-395e-4db1-a9f5-a8e011e39ff2\") " pod="openstack/keystone-bootstrap-ln4pq" Jan 26 13:19:30 crc kubenswrapper[4844]: I0126 13:19:30.728417 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef403703-395e-4db1-a9f5-a8e011e39ff2-combined-ca-bundle\") pod \"keystone-bootstrap-ln4pq\" (UID: \"ef403703-395e-4db1-a9f5-a8e011e39ff2\") " pod="openstack/keystone-bootstrap-ln4pq" Jan 26 13:19:30 crc kubenswrapper[4844]: I0126 13:19:30.744658 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dzz9\" (UniqueName: \"kubernetes.io/projected/ef403703-395e-4db1-a9f5-a8e011e39ff2-kube-api-access-6dzz9\") pod \"keystone-bootstrap-ln4pq\" (UID: \"ef403703-395e-4db1-a9f5-a8e011e39ff2\") " pod="openstack/keystone-bootstrap-ln4pq" Jan 26 13:19:30 crc kubenswrapper[4844]: I0126 13:19:30.860294 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-ln4pq" Jan 26 13:19:31 crc kubenswrapper[4844]: I0126 13:19:31.091284 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-5w9q7" event={"ID":"db436f05-9b6d-4342-82d0-524c18fe6079","Type":"ContainerDied","Data":"708cb1ab377da8806c4a729c7906a563bc846e0c5169aed8f6891cec2ccaada2"} Jan 26 13:19:31 crc kubenswrapper[4844]: I0126 13:19:31.091326 4844 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="708cb1ab377da8806c4a729c7906a563bc846e0c5169aed8f6891cec2ccaada2" Jan 26 13:19:31 crc kubenswrapper[4844]: I0126 13:19:31.091405 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-db-sync-5w9q7" Jan 26 13:19:31 crc kubenswrapper[4844]: I0126 13:19:31.333989 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1979816f-0e1c-427a-b6aa-97b147a4c622" path="/var/lib/kubelet/pods/1979816f-0e1c-427a-b6aa-97b147a4c622/volumes" Jan 26 13:19:31 crc kubenswrapper[4844]: I0126 13:19:31.335704 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cd1a1b9-d504-45dc-a3d1-5b1c3bed85a4" path="/var/lib/kubelet/pods/8cd1a1b9-d504-45dc-a3d1-5b1c3bed85a4/volumes" Jan 26 13:19:31 crc kubenswrapper[4844]: I0126 13:19:31.336949 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be9958f1-c7db-4c90-9f58-7dee7e86e728" path="/var/lib/kubelet/pods/be9958f1-c7db-4c90-9f58-7dee7e86e728/volumes" Jan 26 13:19:31 crc kubenswrapper[4844]: I0126 13:19:31.337812 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cbd86931-9c64-42e8-911a-f0a8044098c4" path="/var/lib/kubelet/pods/cbd86931-9c64-42e8-911a-f0a8044098c4/volumes" Jan 26 13:19:31 crc kubenswrapper[4844]: I0126 13:19:31.557495 4844 scope.go:117] "RemoveContainer" containerID="9d31016880287673cc6d24cb62a2939b55257378c538d6d96ec095337ec487a6" Jan 26 13:19:31 crc kubenswrapper[4844]: I0126 13:19:31.564235 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 26 13:19:31 crc kubenswrapper[4844]: I0126 13:19:31.565706 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Jan 26 13:19:31 crc kubenswrapper[4844]: I0126 13:19:31.568853 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-watcher-dockercfg-gbbb6" Jan 26 13:19:31 crc kubenswrapper[4844]: I0126 13:19:31.568869 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-decision-engine-config-data" Jan 26 13:19:31 crc kubenswrapper[4844]: W0126 13:19:31.570008 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2f336c66_c9c1_4764_8f55_a6fd70f01790.slice/crio-8cbeeabeda98d6efd19df33bdbcb67b60c23ab160c94c8324901cc866386fc92 WatchSource:0}: Error finding container 8cbeeabeda98d6efd19df33bdbcb67b60c23ab160c94c8324901cc866386fc92: Status 404 returned error can't find the container with id 8cbeeabeda98d6efd19df33bdbcb67b60c23ab160c94c8324901cc866386fc92 Jan 26 13:19:31 crc kubenswrapper[4844]: I0126 13:19:31.583318 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 26 13:19:31 crc kubenswrapper[4844]: I0126 13:19:31.656789 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed782618-8b69-4456-9aec-5184e765960f-config-data\") pod \"watcher-decision-engine-0\" (UID: \"ed782618-8b69-4456-9aec-5184e765960f\") " pod="openstack/watcher-decision-engine-0" Jan 26 13:19:31 crc kubenswrapper[4844]: I0126 13:19:31.656887 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/ed782618-8b69-4456-9aec-5184e765960f-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"ed782618-8b69-4456-9aec-5184e765960f\") " pod="openstack/watcher-decision-engine-0" Jan 26 13:19:31 crc kubenswrapper[4844]: I0126 13:19:31.656944 4844 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8d849\" (UniqueName: \"kubernetes.io/projected/ed782618-8b69-4456-9aec-5184e765960f-kube-api-access-8d849\") pod \"watcher-decision-engine-0\" (UID: \"ed782618-8b69-4456-9aec-5184e765960f\") " pod="openstack/watcher-decision-engine-0" Jan 26 13:19:31 crc kubenswrapper[4844]: I0126 13:19:31.656980 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed782618-8b69-4456-9aec-5184e765960f-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"ed782618-8b69-4456-9aec-5184e765960f\") " pod="openstack/watcher-decision-engine-0" Jan 26 13:19:31 crc kubenswrapper[4844]: I0126 13:19:31.657087 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ed782618-8b69-4456-9aec-5184e765960f-logs\") pod \"watcher-decision-engine-0\" (UID: \"ed782618-8b69-4456-9aec-5184e765960f\") " pod="openstack/watcher-decision-engine-0" Jan 26 13:19:31 crc kubenswrapper[4844]: I0126 13:19:31.666671 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-api-0"] Jan 26 13:19:31 crc kubenswrapper[4844]: I0126 13:19:31.668133 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Jan 26 13:19:31 crc kubenswrapper[4844]: I0126 13:19:31.688018 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data" Jan 26 13:19:31 crc kubenswrapper[4844]: I0126 13:19:31.694645 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Jan 26 13:19:31 crc kubenswrapper[4844]: I0126 13:19:31.707482 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-applier-0"] Jan 26 13:19:31 crc kubenswrapper[4844]: I0126 13:19:31.708693 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-applier-0" Jan 26 13:19:31 crc kubenswrapper[4844]: I0126 13:19:31.712636 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-applier-config-data" Jan 26 13:19:31 crc kubenswrapper[4844]: E0126 13:19:31.733208 4844 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.9:5001/podified-master-centos10/openstack-cinder-api:watcher_latest" Jan 26 13:19:31 crc kubenswrapper[4844]: E0126 13:19:31.733400 4844 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.9:5001/podified-master-centos10/openstack-cinder-api:watcher_latest" Jan 26 13:19:31 crc kubenswrapper[4844]: E0126 13:19:31.733494 4844 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:38.102.83.9:5001/podified-master-centos10/openstack-cinder-api:watcher_latest,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l4pp4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-dcfgm_openstack(5f82260f-cde4-4197-8718-d7adebadeddb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 13:19:31 crc kubenswrapper[4844]: E0126 13:19:31.739341 4844 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-dcfgm" podUID="5f82260f-cde4-4197-8718-d7adebadeddb" Jan 26 13:19:31 crc kubenswrapper[4844]: I0126 13:19:31.759066 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed782618-8b69-4456-9aec-5184e765960f-config-data\") pod \"watcher-decision-engine-0\" (UID: \"ed782618-8b69-4456-9aec-5184e765960f\") " pod="openstack/watcher-decision-engine-0" Jan 26 13:19:31 crc kubenswrapper[4844]: I0126 13:19:31.759283 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75853a49-c21a-4df8-bcdf-0b160524e203-logs\") pod \"watcher-applier-0\" (UID: \"75853a49-c21a-4df8-bcdf-0b160524e203\") " pod="openstack/watcher-applier-0" Jan 26 13:19:31 crc kubenswrapper[4844]: I0126 13:19:31.759365 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/ed782618-8b69-4456-9aec-5184e765960f-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"ed782618-8b69-4456-9aec-5184e765960f\") " pod="openstack/watcher-decision-engine-0" Jan 26 13:19:31 crc kubenswrapper[4844]: I0126 13:19:31.759467 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75853a49-c21a-4df8-bcdf-0b160524e203-config-data\") pod \"watcher-applier-0\" (UID: \"75853a49-c21a-4df8-bcdf-0b160524e203\") " pod="openstack/watcher-applier-0" Jan 26 13:19:31 crc kubenswrapper[4844]: I0126 13:19:31.759550 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3764b649-1758-4f78-83b5-8a13118c9bc9-config-data\") pod \"watcher-api-0\" (UID: \"3764b649-1758-4f78-83b5-8a13118c9bc9\") " pod="openstack/watcher-api-0" Jan 26 13:19:31 crc kubenswrapper[4844]: I0126 13:19:31.759630 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75853a49-c21a-4df8-bcdf-0b160524e203-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"75853a49-c21a-4df8-bcdf-0b160524e203\") " pod="openstack/watcher-applier-0" Jan 26 13:19:31 crc kubenswrapper[4844]: I0126 13:19:31.759700 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8d849\" (UniqueName: \"kubernetes.io/projected/ed782618-8b69-4456-9aec-5184e765960f-kube-api-access-8d849\") pod \"watcher-decision-engine-0\" (UID: \"ed782618-8b69-4456-9aec-5184e765960f\") " pod="openstack/watcher-decision-engine-0" Jan 26 13:19:31 crc kubenswrapper[4844]: I0126 13:19:31.759770 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed782618-8b69-4456-9aec-5184e765960f-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"ed782618-8b69-4456-9aec-5184e765960f\") " pod="openstack/watcher-decision-engine-0" Jan 26 13:19:31 crc kubenswrapper[4844]: I0126 13:19:31.759875 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: 
\"kubernetes.io/secret/3764b649-1758-4f78-83b5-8a13118c9bc9-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"3764b649-1758-4f78-83b5-8a13118c9bc9\") " pod="openstack/watcher-api-0" Jan 26 13:19:31 crc kubenswrapper[4844]: I0126 13:19:31.759947 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nq5sk\" (UniqueName: \"kubernetes.io/projected/75853a49-c21a-4df8-bcdf-0b160524e203-kube-api-access-nq5sk\") pod \"watcher-applier-0\" (UID: \"75853a49-c21a-4df8-bcdf-0b160524e203\") " pod="openstack/watcher-applier-0" Jan 26 13:19:31 crc kubenswrapper[4844]: I0126 13:19:31.760018 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttsgv\" (UniqueName: \"kubernetes.io/projected/3764b649-1758-4f78-83b5-8a13118c9bc9-kube-api-access-ttsgv\") pod \"watcher-api-0\" (UID: \"3764b649-1758-4f78-83b5-8a13118c9bc9\") " pod="openstack/watcher-api-0" Jan 26 13:19:31 crc kubenswrapper[4844]: I0126 13:19:31.760089 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ed782618-8b69-4456-9aec-5184e765960f-logs\") pod \"watcher-decision-engine-0\" (UID: \"ed782618-8b69-4456-9aec-5184e765960f\") " pod="openstack/watcher-decision-engine-0" Jan 26 13:19:31 crc kubenswrapper[4844]: I0126 13:19:31.760160 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3764b649-1758-4f78-83b5-8a13118c9bc9-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"3764b649-1758-4f78-83b5-8a13118c9bc9\") " pod="openstack/watcher-api-0" Jan 26 13:19:31 crc kubenswrapper[4844]: I0126 13:19:31.760246 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3764b649-1758-4f78-83b5-8a13118c9bc9-logs\") pod \"watcher-api-0\" (UID: \"3764b649-1758-4f78-83b5-8a13118c9bc9\") " pod="openstack/watcher-api-0" Jan 26 13:19:31 crc kubenswrapper[4844]: I0126 13:19:31.764269 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ed782618-8b69-4456-9aec-5184e765960f-logs\") pod \"watcher-decision-engine-0\" (UID: \"ed782618-8b69-4456-9aec-5184e765960f\") " pod="openstack/watcher-decision-engine-0" Jan 26 13:19:31 crc kubenswrapper[4844]: I0126 13:19:31.766936 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed782618-8b69-4456-9aec-5184e765960f-config-data\") pod \"watcher-decision-engine-0\" (UID: \"ed782618-8b69-4456-9aec-5184e765960f\") " pod="openstack/watcher-decision-engine-0" Jan 26 13:19:31 crc kubenswrapper[4844]: I0126 13:19:31.769846 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed782618-8b69-4456-9aec-5184e765960f-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"ed782618-8b69-4456-9aec-5184e765960f\") " pod="openstack/watcher-decision-engine-0" Jan 26 13:19:31 crc kubenswrapper[4844]: I0126 13:19:31.776455 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Jan 26 13:19:31 crc kubenswrapper[4844]: I0126 13:19:31.778840 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: 
\"kubernetes.io/secret/ed782618-8b69-4456-9aec-5184e765960f-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"ed782618-8b69-4456-9aec-5184e765960f\") " pod="openstack/watcher-decision-engine-0" Jan 26 13:19:31 crc kubenswrapper[4844]: I0126 13:19:31.790117 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8d849\" (UniqueName: \"kubernetes.io/projected/ed782618-8b69-4456-9aec-5184e765960f-kube-api-access-8d849\") pod \"watcher-decision-engine-0\" (UID: \"ed782618-8b69-4456-9aec-5184e765960f\") " pod="openstack/watcher-decision-engine-0" Jan 26 13:19:31 crc kubenswrapper[4844]: I0126 13:19:31.861762 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75853a49-c21a-4df8-bcdf-0b160524e203-logs\") pod \"watcher-applier-0\" (UID: \"75853a49-c21a-4df8-bcdf-0b160524e203\") " pod="openstack/watcher-applier-0" Jan 26 13:19:31 crc kubenswrapper[4844]: I0126 13:19:31.861821 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75853a49-c21a-4df8-bcdf-0b160524e203-config-data\") pod \"watcher-applier-0\" (UID: \"75853a49-c21a-4df8-bcdf-0b160524e203\") " pod="openstack/watcher-applier-0" Jan 26 13:19:31 crc kubenswrapper[4844]: I0126 13:19:31.861853 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3764b649-1758-4f78-83b5-8a13118c9bc9-config-data\") pod \"watcher-api-0\" (UID: \"3764b649-1758-4f78-83b5-8a13118c9bc9\") " pod="openstack/watcher-api-0" Jan 26 13:19:31 crc kubenswrapper[4844]: I0126 13:19:31.861870 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75853a49-c21a-4df8-bcdf-0b160524e203-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"75853a49-c21a-4df8-bcdf-0b160524e203\") " pod="openstack/watcher-applier-0" Jan 26 13:19:31 crc kubenswrapper[4844]: I0126 13:19:31.861922 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/3764b649-1758-4f78-83b5-8a13118c9bc9-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"3764b649-1758-4f78-83b5-8a13118c9bc9\") " pod="openstack/watcher-api-0" Jan 26 13:19:31 crc kubenswrapper[4844]: I0126 13:19:31.861940 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nq5sk\" (UniqueName: \"kubernetes.io/projected/75853a49-c21a-4df8-bcdf-0b160524e203-kube-api-access-nq5sk\") pod \"watcher-applier-0\" (UID: \"75853a49-c21a-4df8-bcdf-0b160524e203\") " pod="openstack/watcher-applier-0" Jan 26 13:19:31 crc kubenswrapper[4844]: I0126 13:19:31.861962 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttsgv\" (UniqueName: \"kubernetes.io/projected/3764b649-1758-4f78-83b5-8a13118c9bc9-kube-api-access-ttsgv\") pod \"watcher-api-0\" (UID: \"3764b649-1758-4f78-83b5-8a13118c9bc9\") " pod="openstack/watcher-api-0" Jan 26 13:19:31 crc kubenswrapper[4844]: I0126 13:19:31.861991 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3764b649-1758-4f78-83b5-8a13118c9bc9-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"3764b649-1758-4f78-83b5-8a13118c9bc9\") " pod="openstack/watcher-api-0" Jan 26 13:19:31 crc kubenswrapper[4844]: 
I0126 13:19:31.862026 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3764b649-1758-4f78-83b5-8a13118c9bc9-logs\") pod \"watcher-api-0\" (UID: \"3764b649-1758-4f78-83b5-8a13118c9bc9\") " pod="openstack/watcher-api-0" Jan 26 13:19:31 crc kubenswrapper[4844]: I0126 13:19:31.862394 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3764b649-1758-4f78-83b5-8a13118c9bc9-logs\") pod \"watcher-api-0\" (UID: \"3764b649-1758-4f78-83b5-8a13118c9bc9\") " pod="openstack/watcher-api-0" Jan 26 13:19:31 crc kubenswrapper[4844]: I0126 13:19:31.862676 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75853a49-c21a-4df8-bcdf-0b160524e203-logs\") pod \"watcher-applier-0\" (UID: \"75853a49-c21a-4df8-bcdf-0b160524e203\") " pod="openstack/watcher-applier-0" Jan 26 13:19:31 crc kubenswrapper[4844]: I0126 13:19:31.866043 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75853a49-c21a-4df8-bcdf-0b160524e203-config-data\") pod \"watcher-applier-0\" (UID: \"75853a49-c21a-4df8-bcdf-0b160524e203\") " pod="openstack/watcher-applier-0" Jan 26 13:19:31 crc kubenswrapper[4844]: I0126 13:19:31.868234 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3764b649-1758-4f78-83b5-8a13118c9bc9-config-data\") pod \"watcher-api-0\" (UID: \"3764b649-1758-4f78-83b5-8a13118c9bc9\") " pod="openstack/watcher-api-0" Jan 26 13:19:31 crc kubenswrapper[4844]: I0126 13:19:31.868407 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/3764b649-1758-4f78-83b5-8a13118c9bc9-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"3764b649-1758-4f78-83b5-8a13118c9bc9\") " pod="openstack/watcher-api-0" Jan 26 13:19:31 crc kubenswrapper[4844]: I0126 13:19:31.868853 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75853a49-c21a-4df8-bcdf-0b160524e203-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"75853a49-c21a-4df8-bcdf-0b160524e203\") " pod="openstack/watcher-applier-0" Jan 26 13:19:31 crc kubenswrapper[4844]: I0126 13:19:31.869322 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3764b649-1758-4f78-83b5-8a13118c9bc9-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"3764b649-1758-4f78-83b5-8a13118c9bc9\") " pod="openstack/watcher-api-0" Jan 26 13:19:31 crc kubenswrapper[4844]: I0126 13:19:31.880988 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttsgv\" (UniqueName: \"kubernetes.io/projected/3764b649-1758-4f78-83b5-8a13118c9bc9-kube-api-access-ttsgv\") pod \"watcher-api-0\" (UID: \"3764b649-1758-4f78-83b5-8a13118c9bc9\") " pod="openstack/watcher-api-0" Jan 26 13:19:31 crc kubenswrapper[4844]: I0126 13:19:31.888424 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nq5sk\" (UniqueName: \"kubernetes.io/projected/75853a49-c21a-4df8-bcdf-0b160524e203-kube-api-access-nq5sk\") pod \"watcher-applier-0\" (UID: \"75853a49-c21a-4df8-bcdf-0b160524e203\") " pod="openstack/watcher-applier-0" Jan 26 13:19:32 crc kubenswrapper[4844]: I0126 13:19:32.015553 4844 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Jan 26 13:19:32 crc kubenswrapper[4844]: I0126 13:19:32.027740 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Jan 26 13:19:32 crc kubenswrapper[4844]: I0126 13:19:32.034298 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0" Jan 26 13:19:32 crc kubenswrapper[4844]: I0126 13:19:32.109280 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-f984df9c6-m8lct" event={"ID":"2f336c66-c9c1-4764-8f55-a6fd70f01790","Type":"ContainerStarted","Data":"8cbeeabeda98d6efd19df33bdbcb67b60c23ab160c94c8324901cc866386fc92"} Jan 26 13:19:32 crc kubenswrapper[4844]: E0126 13:19:32.113386 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.9:5001/podified-master-centos10/openstack-cinder-api:watcher_latest\\\"\"" pod="openstack/cinder-db-sync-dcfgm" podUID="5f82260f-cde4-4197-8718-d7adebadeddb" Jan 26 13:19:32 crc kubenswrapper[4844]: I0126 13:19:32.165674 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-77c8bf8786-w82f7"] Jan 26 13:19:32 crc kubenswrapper[4844]: I0126 13:19:32.228610 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-ln4pq"] Jan 26 13:19:32 crc kubenswrapper[4844]: I0126 13:19:32.641555 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 26 13:19:32 crc kubenswrapper[4844]: I0126 13:19:32.733399 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Jan 26 13:19:32 crc kubenswrapper[4844]: I0126 13:19:32.752924 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Jan 26 13:19:32 crc kubenswrapper[4844]: W0126 13:19:32.956798 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod75853a49_c21a_4df8_bcdf_0b160524e203.slice/crio-697b2c49d963ddf101274001bae5ab9a6fd66b3bd6c60e65b58ef424de115988 WatchSource:0}: Error finding container 697b2c49d963ddf101274001bae5ab9a6fd66b3bd6c60e65b58ef424de115988: Status 404 returned error can't find the container with id 697b2c49d963ddf101274001bae5ab9a6fd66b3bd6c60e65b58ef424de115988 Jan 26 13:19:32 crc kubenswrapper[4844]: W0126 13:19:32.957523 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poded782618_8b69_4456_9aec_5184e765960f.slice/crio-f6b2df85a64bb107e6bc87c6ada5f34f22972002638f7b5343530151b9f82742 WatchSource:0}: Error finding container f6b2df85a64bb107e6bc87c6ada5f34f22972002638f7b5343530151b9f82742: Status 404 returned error can't find the container with id f6b2df85a64bb107e6bc87c6ada5f34f22972002638f7b5343530151b9f82742 Jan 26 13:19:32 crc kubenswrapper[4844]: W0126 13:19:32.970505 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3764b649_1758_4f78_83b5_8a13118c9bc9.slice/crio-31c38c0623372ddcdbd48510b71d6a9ab644e2c9755a0a9a376acebbd08ed103 WatchSource:0}: Error finding container 31c38c0623372ddcdbd48510b71d6a9ab644e2c9755a0a9a376acebbd08ed103: Status 404 returned error can't find the container with id 
31c38c0623372ddcdbd48510b71d6a9ab644e2c9755a0a9a376acebbd08ed103 Jan 26 13:19:33 crc kubenswrapper[4844]: I0126 13:19:33.155978 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-9jq8s" event={"ID":"ce0ed764-c6f0-4580-89dd-4f6826df258d","Type":"ContainerStarted","Data":"61e9961bff931182a8012ad8856adbf430f38dc7f5ddea2b78bd38ec3bc96a2b"} Jan 26 13:19:33 crc kubenswrapper[4844]: I0126 13:19:33.205340 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-bt68v" event={"ID":"847c2c6b-16a5-4c1d-9122-81accf513fb4","Type":"ContainerStarted","Data":"2949d309e80d3a15df54de2b1eef2a3f1d14c1d816a1ac2a78e45f1b801c0ae9"} Jan 26 13:19:33 crc kubenswrapper[4844]: I0126 13:19:33.210440 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"3764b649-1758-4f78-83b5-8a13118c9bc9","Type":"ContainerStarted","Data":"31c38c0623372ddcdbd48510b71d6a9ab644e2c9755a0a9a376acebbd08ed103"} Jan 26 13:19:33 crc kubenswrapper[4844]: I0126 13:19:33.236755 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"75853a49-c21a-4df8-bcdf-0b160524e203","Type":"ContainerStarted","Data":"697b2c49d963ddf101274001bae5ab9a6fd66b3bd6c60e65b58ef424de115988"} Jan 26 13:19:33 crc kubenswrapper[4844]: I0126 13:19:33.237808 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-ln4pq" event={"ID":"ef403703-395e-4db1-a9f5-a8e011e39ff2","Type":"ContainerStarted","Data":"a72406668fda692ce46feb733741e04f3739de3a206a53ab5a4f35df5ca1d220"} Jan 26 13:19:33 crc kubenswrapper[4844]: I0126 13:19:33.257932 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-f984df9c6-m8lct" event={"ID":"2f336c66-c9c1-4764-8f55-a6fd70f01790","Type":"ContainerStarted","Data":"c6ebce027282a49648d65f221d8df430e516930ebe722a6821d99749d3838a00"} Jan 26 13:19:33 crc kubenswrapper[4844]: I0126 13:19:33.257999 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-f984df9c6-m8lct" event={"ID":"2f336c66-c9c1-4764-8f55-a6fd70f01790","Type":"ContainerStarted","Data":"b4a28fc027238c2c642ef160a8fb190c22d5b2b5a5c62897b96d66146b947b9e"} Jan 26 13:19:33 crc kubenswrapper[4844]: I0126 13:19:33.260329 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-77c8bf8786-w82f7" event={"ID":"a0edac82-6db3-481f-8c9e-8826b5aac863","Type":"ContainerStarted","Data":"19559d14208d6a56c390b835ea8fd05a8c3cf2fa3ee08a4872090bc4d62d111e"} Jan 26 13:19:33 crc kubenswrapper[4844]: I0126 13:19:33.260384 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-77c8bf8786-w82f7" event={"ID":"a0edac82-6db3-481f-8c9e-8826b5aac863","Type":"ContainerStarted","Data":"33b33e86ec2f74875a2d8e2ffae7f3e794ec70a9838b2d6989c3212faf29676b"} Jan 26 13:19:33 crc kubenswrapper[4844]: I0126 13:19:33.262079 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-2xnzf" event={"ID":"43fe5130-0714-4f40-9d6a-9384eb72fa0a","Type":"ContainerStarted","Data":"1b85fee309ae0e4dbc8b160f74806d6d702e7676b68d662560a47c021cd5f8a1"} Jan 26 13:19:33 crc kubenswrapper[4844]: I0126 13:19:33.267055 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-9jq8s" podStartSLOduration=7.226927032 podStartE2EDuration="1m10.267034829s" podCreationTimestamp="2026-01-26 13:18:23 +0000 UTC" firstStartedPulling="2026-01-26 13:18:28.551068116 +0000 UTC m=+2085.484435728" 
lastFinishedPulling="2026-01-26 13:19:31.591175923 +0000 UTC m=+2148.524543525" observedRunningTime="2026-01-26 13:19:33.201456613 +0000 UTC m=+2150.134824215" watchObservedRunningTime="2026-01-26 13:19:33.267034829 +0000 UTC m=+2150.200402431" Jan 26 13:19:33 crc kubenswrapper[4844]: I0126 13:19:33.268224 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"ed782618-8b69-4456-9aec-5184e765960f","Type":"ContainerStarted","Data":"f6b2df85a64bb107e6bc87c6ada5f34f22972002638f7b5343530151b9f82742"} Jan 26 13:19:33 crc kubenswrapper[4844]: I0126 13:19:33.281584 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-bt68v" podStartSLOduration=6.39333876 podStartE2EDuration="47.2815597s" podCreationTimestamp="2026-01-26 13:18:46 +0000 UTC" firstStartedPulling="2026-01-26 13:18:48.239776084 +0000 UTC m=+2105.173143696" lastFinishedPulling="2026-01-26 13:19:29.127996974 +0000 UTC m=+2146.061364636" observedRunningTime="2026-01-26 13:19:33.250101949 +0000 UTC m=+2150.183469561" watchObservedRunningTime="2026-01-26 13:19:33.2815597 +0000 UTC m=+2150.214927312" Jan 26 13:19:33 crc kubenswrapper[4844]: I0126 13:19:33.336426 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-f984df9c6-m8lct" podStartSLOduration=38.026361361 podStartE2EDuration="38.336405216s" podCreationTimestamp="2026-01-26 13:18:55 +0000 UTC" firstStartedPulling="2026-01-26 13:19:31.657237611 +0000 UTC m=+2148.590605223" lastFinishedPulling="2026-01-26 13:19:31.967281466 +0000 UTC m=+2148.900649078" observedRunningTime="2026-01-26 13:19:33.32458298 +0000 UTC m=+2150.257950592" watchObservedRunningTime="2026-01-26 13:19:33.336405216 +0000 UTC m=+2150.269772828" Jan 26 13:19:33 crc kubenswrapper[4844]: I0126 13:19:33.357016 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-2xnzf" podStartSLOduration=3.363734616 podStartE2EDuration="47.356994264s" podCreationTimestamp="2026-01-26 13:18:46 +0000 UTC" firstStartedPulling="2026-01-26 13:18:47.976251453 +0000 UTC m=+2104.909619065" lastFinishedPulling="2026-01-26 13:19:31.969511101 +0000 UTC m=+2148.902878713" observedRunningTime="2026-01-26 13:19:33.353253843 +0000 UTC m=+2150.286621455" watchObservedRunningTime="2026-01-26 13:19:33.356994264 +0000 UTC m=+2150.290361866" Jan 26 13:19:35 crc kubenswrapper[4844]: I0126 13:19:35.912775 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-f984df9c6-m8lct" Jan 26 13:19:35 crc kubenswrapper[4844]: I0126 13:19:35.913479 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-f984df9c6-m8lct" Jan 26 13:19:37 crc kubenswrapper[4844]: I0126 13:19:37.310791 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-77c8bf8786-w82f7" event={"ID":"a0edac82-6db3-481f-8c9e-8826b5aac863","Type":"ContainerStarted","Data":"928306fe6f896d3ecc18e91a7a85fd21d2fb768da1b2cf2b20be832f306a5dcf"} Jan 26 13:19:37 crc kubenswrapper[4844]: I0126 13:19:37.339934 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"3764b649-1758-4f78-83b5-8a13118c9bc9","Type":"ContainerStarted","Data":"1bc285b92109c80e805ca30c245d2f348bb4aa73f399fb09fecd5e0fa5064ace"} Jan 26 13:19:37 crc kubenswrapper[4844]: I0126 13:19:37.340392 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"ad438e4d-9282-48b8-88c1-1f974bb26b5e","Type":"ContainerStarted","Data":"282ef0f047b2f4b694df966e27dbe553b91659664164f94cf8c45a10a3267d7f"} Jan 26 13:19:37 crc kubenswrapper[4844]: I0126 13:19:37.340420 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-ln4pq" event={"ID":"ef403703-395e-4db1-a9f5-a8e011e39ff2","Type":"ContainerStarted","Data":"5077f0e26a12144f58d459cbf7f199370b10cdd16c8f8cfa2de83245276a6c35"} Jan 26 13:19:37 crc kubenswrapper[4844]: I0126 13:19:37.350918 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-77c8bf8786-w82f7" podStartSLOduration=42.350891949 podStartE2EDuration="42.350891949s" podCreationTimestamp="2026-01-26 13:18:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 13:19:37.343556092 +0000 UTC m=+2154.276923764" watchObservedRunningTime="2026-01-26 13:19:37.350891949 +0000 UTC m=+2154.284259581" Jan 26 13:19:37 crc kubenswrapper[4844]: I0126 13:19:37.372957 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-ln4pq" podStartSLOduration=7.372936062 podStartE2EDuration="7.372936062s" podCreationTimestamp="2026-01-26 13:19:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 13:19:37.360662075 +0000 UTC m=+2154.294029717" watchObservedRunningTime="2026-01-26 13:19:37.372936062 +0000 UTC m=+2154.306303674" Jan 26 13:19:38 crc kubenswrapper[4844]: I0126 13:19:38.348633 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"3764b649-1758-4f78-83b5-8a13118c9bc9","Type":"ContainerStarted","Data":"d87fb2ffe7bc2f7ab5797333cc9df60d6f399f35b1b649fb31cac650668cd76b"} Jan 26 13:19:38 crc kubenswrapper[4844]: I0126 13:19:38.349876 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Jan 26 13:19:38 crc kubenswrapper[4844]: I0126 13:19:38.373812 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-api-0" podStartSLOduration=7.3737937989999995 podStartE2EDuration="7.373793799s" podCreationTimestamp="2026-01-26 13:19:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 13:19:38.363239993 +0000 UTC m=+2155.296607605" watchObservedRunningTime="2026-01-26 13:19:38.373793799 +0000 UTC m=+2155.307161411" Jan 26 13:19:39 crc kubenswrapper[4844]: I0126 13:19:39.363325 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"ed782618-8b69-4456-9aec-5184e765960f","Type":"ContainerStarted","Data":"2241b7110e18540a04d6ef710e0fbd5c297204daf480af4f3e67d95a9f508da2"} Jan 26 13:19:39 crc kubenswrapper[4844]: I0126 13:19:39.368610 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"75853a49-c21a-4df8-bcdf-0b160524e203","Type":"ContainerStarted","Data":"b5d353111f2b93f1baee3dfd3ee9952b515f5452840d77c09ab4146270b2393a"} Jan 26 13:19:39 crc kubenswrapper[4844]: I0126 13:19:39.385545 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-decision-engine-0" podStartSLOduration=3.627216603 podStartE2EDuration="8.385529448s" podCreationTimestamp="2026-01-26 13:19:31 +0000 UTC" 
firstStartedPulling="2026-01-26 13:19:32.960033467 +0000 UTC m=+2149.893401079" lastFinishedPulling="2026-01-26 13:19:37.718346312 +0000 UTC m=+2154.651713924" observedRunningTime="2026-01-26 13:19:39.381015369 +0000 UTC m=+2156.314383001" watchObservedRunningTime="2026-01-26 13:19:39.385529448 +0000 UTC m=+2156.318897060" Jan 26 13:19:39 crc kubenswrapper[4844]: I0126 13:19:39.407659 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-applier-0" podStartSLOduration=3.646761925 podStartE2EDuration="8.407641302s" podCreationTimestamp="2026-01-26 13:19:31 +0000 UTC" firstStartedPulling="2026-01-26 13:19:32.958882839 +0000 UTC m=+2149.892250451" lastFinishedPulling="2026-01-26 13:19:37.719762226 +0000 UTC m=+2154.653129828" observedRunningTime="2026-01-26 13:19:39.400381927 +0000 UTC m=+2156.333749569" watchObservedRunningTime="2026-01-26 13:19:39.407641302 +0000 UTC m=+2156.341008914" Jan 26 13:19:40 crc kubenswrapper[4844]: I0126 13:19:40.376828 4844 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 13:19:40 crc kubenswrapper[4844]: I0126 13:19:40.630036 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Jan 26 13:19:41 crc kubenswrapper[4844]: I0126 13:19:41.389929 4844 generic.go:334] "Generic (PLEG): container finished" podID="ef403703-395e-4db1-a9f5-a8e011e39ff2" containerID="5077f0e26a12144f58d459cbf7f199370b10cdd16c8f8cfa2de83245276a6c35" exitCode=0 Jan 26 13:19:41 crc kubenswrapper[4844]: I0126 13:19:41.390019 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-ln4pq" event={"ID":"ef403703-395e-4db1-a9f5-a8e011e39ff2","Type":"ContainerDied","Data":"5077f0e26a12144f58d459cbf7f199370b10cdd16c8f8cfa2de83245276a6c35"} Jan 26 13:19:42 crc kubenswrapper[4844]: I0126 13:19:42.017069 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Jan 26 13:19:42 crc kubenswrapper[4844]: I0126 13:19:42.031389 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-api-0" Jan 26 13:19:42 crc kubenswrapper[4844]: I0126 13:19:42.032146 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Jan 26 13:19:42 crc kubenswrapper[4844]: I0126 13:19:42.035125 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-applier-0" Jan 26 13:19:42 crc kubenswrapper[4844]: I0126 13:19:42.036468 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0" Jan 26 13:19:42 crc kubenswrapper[4844]: I0126 13:19:42.038714 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-api-0" Jan 26 13:19:42 crc kubenswrapper[4844]: I0126 13:19:42.064169 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Jan 26 13:19:42 crc kubenswrapper[4844]: I0126 13:19:42.065244 4844 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 13:19:42 crc kubenswrapper[4844]: I0126 13:19:42.078064 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-applier-0" Jan 26 13:19:42 crc kubenswrapper[4844]: I0126 13:19:42.399532 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"ad438e4d-9282-48b8-88c1-1f974bb26b5e","Type":"ContainerStarted","Data":"3a3bf17791c32d5fb5b785576ef455a7cc2d45fedf3ba47cb171731a20b10664"} Jan 26 13:19:42 crc kubenswrapper[4844]: I0126 13:19:42.400006 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Jan 26 13:19:42 crc kubenswrapper[4844]: I0126 13:19:42.405528 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Jan 26 13:19:42 crc kubenswrapper[4844]: I0126 13:19:42.440129 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0" Jan 26 13:19:42 crc kubenswrapper[4844]: I0126 13:19:42.478931 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-applier-0" Jan 26 13:19:42 crc kubenswrapper[4844]: I0126 13:19:42.858288 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-ln4pq" Jan 26 13:19:42 crc kubenswrapper[4844]: I0126 13:19:42.918923 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef403703-395e-4db1-a9f5-a8e011e39ff2-scripts\") pod \"ef403703-395e-4db1-a9f5-a8e011e39ff2\" (UID: \"ef403703-395e-4db1-a9f5-a8e011e39ff2\") " Jan 26 13:19:42 crc kubenswrapper[4844]: I0126 13:19:42.919207 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef403703-395e-4db1-a9f5-a8e011e39ff2-config-data\") pod \"ef403703-395e-4db1-a9f5-a8e011e39ff2\" (UID: \"ef403703-395e-4db1-a9f5-a8e011e39ff2\") " Jan 26 13:19:42 crc kubenswrapper[4844]: I0126 13:19:42.919449 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ef403703-395e-4db1-a9f5-a8e011e39ff2-fernet-keys\") pod \"ef403703-395e-4db1-a9f5-a8e011e39ff2\" (UID: \"ef403703-395e-4db1-a9f5-a8e011e39ff2\") " Jan 26 13:19:42 crc kubenswrapper[4844]: I0126 13:19:42.919611 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ef403703-395e-4db1-a9f5-a8e011e39ff2-credential-keys\") pod \"ef403703-395e-4db1-a9f5-a8e011e39ff2\" (UID: \"ef403703-395e-4db1-a9f5-a8e011e39ff2\") " Jan 26 13:19:42 crc kubenswrapper[4844]: I0126 13:19:42.919847 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dzz9\" (UniqueName: \"kubernetes.io/projected/ef403703-395e-4db1-a9f5-a8e011e39ff2-kube-api-access-6dzz9\") pod \"ef403703-395e-4db1-a9f5-a8e011e39ff2\" (UID: \"ef403703-395e-4db1-a9f5-a8e011e39ff2\") " Jan 26 13:19:42 crc kubenswrapper[4844]: I0126 13:19:42.919956 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef403703-395e-4db1-a9f5-a8e011e39ff2-combined-ca-bundle\") pod \"ef403703-395e-4db1-a9f5-a8e011e39ff2\" (UID: \"ef403703-395e-4db1-a9f5-a8e011e39ff2\") " Jan 26 13:19:42 crc kubenswrapper[4844]: I0126 13:19:42.927084 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef403703-395e-4db1-a9f5-a8e011e39ff2-scripts" (OuterVolumeSpecName: "scripts") pod "ef403703-395e-4db1-a9f5-a8e011e39ff2" (UID: "ef403703-395e-4db1-a9f5-a8e011e39ff2"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:19:42 crc kubenswrapper[4844]: I0126 13:19:42.927465 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef403703-395e-4db1-a9f5-a8e011e39ff2-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "ef403703-395e-4db1-a9f5-a8e011e39ff2" (UID: "ef403703-395e-4db1-a9f5-a8e011e39ff2"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:19:42 crc kubenswrapper[4844]: I0126 13:19:42.928223 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef403703-395e-4db1-a9f5-a8e011e39ff2-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "ef403703-395e-4db1-a9f5-a8e011e39ff2" (UID: "ef403703-395e-4db1-a9f5-a8e011e39ff2"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:19:42 crc kubenswrapper[4844]: I0126 13:19:42.937891 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef403703-395e-4db1-a9f5-a8e011e39ff2-kube-api-access-6dzz9" (OuterVolumeSpecName: "kube-api-access-6dzz9") pod "ef403703-395e-4db1-a9f5-a8e011e39ff2" (UID: "ef403703-395e-4db1-a9f5-a8e011e39ff2"). InnerVolumeSpecName "kube-api-access-6dzz9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:19:42 crc kubenswrapper[4844]: I0126 13:19:42.959927 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef403703-395e-4db1-a9f5-a8e011e39ff2-config-data" (OuterVolumeSpecName: "config-data") pod "ef403703-395e-4db1-a9f5-a8e011e39ff2" (UID: "ef403703-395e-4db1-a9f5-a8e011e39ff2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:19:42 crc kubenswrapper[4844]: I0126 13:19:42.961450 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef403703-395e-4db1-a9f5-a8e011e39ff2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ef403703-395e-4db1-a9f5-a8e011e39ff2" (UID: "ef403703-395e-4db1-a9f5-a8e011e39ff2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:19:43 crc kubenswrapper[4844]: I0126 13:19:43.022743 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6dzz9\" (UniqueName: \"kubernetes.io/projected/ef403703-395e-4db1-a9f5-a8e011e39ff2-kube-api-access-6dzz9\") on node \"crc\" DevicePath \"\"" Jan 26 13:19:43 crc kubenswrapper[4844]: I0126 13:19:43.022775 4844 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef403703-395e-4db1-a9f5-a8e011e39ff2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 13:19:43 crc kubenswrapper[4844]: I0126 13:19:43.022785 4844 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef403703-395e-4db1-a9f5-a8e011e39ff2-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 13:19:43 crc kubenswrapper[4844]: I0126 13:19:43.022794 4844 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef403703-395e-4db1-a9f5-a8e011e39ff2-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 13:19:43 crc kubenswrapper[4844]: I0126 13:19:43.022803 4844 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ef403703-395e-4db1-a9f5-a8e011e39ff2-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 26 13:19:43 crc kubenswrapper[4844]: I0126 13:19:43.022811 4844 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ef403703-395e-4db1-a9f5-a8e011e39ff2-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 26 13:19:43 crc kubenswrapper[4844]: I0126 13:19:43.436327 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-bt68v" event={"ID":"847c2c6b-16a5-4c1d-9122-81accf513fb4","Type":"ContainerDied","Data":"2949d309e80d3a15df54de2b1eef2a3f1d14c1d816a1ac2a78e45f1b801c0ae9"} Jan 26 13:19:43 crc kubenswrapper[4844]: I0126 13:19:43.436269 4844 generic.go:334] "Generic (PLEG): container finished" podID="847c2c6b-16a5-4c1d-9122-81accf513fb4" containerID="2949d309e80d3a15df54de2b1eef2a3f1d14c1d816a1ac2a78e45f1b801c0ae9" exitCode=0 Jan 26 13:19:43 crc kubenswrapper[4844]: I0126 13:19:43.444591 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-ln4pq" Jan 26 13:19:43 crc kubenswrapper[4844]: I0126 13:19:43.445320 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-ln4pq" event={"ID":"ef403703-395e-4db1-a9f5-a8e011e39ff2","Type":"ContainerDied","Data":"a72406668fda692ce46feb733741e04f3739de3a206a53ab5a4f35df5ca1d220"} Jan 26 13:19:43 crc kubenswrapper[4844]: I0126 13:19:43.445351 4844 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a72406668fda692ce46feb733741e04f3739de3a206a53ab5a4f35df5ca1d220" Jan 26 13:19:43 crc kubenswrapper[4844]: I0126 13:19:43.567169 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-5db4cb7f67-85gvs"] Jan 26 13:19:43 crc kubenswrapper[4844]: E0126 13:19:43.567552 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef403703-395e-4db1-a9f5-a8e011e39ff2" containerName="keystone-bootstrap" Jan 26 13:19:43 crc kubenswrapper[4844]: I0126 13:19:43.567568 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef403703-395e-4db1-a9f5-a8e011e39ff2" containerName="keystone-bootstrap" Jan 26 13:19:43 crc kubenswrapper[4844]: I0126 13:19:43.567751 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef403703-395e-4db1-a9f5-a8e011e39ff2" containerName="keystone-bootstrap" Jan 26 13:19:43 crc kubenswrapper[4844]: I0126 13:19:43.568322 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5db4cb7f67-85gvs" Jan 26 13:19:43 crc kubenswrapper[4844]: I0126 13:19:43.570999 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 26 13:19:43 crc kubenswrapper[4844]: I0126 13:19:43.571481 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-l6kd4" Jan 26 13:19:43 crc kubenswrapper[4844]: I0126 13:19:43.571615 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 26 13:19:43 crc kubenswrapper[4844]: I0126 13:19:43.571777 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Jan 26 13:19:43 crc kubenswrapper[4844]: I0126 13:19:43.571906 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 26 13:19:43 crc kubenswrapper[4844]: I0126 13:19:43.572010 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Jan 26 13:19:43 crc kubenswrapper[4844]: I0126 13:19:43.598268 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5db4cb7f67-85gvs"] Jan 26 13:19:43 crc kubenswrapper[4844]: I0126 13:19:43.638489 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2096862-de7b-4d51-aa62-bc55d339a9dc-internal-tls-certs\") pod \"keystone-5db4cb7f67-85gvs\" (UID: \"d2096862-de7b-4d51-aa62-bc55d339a9dc\") " pod="openstack/keystone-5db4cb7f67-85gvs" Jan 26 13:19:43 crc kubenswrapper[4844]: I0126 13:19:43.638554 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d2096862-de7b-4d51-aa62-bc55d339a9dc-fernet-keys\") pod \"keystone-5db4cb7f67-85gvs\" (UID: \"d2096862-de7b-4d51-aa62-bc55d339a9dc\") " pod="openstack/keystone-5db4cb7f67-85gvs" Jan 26 13:19:43 crc kubenswrapper[4844]: I0126 
13:19:43.638613 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgdwx\" (UniqueName: \"kubernetes.io/projected/d2096862-de7b-4d51-aa62-bc55d339a9dc-kube-api-access-rgdwx\") pod \"keystone-5db4cb7f67-85gvs\" (UID: \"d2096862-de7b-4d51-aa62-bc55d339a9dc\") " pod="openstack/keystone-5db4cb7f67-85gvs" Jan 26 13:19:43 crc kubenswrapper[4844]: I0126 13:19:43.638647 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2096862-de7b-4d51-aa62-bc55d339a9dc-scripts\") pod \"keystone-5db4cb7f67-85gvs\" (UID: \"d2096862-de7b-4d51-aa62-bc55d339a9dc\") " pod="openstack/keystone-5db4cb7f67-85gvs" Jan 26 13:19:43 crc kubenswrapper[4844]: I0126 13:19:43.638696 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2096862-de7b-4d51-aa62-bc55d339a9dc-config-data\") pod \"keystone-5db4cb7f67-85gvs\" (UID: \"d2096862-de7b-4d51-aa62-bc55d339a9dc\") " pod="openstack/keystone-5db4cb7f67-85gvs" Jan 26 13:19:43 crc kubenswrapper[4844]: I0126 13:19:43.638735 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2096862-de7b-4d51-aa62-bc55d339a9dc-public-tls-certs\") pod \"keystone-5db4cb7f67-85gvs\" (UID: \"d2096862-de7b-4d51-aa62-bc55d339a9dc\") " pod="openstack/keystone-5db4cb7f67-85gvs" Jan 26 13:19:43 crc kubenswrapper[4844]: I0126 13:19:43.638776 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2096862-de7b-4d51-aa62-bc55d339a9dc-combined-ca-bundle\") pod \"keystone-5db4cb7f67-85gvs\" (UID: \"d2096862-de7b-4d51-aa62-bc55d339a9dc\") " pod="openstack/keystone-5db4cb7f67-85gvs" Jan 26 13:19:43 crc kubenswrapper[4844]: I0126 13:19:43.638826 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d2096862-de7b-4d51-aa62-bc55d339a9dc-credential-keys\") pod \"keystone-5db4cb7f67-85gvs\" (UID: \"d2096862-de7b-4d51-aa62-bc55d339a9dc\") " pod="openstack/keystone-5db4cb7f67-85gvs" Jan 26 13:19:43 crc kubenswrapper[4844]: I0126 13:19:43.742833 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d2096862-de7b-4d51-aa62-bc55d339a9dc-credential-keys\") pod \"keystone-5db4cb7f67-85gvs\" (UID: \"d2096862-de7b-4d51-aa62-bc55d339a9dc\") " pod="openstack/keystone-5db4cb7f67-85gvs" Jan 26 13:19:43 crc kubenswrapper[4844]: I0126 13:19:43.742904 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2096862-de7b-4d51-aa62-bc55d339a9dc-internal-tls-certs\") pod \"keystone-5db4cb7f67-85gvs\" (UID: \"d2096862-de7b-4d51-aa62-bc55d339a9dc\") " pod="openstack/keystone-5db4cb7f67-85gvs" Jan 26 13:19:43 crc kubenswrapper[4844]: I0126 13:19:43.742936 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d2096862-de7b-4d51-aa62-bc55d339a9dc-fernet-keys\") pod \"keystone-5db4cb7f67-85gvs\" (UID: \"d2096862-de7b-4d51-aa62-bc55d339a9dc\") " pod="openstack/keystone-5db4cb7f67-85gvs" Jan 26 13:19:43 crc kubenswrapper[4844]: I0126 
13:19:43.742992 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rgdwx\" (UniqueName: \"kubernetes.io/projected/d2096862-de7b-4d51-aa62-bc55d339a9dc-kube-api-access-rgdwx\") pod \"keystone-5db4cb7f67-85gvs\" (UID: \"d2096862-de7b-4d51-aa62-bc55d339a9dc\") " pod="openstack/keystone-5db4cb7f67-85gvs" Jan 26 13:19:43 crc kubenswrapper[4844]: I0126 13:19:43.743019 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2096862-de7b-4d51-aa62-bc55d339a9dc-scripts\") pod \"keystone-5db4cb7f67-85gvs\" (UID: \"d2096862-de7b-4d51-aa62-bc55d339a9dc\") " pod="openstack/keystone-5db4cb7f67-85gvs" Jan 26 13:19:43 crc kubenswrapper[4844]: I0126 13:19:43.743057 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2096862-de7b-4d51-aa62-bc55d339a9dc-config-data\") pod \"keystone-5db4cb7f67-85gvs\" (UID: \"d2096862-de7b-4d51-aa62-bc55d339a9dc\") " pod="openstack/keystone-5db4cb7f67-85gvs" Jan 26 13:19:43 crc kubenswrapper[4844]: I0126 13:19:43.743090 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2096862-de7b-4d51-aa62-bc55d339a9dc-public-tls-certs\") pod \"keystone-5db4cb7f67-85gvs\" (UID: \"d2096862-de7b-4d51-aa62-bc55d339a9dc\") " pod="openstack/keystone-5db4cb7f67-85gvs" Jan 26 13:19:43 crc kubenswrapper[4844]: I0126 13:19:43.743123 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2096862-de7b-4d51-aa62-bc55d339a9dc-combined-ca-bundle\") pod \"keystone-5db4cb7f67-85gvs\" (UID: \"d2096862-de7b-4d51-aa62-bc55d339a9dc\") " pod="openstack/keystone-5db4cb7f67-85gvs" Jan 26 13:19:43 crc kubenswrapper[4844]: I0126 13:19:43.747206 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d2096862-de7b-4d51-aa62-bc55d339a9dc-credential-keys\") pod \"keystone-5db4cb7f67-85gvs\" (UID: \"d2096862-de7b-4d51-aa62-bc55d339a9dc\") " pod="openstack/keystone-5db4cb7f67-85gvs" Jan 26 13:19:43 crc kubenswrapper[4844]: I0126 13:19:43.749156 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2096862-de7b-4d51-aa62-bc55d339a9dc-combined-ca-bundle\") pod \"keystone-5db4cb7f67-85gvs\" (UID: \"d2096862-de7b-4d51-aa62-bc55d339a9dc\") " pod="openstack/keystone-5db4cb7f67-85gvs" Jan 26 13:19:43 crc kubenswrapper[4844]: I0126 13:19:43.751519 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2096862-de7b-4d51-aa62-bc55d339a9dc-scripts\") pod \"keystone-5db4cb7f67-85gvs\" (UID: \"d2096862-de7b-4d51-aa62-bc55d339a9dc\") " pod="openstack/keystone-5db4cb7f67-85gvs" Jan 26 13:19:43 crc kubenswrapper[4844]: I0126 13:19:43.751580 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2096862-de7b-4d51-aa62-bc55d339a9dc-internal-tls-certs\") pod \"keystone-5db4cb7f67-85gvs\" (UID: \"d2096862-de7b-4d51-aa62-bc55d339a9dc\") " pod="openstack/keystone-5db4cb7f67-85gvs" Jan 26 13:19:43 crc kubenswrapper[4844]: I0126 13:19:43.752102 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/d2096862-de7b-4d51-aa62-bc55d339a9dc-config-data\") pod \"keystone-5db4cb7f67-85gvs\" (UID: \"d2096862-de7b-4d51-aa62-bc55d339a9dc\") " pod="openstack/keystone-5db4cb7f67-85gvs" Jan 26 13:19:43 crc kubenswrapper[4844]: I0126 13:19:43.754052 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2096862-de7b-4d51-aa62-bc55d339a9dc-public-tls-certs\") pod \"keystone-5db4cb7f67-85gvs\" (UID: \"d2096862-de7b-4d51-aa62-bc55d339a9dc\") " pod="openstack/keystone-5db4cb7f67-85gvs" Jan 26 13:19:43 crc kubenswrapper[4844]: I0126 13:19:43.755049 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d2096862-de7b-4d51-aa62-bc55d339a9dc-fernet-keys\") pod \"keystone-5db4cb7f67-85gvs\" (UID: \"d2096862-de7b-4d51-aa62-bc55d339a9dc\") " pod="openstack/keystone-5db4cb7f67-85gvs" Jan 26 13:19:43 crc kubenswrapper[4844]: I0126 13:19:43.764580 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rgdwx\" (UniqueName: \"kubernetes.io/projected/d2096862-de7b-4d51-aa62-bc55d339a9dc-kube-api-access-rgdwx\") pod \"keystone-5db4cb7f67-85gvs\" (UID: \"d2096862-de7b-4d51-aa62-bc55d339a9dc\") " pod="openstack/keystone-5db4cb7f67-85gvs" Jan 26 13:19:43 crc kubenswrapper[4844]: I0126 13:19:43.885120 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5db4cb7f67-85gvs" Jan 26 13:19:44 crc kubenswrapper[4844]: I0126 13:19:44.459074 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-dcfgm" event={"ID":"5f82260f-cde4-4197-8718-d7adebadeddb","Type":"ContainerStarted","Data":"e691abdd8667adb115d62dd072d4441593a9750fc8e01125dc49f5b64d4a7274"} Jan 26 13:19:44 crc kubenswrapper[4844]: I0126 13:19:44.470427 4844 generic.go:334] "Generic (PLEG): container finished" podID="ed782618-8b69-4456-9aec-5184e765960f" containerID="2241b7110e18540a04d6ef710e0fbd5c297204daf480af4f3e67d95a9f508da2" exitCode=1 Jan 26 13:19:44 crc kubenswrapper[4844]: I0126 13:19:44.471338 4844 scope.go:117] "RemoveContainer" containerID="2241b7110e18540a04d6ef710e0fbd5c297204daf480af4f3e67d95a9f508da2" Jan 26 13:19:44 crc kubenswrapper[4844]: I0126 13:19:44.472389 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"ed782618-8b69-4456-9aec-5184e765960f","Type":"ContainerDied","Data":"2241b7110e18540a04d6ef710e0fbd5c297204daf480af4f3e67d95a9f508da2"} Jan 26 13:19:44 crc kubenswrapper[4844]: I0126 13:19:44.501143 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-dcfgm" podStartSLOduration=3.056405366 podStartE2EDuration="58.501124301s" podCreationTimestamp="2026-01-26 13:18:46 +0000 UTC" firstStartedPulling="2026-01-26 13:18:47.967084321 +0000 UTC m=+2104.900451933" lastFinishedPulling="2026-01-26 13:19:43.411803256 +0000 UTC m=+2160.345170868" observedRunningTime="2026-01-26 13:19:44.488945857 +0000 UTC m=+2161.422313469" watchObservedRunningTime="2026-01-26 13:19:44.501124301 +0000 UTC m=+2161.434491903" Jan 26 13:19:44 crc kubenswrapper[4844]: I0126 13:19:44.550164 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5db4cb7f67-85gvs"] Jan 26 13:19:44 crc kubenswrapper[4844]: I0126 13:19:44.904722 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-bt68v" Jan 26 13:19:45 crc kubenswrapper[4844]: I0126 13:19:45.001256 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/847c2c6b-16a5-4c1d-9122-81accf513fb4-config-data\") pod \"847c2c6b-16a5-4c1d-9122-81accf513fb4\" (UID: \"847c2c6b-16a5-4c1d-9122-81accf513fb4\") " Jan 26 13:19:45 crc kubenswrapper[4844]: I0126 13:19:45.001318 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rppll\" (UniqueName: \"kubernetes.io/projected/847c2c6b-16a5-4c1d-9122-81accf513fb4-kube-api-access-rppll\") pod \"847c2c6b-16a5-4c1d-9122-81accf513fb4\" (UID: \"847c2c6b-16a5-4c1d-9122-81accf513fb4\") " Jan 26 13:19:45 crc kubenswrapper[4844]: I0126 13:19:45.001365 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/847c2c6b-16a5-4c1d-9122-81accf513fb4-logs\") pod \"847c2c6b-16a5-4c1d-9122-81accf513fb4\" (UID: \"847c2c6b-16a5-4c1d-9122-81accf513fb4\") " Jan 26 13:19:45 crc kubenswrapper[4844]: I0126 13:19:45.001397 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/847c2c6b-16a5-4c1d-9122-81accf513fb4-scripts\") pod \"847c2c6b-16a5-4c1d-9122-81accf513fb4\" (UID: \"847c2c6b-16a5-4c1d-9122-81accf513fb4\") " Jan 26 13:19:45 crc kubenswrapper[4844]: I0126 13:19:45.001472 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/847c2c6b-16a5-4c1d-9122-81accf513fb4-combined-ca-bundle\") pod \"847c2c6b-16a5-4c1d-9122-81accf513fb4\" (UID: \"847c2c6b-16a5-4c1d-9122-81accf513fb4\") " Jan 26 13:19:45 crc kubenswrapper[4844]: I0126 13:19:45.002907 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/847c2c6b-16a5-4c1d-9122-81accf513fb4-logs" (OuterVolumeSpecName: "logs") pod "847c2c6b-16a5-4c1d-9122-81accf513fb4" (UID: "847c2c6b-16a5-4c1d-9122-81accf513fb4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:19:45 crc kubenswrapper[4844]: I0126 13:19:45.006225 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/847c2c6b-16a5-4c1d-9122-81accf513fb4-scripts" (OuterVolumeSpecName: "scripts") pod "847c2c6b-16a5-4c1d-9122-81accf513fb4" (UID: "847c2c6b-16a5-4c1d-9122-81accf513fb4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:19:45 crc kubenswrapper[4844]: I0126 13:19:45.023587 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/847c2c6b-16a5-4c1d-9122-81accf513fb4-kube-api-access-rppll" (OuterVolumeSpecName: "kube-api-access-rppll") pod "847c2c6b-16a5-4c1d-9122-81accf513fb4" (UID: "847c2c6b-16a5-4c1d-9122-81accf513fb4"). InnerVolumeSpecName "kube-api-access-rppll". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:19:45 crc kubenswrapper[4844]: I0126 13:19:45.039401 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/847c2c6b-16a5-4c1d-9122-81accf513fb4-config-data" (OuterVolumeSpecName: "config-data") pod "847c2c6b-16a5-4c1d-9122-81accf513fb4" (UID: "847c2c6b-16a5-4c1d-9122-81accf513fb4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:19:45 crc kubenswrapper[4844]: I0126 13:19:45.041643 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/847c2c6b-16a5-4c1d-9122-81accf513fb4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "847c2c6b-16a5-4c1d-9122-81accf513fb4" (UID: "847c2c6b-16a5-4c1d-9122-81accf513fb4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:19:45 crc kubenswrapper[4844]: I0126 13:19:45.103774 4844 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/847c2c6b-16a5-4c1d-9122-81accf513fb4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 13:19:45 crc kubenswrapper[4844]: I0126 13:19:45.103809 4844 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/847c2c6b-16a5-4c1d-9122-81accf513fb4-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 13:19:45 crc kubenswrapper[4844]: I0126 13:19:45.103819 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rppll\" (UniqueName: \"kubernetes.io/projected/847c2c6b-16a5-4c1d-9122-81accf513fb4-kube-api-access-rppll\") on node \"crc\" DevicePath \"\"" Jan 26 13:19:45 crc kubenswrapper[4844]: I0126 13:19:45.103831 4844 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/847c2c6b-16a5-4c1d-9122-81accf513fb4-logs\") on node \"crc\" DevicePath \"\"" Jan 26 13:19:45 crc kubenswrapper[4844]: I0126 13:19:45.103839 4844 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/847c2c6b-16a5-4c1d-9122-81accf513fb4-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 13:19:45 crc kubenswrapper[4844]: I0126 13:19:45.492395 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-bt68v" event={"ID":"847c2c6b-16a5-4c1d-9122-81accf513fb4","Type":"ContainerDied","Data":"5f7f3e5c6f941c49552d4bc5d5794b79e5b879305f9a61b25d70cf5ebffcd088"} Jan 26 13:19:45 crc kubenswrapper[4844]: I0126 13:19:45.492782 4844 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5f7f3e5c6f941c49552d4bc5d5794b79e5b879305f9a61b25d70cf5ebffcd088" Jan 26 13:19:45 crc kubenswrapper[4844]: I0126 13:19:45.492754 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-bt68v" Jan 26 13:19:45 crc kubenswrapper[4844]: I0126 13:19:45.501197 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"ed782618-8b69-4456-9aec-5184e765960f","Type":"ContainerStarted","Data":"920a38a2c1e0977cbdcbd5e4c3757be17293c805c1c55b4e7ee718455c1317a2"} Jan 26 13:19:45 crc kubenswrapper[4844]: I0126 13:19:45.506863 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5db4cb7f67-85gvs" event={"ID":"d2096862-de7b-4d51-aa62-bc55d339a9dc","Type":"ContainerStarted","Data":"d98ed62e0bb6ea6b66cbc2652291be8a5b5ea928c796f89c9926e57fdbd072b7"} Jan 26 13:19:45 crc kubenswrapper[4844]: I0126 13:19:45.506904 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5db4cb7f67-85gvs" event={"ID":"d2096862-de7b-4d51-aa62-bc55d339a9dc","Type":"ContainerStarted","Data":"4ef61859d28b97363c2eb46b076b5e9a4d0a6f80915dab95add885d2fbde903b"} Jan 26 13:19:45 crc kubenswrapper[4844]: I0126 13:19:45.507548 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-5db4cb7f67-85gvs" Jan 26 13:19:45 crc kubenswrapper[4844]: I0126 13:19:45.563015 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Jan 26 13:19:45 crc kubenswrapper[4844]: I0126 13:19:45.563276 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="3764b649-1758-4f78-83b5-8a13118c9bc9" containerName="watcher-api-log" containerID="cri-o://1bc285b92109c80e805ca30c245d2f348bb4aa73f399fb09fecd5e0fa5064ace" gracePeriod=30 Jan 26 13:19:45 crc kubenswrapper[4844]: I0126 13:19:45.563650 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="3764b649-1758-4f78-83b5-8a13118c9bc9" containerName="watcher-api" containerID="cri-o://d87fb2ffe7bc2f7ab5797333cc9df60d6f399f35b1b649fb31cac650668cd76b" gracePeriod=30 Jan 26 13:19:45 crc kubenswrapper[4844]: I0126 13:19:45.581155 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-5db4cb7f67-85gvs" podStartSLOduration=2.581133141 podStartE2EDuration="2.581133141s" podCreationTimestamp="2026-01-26 13:19:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 13:19:45.550455489 +0000 UTC m=+2162.483823101" watchObservedRunningTime="2026-01-26 13:19:45.581133141 +0000 UTC m=+2162.514500753" Jan 26 13:19:45 crc kubenswrapper[4844]: I0126 13:19:45.612147 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-7ff9fb4f5b-dz4mq"] Jan 26 13:19:45 crc kubenswrapper[4844]: E0126 13:19:45.612802 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="847c2c6b-16a5-4c1d-9122-81accf513fb4" containerName="placement-db-sync" Jan 26 13:19:45 crc kubenswrapper[4844]: I0126 13:19:45.612821 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="847c2c6b-16a5-4c1d-9122-81accf513fb4" containerName="placement-db-sync" Jan 26 13:19:45 crc kubenswrapper[4844]: I0126 13:19:45.613025 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="847c2c6b-16a5-4c1d-9122-81accf513fb4" containerName="placement-db-sync" Jan 26 13:19:45 crc kubenswrapper[4844]: I0126 13:19:45.614212 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-7ff9fb4f5b-dz4mq" Jan 26 13:19:45 crc kubenswrapper[4844]: I0126 13:19:45.618579 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-jwq7d" Jan 26 13:19:45 crc kubenswrapper[4844]: I0126 13:19:45.619073 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Jan 26 13:19:45 crc kubenswrapper[4844]: I0126 13:19:45.619295 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 26 13:19:45 crc kubenswrapper[4844]: I0126 13:19:45.619455 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Jan 26 13:19:45 crc kubenswrapper[4844]: I0126 13:19:45.619562 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 26 13:19:45 crc kubenswrapper[4844]: I0126 13:19:45.630806 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7ff9fb4f5b-dz4mq"] Jan 26 13:19:45 crc kubenswrapper[4844]: I0126 13:19:45.718367 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nm6n4\" (UniqueName: \"kubernetes.io/projected/624dd95f-3ed5-4837-908b-b5e6d47a1edf-kube-api-access-nm6n4\") pod \"placement-7ff9fb4f5b-dz4mq\" (UID: \"624dd95f-3ed5-4837-908b-b5e6d47a1edf\") " pod="openstack/placement-7ff9fb4f5b-dz4mq" Jan 26 13:19:45 crc kubenswrapper[4844]: I0126 13:19:45.718445 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/624dd95f-3ed5-4837-908b-b5e6d47a1edf-combined-ca-bundle\") pod \"placement-7ff9fb4f5b-dz4mq\" (UID: \"624dd95f-3ed5-4837-908b-b5e6d47a1edf\") " pod="openstack/placement-7ff9fb4f5b-dz4mq" Jan 26 13:19:45 crc kubenswrapper[4844]: I0126 13:19:45.718498 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/624dd95f-3ed5-4837-908b-b5e6d47a1edf-public-tls-certs\") pod \"placement-7ff9fb4f5b-dz4mq\" (UID: \"624dd95f-3ed5-4837-908b-b5e6d47a1edf\") " pod="openstack/placement-7ff9fb4f5b-dz4mq" Jan 26 13:19:45 crc kubenswrapper[4844]: I0126 13:19:45.718531 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/624dd95f-3ed5-4837-908b-b5e6d47a1edf-logs\") pod \"placement-7ff9fb4f5b-dz4mq\" (UID: \"624dd95f-3ed5-4837-908b-b5e6d47a1edf\") " pod="openstack/placement-7ff9fb4f5b-dz4mq" Jan 26 13:19:45 crc kubenswrapper[4844]: I0126 13:19:45.718575 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/624dd95f-3ed5-4837-908b-b5e6d47a1edf-internal-tls-certs\") pod \"placement-7ff9fb4f5b-dz4mq\" (UID: \"624dd95f-3ed5-4837-908b-b5e6d47a1edf\") " pod="openstack/placement-7ff9fb4f5b-dz4mq" Jan 26 13:19:45 crc kubenswrapper[4844]: I0126 13:19:45.718608 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/624dd95f-3ed5-4837-908b-b5e6d47a1edf-scripts\") pod \"placement-7ff9fb4f5b-dz4mq\" (UID: \"624dd95f-3ed5-4837-908b-b5e6d47a1edf\") " pod="openstack/placement-7ff9fb4f5b-dz4mq" Jan 26 13:19:45 crc kubenswrapper[4844]: I0126 13:19:45.718624 
4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/624dd95f-3ed5-4837-908b-b5e6d47a1edf-config-data\") pod \"placement-7ff9fb4f5b-dz4mq\" (UID: \"624dd95f-3ed5-4837-908b-b5e6d47a1edf\") " pod="openstack/placement-7ff9fb4f5b-dz4mq" Jan 26 13:19:45 crc kubenswrapper[4844]: I0126 13:19:45.819651 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/624dd95f-3ed5-4837-908b-b5e6d47a1edf-logs\") pod \"placement-7ff9fb4f5b-dz4mq\" (UID: \"624dd95f-3ed5-4837-908b-b5e6d47a1edf\") " pod="openstack/placement-7ff9fb4f5b-dz4mq" Jan 26 13:19:45 crc kubenswrapper[4844]: I0126 13:19:45.819715 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/624dd95f-3ed5-4837-908b-b5e6d47a1edf-internal-tls-certs\") pod \"placement-7ff9fb4f5b-dz4mq\" (UID: \"624dd95f-3ed5-4837-908b-b5e6d47a1edf\") " pod="openstack/placement-7ff9fb4f5b-dz4mq" Jan 26 13:19:45 crc kubenswrapper[4844]: I0126 13:19:45.819741 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/624dd95f-3ed5-4837-908b-b5e6d47a1edf-scripts\") pod \"placement-7ff9fb4f5b-dz4mq\" (UID: \"624dd95f-3ed5-4837-908b-b5e6d47a1edf\") " pod="openstack/placement-7ff9fb4f5b-dz4mq" Jan 26 13:19:45 crc kubenswrapper[4844]: I0126 13:19:45.819760 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/624dd95f-3ed5-4837-908b-b5e6d47a1edf-config-data\") pod \"placement-7ff9fb4f5b-dz4mq\" (UID: \"624dd95f-3ed5-4837-908b-b5e6d47a1edf\") " pod="openstack/placement-7ff9fb4f5b-dz4mq" Jan 26 13:19:45 crc kubenswrapper[4844]: I0126 13:19:45.819791 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nm6n4\" (UniqueName: \"kubernetes.io/projected/624dd95f-3ed5-4837-908b-b5e6d47a1edf-kube-api-access-nm6n4\") pod \"placement-7ff9fb4f5b-dz4mq\" (UID: \"624dd95f-3ed5-4837-908b-b5e6d47a1edf\") " pod="openstack/placement-7ff9fb4f5b-dz4mq" Jan 26 13:19:45 crc kubenswrapper[4844]: I0126 13:19:45.819838 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/624dd95f-3ed5-4837-908b-b5e6d47a1edf-combined-ca-bundle\") pod \"placement-7ff9fb4f5b-dz4mq\" (UID: \"624dd95f-3ed5-4837-908b-b5e6d47a1edf\") " pod="openstack/placement-7ff9fb4f5b-dz4mq" Jan 26 13:19:45 crc kubenswrapper[4844]: I0126 13:19:45.819880 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/624dd95f-3ed5-4837-908b-b5e6d47a1edf-public-tls-certs\") pod \"placement-7ff9fb4f5b-dz4mq\" (UID: \"624dd95f-3ed5-4837-908b-b5e6d47a1edf\") " pod="openstack/placement-7ff9fb4f5b-dz4mq" Jan 26 13:19:45 crc kubenswrapper[4844]: I0126 13:19:45.820180 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/624dd95f-3ed5-4837-908b-b5e6d47a1edf-logs\") pod \"placement-7ff9fb4f5b-dz4mq\" (UID: \"624dd95f-3ed5-4837-908b-b5e6d47a1edf\") " pod="openstack/placement-7ff9fb4f5b-dz4mq" Jan 26 13:19:45 crc kubenswrapper[4844]: I0126 13:19:45.824152 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/624dd95f-3ed5-4837-908b-b5e6d47a1edf-internal-tls-certs\") pod \"placement-7ff9fb4f5b-dz4mq\" (UID: \"624dd95f-3ed5-4837-908b-b5e6d47a1edf\") " pod="openstack/placement-7ff9fb4f5b-dz4mq" Jan 26 13:19:45 crc kubenswrapper[4844]: I0126 13:19:45.824442 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/624dd95f-3ed5-4837-908b-b5e6d47a1edf-combined-ca-bundle\") pod \"placement-7ff9fb4f5b-dz4mq\" (UID: \"624dd95f-3ed5-4837-908b-b5e6d47a1edf\") " pod="openstack/placement-7ff9fb4f5b-dz4mq" Jan 26 13:19:45 crc kubenswrapper[4844]: I0126 13:19:45.829385 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/624dd95f-3ed5-4837-908b-b5e6d47a1edf-config-data\") pod \"placement-7ff9fb4f5b-dz4mq\" (UID: \"624dd95f-3ed5-4837-908b-b5e6d47a1edf\") " pod="openstack/placement-7ff9fb4f5b-dz4mq" Jan 26 13:19:45 crc kubenswrapper[4844]: I0126 13:19:45.836423 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/624dd95f-3ed5-4837-908b-b5e6d47a1edf-public-tls-certs\") pod \"placement-7ff9fb4f5b-dz4mq\" (UID: \"624dd95f-3ed5-4837-908b-b5e6d47a1edf\") " pod="openstack/placement-7ff9fb4f5b-dz4mq" Jan 26 13:19:45 crc kubenswrapper[4844]: I0126 13:19:45.842261 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nm6n4\" (UniqueName: \"kubernetes.io/projected/624dd95f-3ed5-4837-908b-b5e6d47a1edf-kube-api-access-nm6n4\") pod \"placement-7ff9fb4f5b-dz4mq\" (UID: \"624dd95f-3ed5-4837-908b-b5e6d47a1edf\") " pod="openstack/placement-7ff9fb4f5b-dz4mq" Jan 26 13:19:45 crc kubenswrapper[4844]: I0126 13:19:45.842396 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/624dd95f-3ed5-4837-908b-b5e6d47a1edf-scripts\") pod \"placement-7ff9fb4f5b-dz4mq\" (UID: \"624dd95f-3ed5-4837-908b-b5e6d47a1edf\") " pod="openstack/placement-7ff9fb4f5b-dz4mq" Jan 26 13:19:45 crc kubenswrapper[4844]: I0126 13:19:45.943796 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-7ff9fb4f5b-dz4mq" Jan 26 13:19:45 crc kubenswrapper[4844]: I0126 13:19:45.966756 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-77c8bf8786-w82f7" Jan 26 13:19:45 crc kubenswrapper[4844]: I0126 13:19:45.966803 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-77c8bf8786-w82f7" Jan 26 13:19:46 crc kubenswrapper[4844]: E0126 13:19:46.056237 4844 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43fe5130_0714_4f40_9d6a_9384eb72fa0a.slice/crio-1b85fee309ae0e4dbc8b160f74806d6d702e7676b68d662560a47c021cd5f8a1.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43fe5130_0714_4f40_9d6a_9384eb72fa0a.slice/crio-conmon-1b85fee309ae0e4dbc8b160f74806d6d702e7676b68d662560a47c021cd5f8a1.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podef403703_395e_4db1_a9f5_a8e011e39ff2.slice\": RecentStats: unable to find data in memory cache]" Jan 26 13:19:46 crc kubenswrapper[4844]: I0126 13:19:46.445901 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7ff9fb4f5b-dz4mq"] Jan 26 13:19:46 crc kubenswrapper[4844]: W0126 13:19:46.449834 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod624dd95f_3ed5_4837_908b_b5e6d47a1edf.slice/crio-cfa2ff6085b6ff00739c2326aadca113ff91a009fdadd2a574e6eeb602783a2d WatchSource:0}: Error finding container cfa2ff6085b6ff00739c2326aadca113ff91a009fdadd2a574e6eeb602783a2d: Status 404 returned error can't find the container with id cfa2ff6085b6ff00739c2326aadca113ff91a009fdadd2a574e6eeb602783a2d Jan 26 13:19:46 crc kubenswrapper[4844]: I0126 13:19:46.525171 4844 generic.go:334] "Generic (PLEG): container finished" podID="43fe5130-0714-4f40-9d6a-9384eb72fa0a" containerID="1b85fee309ae0e4dbc8b160f74806d6d702e7676b68d662560a47c021cd5f8a1" exitCode=0 Jan 26 13:19:46 crc kubenswrapper[4844]: I0126 13:19:46.525215 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-2xnzf" event={"ID":"43fe5130-0714-4f40-9d6a-9384eb72fa0a","Type":"ContainerDied","Data":"1b85fee309ae0e4dbc8b160f74806d6d702e7676b68d662560a47c021cd5f8a1"} Jan 26 13:19:46 crc kubenswrapper[4844]: I0126 13:19:46.527873 4844 generic.go:334] "Generic (PLEG): container finished" podID="3764b649-1758-4f78-83b5-8a13118c9bc9" containerID="1bc285b92109c80e805ca30c245d2f348bb4aa73f399fb09fecd5e0fa5064ace" exitCode=143 Jan 26 13:19:46 crc kubenswrapper[4844]: I0126 13:19:46.527924 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"3764b649-1758-4f78-83b5-8a13118c9bc9","Type":"ContainerDied","Data":"1bc285b92109c80e805ca30c245d2f348bb4aa73f399fb09fecd5e0fa5064ace"} Jan 26 13:19:46 crc kubenswrapper[4844]: I0126 13:19:46.529364 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7ff9fb4f5b-dz4mq" event={"ID":"624dd95f-3ed5-4837-908b-b5e6d47a1edf","Type":"ContainerStarted","Data":"cfa2ff6085b6ff00739c2326aadca113ff91a009fdadd2a574e6eeb602783a2d"} Jan 26 13:19:47 crc kubenswrapper[4844]: I0126 13:19:47.155966 4844 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" 
podUID="3764b649-1758-4f78-83b5-8a13118c9bc9" containerName="watcher-api-log" probeResult="failure" output="Get \"http://10.217.0.163:9322/\": dial tcp 10.217.0.163:9322: connect: connection refused" Jan 26 13:19:47 crc kubenswrapper[4844]: I0126 13:19:47.156947 4844 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="3764b649-1758-4f78-83b5-8a13118c9bc9" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.163:9322/\": read tcp 10.217.0.2:36728->10.217.0.163:9322: read: connection reset by peer" Jan 26 13:19:47 crc kubenswrapper[4844]: I0126 13:19:47.541023 4844 generic.go:334] "Generic (PLEG): container finished" podID="3764b649-1758-4f78-83b5-8a13118c9bc9" containerID="d87fb2ffe7bc2f7ab5797333cc9df60d6f399f35b1b649fb31cac650668cd76b" exitCode=0 Jan 26 13:19:47 crc kubenswrapper[4844]: I0126 13:19:47.541127 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"3764b649-1758-4f78-83b5-8a13118c9bc9","Type":"ContainerDied","Data":"d87fb2ffe7bc2f7ab5797333cc9df60d6f399f35b1b649fb31cac650668cd76b"} Jan 26 13:19:47 crc kubenswrapper[4844]: I0126 13:19:47.548635 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7ff9fb4f5b-dz4mq" event={"ID":"624dd95f-3ed5-4837-908b-b5e6d47a1edf","Type":"ContainerStarted","Data":"37bf4fccae943b19453bc50fc7ee1973d15923a5ab6a2b5077c30e59dca90673"} Jan 26 13:19:47 crc kubenswrapper[4844]: I0126 13:19:47.578569 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Jan 26 13:19:47 crc kubenswrapper[4844]: I0126 13:19:47.666992 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3764b649-1758-4f78-83b5-8a13118c9bc9-combined-ca-bundle\") pod \"3764b649-1758-4f78-83b5-8a13118c9bc9\" (UID: \"3764b649-1758-4f78-83b5-8a13118c9bc9\") " Jan 26 13:19:47 crc kubenswrapper[4844]: I0126 13:19:47.667143 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3764b649-1758-4f78-83b5-8a13118c9bc9-config-data\") pod \"3764b649-1758-4f78-83b5-8a13118c9bc9\" (UID: \"3764b649-1758-4f78-83b5-8a13118c9bc9\") " Jan 26 13:19:47 crc kubenswrapper[4844]: I0126 13:19:47.667169 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/3764b649-1758-4f78-83b5-8a13118c9bc9-custom-prometheus-ca\") pod \"3764b649-1758-4f78-83b5-8a13118c9bc9\" (UID: \"3764b649-1758-4f78-83b5-8a13118c9bc9\") " Jan 26 13:19:47 crc kubenswrapper[4844]: I0126 13:19:47.667238 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ttsgv\" (UniqueName: \"kubernetes.io/projected/3764b649-1758-4f78-83b5-8a13118c9bc9-kube-api-access-ttsgv\") pod \"3764b649-1758-4f78-83b5-8a13118c9bc9\" (UID: \"3764b649-1758-4f78-83b5-8a13118c9bc9\") " Jan 26 13:19:47 crc kubenswrapper[4844]: I0126 13:19:47.667291 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3764b649-1758-4f78-83b5-8a13118c9bc9-logs\") pod \"3764b649-1758-4f78-83b5-8a13118c9bc9\" (UID: \"3764b649-1758-4f78-83b5-8a13118c9bc9\") " Jan 26 13:19:47 crc kubenswrapper[4844]: I0126 13:19:47.668144 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/3764b649-1758-4f78-83b5-8a13118c9bc9-logs" (OuterVolumeSpecName: "logs") pod "3764b649-1758-4f78-83b5-8a13118c9bc9" (UID: "3764b649-1758-4f78-83b5-8a13118c9bc9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:19:47 crc kubenswrapper[4844]: I0126 13:19:47.675344 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3764b649-1758-4f78-83b5-8a13118c9bc9-kube-api-access-ttsgv" (OuterVolumeSpecName: "kube-api-access-ttsgv") pod "3764b649-1758-4f78-83b5-8a13118c9bc9" (UID: "3764b649-1758-4f78-83b5-8a13118c9bc9"). InnerVolumeSpecName "kube-api-access-ttsgv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:19:47 crc kubenswrapper[4844]: I0126 13:19:47.701701 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3764b649-1758-4f78-83b5-8a13118c9bc9-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "3764b649-1758-4f78-83b5-8a13118c9bc9" (UID: "3764b649-1758-4f78-83b5-8a13118c9bc9"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:19:47 crc kubenswrapper[4844]: I0126 13:19:47.706249 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3764b649-1758-4f78-83b5-8a13118c9bc9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3764b649-1758-4f78-83b5-8a13118c9bc9" (UID: "3764b649-1758-4f78-83b5-8a13118c9bc9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:19:47 crc kubenswrapper[4844]: I0126 13:19:47.740218 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3764b649-1758-4f78-83b5-8a13118c9bc9-config-data" (OuterVolumeSpecName: "config-data") pod "3764b649-1758-4f78-83b5-8a13118c9bc9" (UID: "3764b649-1758-4f78-83b5-8a13118c9bc9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:19:47 crc kubenswrapper[4844]: I0126 13:19:47.769911 4844 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3764b649-1758-4f78-83b5-8a13118c9bc9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 13:19:47 crc kubenswrapper[4844]: I0126 13:19:47.769954 4844 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3764b649-1758-4f78-83b5-8a13118c9bc9-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 13:19:47 crc kubenswrapper[4844]: I0126 13:19:47.769966 4844 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/3764b649-1758-4f78-83b5-8a13118c9bc9-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 26 13:19:47 crc kubenswrapper[4844]: I0126 13:19:47.769977 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ttsgv\" (UniqueName: \"kubernetes.io/projected/3764b649-1758-4f78-83b5-8a13118c9bc9-kube-api-access-ttsgv\") on node \"crc\" DevicePath \"\"" Jan 26 13:19:47 crc kubenswrapper[4844]: I0126 13:19:47.769991 4844 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3764b649-1758-4f78-83b5-8a13118c9bc9-logs\") on node \"crc\" DevicePath \"\"" Jan 26 13:19:48 crc kubenswrapper[4844]: I0126 13:19:48.035355 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-2xnzf" Jan 26 13:19:48 crc kubenswrapper[4844]: I0126 13:19:48.177302 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/43fe5130-0714-4f40-9d6a-9384eb72fa0a-db-sync-config-data\") pod \"43fe5130-0714-4f40-9d6a-9384eb72fa0a\" (UID: \"43fe5130-0714-4f40-9d6a-9384eb72fa0a\") " Jan 26 13:19:48 crc kubenswrapper[4844]: I0126 13:19:48.177402 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43fe5130-0714-4f40-9d6a-9384eb72fa0a-combined-ca-bundle\") pod \"43fe5130-0714-4f40-9d6a-9384eb72fa0a\" (UID: \"43fe5130-0714-4f40-9d6a-9384eb72fa0a\") " Jan 26 13:19:48 crc kubenswrapper[4844]: I0126 13:19:48.177470 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjs6c\" (UniqueName: \"kubernetes.io/projected/43fe5130-0714-4f40-9d6a-9384eb72fa0a-kube-api-access-pjs6c\") pod \"43fe5130-0714-4f40-9d6a-9384eb72fa0a\" (UID: \"43fe5130-0714-4f40-9d6a-9384eb72fa0a\") " Jan 26 13:19:48 crc kubenswrapper[4844]: I0126 13:19:48.180967 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43fe5130-0714-4f40-9d6a-9384eb72fa0a-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "43fe5130-0714-4f40-9d6a-9384eb72fa0a" (UID: "43fe5130-0714-4f40-9d6a-9384eb72fa0a"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:19:48 crc kubenswrapper[4844]: I0126 13:19:48.181285 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43fe5130-0714-4f40-9d6a-9384eb72fa0a-kube-api-access-pjs6c" (OuterVolumeSpecName: "kube-api-access-pjs6c") pod "43fe5130-0714-4f40-9d6a-9384eb72fa0a" (UID: "43fe5130-0714-4f40-9d6a-9384eb72fa0a"). InnerVolumeSpecName "kube-api-access-pjs6c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:19:48 crc kubenswrapper[4844]: I0126 13:19:48.201298 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43fe5130-0714-4f40-9d6a-9384eb72fa0a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "43fe5130-0714-4f40-9d6a-9384eb72fa0a" (UID: "43fe5130-0714-4f40-9d6a-9384eb72fa0a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:19:48 crc kubenswrapper[4844]: I0126 13:19:48.279208 4844 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/43fe5130-0714-4f40-9d6a-9384eb72fa0a-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 13:19:48 crc kubenswrapper[4844]: I0126 13:19:48.279240 4844 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43fe5130-0714-4f40-9d6a-9384eb72fa0a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 13:19:48 crc kubenswrapper[4844]: I0126 13:19:48.279249 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjs6c\" (UniqueName: \"kubernetes.io/projected/43fe5130-0714-4f40-9d6a-9384eb72fa0a-kube-api-access-pjs6c\") on node \"crc\" DevicePath \"\"" Jan 26 13:19:48 crc kubenswrapper[4844]: I0126 13:19:48.565001 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7ff9fb4f5b-dz4mq" event={"ID":"624dd95f-3ed5-4837-908b-b5e6d47a1edf","Type":"ContainerStarted","Data":"ecde6a96f70757351ff846113e49517904884c005cda20703c77bc858dcab126"} Jan 26 13:19:48 crc kubenswrapper[4844]: I0126 13:19:48.566083 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7ff9fb4f5b-dz4mq" Jan 26 13:19:48 crc kubenswrapper[4844]: I0126 13:19:48.566113 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7ff9fb4f5b-dz4mq" Jan 26 13:19:48 crc kubenswrapper[4844]: I0126 13:19:48.569423 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-2xnzf" event={"ID":"43fe5130-0714-4f40-9d6a-9384eb72fa0a","Type":"ContainerDied","Data":"17d0e08fa3d49eb72b7bb19d2d9180f46f5752d37fdfe0559f596dc57f039192"} Jan 26 13:19:48 crc kubenswrapper[4844]: I0126 13:19:48.569456 4844 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="17d0e08fa3d49eb72b7bb19d2d9180f46f5752d37fdfe0559f596dc57f039192" Jan 26 13:19:48 crc kubenswrapper[4844]: I0126 13:19:48.569498 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-2xnzf" Jan 26 13:19:48 crc kubenswrapper[4844]: I0126 13:19:48.575212 4844 generic.go:334] "Generic (PLEG): container finished" podID="ed782618-8b69-4456-9aec-5184e765960f" containerID="920a38a2c1e0977cbdcbd5e4c3757be17293c805c1c55b4e7ee718455c1317a2" exitCode=1 Jan 26 13:19:48 crc kubenswrapper[4844]: I0126 13:19:48.575305 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"ed782618-8b69-4456-9aec-5184e765960f","Type":"ContainerDied","Data":"920a38a2c1e0977cbdcbd5e4c3757be17293c805c1c55b4e7ee718455c1317a2"} Jan 26 13:19:48 crc kubenswrapper[4844]: I0126 13:19:48.575344 4844 scope.go:117] "RemoveContainer" containerID="2241b7110e18540a04d6ef710e0fbd5c297204daf480af4f3e67d95a9f508da2" Jan 26 13:19:48 crc kubenswrapper[4844]: I0126 13:19:48.576022 4844 scope.go:117] "RemoveContainer" containerID="920a38a2c1e0977cbdcbd5e4c3757be17293c805c1c55b4e7ee718455c1317a2" Jan 26 13:19:48 crc kubenswrapper[4844]: E0126 13:19:48.576283 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 10s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(ed782618-8b69-4456-9aec-5184e765960f)\"" pod="openstack/watcher-decision-engine-0" podUID="ed782618-8b69-4456-9aec-5184e765960f" Jan 26 13:19:48 crc kubenswrapper[4844]: I0126 13:19:48.586655 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"3764b649-1758-4f78-83b5-8a13118c9bc9","Type":"ContainerDied","Data":"31c38c0623372ddcdbd48510b71d6a9ab644e2c9755a0a9a376acebbd08ed103"} Jan 26 13:19:48 crc kubenswrapper[4844]: I0126 13:19:48.586744 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Jan 26 13:19:48 crc kubenswrapper[4844]: I0126 13:19:48.588436 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-7ff9fb4f5b-dz4mq" podStartSLOduration=3.588416574 podStartE2EDuration="3.588416574s" podCreationTimestamp="2026-01-26 13:19:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 13:19:48.582312227 +0000 UTC m=+2165.515679829" watchObservedRunningTime="2026-01-26 13:19:48.588416574 +0000 UTC m=+2165.521784186" Jan 26 13:19:48 crc kubenswrapper[4844]: I0126 13:19:48.626296 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Jan 26 13:19:48 crc kubenswrapper[4844]: I0126 13:19:48.633278 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-api-0"] Jan 26 13:19:48 crc kubenswrapper[4844]: I0126 13:19:48.645141 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-f984df9c6-m8lct" Jan 26 13:19:48 crc kubenswrapper[4844]: I0126 13:19:48.653296 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-api-0"] Jan 26 13:19:48 crc kubenswrapper[4844]: E0126 13:19:48.653684 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3764b649-1758-4f78-83b5-8a13118c9bc9" containerName="watcher-api-log" Jan 26 13:19:48 crc kubenswrapper[4844]: I0126 13:19:48.653698 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="3764b649-1758-4f78-83b5-8a13118c9bc9" containerName="watcher-api-log" Jan 26 13:19:48 crc kubenswrapper[4844]: E0126 13:19:48.653722 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3764b649-1758-4f78-83b5-8a13118c9bc9" containerName="watcher-api" Jan 26 13:19:48 crc kubenswrapper[4844]: I0126 13:19:48.653729 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="3764b649-1758-4f78-83b5-8a13118c9bc9" containerName="watcher-api" Jan 26 13:19:48 crc kubenswrapper[4844]: E0126 13:19:48.653742 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43fe5130-0714-4f40-9d6a-9384eb72fa0a" containerName="barbican-db-sync" Jan 26 13:19:48 crc kubenswrapper[4844]: I0126 13:19:48.653748 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="43fe5130-0714-4f40-9d6a-9384eb72fa0a" containerName="barbican-db-sync" Jan 26 13:19:48 crc kubenswrapper[4844]: I0126 13:19:48.653909 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="3764b649-1758-4f78-83b5-8a13118c9bc9" containerName="watcher-api" Jan 26 13:19:48 crc kubenswrapper[4844]: I0126 13:19:48.653927 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="3764b649-1758-4f78-83b5-8a13118c9bc9" containerName="watcher-api-log" Jan 26 13:19:48 crc kubenswrapper[4844]: I0126 13:19:48.653947 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="43fe5130-0714-4f40-9d6a-9384eb72fa0a" containerName="barbican-db-sync" Jan 26 13:19:48 crc kubenswrapper[4844]: I0126 13:19:48.655074 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Jan 26 13:19:48 crc kubenswrapper[4844]: I0126 13:19:48.660167 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-watcher-internal-svc" Jan 26 13:19:48 crc kubenswrapper[4844]: I0126 13:19:48.660427 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-watcher-public-svc" Jan 26 13:19:48 crc kubenswrapper[4844]: I0126 13:19:48.660670 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data" Jan 26 13:19:48 crc kubenswrapper[4844]: I0126 13:19:48.678785 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Jan 26 13:19:48 crc kubenswrapper[4844]: I0126 13:19:48.776475 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-5757498f95-q5d7h"] Jan 26 13:19:48 crc kubenswrapper[4844]: I0126 13:19:48.777910 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-5757498f95-q5d7h" Jan 26 13:19:48 crc kubenswrapper[4844]: I0126 13:19:48.780712 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 26 13:19:48 crc kubenswrapper[4844]: I0126 13:19:48.780986 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-dzsvq" Jan 26 13:19:48 crc kubenswrapper[4844]: I0126 13:19:48.781109 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 26 13:19:48 crc kubenswrapper[4844]: I0126 13:19:48.788739 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/33ecc4c6-320a-41d8-a7c2-608bdda02b0a-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"33ecc4c6-320a-41d8-a7c2-608bdda02b0a\") " pod="openstack/watcher-api-0" Jan 26 13:19:48 crc kubenswrapper[4844]: I0126 13:19:48.788777 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/33ecc4c6-320a-41d8-a7c2-608bdda02b0a-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"33ecc4c6-320a-41d8-a7c2-608bdda02b0a\") " pod="openstack/watcher-api-0" Jan 26 13:19:48 crc kubenswrapper[4844]: I0126 13:19:48.788809 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fxmp\" (UniqueName: \"kubernetes.io/projected/33ecc4c6-320a-41d8-a7c2-608bdda02b0a-kube-api-access-4fxmp\") pod \"watcher-api-0\" (UID: \"33ecc4c6-320a-41d8-a7c2-608bdda02b0a\") " pod="openstack/watcher-api-0" Jan 26 13:19:48 crc kubenswrapper[4844]: I0126 13:19:48.788865 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/33ecc4c6-320a-41d8-a7c2-608bdda02b0a-public-tls-certs\") pod \"watcher-api-0\" (UID: \"33ecc4c6-320a-41d8-a7c2-608bdda02b0a\") " pod="openstack/watcher-api-0" Jan 26 13:19:48 crc kubenswrapper[4844]: I0126 13:19:48.788912 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/33ecc4c6-320a-41d8-a7c2-608bdda02b0a-logs\") pod \"watcher-api-0\" (UID: \"33ecc4c6-320a-41d8-a7c2-608bdda02b0a\") " pod="openstack/watcher-api-0" Jan 26 13:19:48 crc kubenswrapper[4844]: I0126 13:19:48.788975 4844 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33ecc4c6-320a-41d8-a7c2-608bdda02b0a-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"33ecc4c6-320a-41d8-a7c2-608bdda02b0a\") " pod="openstack/watcher-api-0" Jan 26 13:19:48 crc kubenswrapper[4844]: I0126 13:19:48.789021 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33ecc4c6-320a-41d8-a7c2-608bdda02b0a-config-data\") pod \"watcher-api-0\" (UID: \"33ecc4c6-320a-41d8-a7c2-608bdda02b0a\") " pod="openstack/watcher-api-0" Jan 26 13:19:48 crc kubenswrapper[4844]: I0126 13:19:48.795703 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5757498f95-q5d7h"] Jan 26 13:19:48 crc kubenswrapper[4844]: I0126 13:19:48.802068 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-688b4ff97d-t5mvg"] Jan 26 13:19:48 crc kubenswrapper[4844]: I0126 13:19:48.806087 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-688b4ff97d-t5mvg" Jan 26 13:19:48 crc kubenswrapper[4844]: I0126 13:19:48.812123 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 26 13:19:48 crc kubenswrapper[4844]: I0126 13:19:48.828691 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-688b4ff97d-t5mvg"] Jan 26 13:19:48 crc kubenswrapper[4844]: I0126 13:19:48.864321 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7c5dd4c5cf-fhfbf"] Jan 26 13:19:48 crc kubenswrapper[4844]: I0126 13:19:48.866305 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7c5dd4c5cf-fhfbf" Jan 26 13:19:48 crc kubenswrapper[4844]: I0126 13:19:48.872332 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c5dd4c5cf-fhfbf"] Jan 26 13:19:48 crc kubenswrapper[4844]: I0126 13:19:48.893952 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/33ecc4c6-320a-41d8-a7c2-608bdda02b0a-logs\") pod \"watcher-api-0\" (UID: \"33ecc4c6-320a-41d8-a7c2-608bdda02b0a\") " pod="openstack/watcher-api-0" Jan 26 13:19:48 crc kubenswrapper[4844]: I0126 13:19:48.894066 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33ecc4c6-320a-41d8-a7c2-608bdda02b0a-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"33ecc4c6-320a-41d8-a7c2-608bdda02b0a\") " pod="openstack/watcher-api-0" Jan 26 13:19:48 crc kubenswrapper[4844]: I0126 13:19:48.894100 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33ecc4c6-320a-41d8-a7c2-608bdda02b0a-config-data\") pod \"watcher-api-0\" (UID: \"33ecc4c6-320a-41d8-a7c2-608bdda02b0a\") " pod="openstack/watcher-api-0" Jan 26 13:19:48 crc kubenswrapper[4844]: I0126 13:19:48.894157 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/33ecc4c6-320a-41d8-a7c2-608bdda02b0a-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"33ecc4c6-320a-41d8-a7c2-608bdda02b0a\") " pod="openstack/watcher-api-0" Jan 26 13:19:48 crc kubenswrapper[4844]: I0126 13:19:48.894173 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/33ecc4c6-320a-41d8-a7c2-608bdda02b0a-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"33ecc4c6-320a-41d8-a7c2-608bdda02b0a\") " pod="openstack/watcher-api-0" Jan 26 13:19:48 crc kubenswrapper[4844]: I0126 13:19:48.894211 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f64e9d9a-09d6-4843-a829-d4fbdcaadb65-logs\") pod \"barbican-worker-5757498f95-q5d7h\" (UID: \"f64e9d9a-09d6-4843-a829-d4fbdcaadb65\") " pod="openstack/barbican-worker-5757498f95-q5d7h" Jan 26 13:19:48 crc kubenswrapper[4844]: I0126 13:19:48.894356 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f64e9d9a-09d6-4843-a829-d4fbdcaadb65-config-data\") pod \"barbican-worker-5757498f95-q5d7h\" (UID: \"f64e9d9a-09d6-4843-a829-d4fbdcaadb65\") " pod="openstack/barbican-worker-5757498f95-q5d7h" Jan 26 13:19:48 crc kubenswrapper[4844]: I0126 13:19:48.894376 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f64e9d9a-09d6-4843-a829-d4fbdcaadb65-combined-ca-bundle\") pod \"barbican-worker-5757498f95-q5d7h\" (UID: \"f64e9d9a-09d6-4843-a829-d4fbdcaadb65\") " pod="openstack/barbican-worker-5757498f95-q5d7h" Jan 26 13:19:48 crc kubenswrapper[4844]: I0126 13:19:48.894400 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fxmp\" (UniqueName: \"kubernetes.io/projected/33ecc4c6-320a-41d8-a7c2-608bdda02b0a-kube-api-access-4fxmp\") pod \"watcher-api-0\" (UID: 
\"33ecc4c6-320a-41d8-a7c2-608bdda02b0a\") " pod="openstack/watcher-api-0" Jan 26 13:19:48 crc kubenswrapper[4844]: I0126 13:19:48.894440 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/33ecc4c6-320a-41d8-a7c2-608bdda02b0a-logs\") pod \"watcher-api-0\" (UID: \"33ecc4c6-320a-41d8-a7c2-608bdda02b0a\") " pod="openstack/watcher-api-0" Jan 26 13:19:48 crc kubenswrapper[4844]: I0126 13:19:48.894469 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/33ecc4c6-320a-41d8-a7c2-608bdda02b0a-public-tls-certs\") pod \"watcher-api-0\" (UID: \"33ecc4c6-320a-41d8-a7c2-608bdda02b0a\") " pod="openstack/watcher-api-0" Jan 26 13:19:48 crc kubenswrapper[4844]: I0126 13:19:48.894500 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rf9t\" (UniqueName: \"kubernetes.io/projected/f64e9d9a-09d6-4843-a829-d4fbdcaadb65-kube-api-access-8rf9t\") pod \"barbican-worker-5757498f95-q5d7h\" (UID: \"f64e9d9a-09d6-4843-a829-d4fbdcaadb65\") " pod="openstack/barbican-worker-5757498f95-q5d7h" Jan 26 13:19:48 crc kubenswrapper[4844]: I0126 13:19:48.894524 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f64e9d9a-09d6-4843-a829-d4fbdcaadb65-config-data-custom\") pod \"barbican-worker-5757498f95-q5d7h\" (UID: \"f64e9d9a-09d6-4843-a829-d4fbdcaadb65\") " pod="openstack/barbican-worker-5757498f95-q5d7h" Jan 26 13:19:48 crc kubenswrapper[4844]: I0126 13:19:48.903361 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33ecc4c6-320a-41d8-a7c2-608bdda02b0a-config-data\") pod \"watcher-api-0\" (UID: \"33ecc4c6-320a-41d8-a7c2-608bdda02b0a\") " pod="openstack/watcher-api-0" Jan 26 13:19:48 crc kubenswrapper[4844]: I0126 13:19:48.917521 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/33ecc4c6-320a-41d8-a7c2-608bdda02b0a-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"33ecc4c6-320a-41d8-a7c2-608bdda02b0a\") " pod="openstack/watcher-api-0" Jan 26 13:19:48 crc kubenswrapper[4844]: I0126 13:19:48.926304 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33ecc4c6-320a-41d8-a7c2-608bdda02b0a-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"33ecc4c6-320a-41d8-a7c2-608bdda02b0a\") " pod="openstack/watcher-api-0" Jan 26 13:19:48 crc kubenswrapper[4844]: I0126 13:19:48.928567 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/33ecc4c6-320a-41d8-a7c2-608bdda02b0a-public-tls-certs\") pod \"watcher-api-0\" (UID: \"33ecc4c6-320a-41d8-a7c2-608bdda02b0a\") " pod="openstack/watcher-api-0" Jan 26 13:19:48 crc kubenswrapper[4844]: I0126 13:19:48.934313 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/33ecc4c6-320a-41d8-a7c2-608bdda02b0a-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"33ecc4c6-320a-41d8-a7c2-608bdda02b0a\") " pod="openstack/watcher-api-0" Jan 26 13:19:48 crc kubenswrapper[4844]: I0126 13:19:48.935238 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fxmp\" 
(UniqueName: \"kubernetes.io/projected/33ecc4c6-320a-41d8-a7c2-608bdda02b0a-kube-api-access-4fxmp\") pod \"watcher-api-0\" (UID: \"33ecc4c6-320a-41d8-a7c2-608bdda02b0a\") " pod="openstack/watcher-api-0" Jan 26 13:19:48 crc kubenswrapper[4844]: I0126 13:19:48.943279 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-6666d497b6-ksrz2"] Jan 26 13:19:48 crc kubenswrapper[4844]: I0126 13:19:48.945469 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6666d497b6-ksrz2" Jan 26 13:19:48 crc kubenswrapper[4844]: I0126 13:19:48.950100 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 26 13:19:48 crc kubenswrapper[4844]: I0126 13:19:48.986654 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Jan 26 13:19:48 crc kubenswrapper[4844]: I0126 13:19:48.992511 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6666d497b6-ksrz2"] Jan 26 13:19:48 crc kubenswrapper[4844]: I0126 13:19:48.997833 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bd10a394-bca1-4dd2-9441-2c9d4919f35e-dns-svc\") pod \"dnsmasq-dns-7c5dd4c5cf-fhfbf\" (UID: \"bd10a394-bca1-4dd2-9441-2c9d4919f35e\") " pod="openstack/dnsmasq-dns-7c5dd4c5cf-fhfbf" Jan 26 13:19:48 crc kubenswrapper[4844]: I0126 13:19:48.997889 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/56958656-f467-485d-a3b6-9ecacb7edfeb-config-data-custom\") pod \"barbican-keystone-listener-688b4ff97d-t5mvg\" (UID: \"56958656-f467-485d-a3b6-9ecacb7edfeb\") " pod="openstack/barbican-keystone-listener-688b4ff97d-t5mvg" Jan 26 13:19:48 crc kubenswrapper[4844]: I0126 13:19:48.997914 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56958656-f467-485d-a3b6-9ecacb7edfeb-config-data\") pod \"barbican-keystone-listener-688b4ff97d-t5mvg\" (UID: \"56958656-f467-485d-a3b6-9ecacb7edfeb\") " pod="openstack/barbican-keystone-listener-688b4ff97d-t5mvg" Jan 26 13:19:48 crc kubenswrapper[4844]: I0126 13:19:48.997952 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f64e9d9a-09d6-4843-a829-d4fbdcaadb65-logs\") pod \"barbican-worker-5757498f95-q5d7h\" (UID: \"f64e9d9a-09d6-4843-a829-d4fbdcaadb65\") " pod="openstack/barbican-worker-5757498f95-q5d7h" Jan 26 13:19:48 crc kubenswrapper[4844]: I0126 13:19:48.997970 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f64e9d9a-09d6-4843-a829-d4fbdcaadb65-config-data\") pod \"barbican-worker-5757498f95-q5d7h\" (UID: \"f64e9d9a-09d6-4843-a829-d4fbdcaadb65\") " pod="openstack/barbican-worker-5757498f95-q5d7h" Jan 26 13:19:48 crc kubenswrapper[4844]: I0126 13:19:48.997989 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f64e9d9a-09d6-4843-a829-d4fbdcaadb65-combined-ca-bundle\") pod \"barbican-worker-5757498f95-q5d7h\" (UID: \"f64e9d9a-09d6-4843-a829-d4fbdcaadb65\") " pod="openstack/barbican-worker-5757498f95-q5d7h" Jan 26 13:19:48 crc kubenswrapper[4844]: I0126 13:19:48.998019 4844 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56958656-f467-485d-a3b6-9ecacb7edfeb-combined-ca-bundle\") pod \"barbican-keystone-listener-688b4ff97d-t5mvg\" (UID: \"56958656-f467-485d-a3b6-9ecacb7edfeb\") " pod="openstack/barbican-keystone-listener-688b4ff97d-t5mvg" Jan 26 13:19:48 crc kubenswrapper[4844]: I0126 13:19:48.998045 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bd10a394-bca1-4dd2-9441-2c9d4919f35e-ovsdbserver-sb\") pod \"dnsmasq-dns-7c5dd4c5cf-fhfbf\" (UID: \"bd10a394-bca1-4dd2-9441-2c9d4919f35e\") " pod="openstack/dnsmasq-dns-7c5dd4c5cf-fhfbf" Jan 26 13:19:48 crc kubenswrapper[4844]: I0126 13:19:48.998072 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bd10a394-bca1-4dd2-9441-2c9d4919f35e-dns-swift-storage-0\") pod \"dnsmasq-dns-7c5dd4c5cf-fhfbf\" (UID: \"bd10a394-bca1-4dd2-9441-2c9d4919f35e\") " pod="openstack/dnsmasq-dns-7c5dd4c5cf-fhfbf" Jan 26 13:19:48 crc kubenswrapper[4844]: I0126 13:19:48.998101 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2sl5f\" (UniqueName: \"kubernetes.io/projected/56958656-f467-485d-a3b6-9ecacb7edfeb-kube-api-access-2sl5f\") pod \"barbican-keystone-listener-688b4ff97d-t5mvg\" (UID: \"56958656-f467-485d-a3b6-9ecacb7edfeb\") " pod="openstack/barbican-keystone-listener-688b4ff97d-t5mvg" Jan 26 13:19:48 crc kubenswrapper[4844]: I0126 13:19:48.998134 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rf9t\" (UniqueName: \"kubernetes.io/projected/f64e9d9a-09d6-4843-a829-d4fbdcaadb65-kube-api-access-8rf9t\") pod \"barbican-worker-5757498f95-q5d7h\" (UID: \"f64e9d9a-09d6-4843-a829-d4fbdcaadb65\") " pod="openstack/barbican-worker-5757498f95-q5d7h" Jan 26 13:19:48 crc kubenswrapper[4844]: I0126 13:19:48.998159 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f64e9d9a-09d6-4843-a829-d4fbdcaadb65-config-data-custom\") pod \"barbican-worker-5757498f95-q5d7h\" (UID: \"f64e9d9a-09d6-4843-a829-d4fbdcaadb65\") " pod="openstack/barbican-worker-5757498f95-q5d7h" Jan 26 13:19:48 crc kubenswrapper[4844]: I0126 13:19:48.998181 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bd10a394-bca1-4dd2-9441-2c9d4919f35e-ovsdbserver-nb\") pod \"dnsmasq-dns-7c5dd4c5cf-fhfbf\" (UID: \"bd10a394-bca1-4dd2-9441-2c9d4919f35e\") " pod="openstack/dnsmasq-dns-7c5dd4c5cf-fhfbf" Jan 26 13:19:48 crc kubenswrapper[4844]: I0126 13:19:48.998215 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd10a394-bca1-4dd2-9441-2c9d4919f35e-config\") pod \"dnsmasq-dns-7c5dd4c5cf-fhfbf\" (UID: \"bd10a394-bca1-4dd2-9441-2c9d4919f35e\") " pod="openstack/dnsmasq-dns-7c5dd4c5cf-fhfbf" Jan 26 13:19:48 crc kubenswrapper[4844]: I0126 13:19:48.998251 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nq6xq\" (UniqueName: 
\"kubernetes.io/projected/bd10a394-bca1-4dd2-9441-2c9d4919f35e-kube-api-access-nq6xq\") pod \"dnsmasq-dns-7c5dd4c5cf-fhfbf\" (UID: \"bd10a394-bca1-4dd2-9441-2c9d4919f35e\") " pod="openstack/dnsmasq-dns-7c5dd4c5cf-fhfbf" Jan 26 13:19:48 crc kubenswrapper[4844]: I0126 13:19:48.998278 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/56958656-f467-485d-a3b6-9ecacb7edfeb-logs\") pod \"barbican-keystone-listener-688b4ff97d-t5mvg\" (UID: \"56958656-f467-485d-a3b6-9ecacb7edfeb\") " pod="openstack/barbican-keystone-listener-688b4ff97d-t5mvg" Jan 26 13:19:48 crc kubenswrapper[4844]: I0126 13:19:48.998711 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f64e9d9a-09d6-4843-a829-d4fbdcaadb65-logs\") pod \"barbican-worker-5757498f95-q5d7h\" (UID: \"f64e9d9a-09d6-4843-a829-d4fbdcaadb65\") " pod="openstack/barbican-worker-5757498f95-q5d7h" Jan 26 13:19:49 crc kubenswrapper[4844]: I0126 13:19:49.002169 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f64e9d9a-09d6-4843-a829-d4fbdcaadb65-combined-ca-bundle\") pod \"barbican-worker-5757498f95-q5d7h\" (UID: \"f64e9d9a-09d6-4843-a829-d4fbdcaadb65\") " pod="openstack/barbican-worker-5757498f95-q5d7h" Jan 26 13:19:49 crc kubenswrapper[4844]: I0126 13:19:49.002930 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f64e9d9a-09d6-4843-a829-d4fbdcaadb65-config-data\") pod \"barbican-worker-5757498f95-q5d7h\" (UID: \"f64e9d9a-09d6-4843-a829-d4fbdcaadb65\") " pod="openstack/barbican-worker-5757498f95-q5d7h" Jan 26 13:19:49 crc kubenswrapper[4844]: I0126 13:19:49.008364 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f64e9d9a-09d6-4843-a829-d4fbdcaadb65-config-data-custom\") pod \"barbican-worker-5757498f95-q5d7h\" (UID: \"f64e9d9a-09d6-4843-a829-d4fbdcaadb65\") " pod="openstack/barbican-worker-5757498f95-q5d7h" Jan 26 13:19:49 crc kubenswrapper[4844]: I0126 13:19:49.029340 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rf9t\" (UniqueName: \"kubernetes.io/projected/f64e9d9a-09d6-4843-a829-d4fbdcaadb65-kube-api-access-8rf9t\") pod \"barbican-worker-5757498f95-q5d7h\" (UID: \"f64e9d9a-09d6-4843-a829-d4fbdcaadb65\") " pod="openstack/barbican-worker-5757498f95-q5d7h" Jan 26 13:19:49 crc kubenswrapper[4844]: I0126 13:19:49.099378 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd10a394-bca1-4dd2-9441-2c9d4919f35e-config\") pod \"dnsmasq-dns-7c5dd4c5cf-fhfbf\" (UID: \"bd10a394-bca1-4dd2-9441-2c9d4919f35e\") " pod="openstack/dnsmasq-dns-7c5dd4c5cf-fhfbf" Jan 26 13:19:49 crc kubenswrapper[4844]: I0126 13:19:49.099439 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d36d4c6a-dac1-4d35-bd0b-597c8e5ffaf5-logs\") pod \"barbican-api-6666d497b6-ksrz2\" (UID: \"d36d4c6a-dac1-4d35-bd0b-597c8e5ffaf5\") " pod="openstack/barbican-api-6666d497b6-ksrz2" Jan 26 13:19:49 crc kubenswrapper[4844]: I0126 13:19:49.099467 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nq6xq\" (UniqueName: 
\"kubernetes.io/projected/bd10a394-bca1-4dd2-9441-2c9d4919f35e-kube-api-access-nq6xq\") pod \"dnsmasq-dns-7c5dd4c5cf-fhfbf\" (UID: \"bd10a394-bca1-4dd2-9441-2c9d4919f35e\") " pod="openstack/dnsmasq-dns-7c5dd4c5cf-fhfbf" Jan 26 13:19:49 crc kubenswrapper[4844]: I0126 13:19:49.099490 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d36d4c6a-dac1-4d35-bd0b-597c8e5ffaf5-config-data\") pod \"barbican-api-6666d497b6-ksrz2\" (UID: \"d36d4c6a-dac1-4d35-bd0b-597c8e5ffaf5\") " pod="openstack/barbican-api-6666d497b6-ksrz2" Jan 26 13:19:49 crc kubenswrapper[4844]: I0126 13:19:49.099511 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/56958656-f467-485d-a3b6-9ecacb7edfeb-logs\") pod \"barbican-keystone-listener-688b4ff97d-t5mvg\" (UID: \"56958656-f467-485d-a3b6-9ecacb7edfeb\") " pod="openstack/barbican-keystone-listener-688b4ff97d-t5mvg" Jan 26 13:19:49 crc kubenswrapper[4844]: I0126 13:19:49.099551 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qc4v\" (UniqueName: \"kubernetes.io/projected/d36d4c6a-dac1-4d35-bd0b-597c8e5ffaf5-kube-api-access-2qc4v\") pod \"barbican-api-6666d497b6-ksrz2\" (UID: \"d36d4c6a-dac1-4d35-bd0b-597c8e5ffaf5\") " pod="openstack/barbican-api-6666d497b6-ksrz2" Jan 26 13:19:49 crc kubenswrapper[4844]: I0126 13:19:49.099570 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bd10a394-bca1-4dd2-9441-2c9d4919f35e-dns-svc\") pod \"dnsmasq-dns-7c5dd4c5cf-fhfbf\" (UID: \"bd10a394-bca1-4dd2-9441-2c9d4919f35e\") " pod="openstack/dnsmasq-dns-7c5dd4c5cf-fhfbf" Jan 26 13:19:49 crc kubenswrapper[4844]: I0126 13:19:49.099605 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/56958656-f467-485d-a3b6-9ecacb7edfeb-config-data-custom\") pod \"barbican-keystone-listener-688b4ff97d-t5mvg\" (UID: \"56958656-f467-485d-a3b6-9ecacb7edfeb\") " pod="openstack/barbican-keystone-listener-688b4ff97d-t5mvg" Jan 26 13:19:49 crc kubenswrapper[4844]: I0126 13:19:49.099624 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56958656-f467-485d-a3b6-9ecacb7edfeb-config-data\") pod \"barbican-keystone-listener-688b4ff97d-t5mvg\" (UID: \"56958656-f467-485d-a3b6-9ecacb7edfeb\") " pod="openstack/barbican-keystone-listener-688b4ff97d-t5mvg" Jan 26 13:19:49 crc kubenswrapper[4844]: I0126 13:19:49.099640 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d36d4c6a-dac1-4d35-bd0b-597c8e5ffaf5-config-data-custom\") pod \"barbican-api-6666d497b6-ksrz2\" (UID: \"d36d4c6a-dac1-4d35-bd0b-597c8e5ffaf5\") " pod="openstack/barbican-api-6666d497b6-ksrz2" Jan 26 13:19:49 crc kubenswrapper[4844]: I0126 13:19:49.099678 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56958656-f467-485d-a3b6-9ecacb7edfeb-combined-ca-bundle\") pod \"barbican-keystone-listener-688b4ff97d-t5mvg\" (UID: \"56958656-f467-485d-a3b6-9ecacb7edfeb\") " pod="openstack/barbican-keystone-listener-688b4ff97d-t5mvg" Jan 26 13:19:49 crc kubenswrapper[4844]: I0126 13:19:49.099698 
4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bd10a394-bca1-4dd2-9441-2c9d4919f35e-ovsdbserver-sb\") pod \"dnsmasq-dns-7c5dd4c5cf-fhfbf\" (UID: \"bd10a394-bca1-4dd2-9441-2c9d4919f35e\") " pod="openstack/dnsmasq-dns-7c5dd4c5cf-fhfbf" Jan 26 13:19:49 crc kubenswrapper[4844]: I0126 13:19:49.099723 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bd10a394-bca1-4dd2-9441-2c9d4919f35e-dns-swift-storage-0\") pod \"dnsmasq-dns-7c5dd4c5cf-fhfbf\" (UID: \"bd10a394-bca1-4dd2-9441-2c9d4919f35e\") " pod="openstack/dnsmasq-dns-7c5dd4c5cf-fhfbf" Jan 26 13:19:49 crc kubenswrapper[4844]: I0126 13:19:49.099747 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2sl5f\" (UniqueName: \"kubernetes.io/projected/56958656-f467-485d-a3b6-9ecacb7edfeb-kube-api-access-2sl5f\") pod \"barbican-keystone-listener-688b4ff97d-t5mvg\" (UID: \"56958656-f467-485d-a3b6-9ecacb7edfeb\") " pod="openstack/barbican-keystone-listener-688b4ff97d-t5mvg" Jan 26 13:19:49 crc kubenswrapper[4844]: I0126 13:19:49.099774 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d36d4c6a-dac1-4d35-bd0b-597c8e5ffaf5-combined-ca-bundle\") pod \"barbican-api-6666d497b6-ksrz2\" (UID: \"d36d4c6a-dac1-4d35-bd0b-597c8e5ffaf5\") " pod="openstack/barbican-api-6666d497b6-ksrz2" Jan 26 13:19:49 crc kubenswrapper[4844]: I0126 13:19:49.099798 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bd10a394-bca1-4dd2-9441-2c9d4919f35e-ovsdbserver-nb\") pod \"dnsmasq-dns-7c5dd4c5cf-fhfbf\" (UID: \"bd10a394-bca1-4dd2-9441-2c9d4919f35e\") " pod="openstack/dnsmasq-dns-7c5dd4c5cf-fhfbf" Jan 26 13:19:49 crc kubenswrapper[4844]: I0126 13:19:49.101208 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bd10a394-bca1-4dd2-9441-2c9d4919f35e-ovsdbserver-nb\") pod \"dnsmasq-dns-7c5dd4c5cf-fhfbf\" (UID: \"bd10a394-bca1-4dd2-9441-2c9d4919f35e\") " pod="openstack/dnsmasq-dns-7c5dd4c5cf-fhfbf" Jan 26 13:19:49 crc kubenswrapper[4844]: I0126 13:19:49.101417 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd10a394-bca1-4dd2-9441-2c9d4919f35e-config\") pod \"dnsmasq-dns-7c5dd4c5cf-fhfbf\" (UID: \"bd10a394-bca1-4dd2-9441-2c9d4919f35e\") " pod="openstack/dnsmasq-dns-7c5dd4c5cf-fhfbf" Jan 26 13:19:49 crc kubenswrapper[4844]: I0126 13:19:49.101461 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/56958656-f467-485d-a3b6-9ecacb7edfeb-logs\") pod \"barbican-keystone-listener-688b4ff97d-t5mvg\" (UID: \"56958656-f467-485d-a3b6-9ecacb7edfeb\") " pod="openstack/barbican-keystone-listener-688b4ff97d-t5mvg" Jan 26 13:19:49 crc kubenswrapper[4844]: I0126 13:19:49.101805 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bd10a394-bca1-4dd2-9441-2c9d4919f35e-dns-swift-storage-0\") pod \"dnsmasq-dns-7c5dd4c5cf-fhfbf\" (UID: \"bd10a394-bca1-4dd2-9441-2c9d4919f35e\") " pod="openstack/dnsmasq-dns-7c5dd4c5cf-fhfbf" Jan 26 13:19:49 crc kubenswrapper[4844]: I0126 13:19:49.102033 
4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bd10a394-bca1-4dd2-9441-2c9d4919f35e-ovsdbserver-sb\") pod \"dnsmasq-dns-7c5dd4c5cf-fhfbf\" (UID: \"bd10a394-bca1-4dd2-9441-2c9d4919f35e\") " pod="openstack/dnsmasq-dns-7c5dd4c5cf-fhfbf" Jan 26 13:19:49 crc kubenswrapper[4844]: I0126 13:19:49.102736 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bd10a394-bca1-4dd2-9441-2c9d4919f35e-dns-svc\") pod \"dnsmasq-dns-7c5dd4c5cf-fhfbf\" (UID: \"bd10a394-bca1-4dd2-9441-2c9d4919f35e\") " pod="openstack/dnsmasq-dns-7c5dd4c5cf-fhfbf" Jan 26 13:19:49 crc kubenswrapper[4844]: I0126 13:19:49.105414 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56958656-f467-485d-a3b6-9ecacb7edfeb-config-data\") pod \"barbican-keystone-listener-688b4ff97d-t5mvg\" (UID: \"56958656-f467-485d-a3b6-9ecacb7edfeb\") " pod="openstack/barbican-keystone-listener-688b4ff97d-t5mvg" Jan 26 13:19:49 crc kubenswrapper[4844]: I0126 13:19:49.106001 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-5757498f95-q5d7h" Jan 26 13:19:49 crc kubenswrapper[4844]: I0126 13:19:49.108531 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56958656-f467-485d-a3b6-9ecacb7edfeb-combined-ca-bundle\") pod \"barbican-keystone-listener-688b4ff97d-t5mvg\" (UID: \"56958656-f467-485d-a3b6-9ecacb7edfeb\") " pod="openstack/barbican-keystone-listener-688b4ff97d-t5mvg" Jan 26 13:19:49 crc kubenswrapper[4844]: I0126 13:19:49.114411 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/56958656-f467-485d-a3b6-9ecacb7edfeb-config-data-custom\") pod \"barbican-keystone-listener-688b4ff97d-t5mvg\" (UID: \"56958656-f467-485d-a3b6-9ecacb7edfeb\") " pod="openstack/barbican-keystone-listener-688b4ff97d-t5mvg" Jan 26 13:19:49 crc kubenswrapper[4844]: I0126 13:19:49.116365 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nq6xq\" (UniqueName: \"kubernetes.io/projected/bd10a394-bca1-4dd2-9441-2c9d4919f35e-kube-api-access-nq6xq\") pod \"dnsmasq-dns-7c5dd4c5cf-fhfbf\" (UID: \"bd10a394-bca1-4dd2-9441-2c9d4919f35e\") " pod="openstack/dnsmasq-dns-7c5dd4c5cf-fhfbf" Jan 26 13:19:49 crc kubenswrapper[4844]: I0126 13:19:49.118107 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2sl5f\" (UniqueName: \"kubernetes.io/projected/56958656-f467-485d-a3b6-9ecacb7edfeb-kube-api-access-2sl5f\") pod \"barbican-keystone-listener-688b4ff97d-t5mvg\" (UID: \"56958656-f467-485d-a3b6-9ecacb7edfeb\") " pod="openstack/barbican-keystone-listener-688b4ff97d-t5mvg" Jan 26 13:19:49 crc kubenswrapper[4844]: I0126 13:19:49.123402 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-688b4ff97d-t5mvg" Jan 26 13:19:49 crc kubenswrapper[4844]: I0126 13:19:49.184061 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7c5dd4c5cf-fhfbf" Jan 26 13:19:49 crc kubenswrapper[4844]: I0126 13:19:49.201427 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d36d4c6a-dac1-4d35-bd0b-597c8e5ffaf5-logs\") pod \"barbican-api-6666d497b6-ksrz2\" (UID: \"d36d4c6a-dac1-4d35-bd0b-597c8e5ffaf5\") " pod="openstack/barbican-api-6666d497b6-ksrz2" Jan 26 13:19:49 crc kubenswrapper[4844]: I0126 13:19:49.201482 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d36d4c6a-dac1-4d35-bd0b-597c8e5ffaf5-config-data\") pod \"barbican-api-6666d497b6-ksrz2\" (UID: \"d36d4c6a-dac1-4d35-bd0b-597c8e5ffaf5\") " pod="openstack/barbican-api-6666d497b6-ksrz2" Jan 26 13:19:49 crc kubenswrapper[4844]: I0126 13:19:49.201545 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qc4v\" (UniqueName: \"kubernetes.io/projected/d36d4c6a-dac1-4d35-bd0b-597c8e5ffaf5-kube-api-access-2qc4v\") pod \"barbican-api-6666d497b6-ksrz2\" (UID: \"d36d4c6a-dac1-4d35-bd0b-597c8e5ffaf5\") " pod="openstack/barbican-api-6666d497b6-ksrz2" Jan 26 13:19:49 crc kubenswrapper[4844]: I0126 13:19:49.201577 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d36d4c6a-dac1-4d35-bd0b-597c8e5ffaf5-config-data-custom\") pod \"barbican-api-6666d497b6-ksrz2\" (UID: \"d36d4c6a-dac1-4d35-bd0b-597c8e5ffaf5\") " pod="openstack/barbican-api-6666d497b6-ksrz2" Jan 26 13:19:49 crc kubenswrapper[4844]: I0126 13:19:49.201665 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d36d4c6a-dac1-4d35-bd0b-597c8e5ffaf5-combined-ca-bundle\") pod \"barbican-api-6666d497b6-ksrz2\" (UID: \"d36d4c6a-dac1-4d35-bd0b-597c8e5ffaf5\") " pod="openstack/barbican-api-6666d497b6-ksrz2" Jan 26 13:19:49 crc kubenswrapper[4844]: I0126 13:19:49.202250 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d36d4c6a-dac1-4d35-bd0b-597c8e5ffaf5-logs\") pod \"barbican-api-6666d497b6-ksrz2\" (UID: \"d36d4c6a-dac1-4d35-bd0b-597c8e5ffaf5\") " pod="openstack/barbican-api-6666d497b6-ksrz2" Jan 26 13:19:49 crc kubenswrapper[4844]: I0126 13:19:49.205318 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d36d4c6a-dac1-4d35-bd0b-597c8e5ffaf5-combined-ca-bundle\") pod \"barbican-api-6666d497b6-ksrz2\" (UID: \"d36d4c6a-dac1-4d35-bd0b-597c8e5ffaf5\") " pod="openstack/barbican-api-6666d497b6-ksrz2" Jan 26 13:19:49 crc kubenswrapper[4844]: I0126 13:19:49.206228 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d36d4c6a-dac1-4d35-bd0b-597c8e5ffaf5-config-data-custom\") pod \"barbican-api-6666d497b6-ksrz2\" (UID: \"d36d4c6a-dac1-4d35-bd0b-597c8e5ffaf5\") " pod="openstack/barbican-api-6666d497b6-ksrz2" Jan 26 13:19:49 crc kubenswrapper[4844]: I0126 13:19:49.207396 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d36d4c6a-dac1-4d35-bd0b-597c8e5ffaf5-config-data\") pod \"barbican-api-6666d497b6-ksrz2\" (UID: \"d36d4c6a-dac1-4d35-bd0b-597c8e5ffaf5\") " pod="openstack/barbican-api-6666d497b6-ksrz2" Jan 26 13:19:49 crc 
kubenswrapper[4844]: I0126 13:19:49.217658 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qc4v\" (UniqueName: \"kubernetes.io/projected/d36d4c6a-dac1-4d35-bd0b-597c8e5ffaf5-kube-api-access-2qc4v\") pod \"barbican-api-6666d497b6-ksrz2\" (UID: \"d36d4c6a-dac1-4d35-bd0b-597c8e5ffaf5\") " pod="openstack/barbican-api-6666d497b6-ksrz2" Jan 26 13:19:49 crc kubenswrapper[4844]: I0126 13:19:49.325002 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3764b649-1758-4f78-83b5-8a13118c9bc9" path="/var/lib/kubelet/pods/3764b649-1758-4f78-83b5-8a13118c9bc9/volumes" Jan 26 13:19:49 crc kubenswrapper[4844]: I0126 13:19:49.394982 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6666d497b6-ksrz2" Jan 26 13:19:51 crc kubenswrapper[4844]: I0126 13:19:51.029573 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-f984df9c6-m8lct" Jan 26 13:19:51 crc kubenswrapper[4844]: I0126 13:19:51.529772 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-58b8c47bc6-5s5z9"] Jan 26 13:19:51 crc kubenswrapper[4844]: I0126 13:19:51.532423 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-58b8c47bc6-5s5z9" Jan 26 13:19:51 crc kubenswrapper[4844]: I0126 13:19:51.540041 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-58b8c47bc6-5s5z9"] Jan 26 13:19:51 crc kubenswrapper[4844]: I0126 13:19:51.568360 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Jan 26 13:19:51 crc kubenswrapper[4844]: I0126 13:19:51.568665 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Jan 26 13:19:51 crc kubenswrapper[4844]: I0126 13:19:51.670788 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7f2cf574-1917-4f2b-adba-02bcf6cb4dc8-internal-tls-certs\") pod \"barbican-api-58b8c47bc6-5s5z9\" (UID: \"7f2cf574-1917-4f2b-adba-02bcf6cb4dc8\") " pod="openstack/barbican-api-58b8c47bc6-5s5z9" Jan 26 13:19:51 crc kubenswrapper[4844]: I0126 13:19:51.670838 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f2cf574-1917-4f2b-adba-02bcf6cb4dc8-combined-ca-bundle\") pod \"barbican-api-58b8c47bc6-5s5z9\" (UID: \"7f2cf574-1917-4f2b-adba-02bcf6cb4dc8\") " pod="openstack/barbican-api-58b8c47bc6-5s5z9" Jan 26 13:19:51 crc kubenswrapper[4844]: I0126 13:19:51.670894 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5cmn\" (UniqueName: \"kubernetes.io/projected/7f2cf574-1917-4f2b-adba-02bcf6cb4dc8-kube-api-access-s5cmn\") pod \"barbican-api-58b8c47bc6-5s5z9\" (UID: \"7f2cf574-1917-4f2b-adba-02bcf6cb4dc8\") " pod="openstack/barbican-api-58b8c47bc6-5s5z9" Jan 26 13:19:51 crc kubenswrapper[4844]: I0126 13:19:51.670917 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f2cf574-1917-4f2b-adba-02bcf6cb4dc8-config-data\") pod \"barbican-api-58b8c47bc6-5s5z9\" (UID: \"7f2cf574-1917-4f2b-adba-02bcf6cb4dc8\") " pod="openstack/barbican-api-58b8c47bc6-5s5z9" Jan 26 13:19:51 crc kubenswrapper[4844]: I0126 
13:19:51.671843 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7f2cf574-1917-4f2b-adba-02bcf6cb4dc8-public-tls-certs\") pod \"barbican-api-58b8c47bc6-5s5z9\" (UID: \"7f2cf574-1917-4f2b-adba-02bcf6cb4dc8\") " pod="openstack/barbican-api-58b8c47bc6-5s5z9" Jan 26 13:19:51 crc kubenswrapper[4844]: I0126 13:19:51.672065 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7f2cf574-1917-4f2b-adba-02bcf6cb4dc8-logs\") pod \"barbican-api-58b8c47bc6-5s5z9\" (UID: \"7f2cf574-1917-4f2b-adba-02bcf6cb4dc8\") " pod="openstack/barbican-api-58b8c47bc6-5s5z9" Jan 26 13:19:51 crc kubenswrapper[4844]: I0126 13:19:51.672115 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7f2cf574-1917-4f2b-adba-02bcf6cb4dc8-config-data-custom\") pod \"barbican-api-58b8c47bc6-5s5z9\" (UID: \"7f2cf574-1917-4f2b-adba-02bcf6cb4dc8\") " pod="openstack/barbican-api-58b8c47bc6-5s5z9" Jan 26 13:19:51 crc kubenswrapper[4844]: I0126 13:19:51.774347 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7f2cf574-1917-4f2b-adba-02bcf6cb4dc8-internal-tls-certs\") pod \"barbican-api-58b8c47bc6-5s5z9\" (UID: \"7f2cf574-1917-4f2b-adba-02bcf6cb4dc8\") " pod="openstack/barbican-api-58b8c47bc6-5s5z9" Jan 26 13:19:51 crc kubenswrapper[4844]: I0126 13:19:51.774413 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f2cf574-1917-4f2b-adba-02bcf6cb4dc8-combined-ca-bundle\") pod \"barbican-api-58b8c47bc6-5s5z9\" (UID: \"7f2cf574-1917-4f2b-adba-02bcf6cb4dc8\") " pod="openstack/barbican-api-58b8c47bc6-5s5z9" Jan 26 13:19:51 crc kubenswrapper[4844]: I0126 13:19:51.774484 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5cmn\" (UniqueName: \"kubernetes.io/projected/7f2cf574-1917-4f2b-adba-02bcf6cb4dc8-kube-api-access-s5cmn\") pod \"barbican-api-58b8c47bc6-5s5z9\" (UID: \"7f2cf574-1917-4f2b-adba-02bcf6cb4dc8\") " pod="openstack/barbican-api-58b8c47bc6-5s5z9" Jan 26 13:19:51 crc kubenswrapper[4844]: I0126 13:19:51.774516 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f2cf574-1917-4f2b-adba-02bcf6cb4dc8-config-data\") pod \"barbican-api-58b8c47bc6-5s5z9\" (UID: \"7f2cf574-1917-4f2b-adba-02bcf6cb4dc8\") " pod="openstack/barbican-api-58b8c47bc6-5s5z9" Jan 26 13:19:51 crc kubenswrapper[4844]: I0126 13:19:51.774570 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7f2cf574-1917-4f2b-adba-02bcf6cb4dc8-public-tls-certs\") pod \"barbican-api-58b8c47bc6-5s5z9\" (UID: \"7f2cf574-1917-4f2b-adba-02bcf6cb4dc8\") " pod="openstack/barbican-api-58b8c47bc6-5s5z9" Jan 26 13:19:51 crc kubenswrapper[4844]: I0126 13:19:51.774669 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7f2cf574-1917-4f2b-adba-02bcf6cb4dc8-logs\") pod \"barbican-api-58b8c47bc6-5s5z9\" (UID: \"7f2cf574-1917-4f2b-adba-02bcf6cb4dc8\") " pod="openstack/barbican-api-58b8c47bc6-5s5z9" Jan 26 13:19:51 crc kubenswrapper[4844]: 
I0126 13:19:51.774868 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7f2cf574-1917-4f2b-adba-02bcf6cb4dc8-config-data-custom\") pod \"barbican-api-58b8c47bc6-5s5z9\" (UID: \"7f2cf574-1917-4f2b-adba-02bcf6cb4dc8\") " pod="openstack/barbican-api-58b8c47bc6-5s5z9" Jan 26 13:19:51 crc kubenswrapper[4844]: I0126 13:19:51.775345 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7f2cf574-1917-4f2b-adba-02bcf6cb4dc8-logs\") pod \"barbican-api-58b8c47bc6-5s5z9\" (UID: \"7f2cf574-1917-4f2b-adba-02bcf6cb4dc8\") " pod="openstack/barbican-api-58b8c47bc6-5s5z9" Jan 26 13:19:51 crc kubenswrapper[4844]: I0126 13:19:51.781883 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7f2cf574-1917-4f2b-adba-02bcf6cb4dc8-config-data-custom\") pod \"barbican-api-58b8c47bc6-5s5z9\" (UID: \"7f2cf574-1917-4f2b-adba-02bcf6cb4dc8\") " pod="openstack/barbican-api-58b8c47bc6-5s5z9" Jan 26 13:19:51 crc kubenswrapper[4844]: I0126 13:19:51.781919 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7f2cf574-1917-4f2b-adba-02bcf6cb4dc8-internal-tls-certs\") pod \"barbican-api-58b8c47bc6-5s5z9\" (UID: \"7f2cf574-1917-4f2b-adba-02bcf6cb4dc8\") " pod="openstack/barbican-api-58b8c47bc6-5s5z9" Jan 26 13:19:51 crc kubenswrapper[4844]: I0126 13:19:51.788286 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f2cf574-1917-4f2b-adba-02bcf6cb4dc8-combined-ca-bundle\") pod \"barbican-api-58b8c47bc6-5s5z9\" (UID: \"7f2cf574-1917-4f2b-adba-02bcf6cb4dc8\") " pod="openstack/barbican-api-58b8c47bc6-5s5z9" Jan 26 13:19:51 crc kubenswrapper[4844]: I0126 13:19:51.790164 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f2cf574-1917-4f2b-adba-02bcf6cb4dc8-config-data\") pod \"barbican-api-58b8c47bc6-5s5z9\" (UID: \"7f2cf574-1917-4f2b-adba-02bcf6cb4dc8\") " pod="openstack/barbican-api-58b8c47bc6-5s5z9" Jan 26 13:19:51 crc kubenswrapper[4844]: I0126 13:19:51.803790 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5cmn\" (UniqueName: \"kubernetes.io/projected/7f2cf574-1917-4f2b-adba-02bcf6cb4dc8-kube-api-access-s5cmn\") pod \"barbican-api-58b8c47bc6-5s5z9\" (UID: \"7f2cf574-1917-4f2b-adba-02bcf6cb4dc8\") " pod="openstack/barbican-api-58b8c47bc6-5s5z9" Jan 26 13:19:51 crc kubenswrapper[4844]: I0126 13:19:51.803960 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7f2cf574-1917-4f2b-adba-02bcf6cb4dc8-public-tls-certs\") pod \"barbican-api-58b8c47bc6-5s5z9\" (UID: \"7f2cf574-1917-4f2b-adba-02bcf6cb4dc8\") " pod="openstack/barbican-api-58b8c47bc6-5s5z9" Jan 26 13:19:51 crc kubenswrapper[4844]: I0126 13:19:51.883877 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-58b8c47bc6-5s5z9" Jan 26 13:19:52 crc kubenswrapper[4844]: I0126 13:19:52.016183 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Jan 26 13:19:52 crc kubenswrapper[4844]: I0126 13:19:52.016233 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Jan 26 13:19:52 crc kubenswrapper[4844]: I0126 13:19:52.017574 4844 scope.go:117] "RemoveContainer" containerID="920a38a2c1e0977cbdcbd5e4c3757be17293c805c1c55b4e7ee718455c1317a2" Jan 26 13:19:52 crc kubenswrapper[4844]: E0126 13:19:52.017804 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 10s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(ed782618-8b69-4456-9aec-5184e765960f)\"" pod="openstack/watcher-decision-engine-0" podUID="ed782618-8b69-4456-9aec-5184e765960f" Jan 26 13:19:55 crc kubenswrapper[4844]: I0126 13:19:55.197359 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Jan 26 13:19:55 crc kubenswrapper[4844]: I0126 13:19:55.346106 4844 scope.go:117] "RemoveContainer" containerID="d87fb2ffe7bc2f7ab5797333cc9df60d6f399f35b1b649fb31cac650668cd76b" Jan 26 13:19:55 crc kubenswrapper[4844]: W0126 13:19:55.353006 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod33ecc4c6_320a_41d8_a7c2_608bdda02b0a.slice/crio-60c8a840561ca287ba13307b60fad39f0f977193535a32ffcfbd0c23ae0943e4 WatchSource:0}: Error finding container 60c8a840561ca287ba13307b60fad39f0f977193535a32ffcfbd0c23ae0943e4: Status 404 returned error can't find the container with id 60c8a840561ca287ba13307b60fad39f0f977193535a32ffcfbd0c23ae0943e4 Jan 26 13:19:55 crc kubenswrapper[4844]: I0126 13:19:55.405555 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-nw6hp"] Jan 26 13:19:55 crc kubenswrapper[4844]: I0126 13:19:55.409020 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nw6hp" Jan 26 13:19:55 crc kubenswrapper[4844]: I0126 13:19:55.418115 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nw6hp"] Jan 26 13:19:55 crc kubenswrapper[4844]: I0126 13:19:55.536326 4844 scope.go:117] "RemoveContainer" containerID="1bc285b92109c80e805ca30c245d2f348bb4aa73f399fb09fecd5e0fa5064ace" Jan 26 13:19:55 crc kubenswrapper[4844]: I0126 13:19:55.564438 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b53d3b2-56e9-427c-8dcd-e5487cecc4f9-catalog-content\") pod \"redhat-marketplace-nw6hp\" (UID: \"0b53d3b2-56e9-427c-8dcd-e5487cecc4f9\") " pod="openshift-marketplace/redhat-marketplace-nw6hp" Jan 26 13:19:55 crc kubenswrapper[4844]: I0126 13:19:55.564515 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b53d3b2-56e9-427c-8dcd-e5487cecc4f9-utilities\") pod \"redhat-marketplace-nw6hp\" (UID: \"0b53d3b2-56e9-427c-8dcd-e5487cecc4f9\") " pod="openshift-marketplace/redhat-marketplace-nw6hp" Jan 26 13:19:55 crc kubenswrapper[4844]: I0126 13:19:55.564574 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69b6s\" (UniqueName: \"kubernetes.io/projected/0b53d3b2-56e9-427c-8dcd-e5487cecc4f9-kube-api-access-69b6s\") pod \"redhat-marketplace-nw6hp\" (UID: \"0b53d3b2-56e9-427c-8dcd-e5487cecc4f9\") " pod="openshift-marketplace/redhat-marketplace-nw6hp" Jan 26 13:19:55 crc kubenswrapper[4844]: I0126 13:19:55.659203 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"33ecc4c6-320a-41d8-a7c2-608bdda02b0a","Type":"ContainerStarted","Data":"60c8a840561ca287ba13307b60fad39f0f977193535a32ffcfbd0c23ae0943e4"} Jan 26 13:19:55 crc kubenswrapper[4844]: I0126 13:19:55.667921 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b53d3b2-56e9-427c-8dcd-e5487cecc4f9-utilities\") pod \"redhat-marketplace-nw6hp\" (UID: \"0b53d3b2-56e9-427c-8dcd-e5487cecc4f9\") " pod="openshift-marketplace/redhat-marketplace-nw6hp" Jan 26 13:19:55 crc kubenswrapper[4844]: I0126 13:19:55.668007 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69b6s\" (UniqueName: \"kubernetes.io/projected/0b53d3b2-56e9-427c-8dcd-e5487cecc4f9-kube-api-access-69b6s\") pod \"redhat-marketplace-nw6hp\" (UID: \"0b53d3b2-56e9-427c-8dcd-e5487cecc4f9\") " pod="openshift-marketplace/redhat-marketplace-nw6hp" Jan 26 13:19:55 crc kubenswrapper[4844]: I0126 13:19:55.668113 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b53d3b2-56e9-427c-8dcd-e5487cecc4f9-catalog-content\") pod \"redhat-marketplace-nw6hp\" (UID: \"0b53d3b2-56e9-427c-8dcd-e5487cecc4f9\") " pod="openshift-marketplace/redhat-marketplace-nw6hp" Jan 26 13:19:55 crc kubenswrapper[4844]: I0126 13:19:55.668518 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b53d3b2-56e9-427c-8dcd-e5487cecc4f9-catalog-content\") pod \"redhat-marketplace-nw6hp\" (UID: \"0b53d3b2-56e9-427c-8dcd-e5487cecc4f9\") " pod="openshift-marketplace/redhat-marketplace-nw6hp" Jan 26 
13:19:55 crc kubenswrapper[4844]: I0126 13:19:55.668671 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b53d3b2-56e9-427c-8dcd-e5487cecc4f9-utilities\") pod \"redhat-marketplace-nw6hp\" (UID: \"0b53d3b2-56e9-427c-8dcd-e5487cecc4f9\") " pod="openshift-marketplace/redhat-marketplace-nw6hp" Jan 26 13:19:55 crc kubenswrapper[4844]: I0126 13:19:55.689721 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69b6s\" (UniqueName: \"kubernetes.io/projected/0b53d3b2-56e9-427c-8dcd-e5487cecc4f9-kube-api-access-69b6s\") pod \"redhat-marketplace-nw6hp\" (UID: \"0b53d3b2-56e9-427c-8dcd-e5487cecc4f9\") " pod="openshift-marketplace/redhat-marketplace-nw6hp" Jan 26 13:19:55 crc kubenswrapper[4844]: E0126 13:19:55.782356 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="ad438e4d-9282-48b8-88c1-1f974bb26b5e" Jan 26 13:19:55 crc kubenswrapper[4844]: I0126 13:19:55.885109 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nw6hp" Jan 26 13:19:55 crc kubenswrapper[4844]: I0126 13:19:55.934619 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6666d497b6-ksrz2"] Jan 26 13:19:55 crc kubenswrapper[4844]: W0126 13:19:55.944623 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf64e9d9a_09d6_4843_a829_d4fbdcaadb65.slice/crio-c076aeaa35bda6c29981235e46c1166ef210b69441b6fe8c5547f85d0a850215 WatchSource:0}: Error finding container c076aeaa35bda6c29981235e46c1166ef210b69441b6fe8c5547f85d0a850215: Status 404 returned error can't find the container with id c076aeaa35bda6c29981235e46c1166ef210b69441b6fe8c5547f85d0a850215 Jan 26 13:19:55 crc kubenswrapper[4844]: I0126 13:19:55.948314 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5757498f95-q5d7h"] Jan 26 13:19:55 crc kubenswrapper[4844]: W0126 13:19:55.956197 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd36d4c6a_dac1_4d35_bd0b_597c8e5ffaf5.slice/crio-4bb4ebfbd66bf4dd4c9673aaeb869174f01842e37953ce2438e54959278afe70 WatchSource:0}: Error finding container 4bb4ebfbd66bf4dd4c9673aaeb869174f01842e37953ce2438e54959278afe70: Status 404 returned error can't find the container with id 4bb4ebfbd66bf4dd4c9673aaeb869174f01842e37953ce2438e54959278afe70 Jan 26 13:19:56 crc kubenswrapper[4844]: I0126 13:19:56.321651 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-688b4ff97d-t5mvg"] Jan 26 13:19:56 crc kubenswrapper[4844]: I0126 13:19:56.337945 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-58b8c47bc6-5s5z9"] Jan 26 13:19:56 crc kubenswrapper[4844]: I0126 13:19:56.374186 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c5dd4c5cf-fhfbf"] Jan 26 13:19:56 crc kubenswrapper[4844]: E0126 13:19:56.396796 4844 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podef403703_395e_4db1_a9f5_a8e011e39ff2.slice\": RecentStats: unable to find data 
in memory cache]" Jan 26 13:19:56 crc kubenswrapper[4844]: I0126 13:19:56.397988 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nw6hp"] Jan 26 13:19:56 crc kubenswrapper[4844]: I0126 13:19:56.691676 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-58b8c47bc6-5s5z9" event={"ID":"7f2cf574-1917-4f2b-adba-02bcf6cb4dc8","Type":"ContainerStarted","Data":"1a4cc4a73f3be2c0350e373285ac19d5df45efdeb2fd821a8e5353804f821704"} Jan 26 13:19:56 crc kubenswrapper[4844]: I0126 13:19:56.698390 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"33ecc4c6-320a-41d8-a7c2-608bdda02b0a","Type":"ContainerStarted","Data":"65aa2dc32d0ba332ceb839629dece6215d57addc3bc70e2fc418a0551f7c0abb"} Jan 26 13:19:56 crc kubenswrapper[4844]: I0126 13:19:56.698433 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"33ecc4c6-320a-41d8-a7c2-608bdda02b0a","Type":"ContainerStarted","Data":"28911a179096403c640d66a2468ba8114c384d2aec105e4142fba559ad32f78c"} Jan 26 13:19:56 crc kubenswrapper[4844]: I0126 13:19:56.698722 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Jan 26 13:19:56 crc kubenswrapper[4844]: I0126 13:19:56.705075 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6666d497b6-ksrz2" event={"ID":"d36d4c6a-dac1-4d35-bd0b-597c8e5ffaf5","Type":"ContainerStarted","Data":"459349c5b6003c5194b259d9dfe845ec6cbdb10a3b68864239804bd4ba2b2223"} Jan 26 13:19:56 crc kubenswrapper[4844]: I0126 13:19:56.705127 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6666d497b6-ksrz2" event={"ID":"d36d4c6a-dac1-4d35-bd0b-597c8e5ffaf5","Type":"ContainerStarted","Data":"4bb4ebfbd66bf4dd4c9673aaeb869174f01842e37953ce2438e54959278afe70"} Jan 26 13:19:56 crc kubenswrapper[4844]: I0126 13:19:56.709848 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5757498f95-q5d7h" event={"ID":"f64e9d9a-09d6-4843-a829-d4fbdcaadb65","Type":"ContainerStarted","Data":"c076aeaa35bda6c29981235e46c1166ef210b69441b6fe8c5547f85d0a850215"} Jan 26 13:19:56 crc kubenswrapper[4844]: I0126 13:19:56.718674 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-api-0" podStartSLOduration=8.718658999 podStartE2EDuration="8.718658999s" podCreationTimestamp="2026-01-26 13:19:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 13:19:56.716580789 +0000 UTC m=+2173.649948401" watchObservedRunningTime="2026-01-26 13:19:56.718658999 +0000 UTC m=+2173.652026611" Jan 26 13:19:56 crc kubenswrapper[4844]: I0126 13:19:56.719353 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nw6hp" event={"ID":"0b53d3b2-56e9-427c-8dcd-e5487cecc4f9","Type":"ContainerStarted","Data":"9677f88cf6b0406282608b08bc0f7c519bd58eee9ea0ec470895cecad1953d8b"} Jan 26 13:19:56 crc kubenswrapper[4844]: I0126 13:19:56.720648 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c5dd4c5cf-fhfbf" event={"ID":"bd10a394-bca1-4dd2-9441-2c9d4919f35e","Type":"ContainerStarted","Data":"ca2a24afe192a425f33081ff426ec08801373acd48f628c9684282b255c10c04"} Jan 26 13:19:56 crc kubenswrapper[4844]: I0126 13:19:56.729971 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"ad438e4d-9282-48b8-88c1-1f974bb26b5e","Type":"ContainerStarted","Data":"ad476bc1efb544be8d2c9fbe1af2f2f828a808bb6462da7de2e6cca2959f02de"} Jan 26 13:19:56 crc kubenswrapper[4844]: I0126 13:19:56.730124 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ad438e4d-9282-48b8-88c1-1f974bb26b5e" containerName="ceilometer-notification-agent" containerID="cri-o://282ef0f047b2f4b694df966e27dbe553b91659664164f94cf8c45a10a3267d7f" gracePeriod=30 Jan 26 13:19:56 crc kubenswrapper[4844]: I0126 13:19:56.730308 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 26 13:19:56 crc kubenswrapper[4844]: I0126 13:19:56.730323 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ad438e4d-9282-48b8-88c1-1f974bb26b5e" containerName="proxy-httpd" containerID="cri-o://ad476bc1efb544be8d2c9fbe1af2f2f828a808bb6462da7de2e6cca2959f02de" gracePeriod=30 Jan 26 13:19:56 crc kubenswrapper[4844]: I0126 13:19:56.730365 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ad438e4d-9282-48b8-88c1-1f974bb26b5e" containerName="sg-core" containerID="cri-o://3a3bf17791c32d5fb5b785576ef455a7cc2d45fedf3ba47cb171731a20b10664" gracePeriod=30 Jan 26 13:19:56 crc kubenswrapper[4844]: I0126 13:19:56.770033 4844 generic.go:334] "Generic (PLEG): container finished" podID="5f82260f-cde4-4197-8718-d7adebadeddb" containerID="e691abdd8667adb115d62dd072d4441593a9750fc8e01125dc49f5b64d4a7274" exitCode=0 Jan 26 13:19:56 crc kubenswrapper[4844]: I0126 13:19:56.770284 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-dcfgm" event={"ID":"5f82260f-cde4-4197-8718-d7adebadeddb","Type":"ContainerDied","Data":"e691abdd8667adb115d62dd072d4441593a9750fc8e01125dc49f5b64d4a7274"} Jan 26 13:19:56 crc kubenswrapper[4844]: I0126 13:19:56.778373 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-688b4ff97d-t5mvg" event={"ID":"56958656-f467-485d-a3b6-9ecacb7edfeb","Type":"ContainerStarted","Data":"3e07e238c3f5b9814ab3da278740f2631e53abae6b2e5351a9496a96976b3374"} Jan 26 13:19:57 crc kubenswrapper[4844]: I0126 13:19:57.796970 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-58b8c47bc6-5s5z9" event={"ID":"7f2cf574-1917-4f2b-adba-02bcf6cb4dc8","Type":"ContainerStarted","Data":"396325b33506e043595bd920f23812bd3c793507e93c25004c402a5be68578d9"} Jan 26 13:19:57 crc kubenswrapper[4844]: I0126 13:19:57.802256 4844 generic.go:334] "Generic (PLEG): container finished" podID="ad438e4d-9282-48b8-88c1-1f974bb26b5e" containerID="ad476bc1efb544be8d2c9fbe1af2f2f828a808bb6462da7de2e6cca2959f02de" exitCode=0 Jan 26 13:19:57 crc kubenswrapper[4844]: I0126 13:19:57.802287 4844 generic.go:334] "Generic (PLEG): container finished" podID="ad438e4d-9282-48b8-88c1-1f974bb26b5e" containerID="3a3bf17791c32d5fb5b785576ef455a7cc2d45fedf3ba47cb171731a20b10664" exitCode=2 Jan 26 13:19:57 crc kubenswrapper[4844]: I0126 13:19:57.802296 4844 generic.go:334] "Generic (PLEG): container finished" podID="ad438e4d-9282-48b8-88c1-1f974bb26b5e" containerID="282ef0f047b2f4b694df966e27dbe553b91659664164f94cf8c45a10a3267d7f" exitCode=0 Jan 26 13:19:57 crc kubenswrapper[4844]: I0126 13:19:57.802341 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"ad438e4d-9282-48b8-88c1-1f974bb26b5e","Type":"ContainerDied","Data":"ad476bc1efb544be8d2c9fbe1af2f2f828a808bb6462da7de2e6cca2959f02de"} Jan 26 13:19:57 crc kubenswrapper[4844]: I0126 13:19:57.802366 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ad438e4d-9282-48b8-88c1-1f974bb26b5e","Type":"ContainerDied","Data":"3a3bf17791c32d5fb5b785576ef455a7cc2d45fedf3ba47cb171731a20b10664"} Jan 26 13:19:57 crc kubenswrapper[4844]: I0126 13:19:57.802376 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ad438e4d-9282-48b8-88c1-1f974bb26b5e","Type":"ContainerDied","Data":"282ef0f047b2f4b694df966e27dbe553b91659664164f94cf8c45a10a3267d7f"} Jan 26 13:19:57 crc kubenswrapper[4844]: I0126 13:19:57.820955 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6666d497b6-ksrz2" event={"ID":"d36d4c6a-dac1-4d35-bd0b-597c8e5ffaf5","Type":"ContainerStarted","Data":"97cf56503516e458be0772937f512301e87641d6a056953eb68a1e6f1d435a5f"} Jan 26 13:19:57 crc kubenswrapper[4844]: I0126 13:19:57.821338 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6666d497b6-ksrz2" Jan 26 13:19:57 crc kubenswrapper[4844]: I0126 13:19:57.821474 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6666d497b6-ksrz2" Jan 26 13:19:57 crc kubenswrapper[4844]: I0126 13:19:57.831817 4844 generic.go:334] "Generic (PLEG): container finished" podID="4bdef7de-9499-45b9-b41e-a59882aa4423" containerID="e46349bcce0b54334384e3d03bad2749ab306c1b6ca6446909a73481cb61b1fe" exitCode=0 Jan 26 13:19:57 crc kubenswrapper[4844]: I0126 13:19:57.831902 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-q74n8" event={"ID":"4bdef7de-9499-45b9-b41e-a59882aa4423","Type":"ContainerDied","Data":"e46349bcce0b54334384e3d03bad2749ab306c1b6ca6446909a73481cb61b1fe"} Jan 26 13:19:57 crc kubenswrapper[4844]: I0126 13:19:57.838261 4844 generic.go:334] "Generic (PLEG): container finished" podID="0b53d3b2-56e9-427c-8dcd-e5487cecc4f9" containerID="9bdda0f2b4232779dc7c4dc8a126055439e68d05d737b326c6bcb69cd3f3a1b2" exitCode=0 Jan 26 13:19:57 crc kubenswrapper[4844]: I0126 13:19:57.838334 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nw6hp" event={"ID":"0b53d3b2-56e9-427c-8dcd-e5487cecc4f9","Type":"ContainerDied","Data":"9bdda0f2b4232779dc7c4dc8a126055439e68d05d737b326c6bcb69cd3f3a1b2"} Jan 26 13:19:57 crc kubenswrapper[4844]: I0126 13:19:57.849763 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-6666d497b6-ksrz2" podStartSLOduration=9.849745644 podStartE2EDuration="9.849745644s" podCreationTimestamp="2026-01-26 13:19:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 13:19:57.843830971 +0000 UTC m=+2174.777198593" watchObservedRunningTime="2026-01-26 13:19:57.849745644 +0000 UTC m=+2174.783113276" Jan 26 13:19:57 crc kubenswrapper[4844]: I0126 13:19:57.868612 4844 generic.go:334] "Generic (PLEG): container finished" podID="bd10a394-bca1-4dd2-9441-2c9d4919f35e" containerID="c56c449ad3f94eff685870cecbbd7272d7014bab109f6ec20ff7461beea95137" exitCode=0 Jan 26 13:19:57 crc kubenswrapper[4844]: I0126 13:19:57.869718 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c5dd4c5cf-fhfbf" 
event={"ID":"bd10a394-bca1-4dd2-9441-2c9d4919f35e","Type":"ContainerDied","Data":"c56c449ad3f94eff685870cecbbd7272d7014bab109f6ec20ff7461beea95137"} Jan 26 13:19:58 crc kubenswrapper[4844]: I0126 13:19:58.436271 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-77c8bf8786-w82f7" Jan 26 13:19:58 crc kubenswrapper[4844]: I0126 13:19:58.505294 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-dcfgm" Jan 26 13:19:58 crc kubenswrapper[4844]: I0126 13:19:58.643352 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f82260f-cde4-4197-8718-d7adebadeddb-combined-ca-bundle\") pod \"5f82260f-cde4-4197-8718-d7adebadeddb\" (UID: \"5f82260f-cde4-4197-8718-d7adebadeddb\") " Jan 26 13:19:58 crc kubenswrapper[4844]: I0126 13:19:58.643472 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f82260f-cde4-4197-8718-d7adebadeddb-scripts\") pod \"5f82260f-cde4-4197-8718-d7adebadeddb\" (UID: \"5f82260f-cde4-4197-8718-d7adebadeddb\") " Jan 26 13:19:58 crc kubenswrapper[4844]: I0126 13:19:58.643560 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5f82260f-cde4-4197-8718-d7adebadeddb-etc-machine-id\") pod \"5f82260f-cde4-4197-8718-d7adebadeddb\" (UID: \"5f82260f-cde4-4197-8718-d7adebadeddb\") " Jan 26 13:19:58 crc kubenswrapper[4844]: I0126 13:19:58.643630 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f82260f-cde4-4197-8718-d7adebadeddb-config-data\") pod \"5f82260f-cde4-4197-8718-d7adebadeddb\" (UID: \"5f82260f-cde4-4197-8718-d7adebadeddb\") " Jan 26 13:19:58 crc kubenswrapper[4844]: I0126 13:19:58.643655 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5f82260f-cde4-4197-8718-d7adebadeddb-db-sync-config-data\") pod \"5f82260f-cde4-4197-8718-d7adebadeddb\" (UID: \"5f82260f-cde4-4197-8718-d7adebadeddb\") " Jan 26 13:19:58 crc kubenswrapper[4844]: I0126 13:19:58.643811 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l4pp4\" (UniqueName: \"kubernetes.io/projected/5f82260f-cde4-4197-8718-d7adebadeddb-kube-api-access-l4pp4\") pod \"5f82260f-cde4-4197-8718-d7adebadeddb\" (UID: \"5f82260f-cde4-4197-8718-d7adebadeddb\") " Jan 26 13:19:58 crc kubenswrapper[4844]: I0126 13:19:58.645152 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5f82260f-cde4-4197-8718-d7adebadeddb-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "5f82260f-cde4-4197-8718-d7adebadeddb" (UID: "5f82260f-cde4-4197-8718-d7adebadeddb"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 13:19:58 crc kubenswrapper[4844]: I0126 13:19:58.649174 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f82260f-cde4-4197-8718-d7adebadeddb-scripts" (OuterVolumeSpecName: "scripts") pod "5f82260f-cde4-4197-8718-d7adebadeddb" (UID: "5f82260f-cde4-4197-8718-d7adebadeddb"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:19:58 crc kubenswrapper[4844]: I0126 13:19:58.649572 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f82260f-cde4-4197-8718-d7adebadeddb-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "5f82260f-cde4-4197-8718-d7adebadeddb" (UID: "5f82260f-cde4-4197-8718-d7adebadeddb"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:19:58 crc kubenswrapper[4844]: I0126 13:19:58.650953 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f82260f-cde4-4197-8718-d7adebadeddb-kube-api-access-l4pp4" (OuterVolumeSpecName: "kube-api-access-l4pp4") pod "5f82260f-cde4-4197-8718-d7adebadeddb" (UID: "5f82260f-cde4-4197-8718-d7adebadeddb"). InnerVolumeSpecName "kube-api-access-l4pp4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:19:58 crc kubenswrapper[4844]: I0126 13:19:58.675246 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f82260f-cde4-4197-8718-d7adebadeddb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5f82260f-cde4-4197-8718-d7adebadeddb" (UID: "5f82260f-cde4-4197-8718-d7adebadeddb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:19:58 crc kubenswrapper[4844]: I0126 13:19:58.719386 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 13:19:58 crc kubenswrapper[4844]: I0126 13:19:58.721740 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f82260f-cde4-4197-8718-d7adebadeddb-config-data" (OuterVolumeSpecName: "config-data") pod "5f82260f-cde4-4197-8718-d7adebadeddb" (UID: "5f82260f-cde4-4197-8718-d7adebadeddb"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:19:58 crc kubenswrapper[4844]: I0126 13:19:58.748483 4844 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f82260f-cde4-4197-8718-d7adebadeddb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 13:19:58 crc kubenswrapper[4844]: I0126 13:19:58.748509 4844 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f82260f-cde4-4197-8718-d7adebadeddb-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 13:19:58 crc kubenswrapper[4844]: I0126 13:19:58.748517 4844 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5f82260f-cde4-4197-8718-d7adebadeddb-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 26 13:19:58 crc kubenswrapper[4844]: I0126 13:19:58.748526 4844 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f82260f-cde4-4197-8718-d7adebadeddb-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 13:19:58 crc kubenswrapper[4844]: I0126 13:19:58.748533 4844 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5f82260f-cde4-4197-8718-d7adebadeddb-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 13:19:58 crc kubenswrapper[4844]: I0126 13:19:58.748541 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l4pp4\" (UniqueName: \"kubernetes.io/projected/5f82260f-cde4-4197-8718-d7adebadeddb-kube-api-access-l4pp4\") on node \"crc\" DevicePath \"\"" Jan 26 13:19:58 crc kubenswrapper[4844]: I0126 13:19:58.850222 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ad438e4d-9282-48b8-88c1-1f974bb26b5e-run-httpd\") pod \"ad438e4d-9282-48b8-88c1-1f974bb26b5e\" (UID: \"ad438e4d-9282-48b8-88c1-1f974bb26b5e\") " Jan 26 13:19:58 crc kubenswrapper[4844]: I0126 13:19:58.850287 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad438e4d-9282-48b8-88c1-1f974bb26b5e-combined-ca-bundle\") pod \"ad438e4d-9282-48b8-88c1-1f974bb26b5e\" (UID: \"ad438e4d-9282-48b8-88c1-1f974bb26b5e\") " Jan 26 13:19:58 crc kubenswrapper[4844]: I0126 13:19:58.850359 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5jjpz\" (UniqueName: \"kubernetes.io/projected/ad438e4d-9282-48b8-88c1-1f974bb26b5e-kube-api-access-5jjpz\") pod \"ad438e4d-9282-48b8-88c1-1f974bb26b5e\" (UID: \"ad438e4d-9282-48b8-88c1-1f974bb26b5e\") " Jan 26 13:19:58 crc kubenswrapper[4844]: I0126 13:19:58.850387 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad438e4d-9282-48b8-88c1-1f974bb26b5e-config-data\") pod \"ad438e4d-9282-48b8-88c1-1f974bb26b5e\" (UID: \"ad438e4d-9282-48b8-88c1-1f974bb26b5e\") " Jan 26 13:19:58 crc kubenswrapper[4844]: I0126 13:19:58.850434 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ad438e4d-9282-48b8-88c1-1f974bb26b5e-log-httpd\") pod \"ad438e4d-9282-48b8-88c1-1f974bb26b5e\" (UID: \"ad438e4d-9282-48b8-88c1-1f974bb26b5e\") " Jan 26 13:19:58 crc kubenswrapper[4844]: I0126 13:19:58.850770 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ad438e4d-9282-48b8-88c1-1f974bb26b5e-sg-core-conf-yaml\") pod \"ad438e4d-9282-48b8-88c1-1f974bb26b5e\" (UID: \"ad438e4d-9282-48b8-88c1-1f974bb26b5e\") " Jan 26 13:19:58 crc kubenswrapper[4844]: I0126 13:19:58.850812 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad438e4d-9282-48b8-88c1-1f974bb26b5e-scripts\") pod \"ad438e4d-9282-48b8-88c1-1f974bb26b5e\" (UID: \"ad438e4d-9282-48b8-88c1-1f974bb26b5e\") " Jan 26 13:19:58 crc kubenswrapper[4844]: I0126 13:19:58.852130 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad438e4d-9282-48b8-88c1-1f974bb26b5e-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ad438e4d-9282-48b8-88c1-1f974bb26b5e" (UID: "ad438e4d-9282-48b8-88c1-1f974bb26b5e"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:19:58 crc kubenswrapper[4844]: I0126 13:19:58.852914 4844 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ad438e4d-9282-48b8-88c1-1f974bb26b5e-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 13:19:58 crc kubenswrapper[4844]: I0126 13:19:58.855105 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad438e4d-9282-48b8-88c1-1f974bb26b5e-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ad438e4d-9282-48b8-88c1-1f974bb26b5e" (UID: "ad438e4d-9282-48b8-88c1-1f974bb26b5e"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:19:58 crc kubenswrapper[4844]: I0126 13:19:58.856797 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad438e4d-9282-48b8-88c1-1f974bb26b5e-kube-api-access-5jjpz" (OuterVolumeSpecName: "kube-api-access-5jjpz") pod "ad438e4d-9282-48b8-88c1-1f974bb26b5e" (UID: "ad438e4d-9282-48b8-88c1-1f974bb26b5e"). InnerVolumeSpecName "kube-api-access-5jjpz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:19:58 crc kubenswrapper[4844]: I0126 13:19:58.856839 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad438e4d-9282-48b8-88c1-1f974bb26b5e-scripts" (OuterVolumeSpecName: "scripts") pod "ad438e4d-9282-48b8-88c1-1f974bb26b5e" (UID: "ad438e4d-9282-48b8-88c1-1f974bb26b5e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:19:58 crc kubenswrapper[4844]: I0126 13:19:58.897825 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad438e4d-9282-48b8-88c1-1f974bb26b5e-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ad438e4d-9282-48b8-88c1-1f974bb26b5e" (UID: "ad438e4d-9282-48b8-88c1-1f974bb26b5e"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:19:58 crc kubenswrapper[4844]: I0126 13:19:58.900014 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5757498f95-q5d7h" event={"ID":"f64e9d9a-09d6-4843-a829-d4fbdcaadb65","Type":"ContainerStarted","Data":"e7a8cd71e083ebb142036f892befe2fbdca6f820dadc0179c61ad526b987d912"} Jan 26 13:19:58 crc kubenswrapper[4844]: I0126 13:19:58.901095 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-dcfgm" event={"ID":"5f82260f-cde4-4197-8718-d7adebadeddb","Type":"ContainerDied","Data":"4295f350c902c1c377adffaff456a405eaa6f667d3beb33a96f63233a55ae5d6"} Jan 26 13:19:58 crc kubenswrapper[4844]: I0126 13:19:58.901121 4844 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4295f350c902c1c377adffaff456a405eaa6f667d3beb33a96f63233a55ae5d6" Jan 26 13:19:58 crc kubenswrapper[4844]: I0126 13:19:58.901167 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-dcfgm" Jan 26 13:19:58 crc kubenswrapper[4844]: I0126 13:19:58.902456 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-688b4ff97d-t5mvg" event={"ID":"56958656-f467-485d-a3b6-9ecacb7edfeb","Type":"ContainerStarted","Data":"3bb23ca17eb1906520c8ccea434571ebb4f066aed829ac51cff41b34ac15a8e2"} Jan 26 13:19:58 crc kubenswrapper[4844]: I0126 13:19:58.912249 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c5dd4c5cf-fhfbf" event={"ID":"bd10a394-bca1-4dd2-9441-2c9d4919f35e","Type":"ContainerStarted","Data":"84aaf94f056025f15b8f7c6e6a2f64455652755ede0f8f95ce1fed45f910534e"} Jan 26 13:19:58 crc kubenswrapper[4844]: I0126 13:19:58.912609 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7c5dd4c5cf-fhfbf" Jan 26 13:19:58 crc kubenswrapper[4844]: I0126 13:19:58.915283 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-58b8c47bc6-5s5z9" event={"ID":"7f2cf574-1917-4f2b-adba-02bcf6cb4dc8","Type":"ContainerStarted","Data":"37caddf58761ddedc6d4b761e0234ea2e1254e008d61139abb82b78275559912"} Jan 26 13:19:58 crc kubenswrapper[4844]: I0126 13:19:58.915649 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-58b8c47bc6-5s5z9" Jan 26 13:19:58 crc kubenswrapper[4844]: I0126 13:19:58.915694 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-58b8c47bc6-5s5z9" Jan 26 13:19:58 crc kubenswrapper[4844]: I0126 13:19:58.944146 4844 generic.go:334] "Generic (PLEG): container finished" podID="ce0ed764-c6f0-4580-89dd-4f6826df258d" containerID="61e9961bff931182a8012ad8856adbf430f38dc7f5ddea2b78bd38ec3bc96a2b" exitCode=0 Jan 26 13:19:58 crc kubenswrapper[4844]: I0126 13:19:58.944280 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-9jq8s" event={"ID":"ce0ed764-c6f0-4580-89dd-4f6826df258d","Type":"ContainerDied","Data":"61e9961bff931182a8012ad8856adbf430f38dc7f5ddea2b78bd38ec3bc96a2b"} Jan 26 13:19:58 crc kubenswrapper[4844]: I0126 13:19:58.944383 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7c5dd4c5cf-fhfbf" podStartSLOduration=10.944366697 podStartE2EDuration="10.944366697s" podCreationTimestamp="2026-01-26 13:19:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-26 13:19:58.944290835 +0000 UTC m=+2175.877658447" watchObservedRunningTime="2026-01-26 13:19:58.944366697 +0000 UTC m=+2175.877734309" Jan 26 13:19:58 crc kubenswrapper[4844]: I0126 13:19:58.956708 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad438e4d-9282-48b8-88c1-1f974bb26b5e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ad438e4d-9282-48b8-88c1-1f974bb26b5e" (UID: "ad438e4d-9282-48b8-88c1-1f974bb26b5e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:19:58 crc kubenswrapper[4844]: I0126 13:19:58.958751 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5jjpz\" (UniqueName: \"kubernetes.io/projected/ad438e4d-9282-48b8-88c1-1f974bb26b5e-kube-api-access-5jjpz\") on node \"crc\" DevicePath \"\"" Jan 26 13:19:58 crc kubenswrapper[4844]: I0126 13:19:58.958771 4844 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ad438e4d-9282-48b8-88c1-1f974bb26b5e-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 26 13:19:58 crc kubenswrapper[4844]: I0126 13:19:58.958781 4844 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad438e4d-9282-48b8-88c1-1f974bb26b5e-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 13:19:58 crc kubenswrapper[4844]: I0126 13:19:58.958792 4844 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ad438e4d-9282-48b8-88c1-1f974bb26b5e-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 13:19:58 crc kubenswrapper[4844]: I0126 13:19:58.958801 4844 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad438e4d-9282-48b8-88c1-1f974bb26b5e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 13:19:58 crc kubenswrapper[4844]: I0126 13:19:58.971501 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 13:19:58 crc kubenswrapper[4844]: I0126 13:19:58.972146 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ad438e4d-9282-48b8-88c1-1f974bb26b5e","Type":"ContainerDied","Data":"7aabdc5d49ef87406650e65bcacb949345daafa854c88fa8e3e3622a43829aa8"} Jan 26 13:19:58 crc kubenswrapper[4844]: I0126 13:19:58.972198 4844 scope.go:117] "RemoveContainer" containerID="ad476bc1efb544be8d2c9fbe1af2f2f828a808bb6462da7de2e6cca2959f02de" Jan 26 13:19:58 crc kubenswrapper[4844]: I0126 13:19:58.990835 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Jan 26 13:19:58 crc kubenswrapper[4844]: I0126 13:19:58.990940 4844 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 13:19:58 crc kubenswrapper[4844]: I0126 13:19:58.991518 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-api-0" Jan 26 13:19:59 crc kubenswrapper[4844]: I0126 13:19:59.002528 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-58b8c47bc6-5s5z9" podStartSLOduration=8.002508483 podStartE2EDuration="8.002508483s" podCreationTimestamp="2026-01-26 13:19:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 13:19:58.987418748 +0000 UTC m=+2175.920786360" watchObservedRunningTime="2026-01-26 13:19:59.002508483 +0000 UTC m=+2175.935876105" Jan 26 13:19:59 crc kubenswrapper[4844]: I0126 13:19:59.034375 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad438e4d-9282-48b8-88c1-1f974bb26b5e-config-data" (OuterVolumeSpecName: "config-data") pod "ad438e4d-9282-48b8-88c1-1f974bb26b5e" (UID: "ad438e4d-9282-48b8-88c1-1f974bb26b5e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:19:59 crc kubenswrapper[4844]: I0126 13:19:59.061076 4844 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad438e4d-9282-48b8-88c1-1f974bb26b5e-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 13:19:59 crc kubenswrapper[4844]: I0126 13:19:59.074671 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 26 13:19:59 crc kubenswrapper[4844]: E0126 13:19:59.075081 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad438e4d-9282-48b8-88c1-1f974bb26b5e" containerName="proxy-httpd" Jan 26 13:19:59 crc kubenswrapper[4844]: I0126 13:19:59.075094 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad438e4d-9282-48b8-88c1-1f974bb26b5e" containerName="proxy-httpd" Jan 26 13:19:59 crc kubenswrapper[4844]: E0126 13:19:59.075110 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad438e4d-9282-48b8-88c1-1f974bb26b5e" containerName="ceilometer-notification-agent" Jan 26 13:19:59 crc kubenswrapper[4844]: I0126 13:19:59.075116 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad438e4d-9282-48b8-88c1-1f974bb26b5e" containerName="ceilometer-notification-agent" Jan 26 13:19:59 crc kubenswrapper[4844]: E0126 13:19:59.075126 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad438e4d-9282-48b8-88c1-1f974bb26b5e" containerName="sg-core" Jan 26 13:19:59 crc kubenswrapper[4844]: I0126 13:19:59.075133 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad438e4d-9282-48b8-88c1-1f974bb26b5e" containerName="sg-core" Jan 26 13:19:59 crc kubenswrapper[4844]: E0126 13:19:59.075166 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f82260f-cde4-4197-8718-d7adebadeddb" containerName="cinder-db-sync" Jan 26 13:19:59 crc kubenswrapper[4844]: I0126 13:19:59.075172 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f82260f-cde4-4197-8718-d7adebadeddb" containerName="cinder-db-sync" Jan 26 13:19:59 crc kubenswrapper[4844]: I0126 13:19:59.075341 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad438e4d-9282-48b8-88c1-1f974bb26b5e" containerName="proxy-httpd" Jan 26 13:19:59 crc kubenswrapper[4844]: I0126 13:19:59.075354 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f82260f-cde4-4197-8718-d7adebadeddb" containerName="cinder-db-sync" Jan 26 13:19:59 crc kubenswrapper[4844]: I0126 13:19:59.075364 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad438e4d-9282-48b8-88c1-1f974bb26b5e" containerName="sg-core" Jan 26 13:19:59 crc kubenswrapper[4844]: I0126 13:19:59.075374 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad438e4d-9282-48b8-88c1-1f974bb26b5e" containerName="ceilometer-notification-agent" Jan 26 13:19:59 crc kubenswrapper[4844]: I0126 13:19:59.076372 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 26 13:19:59 crc kubenswrapper[4844]: I0126 13:19:59.081025 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 26 13:19:59 crc kubenswrapper[4844]: I0126 13:19:59.081201 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-swxz2" Jan 26 13:19:59 crc kubenswrapper[4844]: I0126 13:19:59.082078 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 26 13:19:59 crc kubenswrapper[4844]: I0126 13:19:59.084341 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 26 13:19:59 crc kubenswrapper[4844]: I0126 13:19:59.112442 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 26 13:19:59 crc kubenswrapper[4844]: I0126 13:19:59.155422 4844 scope.go:117] "RemoveContainer" containerID="3a3bf17791c32d5fb5b785576ef455a7cc2d45fedf3ba47cb171731a20b10664" Jan 26 13:19:59 crc kubenswrapper[4844]: I0126 13:19:59.209649 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c5dd4c5cf-fhfbf"] Jan 26 13:19:59 crc kubenswrapper[4844]: I0126 13:19:59.233793 4844 scope.go:117] "RemoveContainer" containerID="282ef0f047b2f4b694df966e27dbe553b91659664164f94cf8c45a10a3267d7f" Jan 26 13:19:59 crc kubenswrapper[4844]: I0126 13:19:59.247398 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-58cb66d699-gww5n"] Jan 26 13:19:59 crc kubenswrapper[4844]: I0126 13:19:59.249226 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58cb66d699-gww5n" Jan 26 13:19:59 crc kubenswrapper[4844]: I0126 13:19:59.266416 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b28528a5-6d16-4775-89eb-5f0e00b4afd1-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"b28528a5-6d16-4775-89eb-5f0e00b4afd1\") " pod="openstack/cinder-scheduler-0" Jan 26 13:19:59 crc kubenswrapper[4844]: I0126 13:19:59.266476 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b28528a5-6d16-4775-89eb-5f0e00b4afd1-scripts\") pod \"cinder-scheduler-0\" (UID: \"b28528a5-6d16-4775-89eb-5f0e00b4afd1\") " pod="openstack/cinder-scheduler-0" Jan 26 13:19:59 crc kubenswrapper[4844]: I0126 13:19:59.266494 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b28528a5-6d16-4775-89eb-5f0e00b4afd1-config-data\") pod \"cinder-scheduler-0\" (UID: \"b28528a5-6d16-4775-89eb-5f0e00b4afd1\") " pod="openstack/cinder-scheduler-0" Jan 26 13:19:59 crc kubenswrapper[4844]: I0126 13:19:59.266549 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-284bj\" (UniqueName: \"kubernetes.io/projected/b28528a5-6d16-4775-89eb-5f0e00b4afd1-kube-api-access-284bj\") pod \"cinder-scheduler-0\" (UID: \"b28528a5-6d16-4775-89eb-5f0e00b4afd1\") " pod="openstack/cinder-scheduler-0" Jan 26 13:19:59 crc kubenswrapper[4844]: I0126 13:19:59.266614 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/b28528a5-6d16-4775-89eb-5f0e00b4afd1-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"b28528a5-6d16-4775-89eb-5f0e00b4afd1\") " pod="openstack/cinder-scheduler-0" Jan 26 13:19:59 crc kubenswrapper[4844]: I0126 13:19:59.266650 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b28528a5-6d16-4775-89eb-5f0e00b4afd1-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"b28528a5-6d16-4775-89eb-5f0e00b4afd1\") " pod="openstack/cinder-scheduler-0" Jan 26 13:19:59 crc kubenswrapper[4844]: I0126 13:19:59.270354 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58cb66d699-gww5n"] Jan 26 13:19:59 crc kubenswrapper[4844]: I0126 13:19:59.369684 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9db17c7b-8322-4488-a47b-e68d66597d6d-dns-svc\") pod \"dnsmasq-dns-58cb66d699-gww5n\" (UID: \"9db17c7b-8322-4488-a47b-e68d66597d6d\") " pod="openstack/dnsmasq-dns-58cb66d699-gww5n" Jan 26 13:19:59 crc kubenswrapper[4844]: I0126 13:19:59.369760 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-284bj\" (UniqueName: \"kubernetes.io/projected/b28528a5-6d16-4775-89eb-5f0e00b4afd1-kube-api-access-284bj\") pod \"cinder-scheduler-0\" (UID: \"b28528a5-6d16-4775-89eb-5f0e00b4afd1\") " pod="openstack/cinder-scheduler-0" Jan 26 13:19:59 crc kubenswrapper[4844]: I0126 13:19:59.369799 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9db17c7b-8322-4488-a47b-e68d66597d6d-ovsdbserver-sb\") pod \"dnsmasq-dns-58cb66d699-gww5n\" (UID: \"9db17c7b-8322-4488-a47b-e68d66597d6d\") " pod="openstack/dnsmasq-dns-58cb66d699-gww5n" Jan 26 13:19:59 crc kubenswrapper[4844]: I0126 13:19:59.369831 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9db17c7b-8322-4488-a47b-e68d66597d6d-dns-swift-storage-0\") pod \"dnsmasq-dns-58cb66d699-gww5n\" (UID: \"9db17c7b-8322-4488-a47b-e68d66597d6d\") " pod="openstack/dnsmasq-dns-58cb66d699-gww5n" Jan 26 13:19:59 crc kubenswrapper[4844]: I0126 13:19:59.369855 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b28528a5-6d16-4775-89eb-5f0e00b4afd1-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"b28528a5-6d16-4775-89eb-5f0e00b4afd1\") " pod="openstack/cinder-scheduler-0" Jan 26 13:19:59 crc kubenswrapper[4844]: I0126 13:19:59.369896 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b28528a5-6d16-4775-89eb-5f0e00b4afd1-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"b28528a5-6d16-4775-89eb-5f0e00b4afd1\") " pod="openstack/cinder-scheduler-0" Jan 26 13:19:59 crc kubenswrapper[4844]: I0126 13:19:59.369926 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9db17c7b-8322-4488-a47b-e68d66597d6d-config\") pod \"dnsmasq-dns-58cb66d699-gww5n\" (UID: \"9db17c7b-8322-4488-a47b-e68d66597d6d\") " pod="openstack/dnsmasq-dns-58cb66d699-gww5n" Jan 26 13:19:59 crc kubenswrapper[4844]: I0126 13:19:59.369947 4844 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b28528a5-6d16-4775-89eb-5f0e00b4afd1-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"b28528a5-6d16-4775-89eb-5f0e00b4afd1\") " pod="openstack/cinder-scheduler-0" Jan 26 13:19:59 crc kubenswrapper[4844]: I0126 13:19:59.369977 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9db17c7b-8322-4488-a47b-e68d66597d6d-ovsdbserver-nb\") pod \"dnsmasq-dns-58cb66d699-gww5n\" (UID: \"9db17c7b-8322-4488-a47b-e68d66597d6d\") " pod="openstack/dnsmasq-dns-58cb66d699-gww5n" Jan 26 13:19:59 crc kubenswrapper[4844]: I0126 13:19:59.369991 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7tqw\" (UniqueName: \"kubernetes.io/projected/9db17c7b-8322-4488-a47b-e68d66597d6d-kube-api-access-t7tqw\") pod \"dnsmasq-dns-58cb66d699-gww5n\" (UID: \"9db17c7b-8322-4488-a47b-e68d66597d6d\") " pod="openstack/dnsmasq-dns-58cb66d699-gww5n" Jan 26 13:19:59 crc kubenswrapper[4844]: I0126 13:19:59.370027 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b28528a5-6d16-4775-89eb-5f0e00b4afd1-scripts\") pod \"cinder-scheduler-0\" (UID: \"b28528a5-6d16-4775-89eb-5f0e00b4afd1\") " pod="openstack/cinder-scheduler-0" Jan 26 13:19:59 crc kubenswrapper[4844]: I0126 13:19:59.370044 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b28528a5-6d16-4775-89eb-5f0e00b4afd1-config-data\") pod \"cinder-scheduler-0\" (UID: \"b28528a5-6d16-4775-89eb-5f0e00b4afd1\") " pod="openstack/cinder-scheduler-0" Jan 26 13:19:59 crc kubenswrapper[4844]: I0126 13:19:59.370811 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b28528a5-6d16-4775-89eb-5f0e00b4afd1-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"b28528a5-6d16-4775-89eb-5f0e00b4afd1\") " pod="openstack/cinder-scheduler-0" Jan 26 13:19:59 crc kubenswrapper[4844]: I0126 13:19:59.387334 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b28528a5-6d16-4775-89eb-5f0e00b4afd1-scripts\") pod \"cinder-scheduler-0\" (UID: \"b28528a5-6d16-4775-89eb-5f0e00b4afd1\") " pod="openstack/cinder-scheduler-0" Jan 26 13:19:59 crc kubenswrapper[4844]: I0126 13:19:59.387397 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 26 13:19:59 crc kubenswrapper[4844]: I0126 13:19:59.388954 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 26 13:19:59 crc kubenswrapper[4844]: I0126 13:19:59.392966 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b28528a5-6d16-4775-89eb-5f0e00b4afd1-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"b28528a5-6d16-4775-89eb-5f0e00b4afd1\") " pod="openstack/cinder-scheduler-0" Jan 26 13:19:59 crc kubenswrapper[4844]: I0126 13:19:59.393152 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 26 13:19:59 crc kubenswrapper[4844]: I0126 13:19:59.403658 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b28528a5-6d16-4775-89eb-5f0e00b4afd1-config-data\") pod \"cinder-scheduler-0\" (UID: \"b28528a5-6d16-4775-89eb-5f0e00b4afd1\") " pod="openstack/cinder-scheduler-0" Jan 26 13:19:59 crc kubenswrapper[4844]: I0126 13:19:59.442223 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-284bj\" (UniqueName: \"kubernetes.io/projected/b28528a5-6d16-4775-89eb-5f0e00b4afd1-kube-api-access-284bj\") pod \"cinder-scheduler-0\" (UID: \"b28528a5-6d16-4775-89eb-5f0e00b4afd1\") " pod="openstack/cinder-scheduler-0" Jan 26 13:19:59 crc kubenswrapper[4844]: I0126 13:19:59.442755 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b28528a5-6d16-4775-89eb-5f0e00b4afd1-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"b28528a5-6d16-4775-89eb-5f0e00b4afd1\") " pod="openstack/cinder-scheduler-0" Jan 26 13:19:59 crc kubenswrapper[4844]: I0126 13:19:59.453776 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 13:19:59 crc kubenswrapper[4844]: I0126 13:19:59.477102 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d2ba6a95-767f-4589-8dc9-e124e9be4fb4-config-data-custom\") pod \"cinder-api-0\" (UID: \"d2ba6a95-767f-4589-8dc9-e124e9be4fb4\") " pod="openstack/cinder-api-0" Jan 26 13:19:59 crc kubenswrapper[4844]: I0126 13:19:59.477184 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2ba6a95-767f-4589-8dc9-e124e9be4fb4-scripts\") pod \"cinder-api-0\" (UID: \"d2ba6a95-767f-4589-8dc9-e124e9be4fb4\") " pod="openstack/cinder-api-0" Jan 26 13:19:59 crc kubenswrapper[4844]: I0126 13:19:59.477215 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d2ba6a95-767f-4589-8dc9-e124e9be4fb4-etc-machine-id\") pod \"cinder-api-0\" (UID: \"d2ba6a95-767f-4589-8dc9-e124e9be4fb4\") " pod="openstack/cinder-api-0" Jan 26 13:19:59 crc kubenswrapper[4844]: I0126 13:19:59.477247 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d2ba6a95-767f-4589-8dc9-e124e9be4fb4-logs\") pod \"cinder-api-0\" (UID: \"d2ba6a95-767f-4589-8dc9-e124e9be4fb4\") " pod="openstack/cinder-api-0" Jan 26 13:19:59 crc kubenswrapper[4844]: I0126 13:19:59.477281 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/9db17c7b-8322-4488-a47b-e68d66597d6d-ovsdbserver-sb\") pod \"dnsmasq-dns-58cb66d699-gww5n\" (UID: \"9db17c7b-8322-4488-a47b-e68d66597d6d\") " pod="openstack/dnsmasq-dns-58cb66d699-gww5n" Jan 26 13:19:59 crc kubenswrapper[4844]: I0126 13:19:59.477319 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9db17c7b-8322-4488-a47b-e68d66597d6d-dns-swift-storage-0\") pod \"dnsmasq-dns-58cb66d699-gww5n\" (UID: \"9db17c7b-8322-4488-a47b-e68d66597d6d\") " pod="openstack/dnsmasq-dns-58cb66d699-gww5n" Jan 26 13:19:59 crc kubenswrapper[4844]: I0126 13:19:59.477394 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9db17c7b-8322-4488-a47b-e68d66597d6d-config\") pod \"dnsmasq-dns-58cb66d699-gww5n\" (UID: \"9db17c7b-8322-4488-a47b-e68d66597d6d\") " pod="openstack/dnsmasq-dns-58cb66d699-gww5n" Jan 26 13:19:59 crc kubenswrapper[4844]: I0126 13:19:59.477430 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9db17c7b-8322-4488-a47b-e68d66597d6d-ovsdbserver-nb\") pod \"dnsmasq-dns-58cb66d699-gww5n\" (UID: \"9db17c7b-8322-4488-a47b-e68d66597d6d\") " pod="openstack/dnsmasq-dns-58cb66d699-gww5n" Jan 26 13:19:59 crc kubenswrapper[4844]: I0126 13:19:59.477452 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7tqw\" (UniqueName: \"kubernetes.io/projected/9db17c7b-8322-4488-a47b-e68d66597d6d-kube-api-access-t7tqw\") pod \"dnsmasq-dns-58cb66d699-gww5n\" (UID: \"9db17c7b-8322-4488-a47b-e68d66597d6d\") " pod="openstack/dnsmasq-dns-58cb66d699-gww5n" Jan 26 13:19:59 crc kubenswrapper[4844]: I0126 13:19:59.485753 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcjns\" (UniqueName: \"kubernetes.io/projected/d2ba6a95-767f-4589-8dc9-e124e9be4fb4-kube-api-access-xcjns\") pod \"cinder-api-0\" (UID: \"d2ba6a95-767f-4589-8dc9-e124e9be4fb4\") " pod="openstack/cinder-api-0" Jan 26 13:19:59 crc kubenswrapper[4844]: I0126 13:19:59.485870 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2ba6a95-767f-4589-8dc9-e124e9be4fb4-config-data\") pod \"cinder-api-0\" (UID: \"d2ba6a95-767f-4589-8dc9-e124e9be4fb4\") " pod="openstack/cinder-api-0" Jan 26 13:19:59 crc kubenswrapper[4844]: I0126 13:19:59.485891 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2ba6a95-767f-4589-8dc9-e124e9be4fb4-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"d2ba6a95-767f-4589-8dc9-e124e9be4fb4\") " pod="openstack/cinder-api-0" Jan 26 13:19:59 crc kubenswrapper[4844]: I0126 13:19:59.485930 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9db17c7b-8322-4488-a47b-e68d66597d6d-dns-svc\") pod \"dnsmasq-dns-58cb66d699-gww5n\" (UID: \"9db17c7b-8322-4488-a47b-e68d66597d6d\") " pod="openstack/dnsmasq-dns-58cb66d699-gww5n" Jan 26 13:19:59 crc kubenswrapper[4844]: I0126 13:19:59.489342 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9db17c7b-8322-4488-a47b-e68d66597d6d-ovsdbserver-sb\") pod 
\"dnsmasq-dns-58cb66d699-gww5n\" (UID: \"9db17c7b-8322-4488-a47b-e68d66597d6d\") " pod="openstack/dnsmasq-dns-58cb66d699-gww5n" Jan 26 13:19:59 crc kubenswrapper[4844]: I0126 13:19:59.489865 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9db17c7b-8322-4488-a47b-e68d66597d6d-dns-swift-storage-0\") pod \"dnsmasq-dns-58cb66d699-gww5n\" (UID: \"9db17c7b-8322-4488-a47b-e68d66597d6d\") " pod="openstack/dnsmasq-dns-58cb66d699-gww5n" Jan 26 13:19:59 crc kubenswrapper[4844]: I0126 13:19:59.490346 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9db17c7b-8322-4488-a47b-e68d66597d6d-config\") pod \"dnsmasq-dns-58cb66d699-gww5n\" (UID: \"9db17c7b-8322-4488-a47b-e68d66597d6d\") " pod="openstack/dnsmasq-dns-58cb66d699-gww5n" Jan 26 13:19:59 crc kubenswrapper[4844]: I0126 13:19:59.490846 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9db17c7b-8322-4488-a47b-e68d66597d6d-ovsdbserver-nb\") pod \"dnsmasq-dns-58cb66d699-gww5n\" (UID: \"9db17c7b-8322-4488-a47b-e68d66597d6d\") " pod="openstack/dnsmasq-dns-58cb66d699-gww5n" Jan 26 13:19:59 crc kubenswrapper[4844]: I0126 13:19:59.493200 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9db17c7b-8322-4488-a47b-e68d66597d6d-dns-svc\") pod \"dnsmasq-dns-58cb66d699-gww5n\" (UID: \"9db17c7b-8322-4488-a47b-e68d66597d6d\") " pod="openstack/dnsmasq-dns-58cb66d699-gww5n" Jan 26 13:19:59 crc kubenswrapper[4844]: I0126 13:19:59.502730 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 26 13:19:59 crc kubenswrapper[4844]: I0126 13:19:59.522370 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7tqw\" (UniqueName: \"kubernetes.io/projected/9db17c7b-8322-4488-a47b-e68d66597d6d-kube-api-access-t7tqw\") pod \"dnsmasq-dns-58cb66d699-gww5n\" (UID: \"9db17c7b-8322-4488-a47b-e68d66597d6d\") " pod="openstack/dnsmasq-dns-58cb66d699-gww5n" Jan 26 13:19:59 crc kubenswrapper[4844]: I0126 13:19:59.557194 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 26 13:19:59 crc kubenswrapper[4844]: I0126 13:19:59.588814 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d2ba6a95-767f-4589-8dc9-e124e9be4fb4-etc-machine-id\") pod \"cinder-api-0\" (UID: \"d2ba6a95-767f-4589-8dc9-e124e9be4fb4\") " pod="openstack/cinder-api-0" Jan 26 13:19:59 crc kubenswrapper[4844]: I0126 13:19:59.588881 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d2ba6a95-767f-4589-8dc9-e124e9be4fb4-logs\") pod \"cinder-api-0\" (UID: \"d2ba6a95-767f-4589-8dc9-e124e9be4fb4\") " pod="openstack/cinder-api-0" Jan 26 13:19:59 crc kubenswrapper[4844]: I0126 13:19:59.589062 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xcjns\" (UniqueName: \"kubernetes.io/projected/d2ba6a95-767f-4589-8dc9-e124e9be4fb4-kube-api-access-xcjns\") pod \"cinder-api-0\" (UID: \"d2ba6a95-767f-4589-8dc9-e124e9be4fb4\") " pod="openstack/cinder-api-0" Jan 26 13:19:59 crc kubenswrapper[4844]: I0126 13:19:59.589127 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/d2ba6a95-767f-4589-8dc9-e124e9be4fb4-config-data\") pod \"cinder-api-0\" (UID: \"d2ba6a95-767f-4589-8dc9-e124e9be4fb4\") " pod="openstack/cinder-api-0" Jan 26 13:19:59 crc kubenswrapper[4844]: I0126 13:19:59.589169 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2ba6a95-767f-4589-8dc9-e124e9be4fb4-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"d2ba6a95-767f-4589-8dc9-e124e9be4fb4\") " pod="openstack/cinder-api-0" Jan 26 13:19:59 crc kubenswrapper[4844]: I0126 13:19:59.589216 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d2ba6a95-767f-4589-8dc9-e124e9be4fb4-config-data-custom\") pod \"cinder-api-0\" (UID: \"d2ba6a95-767f-4589-8dc9-e124e9be4fb4\") " pod="openstack/cinder-api-0" Jan 26 13:19:59 crc kubenswrapper[4844]: I0126 13:19:59.589255 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2ba6a95-767f-4589-8dc9-e124e9be4fb4-scripts\") pod \"cinder-api-0\" (UID: \"d2ba6a95-767f-4589-8dc9-e124e9be4fb4\") " pod="openstack/cinder-api-0" Jan 26 13:19:59 crc kubenswrapper[4844]: I0126 13:19:59.591500 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d2ba6a95-767f-4589-8dc9-e124e9be4fb4-etc-machine-id\") pod \"cinder-api-0\" (UID: \"d2ba6a95-767f-4589-8dc9-e124e9be4fb4\") " pod="openstack/cinder-api-0" Jan 26 13:19:59 crc kubenswrapper[4844]: I0126 13:19:59.592097 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d2ba6a95-767f-4589-8dc9-e124e9be4fb4-logs\") pod \"cinder-api-0\" (UID: \"d2ba6a95-767f-4589-8dc9-e124e9be4fb4\") " pod="openstack/cinder-api-0" Jan 26 13:19:59 crc kubenswrapper[4844]: I0126 13:19:59.601506 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2ba6a95-767f-4589-8dc9-e124e9be4fb4-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"d2ba6a95-767f-4589-8dc9-e124e9be4fb4\") " pod="openstack/cinder-api-0" Jan 26 13:19:59 crc kubenswrapper[4844]: I0126 13:19:59.602842 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2ba6a95-767f-4589-8dc9-e124e9be4fb4-scripts\") pod \"cinder-api-0\" (UID: \"d2ba6a95-767f-4589-8dc9-e124e9be4fb4\") " pod="openstack/cinder-api-0" Jan 26 13:19:59 crc kubenswrapper[4844]: I0126 13:19:59.604095 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d2ba6a95-767f-4589-8dc9-e124e9be4fb4-config-data-custom\") pod \"cinder-api-0\" (UID: \"d2ba6a95-767f-4589-8dc9-e124e9be4fb4\") " pod="openstack/cinder-api-0" Jan 26 13:19:59 crc kubenswrapper[4844]: I0126 13:19:59.605296 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2ba6a95-767f-4589-8dc9-e124e9be4fb4-config-data\") pod \"cinder-api-0\" (UID: \"d2ba6a95-767f-4589-8dc9-e124e9be4fb4\") " pod="openstack/cinder-api-0" Jan 26 13:19:59 crc kubenswrapper[4844]: I0126 13:19:59.617139 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xcjns\" (UniqueName: 
\"kubernetes.io/projected/d2ba6a95-767f-4589-8dc9-e124e9be4fb4-kube-api-access-xcjns\") pod \"cinder-api-0\" (UID: \"d2ba6a95-767f-4589-8dc9-e124e9be4fb4\") " pod="openstack/cinder-api-0" Jan 26 13:19:59 crc kubenswrapper[4844]: I0126 13:19:59.620933 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58cb66d699-gww5n" Jan 26 13:19:59 crc kubenswrapper[4844]: I0126 13:19:59.627018 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 26 13:19:59 crc kubenswrapper[4844]: I0126 13:19:59.633221 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 13:19:59 crc kubenswrapper[4844]: I0126 13:19:59.642199 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 26 13:19:59 crc kubenswrapper[4844]: I0126 13:19:59.643121 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 26 13:19:59 crc kubenswrapper[4844]: I0126 13:19:59.648712 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 13:19:59 crc kubenswrapper[4844]: I0126 13:19:59.712900 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 26 13:19:59 crc kubenswrapper[4844]: I0126 13:19:59.750369 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 26 13:19:59 crc kubenswrapper[4844]: I0126 13:19:59.778953 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-q74n8" Jan 26 13:19:59 crc kubenswrapper[4844]: I0126 13:19:59.797645 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/388147f6-5b13-4111-9d1f-fe317038852d-log-httpd\") pod \"ceilometer-0\" (UID: \"388147f6-5b13-4111-9d1f-fe317038852d\") " pod="openstack/ceilometer-0" Jan 26 13:19:59 crc kubenswrapper[4844]: I0126 13:19:59.797747 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/388147f6-5b13-4111-9d1f-fe317038852d-scripts\") pod \"ceilometer-0\" (UID: \"388147f6-5b13-4111-9d1f-fe317038852d\") " pod="openstack/ceilometer-0" Jan 26 13:19:59 crc kubenswrapper[4844]: I0126 13:19:59.797769 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/388147f6-5b13-4111-9d1f-fe317038852d-config-data\") pod \"ceilometer-0\" (UID: \"388147f6-5b13-4111-9d1f-fe317038852d\") " pod="openstack/ceilometer-0" Jan 26 13:19:59 crc kubenswrapper[4844]: I0126 13:19:59.797801 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/388147f6-5b13-4111-9d1f-fe317038852d-run-httpd\") pod \"ceilometer-0\" (UID: \"388147f6-5b13-4111-9d1f-fe317038852d\") " pod="openstack/ceilometer-0" Jan 26 13:19:59 crc kubenswrapper[4844]: I0126 13:19:59.797845 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/388147f6-5b13-4111-9d1f-fe317038852d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"388147f6-5b13-4111-9d1f-fe317038852d\") " pod="openstack/ceilometer-0" Jan 26 13:19:59 crc 
kubenswrapper[4844]: I0126 13:19:59.797863 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6st5\" (UniqueName: \"kubernetes.io/projected/388147f6-5b13-4111-9d1f-fe317038852d-kube-api-access-t6st5\") pod \"ceilometer-0\" (UID: \"388147f6-5b13-4111-9d1f-fe317038852d\") " pod="openstack/ceilometer-0" Jan 26 13:19:59 crc kubenswrapper[4844]: I0126 13:19:59.797886 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/388147f6-5b13-4111-9d1f-fe317038852d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"388147f6-5b13-4111-9d1f-fe317038852d\") " pod="openstack/ceilometer-0" Jan 26 13:19:59 crc kubenswrapper[4844]: I0126 13:19:59.901256 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4bdef7de-9499-45b9-b41e-a59882aa4423-config\") pod \"4bdef7de-9499-45b9-b41e-a59882aa4423\" (UID: \"4bdef7de-9499-45b9-b41e-a59882aa4423\") " Jan 26 13:19:59 crc kubenswrapper[4844]: I0126 13:19:59.901326 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bdef7de-9499-45b9-b41e-a59882aa4423-combined-ca-bundle\") pod \"4bdef7de-9499-45b9-b41e-a59882aa4423\" (UID: \"4bdef7de-9499-45b9-b41e-a59882aa4423\") " Jan 26 13:19:59 crc kubenswrapper[4844]: I0126 13:19:59.901480 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-622zd\" (UniqueName: \"kubernetes.io/projected/4bdef7de-9499-45b9-b41e-a59882aa4423-kube-api-access-622zd\") pod \"4bdef7de-9499-45b9-b41e-a59882aa4423\" (UID: \"4bdef7de-9499-45b9-b41e-a59882aa4423\") " Jan 26 13:19:59 crc kubenswrapper[4844]: I0126 13:19:59.901937 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/388147f6-5b13-4111-9d1f-fe317038852d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"388147f6-5b13-4111-9d1f-fe317038852d\") " pod="openstack/ceilometer-0" Jan 26 13:19:59 crc kubenswrapper[4844]: I0126 13:19:59.901988 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6st5\" (UniqueName: \"kubernetes.io/projected/388147f6-5b13-4111-9d1f-fe317038852d-kube-api-access-t6st5\") pod \"ceilometer-0\" (UID: \"388147f6-5b13-4111-9d1f-fe317038852d\") " pod="openstack/ceilometer-0" Jan 26 13:19:59 crc kubenswrapper[4844]: I0126 13:19:59.902026 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/388147f6-5b13-4111-9d1f-fe317038852d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"388147f6-5b13-4111-9d1f-fe317038852d\") " pod="openstack/ceilometer-0" Jan 26 13:19:59 crc kubenswrapper[4844]: I0126 13:19:59.902116 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/388147f6-5b13-4111-9d1f-fe317038852d-log-httpd\") pod \"ceilometer-0\" (UID: \"388147f6-5b13-4111-9d1f-fe317038852d\") " pod="openstack/ceilometer-0" Jan 26 13:19:59 crc kubenswrapper[4844]: I0126 13:19:59.902221 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/388147f6-5b13-4111-9d1f-fe317038852d-scripts\") pod \"ceilometer-0\" (UID: 
\"388147f6-5b13-4111-9d1f-fe317038852d\") " pod="openstack/ceilometer-0" Jan 26 13:19:59 crc kubenswrapper[4844]: I0126 13:19:59.902249 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/388147f6-5b13-4111-9d1f-fe317038852d-config-data\") pod \"ceilometer-0\" (UID: \"388147f6-5b13-4111-9d1f-fe317038852d\") " pod="openstack/ceilometer-0" Jan 26 13:19:59 crc kubenswrapper[4844]: I0126 13:19:59.902323 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/388147f6-5b13-4111-9d1f-fe317038852d-run-httpd\") pod \"ceilometer-0\" (UID: \"388147f6-5b13-4111-9d1f-fe317038852d\") " pod="openstack/ceilometer-0" Jan 26 13:19:59 crc kubenswrapper[4844]: I0126 13:19:59.906217 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/388147f6-5b13-4111-9d1f-fe317038852d-run-httpd\") pod \"ceilometer-0\" (UID: \"388147f6-5b13-4111-9d1f-fe317038852d\") " pod="openstack/ceilometer-0" Jan 26 13:19:59 crc kubenswrapper[4844]: I0126 13:19:59.913336 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/388147f6-5b13-4111-9d1f-fe317038852d-log-httpd\") pod \"ceilometer-0\" (UID: \"388147f6-5b13-4111-9d1f-fe317038852d\") " pod="openstack/ceilometer-0" Jan 26 13:19:59 crc kubenswrapper[4844]: I0126 13:19:59.934211 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/388147f6-5b13-4111-9d1f-fe317038852d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"388147f6-5b13-4111-9d1f-fe317038852d\") " pod="openstack/ceilometer-0" Jan 26 13:19:59 crc kubenswrapper[4844]: I0126 13:19:59.938556 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/388147f6-5b13-4111-9d1f-fe317038852d-config-data\") pod \"ceilometer-0\" (UID: \"388147f6-5b13-4111-9d1f-fe317038852d\") " pod="openstack/ceilometer-0" Jan 26 13:19:59 crc kubenswrapper[4844]: I0126 13:19:59.945529 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/388147f6-5b13-4111-9d1f-fe317038852d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"388147f6-5b13-4111-9d1f-fe317038852d\") " pod="openstack/ceilometer-0" Jan 26 13:19:59 crc kubenswrapper[4844]: I0126 13:19:59.945688 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bdef7de-9499-45b9-b41e-a59882aa4423-kube-api-access-622zd" (OuterVolumeSpecName: "kube-api-access-622zd") pod "4bdef7de-9499-45b9-b41e-a59882aa4423" (UID: "4bdef7de-9499-45b9-b41e-a59882aa4423"). InnerVolumeSpecName "kube-api-access-622zd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:19:59 crc kubenswrapper[4844]: I0126 13:19:59.946013 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/388147f6-5b13-4111-9d1f-fe317038852d-scripts\") pod \"ceilometer-0\" (UID: \"388147f6-5b13-4111-9d1f-fe317038852d\") " pod="openstack/ceilometer-0" Jan 26 13:19:59 crc kubenswrapper[4844]: I0126 13:19:59.948787 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6st5\" (UniqueName: \"kubernetes.io/projected/388147f6-5b13-4111-9d1f-fe317038852d-kube-api-access-t6st5\") pod \"ceilometer-0\" (UID: \"388147f6-5b13-4111-9d1f-fe317038852d\") " pod="openstack/ceilometer-0" Jan 26 13:20:00 crc kubenswrapper[4844]: I0126 13:20:00.025703 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bdef7de-9499-45b9-b41e-a59882aa4423-config" (OuterVolumeSpecName: "config") pod "4bdef7de-9499-45b9-b41e-a59882aa4423" (UID: "4bdef7de-9499-45b9-b41e-a59882aa4423"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:20:00 crc kubenswrapper[4844]: I0126 13:20:00.063792 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-622zd\" (UniqueName: \"kubernetes.io/projected/4bdef7de-9499-45b9-b41e-a59882aa4423-kube-api-access-622zd\") on node \"crc\" DevicePath \"\"" Jan 26 13:20:00 crc kubenswrapper[4844]: I0126 13:20:00.063823 4844 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/4bdef7de-9499-45b9-b41e-a59882aa4423-config\") on node \"crc\" DevicePath \"\"" Jan 26 13:20:00 crc kubenswrapper[4844]: I0126 13:20:00.075095 4844 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/watcher-api-0" podUID="33ecc4c6-320a-41d8-a7c2-608bdda02b0a" containerName="watcher-api-log" probeResult="failure" output="Get \"https://10.217.0.167:9322/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 13:20:00 crc kubenswrapper[4844]: I0126 13:20:00.089387 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 13:20:00 crc kubenswrapper[4844]: I0126 13:20:00.113238 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5757498f95-q5d7h" event={"ID":"f64e9d9a-09d6-4843-a829-d4fbdcaadb65","Type":"ContainerStarted","Data":"d2fdc123ad65992c60a24c0f6f1bec063b21cebea1d2b6a5d80efe828c11942b"} Jan 26 13:20:00 crc kubenswrapper[4844]: I0126 13:20:00.124721 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-q74n8" event={"ID":"4bdef7de-9499-45b9-b41e-a59882aa4423","Type":"ContainerDied","Data":"9c6dc9b7f0467e4777f9265925fd4cfabe838b26c5879cc8d43ae8f0a5d4a2ac"} Jan 26 13:20:00 crc kubenswrapper[4844]: I0126 13:20:00.124761 4844 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9c6dc9b7f0467e4777f9265925fd4cfabe838b26c5879cc8d43ae8f0a5d4a2ac" Jan 26 13:20:00 crc kubenswrapper[4844]: I0126 13:20:00.124851 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-q74n8" Jan 26 13:20:00 crc kubenswrapper[4844]: I0126 13:20:00.129115 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58cb66d699-gww5n"] Jan 26 13:20:00 crc kubenswrapper[4844]: I0126 13:20:00.145030 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7b9b459575-mvgv5"] Jan 26 13:20:00 crc kubenswrapper[4844]: E0126 13:20:00.145405 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bdef7de-9499-45b9-b41e-a59882aa4423" containerName="neutron-db-sync" Jan 26 13:20:00 crc kubenswrapper[4844]: I0126 13:20:00.145419 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bdef7de-9499-45b9-b41e-a59882aa4423" containerName="neutron-db-sync" Jan 26 13:20:00 crc kubenswrapper[4844]: I0126 13:20:00.145620 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bdef7de-9499-45b9-b41e-a59882aa4423" containerName="neutron-db-sync" Jan 26 13:20:00 crc kubenswrapper[4844]: I0126 13:20:00.146539 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b9b459575-mvgv5" Jan 26 13:20:00 crc kubenswrapper[4844]: I0126 13:20:00.154440 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bdef7de-9499-45b9-b41e-a59882aa4423-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4bdef7de-9499-45b9-b41e-a59882aa4423" (UID: "4bdef7de-9499-45b9-b41e-a59882aa4423"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:20:00 crc kubenswrapper[4844]: I0126 13:20:00.166960 4844 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bdef7de-9499-45b9-b41e-a59882aa4423-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 13:20:00 crc kubenswrapper[4844]: I0126 13:20:00.171063 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-688b4ff97d-t5mvg" event={"ID":"56958656-f467-485d-a3b6-9ecacb7edfeb","Type":"ContainerStarted","Data":"fd42cd3cfaa9f05eed6af9468316e6154b557276954dee2ecb6c96db0038a3bd"} Jan 26 13:20:00 crc kubenswrapper[4844]: I0126 13:20:00.183431 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7b9b459575-mvgv5"] Jan 26 13:20:00 crc kubenswrapper[4844]: I0126 13:20:00.184176 4844 generic.go:334] "Generic (PLEG): container finished" podID="0b53d3b2-56e9-427c-8dcd-e5487cecc4f9" containerID="b5e373fffb5472440ecabbaaffb0660f5b1e6bfe3a354b171f3436e9f8a16ba5" exitCode=0 Jan 26 13:20:00 crc kubenswrapper[4844]: I0126 13:20:00.184222 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nw6hp" event={"ID":"0b53d3b2-56e9-427c-8dcd-e5487cecc4f9","Type":"ContainerDied","Data":"b5e373fffb5472440ecabbaaffb0660f5b1e6bfe3a354b171f3436e9f8a16ba5"} Jan 26 13:20:00 crc kubenswrapper[4844]: I0126 13:20:00.188566 4844 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 13:20:00 crc kubenswrapper[4844]: I0126 13:20:00.223477 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-5757498f95-q5d7h" podStartSLOduration=9.805890383 podStartE2EDuration="12.2234393s" podCreationTimestamp="2026-01-26 13:19:48 +0000 UTC" firstStartedPulling="2026-01-26 13:19:55.954548066 +0000 UTC m=+2172.887915678" lastFinishedPulling="2026-01-26 13:19:58.372096983 +0000 UTC 
m=+2175.305464595" observedRunningTime="2026-01-26 13:20:00.154014461 +0000 UTC m=+2177.087382073" watchObservedRunningTime="2026-01-26 13:20:00.2234393 +0000 UTC m=+2177.156806912" Jan 26 13:20:00 crc kubenswrapper[4844]: I0126 13:20:00.224974 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-bb4bbcbbd-hnxlf"] Jan 26 13:20:00 crc kubenswrapper[4844]: I0126 13:20:00.226577 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-bb4bbcbbd-hnxlf" Jan 26 13:20:00 crc kubenswrapper[4844]: I0126 13:20:00.249008 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Jan 26 13:20:00 crc kubenswrapper[4844]: I0126 13:20:00.253661 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-bb4bbcbbd-hnxlf"] Jan 26 13:20:00 crc kubenswrapper[4844]: I0126 13:20:00.277655 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/374b2eea-9304-4d29-8cbf-7c0702f2fce8-ovsdbserver-nb\") pod \"dnsmasq-dns-7b9b459575-mvgv5\" (UID: \"374b2eea-9304-4d29-8cbf-7c0702f2fce8\") " pod="openstack/dnsmasq-dns-7b9b459575-mvgv5" Jan 26 13:20:00 crc kubenswrapper[4844]: I0126 13:20:00.277767 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/013c2624-05ec-49ef-85e2-5f5e155ee687-config\") pod \"neutron-bb4bbcbbd-hnxlf\" (UID: \"013c2624-05ec-49ef-85e2-5f5e155ee687\") " pod="openstack/neutron-bb4bbcbbd-hnxlf" Jan 26 13:20:00 crc kubenswrapper[4844]: I0126 13:20:00.277856 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/374b2eea-9304-4d29-8cbf-7c0702f2fce8-ovsdbserver-sb\") pod \"dnsmasq-dns-7b9b459575-mvgv5\" (UID: \"374b2eea-9304-4d29-8cbf-7c0702f2fce8\") " pod="openstack/dnsmasq-dns-7b9b459575-mvgv5" Jan 26 13:20:00 crc kubenswrapper[4844]: I0126 13:20:00.277903 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/013c2624-05ec-49ef-85e2-5f5e155ee687-combined-ca-bundle\") pod \"neutron-bb4bbcbbd-hnxlf\" (UID: \"013c2624-05ec-49ef-85e2-5f5e155ee687\") " pod="openstack/neutron-bb4bbcbbd-hnxlf" Jan 26 13:20:00 crc kubenswrapper[4844]: I0126 13:20:00.277943 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/374b2eea-9304-4d29-8cbf-7c0702f2fce8-dns-svc\") pod \"dnsmasq-dns-7b9b459575-mvgv5\" (UID: \"374b2eea-9304-4d29-8cbf-7c0702f2fce8\") " pod="openstack/dnsmasq-dns-7b9b459575-mvgv5" Jan 26 13:20:00 crc kubenswrapper[4844]: I0126 13:20:00.277969 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/374b2eea-9304-4d29-8cbf-7c0702f2fce8-dns-swift-storage-0\") pod \"dnsmasq-dns-7b9b459575-mvgv5\" (UID: \"374b2eea-9304-4d29-8cbf-7c0702f2fce8\") " pod="openstack/dnsmasq-dns-7b9b459575-mvgv5" Jan 26 13:20:00 crc kubenswrapper[4844]: I0126 13:20:00.277990 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6gq5\" (UniqueName: \"kubernetes.io/projected/374b2eea-9304-4d29-8cbf-7c0702f2fce8-kube-api-access-s6gq5\") pod 
\"dnsmasq-dns-7b9b459575-mvgv5\" (UID: \"374b2eea-9304-4d29-8cbf-7c0702f2fce8\") " pod="openstack/dnsmasq-dns-7b9b459575-mvgv5" Jan 26 13:20:00 crc kubenswrapper[4844]: I0126 13:20:00.278010 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/013c2624-05ec-49ef-85e2-5f5e155ee687-httpd-config\") pod \"neutron-bb4bbcbbd-hnxlf\" (UID: \"013c2624-05ec-49ef-85e2-5f5e155ee687\") " pod="openstack/neutron-bb4bbcbbd-hnxlf" Jan 26 13:20:00 crc kubenswrapper[4844]: I0126 13:20:00.278051 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/013c2624-05ec-49ef-85e2-5f5e155ee687-ovndb-tls-certs\") pod \"neutron-bb4bbcbbd-hnxlf\" (UID: \"013c2624-05ec-49ef-85e2-5f5e155ee687\") " pod="openstack/neutron-bb4bbcbbd-hnxlf" Jan 26 13:20:00 crc kubenswrapper[4844]: I0126 13:20:00.278073 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvbvf\" (UniqueName: \"kubernetes.io/projected/013c2624-05ec-49ef-85e2-5f5e155ee687-kube-api-access-fvbvf\") pod \"neutron-bb4bbcbbd-hnxlf\" (UID: \"013c2624-05ec-49ef-85e2-5f5e155ee687\") " pod="openstack/neutron-bb4bbcbbd-hnxlf" Jan 26 13:20:00 crc kubenswrapper[4844]: I0126 13:20:00.278119 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/374b2eea-9304-4d29-8cbf-7c0702f2fce8-config\") pod \"dnsmasq-dns-7b9b459575-mvgv5\" (UID: \"374b2eea-9304-4d29-8cbf-7c0702f2fce8\") " pod="openstack/dnsmasq-dns-7b9b459575-mvgv5" Jan 26 13:20:00 crc kubenswrapper[4844]: I0126 13:20:00.336464 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-688b4ff97d-t5mvg" podStartSLOduration=10.340763534 podStartE2EDuration="12.336446502s" podCreationTimestamp="2026-01-26 13:19:48 +0000 UTC" firstStartedPulling="2026-01-26 13:19:56.376392544 +0000 UTC m=+2173.309760156" lastFinishedPulling="2026-01-26 13:19:58.372075512 +0000 UTC m=+2175.305443124" observedRunningTime="2026-01-26 13:20:00.284741762 +0000 UTC m=+2177.218109374" watchObservedRunningTime="2026-01-26 13:20:00.336446502 +0000 UTC m=+2177.269814114" Jan 26 13:20:00 crc kubenswrapper[4844]: I0126 13:20:00.383758 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/013c2624-05ec-49ef-85e2-5f5e155ee687-config\") pod \"neutron-bb4bbcbbd-hnxlf\" (UID: \"013c2624-05ec-49ef-85e2-5f5e155ee687\") " pod="openstack/neutron-bb4bbcbbd-hnxlf" Jan 26 13:20:00 crc kubenswrapper[4844]: I0126 13:20:00.384165 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/374b2eea-9304-4d29-8cbf-7c0702f2fce8-ovsdbserver-sb\") pod \"dnsmasq-dns-7b9b459575-mvgv5\" (UID: \"374b2eea-9304-4d29-8cbf-7c0702f2fce8\") " pod="openstack/dnsmasq-dns-7b9b459575-mvgv5" Jan 26 13:20:00 crc kubenswrapper[4844]: I0126 13:20:00.384224 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/013c2624-05ec-49ef-85e2-5f5e155ee687-combined-ca-bundle\") pod \"neutron-bb4bbcbbd-hnxlf\" (UID: \"013c2624-05ec-49ef-85e2-5f5e155ee687\") " pod="openstack/neutron-bb4bbcbbd-hnxlf" Jan 26 13:20:00 crc kubenswrapper[4844]: I0126 
13:20:00.384260 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/374b2eea-9304-4d29-8cbf-7c0702f2fce8-dns-svc\") pod \"dnsmasq-dns-7b9b459575-mvgv5\" (UID: \"374b2eea-9304-4d29-8cbf-7c0702f2fce8\") " pod="openstack/dnsmasq-dns-7b9b459575-mvgv5" Jan 26 13:20:00 crc kubenswrapper[4844]: I0126 13:20:00.384283 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/374b2eea-9304-4d29-8cbf-7c0702f2fce8-dns-swift-storage-0\") pod \"dnsmasq-dns-7b9b459575-mvgv5\" (UID: \"374b2eea-9304-4d29-8cbf-7c0702f2fce8\") " pod="openstack/dnsmasq-dns-7b9b459575-mvgv5" Jan 26 13:20:00 crc kubenswrapper[4844]: I0126 13:20:00.384308 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6gq5\" (UniqueName: \"kubernetes.io/projected/374b2eea-9304-4d29-8cbf-7c0702f2fce8-kube-api-access-s6gq5\") pod \"dnsmasq-dns-7b9b459575-mvgv5\" (UID: \"374b2eea-9304-4d29-8cbf-7c0702f2fce8\") " pod="openstack/dnsmasq-dns-7b9b459575-mvgv5" Jan 26 13:20:00 crc kubenswrapper[4844]: I0126 13:20:00.384331 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/013c2624-05ec-49ef-85e2-5f5e155ee687-httpd-config\") pod \"neutron-bb4bbcbbd-hnxlf\" (UID: \"013c2624-05ec-49ef-85e2-5f5e155ee687\") " pod="openstack/neutron-bb4bbcbbd-hnxlf" Jan 26 13:20:00 crc kubenswrapper[4844]: I0126 13:20:00.384363 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/013c2624-05ec-49ef-85e2-5f5e155ee687-ovndb-tls-certs\") pod \"neutron-bb4bbcbbd-hnxlf\" (UID: \"013c2624-05ec-49ef-85e2-5f5e155ee687\") " pod="openstack/neutron-bb4bbcbbd-hnxlf" Jan 26 13:20:00 crc kubenswrapper[4844]: I0126 13:20:00.384388 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvbvf\" (UniqueName: \"kubernetes.io/projected/013c2624-05ec-49ef-85e2-5f5e155ee687-kube-api-access-fvbvf\") pod \"neutron-bb4bbcbbd-hnxlf\" (UID: \"013c2624-05ec-49ef-85e2-5f5e155ee687\") " pod="openstack/neutron-bb4bbcbbd-hnxlf" Jan 26 13:20:00 crc kubenswrapper[4844]: I0126 13:20:00.384449 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/374b2eea-9304-4d29-8cbf-7c0702f2fce8-config\") pod \"dnsmasq-dns-7b9b459575-mvgv5\" (UID: \"374b2eea-9304-4d29-8cbf-7c0702f2fce8\") " pod="openstack/dnsmasq-dns-7b9b459575-mvgv5" Jan 26 13:20:00 crc kubenswrapper[4844]: I0126 13:20:00.384520 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/374b2eea-9304-4d29-8cbf-7c0702f2fce8-ovsdbserver-nb\") pod \"dnsmasq-dns-7b9b459575-mvgv5\" (UID: \"374b2eea-9304-4d29-8cbf-7c0702f2fce8\") " pod="openstack/dnsmasq-dns-7b9b459575-mvgv5" Jan 26 13:20:00 crc kubenswrapper[4844]: I0126 13:20:00.386196 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/374b2eea-9304-4d29-8cbf-7c0702f2fce8-ovsdbserver-nb\") pod \"dnsmasq-dns-7b9b459575-mvgv5\" (UID: \"374b2eea-9304-4d29-8cbf-7c0702f2fce8\") " pod="openstack/dnsmasq-dns-7b9b459575-mvgv5" Jan 26 13:20:00 crc kubenswrapper[4844]: I0126 13:20:00.388003 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/374b2eea-9304-4d29-8cbf-7c0702f2fce8-ovsdbserver-sb\") pod \"dnsmasq-dns-7b9b459575-mvgv5\" (UID: \"374b2eea-9304-4d29-8cbf-7c0702f2fce8\") " pod="openstack/dnsmasq-dns-7b9b459575-mvgv5" Jan 26 13:20:00 crc kubenswrapper[4844]: I0126 13:20:00.388328 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/374b2eea-9304-4d29-8cbf-7c0702f2fce8-dns-swift-storage-0\") pod \"dnsmasq-dns-7b9b459575-mvgv5\" (UID: \"374b2eea-9304-4d29-8cbf-7c0702f2fce8\") " pod="openstack/dnsmasq-dns-7b9b459575-mvgv5" Jan 26 13:20:00 crc kubenswrapper[4844]: I0126 13:20:00.390451 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/374b2eea-9304-4d29-8cbf-7c0702f2fce8-dns-svc\") pod \"dnsmasq-dns-7b9b459575-mvgv5\" (UID: \"374b2eea-9304-4d29-8cbf-7c0702f2fce8\") " pod="openstack/dnsmasq-dns-7b9b459575-mvgv5" Jan 26 13:20:00 crc kubenswrapper[4844]: I0126 13:20:00.392265 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/374b2eea-9304-4d29-8cbf-7c0702f2fce8-config\") pod \"dnsmasq-dns-7b9b459575-mvgv5\" (UID: \"374b2eea-9304-4d29-8cbf-7c0702f2fce8\") " pod="openstack/dnsmasq-dns-7b9b459575-mvgv5" Jan 26 13:20:00 crc kubenswrapper[4844]: I0126 13:20:00.394534 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/013c2624-05ec-49ef-85e2-5f5e155ee687-combined-ca-bundle\") pod \"neutron-bb4bbcbbd-hnxlf\" (UID: \"013c2624-05ec-49ef-85e2-5f5e155ee687\") " pod="openstack/neutron-bb4bbcbbd-hnxlf" Jan 26 13:20:00 crc kubenswrapper[4844]: I0126 13:20:00.418705 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/013c2624-05ec-49ef-85e2-5f5e155ee687-config\") pod \"neutron-bb4bbcbbd-hnxlf\" (UID: \"013c2624-05ec-49ef-85e2-5f5e155ee687\") " pod="openstack/neutron-bb4bbcbbd-hnxlf" Jan 26 13:20:00 crc kubenswrapper[4844]: I0126 13:20:00.418814 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 26 13:20:00 crc kubenswrapper[4844]: I0126 13:20:00.419204 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/013c2624-05ec-49ef-85e2-5f5e155ee687-ovndb-tls-certs\") pod \"neutron-bb4bbcbbd-hnxlf\" (UID: \"013c2624-05ec-49ef-85e2-5f5e155ee687\") " pod="openstack/neutron-bb4bbcbbd-hnxlf" Jan 26 13:20:00 crc kubenswrapper[4844]: I0126 13:20:00.422520 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/013c2624-05ec-49ef-85e2-5f5e155ee687-httpd-config\") pod \"neutron-bb4bbcbbd-hnxlf\" (UID: \"013c2624-05ec-49ef-85e2-5f5e155ee687\") " pod="openstack/neutron-bb4bbcbbd-hnxlf" Jan 26 13:20:00 crc kubenswrapper[4844]: I0126 13:20:00.427958 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvbvf\" (UniqueName: \"kubernetes.io/projected/013c2624-05ec-49ef-85e2-5f5e155ee687-kube-api-access-fvbvf\") pod \"neutron-bb4bbcbbd-hnxlf\" (UID: \"013c2624-05ec-49ef-85e2-5f5e155ee687\") " pod="openstack/neutron-bb4bbcbbd-hnxlf" Jan 26 13:20:00 crc kubenswrapper[4844]: I0126 13:20:00.444281 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6gq5\" (UniqueName: 
\"kubernetes.io/projected/374b2eea-9304-4d29-8cbf-7c0702f2fce8-kube-api-access-s6gq5\") pod \"dnsmasq-dns-7b9b459575-mvgv5\" (UID: \"374b2eea-9304-4d29-8cbf-7c0702f2fce8\") " pod="openstack/dnsmasq-dns-7b9b459575-mvgv5" Jan 26 13:20:00 crc kubenswrapper[4844]: I0126 13:20:00.527059 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b9b459575-mvgv5" Jan 26 13:20:00 crc kubenswrapper[4844]: I0126 13:20:00.606212 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58cb66d699-gww5n"] Jan 26 13:20:00 crc kubenswrapper[4844]: I0126 13:20:00.606622 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-bb4bbcbbd-hnxlf" Jan 26 13:20:00 crc kubenswrapper[4844]: I0126 13:20:00.816286 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 26 13:20:00 crc kubenswrapper[4844]: W0126 13:20:00.885441 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9db17c7b_8322_4488_a47b_e68d66597d6d.slice/crio-aa758cd0ead73afd4a347f309011ba15bf42202abd9a03641919b8796ac14b47 WatchSource:0}: Error finding container aa758cd0ead73afd4a347f309011ba15bf42202abd9a03641919b8796ac14b47: Status 404 returned error can't find the container with id aa758cd0ead73afd4a347f309011ba15bf42202abd9a03641919b8796ac14b47 Jan 26 13:20:00 crc kubenswrapper[4844]: W0126 13:20:00.890053 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb28528a5_6d16_4775_89eb_5f0e00b4afd1.slice/crio-aa700d4a4bb2c55d72a39a9367a812b8e0f35bcd3e8692e49743697d7d1b7b4a WatchSource:0}: Error finding container aa700d4a4bb2c55d72a39a9367a812b8e0f35bcd3e8692e49743697d7d1b7b4a: Status 404 returned error can't find the container with id aa700d4a4bb2c55d72a39a9367a812b8e0f35bcd3e8692e49743697d7d1b7b4a Jan 26 13:20:01 crc kubenswrapper[4844]: I0126 13:20:00.996785 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 26 13:20:01 crc kubenswrapper[4844]: W0126 13:20:01.025286 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd2ba6a95_767f_4589_8dc9_e124e9be4fb4.slice/crio-90fdc14f94b3ca76fec2faca7e2ed23b2a7ce47c6c2d8e140256ea69e8892a5b WatchSource:0}: Error finding container 90fdc14f94b3ca76fec2faca7e2ed23b2a7ce47c6c2d8e140256ea69e8892a5b: Status 404 returned error can't find the container with id 90fdc14f94b3ca76fec2faca7e2ed23b2a7ce47c6c2d8e140256ea69e8892a5b Jan 26 13:20:01 crc kubenswrapper[4844]: I0126 13:20:01.136945 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 13:20:01 crc kubenswrapper[4844]: W0126 13:20:01.172414 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod388147f6_5b13_4111_9d1f_fe317038852d.slice/crio-ead8dd568acb56fbe1ac9a2fca2c811eb7df1382bd012a4c178aeb1a84b46908 WatchSource:0}: Error finding container ead8dd568acb56fbe1ac9a2fca2c811eb7df1382bd012a4c178aeb1a84b46908: Status 404 returned error can't find the container with id ead8dd568acb56fbe1ac9a2fca2c811eb7df1382bd012a4c178aeb1a84b46908 Jan 26 13:20:01 crc kubenswrapper[4844]: I0126 13:20:01.265976 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Jan 26 13:20:01 crc 
kubenswrapper[4844]: I0126 13:20:01.266013 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"d2ba6a95-767f-4589-8dc9-e124e9be4fb4","Type":"ContainerStarted","Data":"90fdc14f94b3ca76fec2faca7e2ed23b2a7ce47c6c2d8e140256ea69e8892a5b"} Jan 26 13:20:01 crc kubenswrapper[4844]: I0126 13:20:01.279823 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"388147f6-5b13-4111-9d1f-fe317038852d","Type":"ContainerStarted","Data":"ead8dd568acb56fbe1ac9a2fca2c811eb7df1382bd012a4c178aeb1a84b46908"} Jan 26 13:20:01 crc kubenswrapper[4844]: I0126 13:20:01.286581 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b28528a5-6d16-4775-89eb-5f0e00b4afd1","Type":"ContainerStarted","Data":"aa700d4a4bb2c55d72a39a9367a812b8e0f35bcd3e8692e49743697d7d1b7b4a"} Jan 26 13:20:01 crc kubenswrapper[4844]: I0126 13:20:01.309761 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58cb66d699-gww5n" event={"ID":"9db17c7b-8322-4488-a47b-e68d66597d6d","Type":"ContainerStarted","Data":"aa758cd0ead73afd4a347f309011ba15bf42202abd9a03641919b8796ac14b47"} Jan 26 13:20:01 crc kubenswrapper[4844]: I0126 13:20:01.309976 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7c5dd4c5cf-fhfbf" podUID="bd10a394-bca1-4dd2-9441-2c9d4919f35e" containerName="dnsmasq-dns" containerID="cri-o://84aaf94f056025f15b8f7c6e6a2f64455652755ede0f8f95ce1fed45f910534e" gracePeriod=10 Jan 26 13:20:01 crc kubenswrapper[4844]: I0126 13:20:01.349458 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad438e4d-9282-48b8-88c1-1f974bb26b5e" path="/var/lib/kubelet/pods/ad438e4d-9282-48b8-88c1-1f974bb26b5e/volumes" Jan 26 13:20:01 crc kubenswrapper[4844]: I0126 13:20:01.442469 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7b9b459575-mvgv5"] Jan 26 13:20:01 crc kubenswrapper[4844]: I0126 13:20:01.728640 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-bb4bbcbbd-hnxlf"] Jan 26 13:20:02 crc kubenswrapper[4844]: I0126 13:20:02.016433 4844 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/watcher-decision-engine-0" Jan 26 13:20:02 crc kubenswrapper[4844]: I0126 13:20:02.017486 4844 scope.go:117] "RemoveContainer" containerID="920a38a2c1e0977cbdcbd5e4c3757be17293c805c1c55b4e7ee718455c1317a2" Jan 26 13:20:02 crc kubenswrapper[4844]: I0126 13:20:02.017876 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Jan 26 13:20:02 crc kubenswrapper[4844]: I0126 13:20:02.083273 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-9jq8s" Jan 26 13:20:02 crc kubenswrapper[4844]: I0126 13:20:02.123922 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-77c8bf8786-w82f7" Jan 26 13:20:02 crc kubenswrapper[4844]: I0126 13:20:02.170990 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xg54t\" (UniqueName: \"kubernetes.io/projected/ce0ed764-c6f0-4580-89dd-4f6826df258d-kube-api-access-xg54t\") pod \"ce0ed764-c6f0-4580-89dd-4f6826df258d\" (UID: \"ce0ed764-c6f0-4580-89dd-4f6826df258d\") " Jan 26 13:20:02 crc kubenswrapper[4844]: I0126 13:20:02.171077 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ce0ed764-c6f0-4580-89dd-4f6826df258d-db-sync-config-data\") pod \"ce0ed764-c6f0-4580-89dd-4f6826df258d\" (UID: \"ce0ed764-c6f0-4580-89dd-4f6826df258d\") " Jan 26 13:20:02 crc kubenswrapper[4844]: I0126 13:20:02.171176 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce0ed764-c6f0-4580-89dd-4f6826df258d-combined-ca-bundle\") pod \"ce0ed764-c6f0-4580-89dd-4f6826df258d\" (UID: \"ce0ed764-c6f0-4580-89dd-4f6826df258d\") " Jan 26 13:20:02 crc kubenswrapper[4844]: I0126 13:20:02.171223 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce0ed764-c6f0-4580-89dd-4f6826df258d-config-data\") pod \"ce0ed764-c6f0-4580-89dd-4f6826df258d\" (UID: \"ce0ed764-c6f0-4580-89dd-4f6826df258d\") " Jan 26 13:20:02 crc kubenswrapper[4844]: I0126 13:20:02.196811 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce0ed764-c6f0-4580-89dd-4f6826df258d-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "ce0ed764-c6f0-4580-89dd-4f6826df258d" (UID: "ce0ed764-c6f0-4580-89dd-4f6826df258d"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:20:02 crc kubenswrapper[4844]: I0126 13:20:02.205143 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce0ed764-c6f0-4580-89dd-4f6826df258d-kube-api-access-xg54t" (OuterVolumeSpecName: "kube-api-access-xg54t") pod "ce0ed764-c6f0-4580-89dd-4f6826df258d" (UID: "ce0ed764-c6f0-4580-89dd-4f6826df258d"). InnerVolumeSpecName "kube-api-access-xg54t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:20:02 crc kubenswrapper[4844]: I0126 13:20:02.297204 4844 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ce0ed764-c6f0-4580-89dd-4f6826df258d-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 13:20:02 crc kubenswrapper[4844]: I0126 13:20:02.297444 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xg54t\" (UniqueName: \"kubernetes.io/projected/ce0ed764-c6f0-4580-89dd-4f6826df258d-kube-api-access-xg54t\") on node \"crc\" DevicePath \"\"" Jan 26 13:20:02 crc kubenswrapper[4844]: I0126 13:20:02.368963 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 26 13:20:02 crc kubenswrapper[4844]: I0126 13:20:02.386755 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-bb4bbcbbd-hnxlf" event={"ID":"013c2624-05ec-49ef-85e2-5f5e155ee687","Type":"ContainerStarted","Data":"eaad95c642169e35ebde226ca77e36758de2c55054c58a78dd59703d93a31192"} Jan 26 13:20:02 crc kubenswrapper[4844]: I0126 13:20:02.392820 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-f984df9c6-m8lct"] Jan 26 13:20:02 crc kubenswrapper[4844]: I0126 13:20:02.393029 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-f984df9c6-m8lct" podUID="2f336c66-c9c1-4764-8f55-a6fd70f01790" containerName="horizon-log" containerID="cri-o://b4a28fc027238c2c642ef160a8fb190c22d5b2b5a5c62897b96d66146b947b9e" gracePeriod=30 Jan 26 13:20:02 crc kubenswrapper[4844]: I0126 13:20:02.393422 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-f984df9c6-m8lct" podUID="2f336c66-c9c1-4764-8f55-a6fd70f01790" containerName="horizon" containerID="cri-o://c6ebce027282a49648d65f221d8df430e516930ebe722a6821d99749d3838a00" gracePeriod=30 Jan 26 13:20:02 crc kubenswrapper[4844]: I0126 13:20:02.428009 4844 generic.go:334] "Generic (PLEG): container finished" podID="9db17c7b-8322-4488-a47b-e68d66597d6d" containerID="f38018ff03507f429501bf3afe69203dcd3fb680758111f73ab4f2823fd550be" exitCode=0 Jan 26 13:20:02 crc kubenswrapper[4844]: I0126 13:20:02.428096 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58cb66d699-gww5n" event={"ID":"9db17c7b-8322-4488-a47b-e68d66597d6d","Type":"ContainerDied","Data":"f38018ff03507f429501bf3afe69203dcd3fb680758111f73ab4f2823fd550be"} Jan 26 13:20:02 crc kubenswrapper[4844]: I0126 13:20:02.447342 4844 generic.go:334] "Generic (PLEG): container finished" podID="bd10a394-bca1-4dd2-9441-2c9d4919f35e" containerID="84aaf94f056025f15b8f7c6e6a2f64455652755ede0f8f95ce1fed45f910534e" exitCode=0 Jan 26 13:20:02 crc kubenswrapper[4844]: I0126 13:20:02.447426 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c5dd4c5cf-fhfbf" event={"ID":"bd10a394-bca1-4dd2-9441-2c9d4919f35e","Type":"ContainerDied","Data":"84aaf94f056025f15b8f7c6e6a2f64455652755ede0f8f95ce1fed45f910534e"} Jan 26 13:20:02 crc kubenswrapper[4844]: I0126 13:20:02.469903 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce0ed764-c6f0-4580-89dd-4f6826df258d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ce0ed764-c6f0-4580-89dd-4f6826df258d" (UID: "ce0ed764-c6f0-4580-89dd-4f6826df258d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:20:02 crc kubenswrapper[4844]: I0126 13:20:02.490667 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-9jq8s" event={"ID":"ce0ed764-c6f0-4580-89dd-4f6826df258d","Type":"ContainerDied","Data":"c71c1d5dd7cc2a0189d7a738b3f5cf92ab74c0e569b5ae8130fd66cef0e77048"} Jan 26 13:20:02 crc kubenswrapper[4844]: I0126 13:20:02.490701 4844 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c71c1d5dd7cc2a0189d7a738b3f5cf92ab74c0e569b5ae8130fd66cef0e77048" Jan 26 13:20:02 crc kubenswrapper[4844]: I0126 13:20:02.490763 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-9jq8s" Jan 26 13:20:02 crc kubenswrapper[4844]: I0126 13:20:02.501526 4844 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce0ed764-c6f0-4580-89dd-4f6826df258d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 13:20:02 crc kubenswrapper[4844]: I0126 13:20:02.556972 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce0ed764-c6f0-4580-89dd-4f6826df258d-config-data" (OuterVolumeSpecName: "config-data") pod "ce0ed764-c6f0-4580-89dd-4f6826df258d" (UID: "ce0ed764-c6f0-4580-89dd-4f6826df258d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:20:02 crc kubenswrapper[4844]: I0126 13:20:02.557210 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b9b459575-mvgv5" event={"ID":"374b2eea-9304-4d29-8cbf-7c0702f2fce8","Type":"ContainerStarted","Data":"d96399e6ef833aeb67c3ca66dd809ab4b292b4a174fc235b1f7672581897845a"} Jan 26 13:20:02 crc kubenswrapper[4844]: I0126 13:20:02.605091 4844 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce0ed764-c6f0-4580-89dd-4f6826df258d-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 13:20:02 crc kubenswrapper[4844]: I0126 13:20:02.873257 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7c5dd4c5cf-fhfbf" Jan 26 13:20:02 crc kubenswrapper[4844]: I0126 13:20:02.935572 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bd10a394-bca1-4dd2-9441-2c9d4919f35e-dns-svc\") pod \"bd10a394-bca1-4dd2-9441-2c9d4919f35e\" (UID: \"bd10a394-bca1-4dd2-9441-2c9d4919f35e\") " Jan 26 13:20:02 crc kubenswrapper[4844]: I0126 13:20:02.936023 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nq6xq\" (UniqueName: \"kubernetes.io/projected/bd10a394-bca1-4dd2-9441-2c9d4919f35e-kube-api-access-nq6xq\") pod \"bd10a394-bca1-4dd2-9441-2c9d4919f35e\" (UID: \"bd10a394-bca1-4dd2-9441-2c9d4919f35e\") " Jan 26 13:20:02 crc kubenswrapper[4844]: I0126 13:20:02.936098 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd10a394-bca1-4dd2-9441-2c9d4919f35e-config\") pod \"bd10a394-bca1-4dd2-9441-2c9d4919f35e\" (UID: \"bd10a394-bca1-4dd2-9441-2c9d4919f35e\") " Jan 26 13:20:02 crc kubenswrapper[4844]: I0126 13:20:02.936175 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bd10a394-bca1-4dd2-9441-2c9d4919f35e-ovsdbserver-sb\") pod \"bd10a394-bca1-4dd2-9441-2c9d4919f35e\" (UID: \"bd10a394-bca1-4dd2-9441-2c9d4919f35e\") " Jan 26 13:20:02 crc kubenswrapper[4844]: I0126 13:20:02.936193 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bd10a394-bca1-4dd2-9441-2c9d4919f35e-dns-swift-storage-0\") pod \"bd10a394-bca1-4dd2-9441-2c9d4919f35e\" (UID: \"bd10a394-bca1-4dd2-9441-2c9d4919f35e\") " Jan 26 13:20:02 crc kubenswrapper[4844]: I0126 13:20:02.936208 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bd10a394-bca1-4dd2-9441-2c9d4919f35e-ovsdbserver-nb\") pod \"bd10a394-bca1-4dd2-9441-2c9d4919f35e\" (UID: \"bd10a394-bca1-4dd2-9441-2c9d4919f35e\") " Jan 26 13:20:02 crc kubenswrapper[4844]: I0126 13:20:02.949700 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58cb66d699-gww5n" Jan 26 13:20:03 crc kubenswrapper[4844]: I0126 13:20:03.001583 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd10a394-bca1-4dd2-9441-2c9d4919f35e-kube-api-access-nq6xq" (OuterVolumeSpecName: "kube-api-access-nq6xq") pod "bd10a394-bca1-4dd2-9441-2c9d4919f35e" (UID: "bd10a394-bca1-4dd2-9441-2c9d4919f35e"). InnerVolumeSpecName "kube-api-access-nq6xq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:20:03 crc kubenswrapper[4844]: I0126 13:20:03.041324 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9db17c7b-8322-4488-a47b-e68d66597d6d-dns-swift-storage-0\") pod \"9db17c7b-8322-4488-a47b-e68d66597d6d\" (UID: \"9db17c7b-8322-4488-a47b-e68d66597d6d\") " Jan 26 13:20:03 crc kubenswrapper[4844]: I0126 13:20:03.042452 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t7tqw\" (UniqueName: \"kubernetes.io/projected/9db17c7b-8322-4488-a47b-e68d66597d6d-kube-api-access-t7tqw\") pod \"9db17c7b-8322-4488-a47b-e68d66597d6d\" (UID: \"9db17c7b-8322-4488-a47b-e68d66597d6d\") " Jan 26 13:20:03 crc kubenswrapper[4844]: I0126 13:20:03.042575 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9db17c7b-8322-4488-a47b-e68d66597d6d-ovsdbserver-nb\") pod \"9db17c7b-8322-4488-a47b-e68d66597d6d\" (UID: \"9db17c7b-8322-4488-a47b-e68d66597d6d\") " Jan 26 13:20:03 crc kubenswrapper[4844]: I0126 13:20:03.042800 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9db17c7b-8322-4488-a47b-e68d66597d6d-ovsdbserver-sb\") pod \"9db17c7b-8322-4488-a47b-e68d66597d6d\" (UID: \"9db17c7b-8322-4488-a47b-e68d66597d6d\") " Jan 26 13:20:03 crc kubenswrapper[4844]: I0126 13:20:03.042983 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9db17c7b-8322-4488-a47b-e68d66597d6d-config\") pod \"9db17c7b-8322-4488-a47b-e68d66597d6d\" (UID: \"9db17c7b-8322-4488-a47b-e68d66597d6d\") " Jan 26 13:20:03 crc kubenswrapper[4844]: I0126 13:20:03.043278 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9db17c7b-8322-4488-a47b-e68d66597d6d-dns-svc\") pod \"9db17c7b-8322-4488-a47b-e68d66597d6d\" (UID: \"9db17c7b-8322-4488-a47b-e68d66597d6d\") " Jan 26 13:20:03 crc kubenswrapper[4844]: I0126 13:20:03.053345 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nq6xq\" (UniqueName: \"kubernetes.io/projected/bd10a394-bca1-4dd2-9441-2c9d4919f35e-kube-api-access-nq6xq\") on node \"crc\" DevicePath \"\"" Jan 26 13:20:03 crc kubenswrapper[4844]: I0126 13:20:03.100662 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9db17c7b-8322-4488-a47b-e68d66597d6d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9db17c7b-8322-4488-a47b-e68d66597d6d" (UID: "9db17c7b-8322-4488-a47b-e68d66597d6d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:20:03 crc kubenswrapper[4844]: I0126 13:20:03.156129 4844 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9db17c7b-8322-4488-a47b-e68d66597d6d-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 13:20:03 crc kubenswrapper[4844]: I0126 13:20:03.186251 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9db17c7b-8322-4488-a47b-e68d66597d6d-kube-api-access-t7tqw" (OuterVolumeSpecName: "kube-api-access-t7tqw") pod "9db17c7b-8322-4488-a47b-e68d66597d6d" (UID: "9db17c7b-8322-4488-a47b-e68d66597d6d"). InnerVolumeSpecName "kube-api-access-t7tqw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:20:03 crc kubenswrapper[4844]: I0126 13:20:03.270891 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t7tqw\" (UniqueName: \"kubernetes.io/projected/9db17c7b-8322-4488-a47b-e68d66597d6d-kube-api-access-t7tqw\") on node \"crc\" DevicePath \"\"" Jan 26 13:20:03 crc kubenswrapper[4844]: I0126 13:20:03.611547 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7b9b459575-mvgv5"] Jan 26 13:20:03 crc kubenswrapper[4844]: I0126 13:20:03.624057 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58cb66d699-gww5n" event={"ID":"9db17c7b-8322-4488-a47b-e68d66597d6d","Type":"ContainerDied","Data":"aa758cd0ead73afd4a347f309011ba15bf42202abd9a03641919b8796ac14b47"} Jan 26 13:20:03 crc kubenswrapper[4844]: I0126 13:20:03.624098 4844 scope.go:117] "RemoveContainer" containerID="f38018ff03507f429501bf3afe69203dcd3fb680758111f73ab4f2823fd550be" Jan 26 13:20:03 crc kubenswrapper[4844]: I0126 13:20:03.624316 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58cb66d699-gww5n" Jan 26 13:20:03 crc kubenswrapper[4844]: I0126 13:20:03.661146 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-584dfd9675-8wzdw"] Jan 26 13:20:03 crc kubenswrapper[4844]: E0126 13:20:03.661639 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce0ed764-c6f0-4580-89dd-4f6826df258d" containerName="glance-db-sync" Jan 26 13:20:03 crc kubenswrapper[4844]: I0126 13:20:03.661659 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce0ed764-c6f0-4580-89dd-4f6826df258d" containerName="glance-db-sync" Jan 26 13:20:03 crc kubenswrapper[4844]: E0126 13:20:03.661669 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9db17c7b-8322-4488-a47b-e68d66597d6d" containerName="init" Jan 26 13:20:03 crc kubenswrapper[4844]: I0126 13:20:03.661677 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="9db17c7b-8322-4488-a47b-e68d66597d6d" containerName="init" Jan 26 13:20:03 crc kubenswrapper[4844]: E0126 13:20:03.661695 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd10a394-bca1-4dd2-9441-2c9d4919f35e" containerName="init" Jan 26 13:20:03 crc kubenswrapper[4844]: I0126 13:20:03.661703 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd10a394-bca1-4dd2-9441-2c9d4919f35e" containerName="init" Jan 26 13:20:03 crc kubenswrapper[4844]: E0126 13:20:03.661711 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd10a394-bca1-4dd2-9441-2c9d4919f35e" containerName="dnsmasq-dns" Jan 26 13:20:03 crc kubenswrapper[4844]: I0126 13:20:03.661717 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd10a394-bca1-4dd2-9441-2c9d4919f35e" containerName="dnsmasq-dns" Jan 26 13:20:03 crc kubenswrapper[4844]: I0126 13:20:03.661911 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="9db17c7b-8322-4488-a47b-e68d66597d6d" containerName="init" Jan 26 13:20:03 crc kubenswrapper[4844]: I0126 13:20:03.661946 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce0ed764-c6f0-4580-89dd-4f6826df258d" containerName="glance-db-sync" Jan 26 13:20:03 crc kubenswrapper[4844]: I0126 13:20:03.661963 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd10a394-bca1-4dd2-9441-2c9d4919f35e" containerName="dnsmasq-dns" Jan 26 13:20:03 crc kubenswrapper[4844]: I0126 13:20:03.663076 4844 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-584dfd9675-8wzdw" Jan 26 13:20:03 crc kubenswrapper[4844]: I0126 13:20:03.667569 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nw6hp" event={"ID":"0b53d3b2-56e9-427c-8dcd-e5487cecc4f9","Type":"ContainerStarted","Data":"bcf0c9a16d391de4b7cd2111db5acf9a272fc80e3f14b95cef9bee0159b8ed9d"} Jan 26 13:20:03 crc kubenswrapper[4844]: I0126 13:20:03.693232 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c5dd4c5cf-fhfbf" event={"ID":"bd10a394-bca1-4dd2-9441-2c9d4919f35e","Type":"ContainerDied","Data":"ca2a24afe192a425f33081ff426ec08801373acd48f628c9684282b255c10c04"} Jan 26 13:20:03 crc kubenswrapper[4844]: I0126 13:20:03.693320 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c5dd4c5cf-fhfbf" Jan 26 13:20:03 crc kubenswrapper[4844]: I0126 13:20:03.712440 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-584dfd9675-8wzdw"] Jan 26 13:20:03 crc kubenswrapper[4844]: I0126 13:20:03.757046 4844 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="f80a52fc-df6a-4218-913e-2ee03174e341" containerName="galera" probeResult="failure" output="command timed out" Jan 26 13:20:03 crc kubenswrapper[4844]: I0126 13:20:03.767246 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-nw6hp" podStartSLOduration=5.586708742 podStartE2EDuration="8.767212253s" podCreationTimestamp="2026-01-26 13:19:55 +0000 UTC" firstStartedPulling="2026-01-26 13:19:58.229056754 +0000 UTC m=+2175.162424366" lastFinishedPulling="2026-01-26 13:20:01.409560265 +0000 UTC m=+2178.342927877" observedRunningTime="2026-01-26 13:20:03.70711523 +0000 UTC m=+2180.640482842" watchObservedRunningTime="2026-01-26 13:20:03.767212253 +0000 UTC m=+2180.700579875" Jan 26 13:20:03 crc kubenswrapper[4844]: I0126 13:20:03.792738 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/955c4df0-924d-439d-8a58-66f49e93cf44-dns-svc\") pod \"dnsmasq-dns-584dfd9675-8wzdw\" (UID: \"955c4df0-924d-439d-8a58-66f49e93cf44\") " pod="openstack/dnsmasq-dns-584dfd9675-8wzdw" Jan 26 13:20:03 crc kubenswrapper[4844]: I0126 13:20:03.792771 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/955c4df0-924d-439d-8a58-66f49e93cf44-ovsdbserver-sb\") pod \"dnsmasq-dns-584dfd9675-8wzdw\" (UID: \"955c4df0-924d-439d-8a58-66f49e93cf44\") " pod="openstack/dnsmasq-dns-584dfd9675-8wzdw" Jan 26 13:20:03 crc kubenswrapper[4844]: I0126 13:20:03.792792 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rwh4\" (UniqueName: \"kubernetes.io/projected/955c4df0-924d-439d-8a58-66f49e93cf44-kube-api-access-5rwh4\") pod \"dnsmasq-dns-584dfd9675-8wzdw\" (UID: \"955c4df0-924d-439d-8a58-66f49e93cf44\") " pod="openstack/dnsmasq-dns-584dfd9675-8wzdw" Jan 26 13:20:03 crc kubenswrapper[4844]: I0126 13:20:03.792818 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/955c4df0-924d-439d-8a58-66f49e93cf44-dns-swift-storage-0\") pod \"dnsmasq-dns-584dfd9675-8wzdw\" (UID: 
\"955c4df0-924d-439d-8a58-66f49e93cf44\") " pod="openstack/dnsmasq-dns-584dfd9675-8wzdw" Jan 26 13:20:03 crc kubenswrapper[4844]: I0126 13:20:03.792854 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/955c4df0-924d-439d-8a58-66f49e93cf44-config\") pod \"dnsmasq-dns-584dfd9675-8wzdw\" (UID: \"955c4df0-924d-439d-8a58-66f49e93cf44\") " pod="openstack/dnsmasq-dns-584dfd9675-8wzdw" Jan 26 13:20:03 crc kubenswrapper[4844]: I0126 13:20:03.792901 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/955c4df0-924d-439d-8a58-66f49e93cf44-ovsdbserver-nb\") pod \"dnsmasq-dns-584dfd9675-8wzdw\" (UID: \"955c4df0-924d-439d-8a58-66f49e93cf44\") " pod="openstack/dnsmasq-dns-584dfd9675-8wzdw" Jan 26 13:20:03 crc kubenswrapper[4844]: I0126 13:20:03.894108 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/955c4df0-924d-439d-8a58-66f49e93cf44-ovsdbserver-nb\") pod \"dnsmasq-dns-584dfd9675-8wzdw\" (UID: \"955c4df0-924d-439d-8a58-66f49e93cf44\") " pod="openstack/dnsmasq-dns-584dfd9675-8wzdw" Jan 26 13:20:03 crc kubenswrapper[4844]: I0126 13:20:03.894250 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/955c4df0-924d-439d-8a58-66f49e93cf44-dns-svc\") pod \"dnsmasq-dns-584dfd9675-8wzdw\" (UID: \"955c4df0-924d-439d-8a58-66f49e93cf44\") " pod="openstack/dnsmasq-dns-584dfd9675-8wzdw" Jan 26 13:20:03 crc kubenswrapper[4844]: I0126 13:20:03.894268 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/955c4df0-924d-439d-8a58-66f49e93cf44-ovsdbserver-sb\") pod \"dnsmasq-dns-584dfd9675-8wzdw\" (UID: \"955c4df0-924d-439d-8a58-66f49e93cf44\") " pod="openstack/dnsmasq-dns-584dfd9675-8wzdw" Jan 26 13:20:03 crc kubenswrapper[4844]: I0126 13:20:03.894289 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rwh4\" (UniqueName: \"kubernetes.io/projected/955c4df0-924d-439d-8a58-66f49e93cf44-kube-api-access-5rwh4\") pod \"dnsmasq-dns-584dfd9675-8wzdw\" (UID: \"955c4df0-924d-439d-8a58-66f49e93cf44\") " pod="openstack/dnsmasq-dns-584dfd9675-8wzdw" Jan 26 13:20:03 crc kubenswrapper[4844]: I0126 13:20:03.894314 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/955c4df0-924d-439d-8a58-66f49e93cf44-dns-swift-storage-0\") pod \"dnsmasq-dns-584dfd9675-8wzdw\" (UID: \"955c4df0-924d-439d-8a58-66f49e93cf44\") " pod="openstack/dnsmasq-dns-584dfd9675-8wzdw" Jan 26 13:20:03 crc kubenswrapper[4844]: I0126 13:20:03.894352 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/955c4df0-924d-439d-8a58-66f49e93cf44-config\") pod \"dnsmasq-dns-584dfd9675-8wzdw\" (UID: \"955c4df0-924d-439d-8a58-66f49e93cf44\") " pod="openstack/dnsmasq-dns-584dfd9675-8wzdw" Jan 26 13:20:03 crc kubenswrapper[4844]: I0126 13:20:03.895020 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/955c4df0-924d-439d-8a58-66f49e93cf44-ovsdbserver-nb\") pod \"dnsmasq-dns-584dfd9675-8wzdw\" (UID: \"955c4df0-924d-439d-8a58-66f49e93cf44\") " 
pod="openstack/dnsmasq-dns-584dfd9675-8wzdw" Jan 26 13:20:03 crc kubenswrapper[4844]: I0126 13:20:03.895438 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/955c4df0-924d-439d-8a58-66f49e93cf44-config\") pod \"dnsmasq-dns-584dfd9675-8wzdw\" (UID: \"955c4df0-924d-439d-8a58-66f49e93cf44\") " pod="openstack/dnsmasq-dns-584dfd9675-8wzdw" Jan 26 13:20:03 crc kubenswrapper[4844]: I0126 13:20:03.895806 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/955c4df0-924d-439d-8a58-66f49e93cf44-ovsdbserver-sb\") pod \"dnsmasq-dns-584dfd9675-8wzdw\" (UID: \"955c4df0-924d-439d-8a58-66f49e93cf44\") " pod="openstack/dnsmasq-dns-584dfd9675-8wzdw" Jan 26 13:20:03 crc kubenswrapper[4844]: I0126 13:20:03.895899 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/955c4df0-924d-439d-8a58-66f49e93cf44-dns-swift-storage-0\") pod \"dnsmasq-dns-584dfd9675-8wzdw\" (UID: \"955c4df0-924d-439d-8a58-66f49e93cf44\") " pod="openstack/dnsmasq-dns-584dfd9675-8wzdw" Jan 26 13:20:03 crc kubenswrapper[4844]: I0126 13:20:03.895905 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/955c4df0-924d-439d-8a58-66f49e93cf44-dns-svc\") pod \"dnsmasq-dns-584dfd9675-8wzdw\" (UID: \"955c4df0-924d-439d-8a58-66f49e93cf44\") " pod="openstack/dnsmasq-dns-584dfd9675-8wzdw" Jan 26 13:20:03 crc kubenswrapper[4844]: I0126 13:20:03.913914 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rwh4\" (UniqueName: \"kubernetes.io/projected/955c4df0-924d-439d-8a58-66f49e93cf44-kube-api-access-5rwh4\") pod \"dnsmasq-dns-584dfd9675-8wzdw\" (UID: \"955c4df0-924d-439d-8a58-66f49e93cf44\") " pod="openstack/dnsmasq-dns-584dfd9675-8wzdw" Jan 26 13:20:03 crc kubenswrapper[4844]: I0126 13:20:03.973042 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9db17c7b-8322-4488-a47b-e68d66597d6d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9db17c7b-8322-4488-a47b-e68d66597d6d" (UID: "9db17c7b-8322-4488-a47b-e68d66597d6d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:20:03 crc kubenswrapper[4844]: I0126 13:20:03.996248 4844 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9db17c7b-8322-4488-a47b-e68d66597d6d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 13:20:04 crc kubenswrapper[4844]: I0126 13:20:04.010112 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd10a394-bca1-4dd2-9441-2c9d4919f35e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "bd10a394-bca1-4dd2-9441-2c9d4919f35e" (UID: "bd10a394-bca1-4dd2-9441-2c9d4919f35e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:20:04 crc kubenswrapper[4844]: I0126 13:20:04.022035 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd10a394-bca1-4dd2-9441-2c9d4919f35e-config" (OuterVolumeSpecName: "config") pod "bd10a394-bca1-4dd2-9441-2c9d4919f35e" (UID: "bd10a394-bca1-4dd2-9441-2c9d4919f35e"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:20:04 crc kubenswrapper[4844]: I0126 13:20:04.042154 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9db17c7b-8322-4488-a47b-e68d66597d6d-config" (OuterVolumeSpecName: "config") pod "9db17c7b-8322-4488-a47b-e68d66597d6d" (UID: "9db17c7b-8322-4488-a47b-e68d66597d6d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:20:04 crc kubenswrapper[4844]: I0126 13:20:04.058049 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd10a394-bca1-4dd2-9441-2c9d4919f35e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "bd10a394-bca1-4dd2-9441-2c9d4919f35e" (UID: "bd10a394-bca1-4dd2-9441-2c9d4919f35e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:20:04 crc kubenswrapper[4844]: I0126 13:20:04.085981 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9db17c7b-8322-4488-a47b-e68d66597d6d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "9db17c7b-8322-4488-a47b-e68d66597d6d" (UID: "9db17c7b-8322-4488-a47b-e68d66597d6d"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:20:04 crc kubenswrapper[4844]: I0126 13:20:04.088045 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9db17c7b-8322-4488-a47b-e68d66597d6d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9db17c7b-8322-4488-a47b-e68d66597d6d" (UID: "9db17c7b-8322-4488-a47b-e68d66597d6d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:20:04 crc kubenswrapper[4844]: I0126 13:20:04.093344 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd10a394-bca1-4dd2-9441-2c9d4919f35e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "bd10a394-bca1-4dd2-9441-2c9d4919f35e" (UID: "bd10a394-bca1-4dd2-9441-2c9d4919f35e"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:20:04 crc kubenswrapper[4844]: I0126 13:20:04.095101 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd10a394-bca1-4dd2-9441-2c9d4919f35e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "bd10a394-bca1-4dd2-9441-2c9d4919f35e" (UID: "bd10a394-bca1-4dd2-9441-2c9d4919f35e"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:20:04 crc kubenswrapper[4844]: I0126 13:20:04.097588 4844 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bd10a394-bca1-4dd2-9441-2c9d4919f35e-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 13:20:04 crc kubenswrapper[4844]: I0126 13:20:04.097707 4844 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9db17c7b-8322-4488-a47b-e68d66597d6d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 13:20:04 crc kubenswrapper[4844]: I0126 13:20:04.097769 4844 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd10a394-bca1-4dd2-9441-2c9d4919f35e-config\") on node \"crc\" DevicePath \"\"" Jan 26 13:20:04 crc kubenswrapper[4844]: I0126 13:20:04.097830 4844 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9db17c7b-8322-4488-a47b-e68d66597d6d-config\") on node \"crc\" DevicePath \"\"" Jan 26 13:20:04 crc kubenswrapper[4844]: I0126 13:20:04.097895 4844 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bd10a394-bca1-4dd2-9441-2c9d4919f35e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 13:20:04 crc kubenswrapper[4844]: I0126 13:20:04.097959 4844 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bd10a394-bca1-4dd2-9441-2c9d4919f35e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 26 13:20:04 crc kubenswrapper[4844]: I0126 13:20:04.098020 4844 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bd10a394-bca1-4dd2-9441-2c9d4919f35e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 13:20:04 crc kubenswrapper[4844]: I0126 13:20:04.098076 4844 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9db17c7b-8322-4488-a47b-e68d66597d6d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 26 13:20:04 crc kubenswrapper[4844]: I0126 13:20:04.274869 4844 scope.go:117] "RemoveContainer" containerID="84aaf94f056025f15b8f7c6e6a2f64455652755ede0f8f95ce1fed45f910534e" Jan 26 13:20:04 crc kubenswrapper[4844]: I0126 13:20:04.374758 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-584dfd9675-8wzdw" Jan 26 13:20:04 crc kubenswrapper[4844]: I0126 13:20:04.429065 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c5dd4c5cf-fhfbf"] Jan 26 13:20:04 crc kubenswrapper[4844]: I0126 13:20:04.448338 4844 scope.go:117] "RemoveContainer" containerID="c56c449ad3f94eff685870cecbbd7272d7014bab109f6ec20ff7461beea95137" Jan 26 13:20:04 crc kubenswrapper[4844]: I0126 13:20:04.462790 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7c5dd4c5cf-fhfbf"] Jan 26 13:20:04 crc kubenswrapper[4844]: I0126 13:20:04.518643 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58cb66d699-gww5n"] Jan 26 13:20:04 crc kubenswrapper[4844]: I0126 13:20:04.543069 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-58cb66d699-gww5n"] Jan 26 13:20:04 crc kubenswrapper[4844]: I0126 13:20:04.681491 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 13:20:04 crc kubenswrapper[4844]: I0126 13:20:04.683272 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 26 13:20:04 crc kubenswrapper[4844]: I0126 13:20:04.686232 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 26 13:20:04 crc kubenswrapper[4844]: I0126 13:20:04.686483 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-5tdcs" Jan 26 13:20:04 crc kubenswrapper[4844]: I0126 13:20:04.686832 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 26 13:20:04 crc kubenswrapper[4844]: I0126 13:20:04.715880 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 13:20:04 crc kubenswrapper[4844]: I0126 13:20:04.730763 4844 generic.go:334] "Generic (PLEG): container finished" podID="374b2eea-9304-4d29-8cbf-7c0702f2fce8" containerID="fe3d548c6c9e7fb8bffe20d595bf3ae040cc732baba2c4c10f51615d05bc21e0" exitCode=0 Jan 26 13:20:04 crc kubenswrapper[4844]: I0126 13:20:04.730828 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b9b459575-mvgv5" event={"ID":"374b2eea-9304-4d29-8cbf-7c0702f2fce8","Type":"ContainerDied","Data":"fe3d548c6c9e7fb8bffe20d595bf3ae040cc732baba2c4c10f51615d05bc21e0"} Jan 26 13:20:04 crc kubenswrapper[4844]: I0126 13:20:04.758001 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-bb4bbcbbd-hnxlf" event={"ID":"013c2624-05ec-49ef-85e2-5f5e155ee687","Type":"ContainerStarted","Data":"1fe654e73b7eef70afb3917f09a7f33adb2e4eb9d2acce93cdafb3fd839abab1"} Jan 26 13:20:04 crc kubenswrapper[4844]: I0126 13:20:04.792213 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"388147f6-5b13-4111-9d1f-fe317038852d","Type":"ContainerStarted","Data":"42495dda29d29e62f3e3e9573d76c490c019e49f761a8cb521a79411ec5a1ac3"} Jan 26 13:20:04 crc kubenswrapper[4844]: I0126 13:20:04.815406 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"d2ba6a95-767f-4589-8dc9-e124e9be4fb4","Type":"ContainerStarted","Data":"3f9a2b8bf982ab015fb60f7c7f785bf62b1cca0b666990a9b68377f548735595"} Jan 26 13:20:04 crc kubenswrapper[4844]: I0126 13:20:04.817927 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4488efbb-d7e7-42cc-a9bc-18e471c5ac31-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"4488efbb-d7e7-42cc-a9bc-18e471c5ac31\") " pod="openstack/glance-default-external-api-0" Jan 26 13:20:04 crc kubenswrapper[4844]: I0126 13:20:04.818030 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4488efbb-d7e7-42cc-a9bc-18e471c5ac31-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"4488efbb-d7e7-42cc-a9bc-18e471c5ac31\") " pod="openstack/glance-default-external-api-0" Jan 26 13:20:04 crc kubenswrapper[4844]: I0126 13:20:04.818130 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4488efbb-d7e7-42cc-a9bc-18e471c5ac31-config-data\") pod \"glance-default-external-api-0\" (UID: \"4488efbb-d7e7-42cc-a9bc-18e471c5ac31\") " pod="openstack/glance-default-external-api-0" Jan 26 13:20:04 crc kubenswrapper[4844]: I0126 13:20:04.818195 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4488efbb-d7e7-42cc-a9bc-18e471c5ac31-scripts\") pod \"glance-default-external-api-0\" (UID: \"4488efbb-d7e7-42cc-a9bc-18e471c5ac31\") " pod="openstack/glance-default-external-api-0" Jan 26 13:20:04 crc kubenswrapper[4844]: I0126 13:20:04.818260 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4488efbb-d7e7-42cc-a9bc-18e471c5ac31-logs\") pod \"glance-default-external-api-0\" (UID: \"4488efbb-d7e7-42cc-a9bc-18e471c5ac31\") " pod="openstack/glance-default-external-api-0" Jan 26 13:20:04 crc kubenswrapper[4844]: I0126 13:20:04.818319 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"4488efbb-d7e7-42cc-a9bc-18e471c5ac31\") " pod="openstack/glance-default-external-api-0" Jan 26 13:20:04 crc kubenswrapper[4844]: I0126 13:20:04.818354 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7njd\" (UniqueName: \"kubernetes.io/projected/4488efbb-d7e7-42cc-a9bc-18e471c5ac31-kube-api-access-g7njd\") pod \"glance-default-external-api-0\" (UID: \"4488efbb-d7e7-42cc-a9bc-18e471c5ac31\") " pod="openstack/glance-default-external-api-0" Jan 26 13:20:04 crc kubenswrapper[4844]: I0126 13:20:04.841946 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"ed782618-8b69-4456-9aec-5184e765960f","Type":"ContainerStarted","Data":"f40661e9cae1344ff8df85b9eb11c5a53401a5c8932da25e88f55fc3d9a6f8f8"} Jan 26 13:20:04 crc kubenswrapper[4844]: I0126 13:20:04.876038 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 13:20:04 crc kubenswrapper[4844]: I0126 13:20:04.877687 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 26 13:20:04 crc kubenswrapper[4844]: I0126 13:20:04.888327 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 26 13:20:04 crc kubenswrapper[4844]: I0126 13:20:04.917797 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 13:20:04 crc kubenswrapper[4844]: I0126 13:20:04.920025 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"4488efbb-d7e7-42cc-a9bc-18e471c5ac31\") " pod="openstack/glance-default-external-api-0" Jan 26 13:20:04 crc kubenswrapper[4844]: I0126 13:20:04.920068 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7njd\" (UniqueName: \"kubernetes.io/projected/4488efbb-d7e7-42cc-a9bc-18e471c5ac31-kube-api-access-g7njd\") pod \"glance-default-external-api-0\" (UID: \"4488efbb-d7e7-42cc-a9bc-18e471c5ac31\") " pod="openstack/glance-default-external-api-0" Jan 26 13:20:04 crc kubenswrapper[4844]: I0126 13:20:04.920129 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4488efbb-d7e7-42cc-a9bc-18e471c5ac31-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"4488efbb-d7e7-42cc-a9bc-18e471c5ac31\") " pod="openstack/glance-default-external-api-0" Jan 26 13:20:04 crc kubenswrapper[4844]: I0126 13:20:04.920159 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4488efbb-d7e7-42cc-a9bc-18e471c5ac31-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"4488efbb-d7e7-42cc-a9bc-18e471c5ac31\") " pod="openstack/glance-default-external-api-0" Jan 26 13:20:04 crc kubenswrapper[4844]: I0126 13:20:04.920217 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4488efbb-d7e7-42cc-a9bc-18e471c5ac31-config-data\") pod \"glance-default-external-api-0\" (UID: \"4488efbb-d7e7-42cc-a9bc-18e471c5ac31\") " pod="openstack/glance-default-external-api-0" Jan 26 13:20:04 crc kubenswrapper[4844]: I0126 13:20:04.920254 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4488efbb-d7e7-42cc-a9bc-18e471c5ac31-scripts\") pod \"glance-default-external-api-0\" (UID: \"4488efbb-d7e7-42cc-a9bc-18e471c5ac31\") " pod="openstack/glance-default-external-api-0" Jan 26 13:20:04 crc kubenswrapper[4844]: I0126 13:20:04.920275 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4488efbb-d7e7-42cc-a9bc-18e471c5ac31-logs\") pod \"glance-default-external-api-0\" (UID: \"4488efbb-d7e7-42cc-a9bc-18e471c5ac31\") " pod="openstack/glance-default-external-api-0" Jan 26 13:20:04 crc kubenswrapper[4844]: I0126 13:20:04.920657 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4488efbb-d7e7-42cc-a9bc-18e471c5ac31-logs\") pod \"glance-default-external-api-0\" (UID: \"4488efbb-d7e7-42cc-a9bc-18e471c5ac31\") " pod="openstack/glance-default-external-api-0" Jan 26 13:20:04 crc kubenswrapper[4844]: I0126 13:20:04.920927 4844 operation_generator.go:580] 
"MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"4488efbb-d7e7-42cc-a9bc-18e471c5ac31\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-external-api-0" Jan 26 13:20:04 crc kubenswrapper[4844]: I0126 13:20:04.927842 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4488efbb-d7e7-42cc-a9bc-18e471c5ac31-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"4488efbb-d7e7-42cc-a9bc-18e471c5ac31\") " pod="openstack/glance-default-external-api-0" Jan 26 13:20:04 crc kubenswrapper[4844]: I0126 13:20:04.943284 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4488efbb-d7e7-42cc-a9bc-18e471c5ac31-config-data\") pod \"glance-default-external-api-0\" (UID: \"4488efbb-d7e7-42cc-a9bc-18e471c5ac31\") " pod="openstack/glance-default-external-api-0" Jan 26 13:20:04 crc kubenswrapper[4844]: I0126 13:20:04.966368 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4488efbb-d7e7-42cc-a9bc-18e471c5ac31-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"4488efbb-d7e7-42cc-a9bc-18e471c5ac31\") " pod="openstack/glance-default-external-api-0" Jan 26 13:20:04 crc kubenswrapper[4844]: I0126 13:20:04.999385 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7njd\" (UniqueName: \"kubernetes.io/projected/4488efbb-d7e7-42cc-a9bc-18e471c5ac31-kube-api-access-g7njd\") pod \"glance-default-external-api-0\" (UID: \"4488efbb-d7e7-42cc-a9bc-18e471c5ac31\") " pod="openstack/glance-default-external-api-0" Jan 26 13:20:04 crc kubenswrapper[4844]: I0126 13:20:04.999819 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4488efbb-d7e7-42cc-a9bc-18e471c5ac31-scripts\") pod \"glance-default-external-api-0\" (UID: \"4488efbb-d7e7-42cc-a9bc-18e471c5ac31\") " pod="openstack/glance-default-external-api-0" Jan 26 13:20:05 crc kubenswrapper[4844]: I0126 13:20:05.024325 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c7f7fa83-d343-489e-9380-008d02156140-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c7f7fa83-d343-489e-9380-008d02156140\") " pod="openstack/glance-default-internal-api-0" Jan 26 13:20:05 crc kubenswrapper[4844]: I0126 13:20:05.024365 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7f7fa83-d343-489e-9380-008d02156140-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c7f7fa83-d343-489e-9380-008d02156140\") " pod="openstack/glance-default-internal-api-0" Jan 26 13:20:05 crc kubenswrapper[4844]: I0126 13:20:05.024400 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7f7fa83-d343-489e-9380-008d02156140-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c7f7fa83-d343-489e-9380-008d02156140\") " pod="openstack/glance-default-internal-api-0" Jan 26 13:20:05 crc kubenswrapper[4844]: I0126 13:20:05.024426 4844 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7f7fa83-d343-489e-9380-008d02156140-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c7f7fa83-d343-489e-9380-008d02156140\") " pod="openstack/glance-default-internal-api-0" Jan 26 13:20:05 crc kubenswrapper[4844]: I0126 13:20:05.024479 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pckn\" (UniqueName: \"kubernetes.io/projected/c7f7fa83-d343-489e-9380-008d02156140-kube-api-access-9pckn\") pod \"glance-default-internal-api-0\" (UID: \"c7f7fa83-d343-489e-9380-008d02156140\") " pod="openstack/glance-default-internal-api-0" Jan 26 13:20:05 crc kubenswrapper[4844]: I0126 13:20:05.024512 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"c7f7fa83-d343-489e-9380-008d02156140\") " pod="openstack/glance-default-internal-api-0" Jan 26 13:20:05 crc kubenswrapper[4844]: I0126 13:20:05.024536 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7f7fa83-d343-489e-9380-008d02156140-logs\") pod \"glance-default-internal-api-0\" (UID: \"c7f7fa83-d343-489e-9380-008d02156140\") " pod="openstack/glance-default-internal-api-0" Jan 26 13:20:05 crc kubenswrapper[4844]: I0126 13:20:05.029131 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"4488efbb-d7e7-42cc-a9bc-18e471c5ac31\") " pod="openstack/glance-default-external-api-0" Jan 26 13:20:05 crc kubenswrapper[4844]: I0126 13:20:05.129053 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 26 13:20:05 crc kubenswrapper[4844]: I0126 13:20:05.129754 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7f7fa83-d343-489e-9380-008d02156140-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c7f7fa83-d343-489e-9380-008d02156140\") " pod="openstack/glance-default-internal-api-0" Jan 26 13:20:05 crc kubenswrapper[4844]: I0126 13:20:05.129805 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7f7fa83-d343-489e-9380-008d02156140-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c7f7fa83-d343-489e-9380-008d02156140\") " pod="openstack/glance-default-internal-api-0" Jan 26 13:20:05 crc kubenswrapper[4844]: I0126 13:20:05.129835 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7f7fa83-d343-489e-9380-008d02156140-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c7f7fa83-d343-489e-9380-008d02156140\") " pod="openstack/glance-default-internal-api-0" Jan 26 13:20:05 crc kubenswrapper[4844]: I0126 13:20:05.129884 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9pckn\" (UniqueName: \"kubernetes.io/projected/c7f7fa83-d343-489e-9380-008d02156140-kube-api-access-9pckn\") pod \"glance-default-internal-api-0\" (UID: \"c7f7fa83-d343-489e-9380-008d02156140\") " pod="openstack/glance-default-internal-api-0" Jan 26 13:20:05 crc kubenswrapper[4844]: I0126 13:20:05.129908 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"c7f7fa83-d343-489e-9380-008d02156140\") " pod="openstack/glance-default-internal-api-0" Jan 26 13:20:05 crc kubenswrapper[4844]: I0126 13:20:05.129926 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7f7fa83-d343-489e-9380-008d02156140-logs\") pod \"glance-default-internal-api-0\" (UID: \"c7f7fa83-d343-489e-9380-008d02156140\") " pod="openstack/glance-default-internal-api-0" Jan 26 13:20:05 crc kubenswrapper[4844]: I0126 13:20:05.130030 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c7f7fa83-d343-489e-9380-008d02156140-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c7f7fa83-d343-489e-9380-008d02156140\") " pod="openstack/glance-default-internal-api-0" Jan 26 13:20:05 crc kubenswrapper[4844]: I0126 13:20:05.130456 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c7f7fa83-d343-489e-9380-008d02156140-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c7f7fa83-d343-489e-9380-008d02156140\") " pod="openstack/glance-default-internal-api-0" Jan 26 13:20:05 crc kubenswrapper[4844]: I0126 13:20:05.136747 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7f7fa83-d343-489e-9380-008d02156140-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c7f7fa83-d343-489e-9380-008d02156140\") " pod="openstack/glance-default-internal-api-0" Jan 26 13:20:05 crc kubenswrapper[4844]: I0126 
13:20:05.137136 4844 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"c7f7fa83-d343-489e-9380-008d02156140\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-internal-api-0" Jan 26 13:20:05 crc kubenswrapper[4844]: I0126 13:20:05.141408 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7f7fa83-d343-489e-9380-008d02156140-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c7f7fa83-d343-489e-9380-008d02156140\") " pod="openstack/glance-default-internal-api-0" Jan 26 13:20:05 crc kubenswrapper[4844]: I0126 13:20:05.141694 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7f7fa83-d343-489e-9380-008d02156140-logs\") pod \"glance-default-internal-api-0\" (UID: \"c7f7fa83-d343-489e-9380-008d02156140\") " pod="openstack/glance-default-internal-api-0" Jan 26 13:20:05 crc kubenswrapper[4844]: I0126 13:20:05.141743 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7f7fa83-d343-489e-9380-008d02156140-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c7f7fa83-d343-489e-9380-008d02156140\") " pod="openstack/glance-default-internal-api-0" Jan 26 13:20:05 crc kubenswrapper[4844]: I0126 13:20:05.184198 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9pckn\" (UniqueName: \"kubernetes.io/projected/c7f7fa83-d343-489e-9380-008d02156140-kube-api-access-9pckn\") pod \"glance-default-internal-api-0\" (UID: \"c7f7fa83-d343-489e-9380-008d02156140\") " pod="openstack/glance-default-internal-api-0" Jan 26 13:20:05 crc kubenswrapper[4844]: I0126 13:20:05.188234 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"c7f7fa83-d343-489e-9380-008d02156140\") " pod="openstack/glance-default-internal-api-0" Jan 26 13:20:05 crc kubenswrapper[4844]: I0126 13:20:05.244091 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 26 13:20:05 crc kubenswrapper[4844]: I0126 13:20:05.345331 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7b9b459575-mvgv5" Jan 26 13:20:05 crc kubenswrapper[4844]: I0126 13:20:05.347777 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9db17c7b-8322-4488-a47b-e68d66597d6d" path="/var/lib/kubelet/pods/9db17c7b-8322-4488-a47b-e68d66597d6d/volumes" Jan 26 13:20:05 crc kubenswrapper[4844]: I0126 13:20:05.348279 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd10a394-bca1-4dd2-9441-2c9d4919f35e" path="/var/lib/kubelet/pods/bd10a394-bca1-4dd2-9441-2c9d4919f35e/volumes" Jan 26 13:20:05 crc kubenswrapper[4844]: I0126 13:20:05.448379 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s6gq5\" (UniqueName: \"kubernetes.io/projected/374b2eea-9304-4d29-8cbf-7c0702f2fce8-kube-api-access-s6gq5\") pod \"374b2eea-9304-4d29-8cbf-7c0702f2fce8\" (UID: \"374b2eea-9304-4d29-8cbf-7c0702f2fce8\") " Jan 26 13:20:05 crc kubenswrapper[4844]: I0126 13:20:05.448751 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/374b2eea-9304-4d29-8cbf-7c0702f2fce8-ovsdbserver-sb\") pod \"374b2eea-9304-4d29-8cbf-7c0702f2fce8\" (UID: \"374b2eea-9304-4d29-8cbf-7c0702f2fce8\") " Jan 26 13:20:05 crc kubenswrapper[4844]: I0126 13:20:05.448857 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/374b2eea-9304-4d29-8cbf-7c0702f2fce8-config\") pod \"374b2eea-9304-4d29-8cbf-7c0702f2fce8\" (UID: \"374b2eea-9304-4d29-8cbf-7c0702f2fce8\") " Jan 26 13:20:05 crc kubenswrapper[4844]: I0126 13:20:05.448926 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/374b2eea-9304-4d29-8cbf-7c0702f2fce8-dns-swift-storage-0\") pod \"374b2eea-9304-4d29-8cbf-7c0702f2fce8\" (UID: \"374b2eea-9304-4d29-8cbf-7c0702f2fce8\") " Jan 26 13:20:05 crc kubenswrapper[4844]: I0126 13:20:05.449013 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/374b2eea-9304-4d29-8cbf-7c0702f2fce8-ovsdbserver-nb\") pod \"374b2eea-9304-4d29-8cbf-7c0702f2fce8\" (UID: \"374b2eea-9304-4d29-8cbf-7c0702f2fce8\") " Jan 26 13:20:05 crc kubenswrapper[4844]: I0126 13:20:05.449038 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/374b2eea-9304-4d29-8cbf-7c0702f2fce8-dns-svc\") pod \"374b2eea-9304-4d29-8cbf-7c0702f2fce8\" (UID: \"374b2eea-9304-4d29-8cbf-7c0702f2fce8\") " Jan 26 13:20:05 crc kubenswrapper[4844]: I0126 13:20:05.484790 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/374b2eea-9304-4d29-8cbf-7c0702f2fce8-kube-api-access-s6gq5" (OuterVolumeSpecName: "kube-api-access-s6gq5") pod "374b2eea-9304-4d29-8cbf-7c0702f2fce8" (UID: "374b2eea-9304-4d29-8cbf-7c0702f2fce8"). InnerVolumeSpecName "kube-api-access-s6gq5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:20:05 crc kubenswrapper[4844]: I0126 13:20:05.511828 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/374b2eea-9304-4d29-8cbf-7c0702f2fce8-config" (OuterVolumeSpecName: "config") pod "374b2eea-9304-4d29-8cbf-7c0702f2fce8" (UID: "374b2eea-9304-4d29-8cbf-7c0702f2fce8"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:20:05 crc kubenswrapper[4844]: I0126 13:20:05.537390 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/374b2eea-9304-4d29-8cbf-7c0702f2fce8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "374b2eea-9304-4d29-8cbf-7c0702f2fce8" (UID: "374b2eea-9304-4d29-8cbf-7c0702f2fce8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:20:05 crc kubenswrapper[4844]: I0126 13:20:05.562027 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/374b2eea-9304-4d29-8cbf-7c0702f2fce8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "374b2eea-9304-4d29-8cbf-7c0702f2fce8" (UID: "374b2eea-9304-4d29-8cbf-7c0702f2fce8"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:20:05 crc kubenswrapper[4844]: I0126 13:20:05.565399 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s6gq5\" (UniqueName: \"kubernetes.io/projected/374b2eea-9304-4d29-8cbf-7c0702f2fce8-kube-api-access-s6gq5\") on node \"crc\" DevicePath \"\"" Jan 26 13:20:05 crc kubenswrapper[4844]: I0126 13:20:05.565430 4844 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/374b2eea-9304-4d29-8cbf-7c0702f2fce8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 13:20:05 crc kubenswrapper[4844]: I0126 13:20:05.565440 4844 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/374b2eea-9304-4d29-8cbf-7c0702f2fce8-config\") on node \"crc\" DevicePath \"\"" Jan 26 13:20:05 crc kubenswrapper[4844]: I0126 13:20:05.565448 4844 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/374b2eea-9304-4d29-8cbf-7c0702f2fce8-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 13:20:05 crc kubenswrapper[4844]: I0126 13:20:05.577217 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/374b2eea-9304-4d29-8cbf-7c0702f2fce8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "374b2eea-9304-4d29-8cbf-7c0702f2fce8" (UID: "374b2eea-9304-4d29-8cbf-7c0702f2fce8"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:20:05 crc kubenswrapper[4844]: I0126 13:20:05.598578 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/374b2eea-9304-4d29-8cbf-7c0702f2fce8-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "374b2eea-9304-4d29-8cbf-7c0702f2fce8" (UID: "374b2eea-9304-4d29-8cbf-7c0702f2fce8"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:20:05 crc kubenswrapper[4844]: I0126 13:20:05.667031 4844 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/374b2eea-9304-4d29-8cbf-7c0702f2fce8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 13:20:05 crc kubenswrapper[4844]: I0126 13:20:05.667082 4844 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/374b2eea-9304-4d29-8cbf-7c0702f2fce8-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 26 13:20:05 crc kubenswrapper[4844]: I0126 13:20:05.669439 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-584dfd9675-8wzdw"] Jan 26 13:20:05 crc kubenswrapper[4844]: I0126 13:20:05.889208 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-nw6hp" Jan 26 13:20:05 crc kubenswrapper[4844]: I0126 13:20:05.889375 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-nw6hp" Jan 26 13:20:05 crc kubenswrapper[4844]: I0126 13:20:05.926065 4844 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-f984df9c6-m8lct" podUID="2f336c66-c9c1-4764-8f55-a6fd70f01790" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.159:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.159:8443: connect: connection refused" Jan 26 13:20:05 crc kubenswrapper[4844]: I0126 13:20:05.985841 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b28528a5-6d16-4775-89eb-5f0e00b4afd1","Type":"ContainerStarted","Data":"b42db70088eb86e462bac2a7ee6c35302dc3fe653ff83e9ba986edda37f0ff80"} Jan 26 13:20:05 crc kubenswrapper[4844]: I0126 13:20:05.989201 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"d2ba6a95-767f-4589-8dc9-e124e9be4fb4","Type":"ContainerStarted","Data":"e403c6da4f46560f63044b5094a09e99cdbaff09ff677f8628b111b283d9b670"} Jan 26 13:20:05 crc kubenswrapper[4844]: I0126 13:20:05.989318 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="d2ba6a95-767f-4589-8dc9-e124e9be4fb4" containerName="cinder-api-log" containerID="cri-o://3f9a2b8bf982ab015fb60f7c7f785bf62b1cca0b666990a9b68377f548735595" gracePeriod=30 Jan 26 13:20:05 crc kubenswrapper[4844]: I0126 13:20:05.989383 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 26 13:20:05 crc kubenswrapper[4844]: I0126 13:20:05.989695 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="d2ba6a95-767f-4589-8dc9-e124e9be4fb4" containerName="cinder-api" containerID="cri-o://e403c6da4f46560f63044b5094a09e99cdbaff09ff677f8628b111b283d9b670" gracePeriod=30 Jan 26 13:20:06 crc kubenswrapper[4844]: I0126 13:20:06.012971 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b9b459575-mvgv5" event={"ID":"374b2eea-9304-4d29-8cbf-7c0702f2fce8","Type":"ContainerDied","Data":"d96399e6ef833aeb67c3ca66dd809ab4b292b4a174fc235b1f7672581897845a"} Jan 26 13:20:06 crc kubenswrapper[4844]: I0126 13:20:06.013022 4844 scope.go:117] "RemoveContainer" containerID="fe3d548c6c9e7fb8bffe20d595bf3ae040cc732baba2c4c10f51615d05bc21e0" Jan 26 13:20:06 crc kubenswrapper[4844]: I0126 13:20:06.013132 4844 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b9b459575-mvgv5" Jan 26 13:20:06 crc kubenswrapper[4844]: I0126 13:20:06.115815 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-584dfd9675-8wzdw" event={"ID":"955c4df0-924d-439d-8a58-66f49e93cf44","Type":"ContainerStarted","Data":"369c0b626029733f327784afd8bffda814bd8530300a88f8529930dc66370c5e"} Jan 26 13:20:06 crc kubenswrapper[4844]: I0126 13:20:06.141232 4844 generic.go:334] "Generic (PLEG): container finished" podID="2f336c66-c9c1-4764-8f55-a6fd70f01790" containerID="c6ebce027282a49648d65f221d8df430e516930ebe722a6821d99749d3838a00" exitCode=0 Jan 26 13:20:06 crc kubenswrapper[4844]: I0126 13:20:06.141349 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-f984df9c6-m8lct" event={"ID":"2f336c66-c9c1-4764-8f55-a6fd70f01790","Type":"ContainerDied","Data":"c6ebce027282a49648d65f221d8df430e516930ebe722a6821d99749d3838a00"} Jan 26 13:20:06 crc kubenswrapper[4844]: I0126 13:20:06.177328 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-bb4bbcbbd-hnxlf" event={"ID":"013c2624-05ec-49ef-85e2-5f5e155ee687","Type":"ContainerStarted","Data":"a52d32b289a4c0e01a4126b1de9ef0e5e6fb384989b7d29538c65dbf1bb6f966"} Jan 26 13:20:06 crc kubenswrapper[4844]: I0126 13:20:06.182755 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-bb4bbcbbd-hnxlf" Jan 26 13:20:06 crc kubenswrapper[4844]: I0126 13:20:06.190369 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=7.190347625 podStartE2EDuration="7.190347625s" podCreationTimestamp="2026-01-26 13:19:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 13:20:06.078141261 +0000 UTC m=+2183.011508873" watchObservedRunningTime="2026-01-26 13:20:06.190347625 +0000 UTC m=+2183.123715237" Jan 26 13:20:06 crc kubenswrapper[4844]: I0126 13:20:06.196243 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"388147f6-5b13-4111-9d1f-fe317038852d","Type":"ContainerStarted","Data":"622c5f1bda16149b59c0bc280898bd71a037aa92dba8fea1e55f57776f6eaa73"} Jan 26 13:20:06 crc kubenswrapper[4844]: I0126 13:20:06.243860 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 13:20:06 crc kubenswrapper[4844]: I0126 13:20:06.294013 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7b9b459575-mvgv5"] Jan 26 13:20:06 crc kubenswrapper[4844]: I0126 13:20:06.322790 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7b9b459575-mvgv5"] Jan 26 13:20:06 crc kubenswrapper[4844]: I0126 13:20:06.334775 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-bb4bbcbbd-hnxlf" podStartSLOduration=6.334756646 podStartE2EDuration="6.334756646s" podCreationTimestamp="2026-01-26 13:20:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 13:20:06.309815163 +0000 UTC m=+2183.243182775" watchObservedRunningTime="2026-01-26 13:20:06.334756646 +0000 UTC m=+2183.268124258" Jan 26 13:20:06 crc kubenswrapper[4844]: I0126 13:20:06.371130 4844 patch_prober.go:28] interesting pod/machine-config-daemon-j7r9j 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 13:20:06 crc kubenswrapper[4844]: I0126 13:20:06.371177 4844 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 13:20:06 crc kubenswrapper[4844]: I0126 13:20:06.381841 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 13:20:06 crc kubenswrapper[4844]: I0126 13:20:06.451747 4844 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-6666d497b6-ksrz2" podUID="d36d4c6a-dac1-4d35-bd0b-597c8e5ffaf5" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.171:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 13:20:06 crc kubenswrapper[4844]: E0126 13:20:06.689152 4844 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podef403703_395e_4db1_a9f5_a8e011e39ff2.slice\": RecentStats: unable to find data in memory cache]" Jan 26 13:20:06 crc kubenswrapper[4844]: I0126 13:20:06.794006 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-58b8c47bc6-5s5z9" Jan 26 13:20:06 crc kubenswrapper[4844]: I0126 13:20:06.961676 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5fcff84d65-flkjh"] Jan 26 13:20:06 crc kubenswrapper[4844]: E0126 13:20:06.962305 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="374b2eea-9304-4d29-8cbf-7c0702f2fce8" containerName="init" Jan 26 13:20:06 crc kubenswrapper[4844]: I0126 13:20:06.962318 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="374b2eea-9304-4d29-8cbf-7c0702f2fce8" containerName="init" Jan 26 13:20:06 crc kubenswrapper[4844]: I0126 13:20:06.962515 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="374b2eea-9304-4d29-8cbf-7c0702f2fce8" containerName="init" Jan 26 13:20:06 crc kubenswrapper[4844]: I0126 13:20:06.963486 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5fcff84d65-flkjh" Jan 26 13:20:06 crc kubenswrapper[4844]: I0126 13:20:06.966928 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 26 13:20:06 crc kubenswrapper[4844]: I0126 13:20:06.967497 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 26 13:20:06 crc kubenswrapper[4844]: I0126 13:20:06.972847 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5fcff84d65-flkjh"] Jan 26 13:20:07 crc kubenswrapper[4844]: I0126 13:20:07.040084 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/91acccd0-7b82-4ee7-afa7-549b7eeae8b6-internal-tls-certs\") pod \"neutron-5fcff84d65-flkjh\" (UID: \"91acccd0-7b82-4ee7-afa7-549b7eeae8b6\") " pod="openstack/neutron-5fcff84d65-flkjh" Jan 26 13:20:07 crc kubenswrapper[4844]: I0126 13:20:07.040146 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mh6zf\" (UniqueName: \"kubernetes.io/projected/91acccd0-7b82-4ee7-afa7-549b7eeae8b6-kube-api-access-mh6zf\") pod \"neutron-5fcff84d65-flkjh\" (UID: \"91acccd0-7b82-4ee7-afa7-549b7eeae8b6\") " pod="openstack/neutron-5fcff84d65-flkjh" Jan 26 13:20:07 crc kubenswrapper[4844]: I0126 13:20:07.040171 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/91acccd0-7b82-4ee7-afa7-549b7eeae8b6-config\") pod \"neutron-5fcff84d65-flkjh\" (UID: \"91acccd0-7b82-4ee7-afa7-549b7eeae8b6\") " pod="openstack/neutron-5fcff84d65-flkjh" Jan 26 13:20:07 crc kubenswrapper[4844]: I0126 13:20:07.040222 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/91acccd0-7b82-4ee7-afa7-549b7eeae8b6-httpd-config\") pod \"neutron-5fcff84d65-flkjh\" (UID: \"91acccd0-7b82-4ee7-afa7-549b7eeae8b6\") " pod="openstack/neutron-5fcff84d65-flkjh" Jan 26 13:20:07 crc kubenswrapper[4844]: I0126 13:20:07.040241 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91acccd0-7b82-4ee7-afa7-549b7eeae8b6-combined-ca-bundle\") pod \"neutron-5fcff84d65-flkjh\" (UID: \"91acccd0-7b82-4ee7-afa7-549b7eeae8b6\") " pod="openstack/neutron-5fcff84d65-flkjh" Jan 26 13:20:07 crc kubenswrapper[4844]: I0126 13:20:07.040284 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/91acccd0-7b82-4ee7-afa7-549b7eeae8b6-ovndb-tls-certs\") pod \"neutron-5fcff84d65-flkjh\" (UID: \"91acccd0-7b82-4ee7-afa7-549b7eeae8b6\") " pod="openstack/neutron-5fcff84d65-flkjh" Jan 26 13:20:07 crc kubenswrapper[4844]: I0126 13:20:07.040309 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/91acccd0-7b82-4ee7-afa7-549b7eeae8b6-public-tls-certs\") pod \"neutron-5fcff84d65-flkjh\" (UID: \"91acccd0-7b82-4ee7-afa7-549b7eeae8b6\") " pod="openstack/neutron-5fcff84d65-flkjh" Jan 26 13:20:07 crc kubenswrapper[4844]: I0126 13:20:07.114118 4844 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-nw6hp" 
podUID="0b53d3b2-56e9-427c-8dcd-e5487cecc4f9" containerName="registry-server" probeResult="failure" output=< Jan 26 13:20:07 crc kubenswrapper[4844]: timeout: failed to connect service ":50051" within 1s Jan 26 13:20:07 crc kubenswrapper[4844]: > Jan 26 13:20:07 crc kubenswrapper[4844]: I0126 13:20:07.144230 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/91acccd0-7b82-4ee7-afa7-549b7eeae8b6-internal-tls-certs\") pod \"neutron-5fcff84d65-flkjh\" (UID: \"91acccd0-7b82-4ee7-afa7-549b7eeae8b6\") " pod="openstack/neutron-5fcff84d65-flkjh" Jan 26 13:20:07 crc kubenswrapper[4844]: I0126 13:20:07.144320 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mh6zf\" (UniqueName: \"kubernetes.io/projected/91acccd0-7b82-4ee7-afa7-549b7eeae8b6-kube-api-access-mh6zf\") pod \"neutron-5fcff84d65-flkjh\" (UID: \"91acccd0-7b82-4ee7-afa7-549b7eeae8b6\") " pod="openstack/neutron-5fcff84d65-flkjh" Jan 26 13:20:07 crc kubenswrapper[4844]: I0126 13:20:07.144347 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/91acccd0-7b82-4ee7-afa7-549b7eeae8b6-config\") pod \"neutron-5fcff84d65-flkjh\" (UID: \"91acccd0-7b82-4ee7-afa7-549b7eeae8b6\") " pod="openstack/neutron-5fcff84d65-flkjh" Jan 26 13:20:07 crc kubenswrapper[4844]: I0126 13:20:07.144406 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/91acccd0-7b82-4ee7-afa7-549b7eeae8b6-httpd-config\") pod \"neutron-5fcff84d65-flkjh\" (UID: \"91acccd0-7b82-4ee7-afa7-549b7eeae8b6\") " pod="openstack/neutron-5fcff84d65-flkjh" Jan 26 13:20:07 crc kubenswrapper[4844]: I0126 13:20:07.144428 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91acccd0-7b82-4ee7-afa7-549b7eeae8b6-combined-ca-bundle\") pod \"neutron-5fcff84d65-flkjh\" (UID: \"91acccd0-7b82-4ee7-afa7-549b7eeae8b6\") " pod="openstack/neutron-5fcff84d65-flkjh" Jan 26 13:20:07 crc kubenswrapper[4844]: I0126 13:20:07.144472 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/91acccd0-7b82-4ee7-afa7-549b7eeae8b6-ovndb-tls-certs\") pod \"neutron-5fcff84d65-flkjh\" (UID: \"91acccd0-7b82-4ee7-afa7-549b7eeae8b6\") " pod="openstack/neutron-5fcff84d65-flkjh" Jan 26 13:20:07 crc kubenswrapper[4844]: I0126 13:20:07.144502 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/91acccd0-7b82-4ee7-afa7-549b7eeae8b6-public-tls-certs\") pod \"neutron-5fcff84d65-flkjh\" (UID: \"91acccd0-7b82-4ee7-afa7-549b7eeae8b6\") " pod="openstack/neutron-5fcff84d65-flkjh" Jan 26 13:20:07 crc kubenswrapper[4844]: I0126 13:20:07.170818 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/91acccd0-7b82-4ee7-afa7-549b7eeae8b6-config\") pod \"neutron-5fcff84d65-flkjh\" (UID: \"91acccd0-7b82-4ee7-afa7-549b7eeae8b6\") " pod="openstack/neutron-5fcff84d65-flkjh" Jan 26 13:20:07 crc kubenswrapper[4844]: I0126 13:20:07.170835 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/91acccd0-7b82-4ee7-afa7-549b7eeae8b6-httpd-config\") pod \"neutron-5fcff84d65-flkjh\" (UID: 
\"91acccd0-7b82-4ee7-afa7-549b7eeae8b6\") " pod="openstack/neutron-5fcff84d65-flkjh" Jan 26 13:20:07 crc kubenswrapper[4844]: I0126 13:20:07.171022 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91acccd0-7b82-4ee7-afa7-549b7eeae8b6-combined-ca-bundle\") pod \"neutron-5fcff84d65-flkjh\" (UID: \"91acccd0-7b82-4ee7-afa7-549b7eeae8b6\") " pod="openstack/neutron-5fcff84d65-flkjh" Jan 26 13:20:07 crc kubenswrapper[4844]: I0126 13:20:07.171383 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/91acccd0-7b82-4ee7-afa7-549b7eeae8b6-ovndb-tls-certs\") pod \"neutron-5fcff84d65-flkjh\" (UID: \"91acccd0-7b82-4ee7-afa7-549b7eeae8b6\") " pod="openstack/neutron-5fcff84d65-flkjh" Jan 26 13:20:07 crc kubenswrapper[4844]: I0126 13:20:07.185710 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/91acccd0-7b82-4ee7-afa7-549b7eeae8b6-public-tls-certs\") pod \"neutron-5fcff84d65-flkjh\" (UID: \"91acccd0-7b82-4ee7-afa7-549b7eeae8b6\") " pod="openstack/neutron-5fcff84d65-flkjh" Jan 26 13:20:07 crc kubenswrapper[4844]: I0126 13:20:07.185726 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mh6zf\" (UniqueName: \"kubernetes.io/projected/91acccd0-7b82-4ee7-afa7-549b7eeae8b6-kube-api-access-mh6zf\") pod \"neutron-5fcff84d65-flkjh\" (UID: \"91acccd0-7b82-4ee7-afa7-549b7eeae8b6\") " pod="openstack/neutron-5fcff84d65-flkjh" Jan 26 13:20:07 crc kubenswrapper[4844]: I0126 13:20:07.194909 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/91acccd0-7b82-4ee7-afa7-549b7eeae8b6-internal-tls-certs\") pod \"neutron-5fcff84d65-flkjh\" (UID: \"91acccd0-7b82-4ee7-afa7-549b7eeae8b6\") " pod="openstack/neutron-5fcff84d65-flkjh" Jan 26 13:20:07 crc kubenswrapper[4844]: I0126 13:20:07.244802 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c7f7fa83-d343-489e-9380-008d02156140","Type":"ContainerStarted","Data":"15a19b852561568c355efba6455c20572a80c3d0dbe0574e75d4d54c9ab11302"} Jan 26 13:20:07 crc kubenswrapper[4844]: I0126 13:20:07.289760 4844 generic.go:334] "Generic (PLEG): container finished" podID="955c4df0-924d-439d-8a58-66f49e93cf44" containerID="e69fcb823d2f2ba4ebd708ec19d6a0178f2c712a5a302116f165693cdaf5ad60" exitCode=0 Jan 26 13:20:07 crc kubenswrapper[4844]: I0126 13:20:07.289829 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-584dfd9675-8wzdw" event={"ID":"955c4df0-924d-439d-8a58-66f49e93cf44","Type":"ContainerDied","Data":"e69fcb823d2f2ba4ebd708ec19d6a0178f2c712a5a302116f165693cdaf5ad60"} Jan 26 13:20:07 crc kubenswrapper[4844]: I0126 13:20:07.314855 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4488efbb-d7e7-42cc-a9bc-18e471c5ac31","Type":"ContainerStarted","Data":"11763097bda3d370b65a6c3c63378e6c03d2fbed299e052d42fdd471fd3506d5"} Jan 26 13:20:07 crc kubenswrapper[4844]: I0126 13:20:07.349809 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="374b2eea-9304-4d29-8cbf-7c0702f2fce8" path="/var/lib/kubelet/pods/374b2eea-9304-4d29-8cbf-7c0702f2fce8/volumes" Jan 26 13:20:07 crc kubenswrapper[4844]: I0126 13:20:07.357661 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"388147f6-5b13-4111-9d1f-fe317038852d","Type":"ContainerStarted","Data":"c63f9648a87ce352c12d0c8a5c8ab3586be5e8ccaa9d12b3be1eb58e72199be6"} Jan 26 13:20:07 crc kubenswrapper[4844]: I0126 13:20:07.367053 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5fcff84d65-flkjh" Jan 26 13:20:07 crc kubenswrapper[4844]: I0126 13:20:07.377876 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b28528a5-6d16-4775-89eb-5f0e00b4afd1","Type":"ContainerStarted","Data":"0f9544d3a9e10d95d303c6ebb7e711a8e223f4070572b994ed7bda09d182e8d3"} Jan 26 13:20:07 crc kubenswrapper[4844]: I0126 13:20:07.392255 4844 generic.go:334] "Generic (PLEG): container finished" podID="d2ba6a95-767f-4589-8dc9-e124e9be4fb4" containerID="3f9a2b8bf982ab015fb60f7c7f785bf62b1cca0b666990a9b68377f548735595" exitCode=143 Jan 26 13:20:07 crc kubenswrapper[4844]: I0126 13:20:07.392325 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"d2ba6a95-767f-4589-8dc9-e124e9be4fb4","Type":"ContainerDied","Data":"3f9a2b8bf982ab015fb60f7c7f785bf62b1cca0b666990a9b68377f548735595"} Jan 26 13:20:07 crc kubenswrapper[4844]: I0126 13:20:07.411946 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=7.192251859 podStartE2EDuration="8.411931027s" podCreationTimestamp="2026-01-26 13:19:59 +0000 UTC" firstStartedPulling="2026-01-26 13:20:00.906013791 +0000 UTC m=+2177.839381403" lastFinishedPulling="2026-01-26 13:20:02.125692959 +0000 UTC m=+2179.059060571" observedRunningTime="2026-01-26 13:20:07.407623853 +0000 UTC m=+2184.340991465" watchObservedRunningTime="2026-01-26 13:20:07.411931027 +0000 UTC m=+2184.345298639" Jan 26 13:20:07 crc kubenswrapper[4844]: I0126 13:20:07.827947 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6666d497b6-ksrz2" Jan 26 13:20:08 crc kubenswrapper[4844]: I0126 13:20:08.299116 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5fcff84d65-flkjh"] Jan 26 13:20:08 crc kubenswrapper[4844]: I0126 13:20:08.416367 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c7f7fa83-d343-489e-9380-008d02156140","Type":"ContainerStarted","Data":"dde992974cdf9dc15a033399ff094565c75d2691df5eb070d76b3900336dc959"} Jan 26 13:20:08 crc kubenswrapper[4844]: I0126 13:20:08.421304 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4488efbb-d7e7-42cc-a9bc-18e471c5ac31","Type":"ContainerStarted","Data":"6a33e249b4c0b9ea3a322754dfcd3feccdafcbdd0993b8dcc626f0998f566610"} Jan 26 13:20:08 crc kubenswrapper[4844]: I0126 13:20:08.425122 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5fcff84d65-flkjh" event={"ID":"91acccd0-7b82-4ee7-afa7-549b7eeae8b6","Type":"ContainerStarted","Data":"b8b827285a1d2addfb9c5db301e573a4217e45beca3cee600ff0b5a5c4692a0e"} Jan 26 13:20:08 crc kubenswrapper[4844]: I0126 13:20:08.744927 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-58b8c47bc6-5s5z9" Jan 26 13:20:08 crc kubenswrapper[4844]: I0126 13:20:08.843199 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6666d497b6-ksrz2"] Jan 26 13:20:08 crc kubenswrapper[4844]: I0126 13:20:08.843572 4844 
prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 13:20:08 crc kubenswrapper[4844]: I0126 13:20:08.843807 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6666d497b6-ksrz2" podUID="d36d4c6a-dac1-4d35-bd0b-597c8e5ffaf5" containerName="barbican-api-log" containerID="cri-o://459349c5b6003c5194b259d9dfe845ec6cbdb10a3b68864239804bd4ba2b2223" gracePeriod=30 Jan 26 13:20:08 crc kubenswrapper[4844]: I0126 13:20:08.844266 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6666d497b6-ksrz2" podUID="d36d4c6a-dac1-4d35-bd0b-597c8e5ffaf5" containerName="barbican-api" containerID="cri-o://97cf56503516e458be0772937f512301e87641d6a056953eb68a1e6f1d435a5f" gracePeriod=30 Jan 26 13:20:08 crc kubenswrapper[4844]: I0126 13:20:08.900733 4844 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-58b8c47bc6-5s5z9" podUID="7f2cf574-1917-4f2b-adba-02bcf6cb4dc8" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.172:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 26 13:20:09 crc kubenswrapper[4844]: I0126 13:20:09.044391 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-api-0" Jan 26 13:20:09 crc kubenswrapper[4844]: I0126 13:20:09.099932 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Jan 26 13:20:09 crc kubenswrapper[4844]: I0126 13:20:09.192767 4844 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-6666d497b6-ksrz2" podUID="d36d4c6a-dac1-4d35-bd0b-597c8e5ffaf5" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.171:9311/healthcheck\": EOF" Jan 26 13:20:09 crc kubenswrapper[4844]: I0126 13:20:09.195755 4844 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6666d497b6-ksrz2" podUID="d36d4c6a-dac1-4d35-bd0b-597c8e5ffaf5" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.171:9311/healthcheck\": EOF" Jan 26 13:20:09 crc kubenswrapper[4844]: I0126 13:20:09.196281 4844 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-6666d497b6-ksrz2" podUID="d36d4c6a-dac1-4d35-bd0b-597c8e5ffaf5" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.171:9311/healthcheck\": EOF" Jan 26 13:20:09 crc kubenswrapper[4844]: I0126 13:20:09.541356 4844 generic.go:334] "Generic (PLEG): container finished" podID="d36d4c6a-dac1-4d35-bd0b-597c8e5ffaf5" containerID="459349c5b6003c5194b259d9dfe845ec6cbdb10a3b68864239804bd4ba2b2223" exitCode=143 Jan 26 13:20:09 crc kubenswrapper[4844]: I0126 13:20:09.541701 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6666d497b6-ksrz2" event={"ID":"d36d4c6a-dac1-4d35-bd0b-597c8e5ffaf5","Type":"ContainerDied","Data":"459349c5b6003c5194b259d9dfe845ec6cbdb10a3b68864239804bd4ba2b2223"} Jan 26 13:20:09 crc kubenswrapper[4844]: I0126 13:20:09.596936 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 13:20:09 crc kubenswrapper[4844]: I0126 13:20:09.691624 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 13:20:09 crc kubenswrapper[4844]: I0126 13:20:09.714151 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/cinder-scheduler-0" Jan 26 13:20:09 crc kubenswrapper[4844]: I0126 13:20:09.716569 4844 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="b28528a5-6d16-4775-89eb-5f0e00b4afd1" containerName="cinder-scheduler" probeResult="failure" output="Get \"http://10.217.0.174:8080/\": dial tcp 10.217.0.174:8080: connect: connection refused" Jan 26 13:20:10 crc kubenswrapper[4844]: I0126 13:20:10.588434 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c7f7fa83-d343-489e-9380-008d02156140","Type":"ContainerStarted","Data":"7a6ae3e355074b935f547f5066d4f5d3985800acd5495a89d0ebcbfeef7bf21d"} Jan 26 13:20:10 crc kubenswrapper[4844]: I0126 13:20:10.588768 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="c7f7fa83-d343-489e-9380-008d02156140" containerName="glance-log" containerID="cri-o://dde992974cdf9dc15a033399ff094565c75d2691df5eb070d76b3900336dc959" gracePeriod=30 Jan 26 13:20:10 crc kubenswrapper[4844]: I0126 13:20:10.589088 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="c7f7fa83-d343-489e-9380-008d02156140" containerName="glance-httpd" containerID="cri-o://7a6ae3e355074b935f547f5066d4f5d3985800acd5495a89d0ebcbfeef7bf21d" gracePeriod=30 Jan 26 13:20:10 crc kubenswrapper[4844]: I0126 13:20:10.591568 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-584dfd9675-8wzdw" event={"ID":"955c4df0-924d-439d-8a58-66f49e93cf44","Type":"ContainerStarted","Data":"dd4ef9896a032c4f099137976f07aecb620fb6a4975a0ab3dfd0a22073c86bdc"} Jan 26 13:20:10 crc kubenswrapper[4844]: I0126 13:20:10.591897 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-584dfd9675-8wzdw" Jan 26 13:20:10 crc kubenswrapper[4844]: I0126 13:20:10.593610 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4488efbb-d7e7-42cc-a9bc-18e471c5ac31","Type":"ContainerStarted","Data":"320ad7eaa6737ed1c9e54108dc927e17c4b2afbc8ffdeb0c1b99f295b0c8c665"} Jan 26 13:20:10 crc kubenswrapper[4844]: I0126 13:20:10.593841 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="4488efbb-d7e7-42cc-a9bc-18e471c5ac31" containerName="glance-log" containerID="cri-o://6a33e249b4c0b9ea3a322754dfcd3feccdafcbdd0993b8dcc626f0998f566610" gracePeriod=30 Jan 26 13:20:10 crc kubenswrapper[4844]: I0126 13:20:10.594007 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="4488efbb-d7e7-42cc-a9bc-18e471c5ac31" containerName="glance-httpd" containerID="cri-o://320ad7eaa6737ed1c9e54108dc927e17c4b2afbc8ffdeb0c1b99f295b0c8c665" gracePeriod=30 Jan 26 13:20:10 crc kubenswrapper[4844]: I0126 13:20:10.601059 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"388147f6-5b13-4111-9d1f-fe317038852d","Type":"ContainerStarted","Data":"652384a1a113107dec6c823ed50bdaaad3c621f614e3593b9879c6365df3e8c0"} Jan 26 13:20:10 crc kubenswrapper[4844]: I0126 13:20:10.601346 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 26 13:20:10 crc kubenswrapper[4844]: I0126 13:20:10.604849 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-5fcff84d65-flkjh" event={"ID":"91acccd0-7b82-4ee7-afa7-549b7eeae8b6","Type":"ContainerStarted","Data":"3768c70b04d3436f9753c2924da84f79bdf513600f0389379df10d2be53f624d"} Jan 26 13:20:10 crc kubenswrapper[4844]: I0126 13:20:10.604991 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5fcff84d65-flkjh" event={"ID":"91acccd0-7b82-4ee7-afa7-549b7eeae8b6","Type":"ContainerStarted","Data":"855b97178d318b53a4af0f12a32d81fd645d79c57dcdce6a492f562b78fef3ea"} Jan 26 13:20:10 crc kubenswrapper[4844]: I0126 13:20:10.605111 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5fcff84d65-flkjh" Jan 26 13:20:10 crc kubenswrapper[4844]: I0126 13:20:10.607460 4844 generic.go:334] "Generic (PLEG): container finished" podID="ed782618-8b69-4456-9aec-5184e765960f" containerID="f40661e9cae1344ff8df85b9eb11c5a53401a5c8932da25e88f55fc3d9a6f8f8" exitCode=1 Jan 26 13:20:10 crc kubenswrapper[4844]: I0126 13:20:10.607651 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"ed782618-8b69-4456-9aec-5184e765960f","Type":"ContainerDied","Data":"f40661e9cae1344ff8df85b9eb11c5a53401a5c8932da25e88f55fc3d9a6f8f8"} Jan 26 13:20:10 crc kubenswrapper[4844]: I0126 13:20:10.607836 4844 scope.go:117] "RemoveContainer" containerID="920a38a2c1e0977cbdcbd5e4c3757be17293c805c1c55b4e7ee718455c1317a2" Jan 26 13:20:10 crc kubenswrapper[4844]: I0126 13:20:10.608418 4844 scope.go:117] "RemoveContainer" containerID="f40661e9cae1344ff8df85b9eb11c5a53401a5c8932da25e88f55fc3d9a6f8f8" Jan 26 13:20:10 crc kubenswrapper[4844]: E0126 13:20:10.608895 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 20s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(ed782618-8b69-4456-9aec-5184e765960f)\"" pod="openstack/watcher-decision-engine-0" podUID="ed782618-8b69-4456-9aec-5184e765960f" Jan 26 13:20:10 crc kubenswrapper[4844]: I0126 13:20:10.613203 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=7.613190149 podStartE2EDuration="7.613190149s" podCreationTimestamp="2026-01-26 13:20:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 13:20:10.607146824 +0000 UTC m=+2187.540514436" watchObservedRunningTime="2026-01-26 13:20:10.613190149 +0000 UTC m=+2187.546557761" Jan 26 13:20:10 crc kubenswrapper[4844]: I0126 13:20:10.631337 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5fcff84d65-flkjh" podStartSLOduration=4.631321368 podStartE2EDuration="4.631321368s" podCreationTimestamp="2026-01-26 13:20:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 13:20:10.629083574 +0000 UTC m=+2187.562451176" watchObservedRunningTime="2026-01-26 13:20:10.631321368 +0000 UTC m=+2187.564688980" Jan 26 13:20:10 crc kubenswrapper[4844]: I0126 13:20:10.660359 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-584dfd9675-8wzdw" podStartSLOduration=7.660335929 podStartE2EDuration="7.660335929s" podCreationTimestamp="2026-01-26 13:20:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 13:20:10.646789252 +0000 UTC m=+2187.580156874" watchObservedRunningTime="2026-01-26 13:20:10.660335929 +0000 UTC m=+2187.593703541" Jan 26 13:20:10 crc kubenswrapper[4844]: I0126 13:20:10.675549 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=7.675529347 podStartE2EDuration="7.675529347s" podCreationTimestamp="2026-01-26 13:20:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 13:20:10.66531828 +0000 UTC m=+2187.598685882" watchObservedRunningTime="2026-01-26 13:20:10.675529347 +0000 UTC m=+2187.608896959" Jan 26 13:20:10 crc kubenswrapper[4844]: I0126 13:20:10.701377 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.680705046 podStartE2EDuration="11.701360021s" podCreationTimestamp="2026-01-26 13:19:59 +0000 UTC" firstStartedPulling="2026-01-26 13:20:01.179677838 +0000 UTC m=+2178.113045450" lastFinishedPulling="2026-01-26 13:20:09.200332813 +0000 UTC m=+2186.133700425" observedRunningTime="2026-01-26 13:20:10.693007279 +0000 UTC m=+2187.626374891" watchObservedRunningTime="2026-01-26 13:20:10.701360021 +0000 UTC m=+2187.634727633" Jan 26 13:20:11 crc kubenswrapper[4844]: I0126 13:20:11.650808 4844 generic.go:334] "Generic (PLEG): container finished" podID="4488efbb-d7e7-42cc-a9bc-18e471c5ac31" containerID="320ad7eaa6737ed1c9e54108dc927e17c4b2afbc8ffdeb0c1b99f295b0c8c665" exitCode=0 Jan 26 13:20:11 crc kubenswrapper[4844]: I0126 13:20:11.651030 4844 generic.go:334] "Generic (PLEG): container finished" podID="4488efbb-d7e7-42cc-a9bc-18e471c5ac31" containerID="6a33e249b4c0b9ea3a322754dfcd3feccdafcbdd0993b8dcc626f0998f566610" exitCode=143 Jan 26 13:20:11 crc kubenswrapper[4844]: I0126 13:20:11.651083 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4488efbb-d7e7-42cc-a9bc-18e471c5ac31","Type":"ContainerDied","Data":"320ad7eaa6737ed1c9e54108dc927e17c4b2afbc8ffdeb0c1b99f295b0c8c665"} Jan 26 13:20:11 crc kubenswrapper[4844]: I0126 13:20:11.651108 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4488efbb-d7e7-42cc-a9bc-18e471c5ac31","Type":"ContainerDied","Data":"6a33e249b4c0b9ea3a322754dfcd3feccdafcbdd0993b8dcc626f0998f566610"} Jan 26 13:20:11 crc kubenswrapper[4844]: I0126 13:20:11.706782 4844 generic.go:334] "Generic (PLEG): container finished" podID="c7f7fa83-d343-489e-9380-008d02156140" containerID="7a6ae3e355074b935f547f5066d4f5d3985800acd5495a89d0ebcbfeef7bf21d" exitCode=0 Jan 26 13:20:11 crc kubenswrapper[4844]: I0126 13:20:11.706817 4844 generic.go:334] "Generic (PLEG): container finished" podID="c7f7fa83-d343-489e-9380-008d02156140" containerID="dde992974cdf9dc15a033399ff094565c75d2691df5eb070d76b3900336dc959" exitCode=143 Jan 26 13:20:11 crc kubenswrapper[4844]: I0126 13:20:11.707751 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c7f7fa83-d343-489e-9380-008d02156140","Type":"ContainerDied","Data":"7a6ae3e355074b935f547f5066d4f5d3985800acd5495a89d0ebcbfeef7bf21d"} Jan 26 13:20:11 crc kubenswrapper[4844]: I0126 13:20:11.707783 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"c7f7fa83-d343-489e-9380-008d02156140","Type":"ContainerDied","Data":"dde992974cdf9dc15a033399ff094565c75d2691df5eb070d76b3900336dc959"} Jan 26 13:20:11 crc kubenswrapper[4844]: I0126 13:20:11.905456 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 26 13:20:11 crc kubenswrapper[4844]: I0126 13:20:11.962863 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"c7f7fa83-d343-489e-9380-008d02156140\" (UID: \"c7f7fa83-d343-489e-9380-008d02156140\") " Jan 26 13:20:11 crc kubenswrapper[4844]: I0126 13:20:11.963196 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c7f7fa83-d343-489e-9380-008d02156140-httpd-run\") pod \"c7f7fa83-d343-489e-9380-008d02156140\" (UID: \"c7f7fa83-d343-489e-9380-008d02156140\") " Jan 26 13:20:11 crc kubenswrapper[4844]: I0126 13:20:11.963251 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7f7fa83-d343-489e-9380-008d02156140-logs\") pod \"c7f7fa83-d343-489e-9380-008d02156140\" (UID: \"c7f7fa83-d343-489e-9380-008d02156140\") " Jan 26 13:20:11 crc kubenswrapper[4844]: I0126 13:20:11.963302 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7f7fa83-d343-489e-9380-008d02156140-scripts\") pod \"c7f7fa83-d343-489e-9380-008d02156140\" (UID: \"c7f7fa83-d343-489e-9380-008d02156140\") " Jan 26 13:20:11 crc kubenswrapper[4844]: I0126 13:20:11.963434 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9pckn\" (UniqueName: \"kubernetes.io/projected/c7f7fa83-d343-489e-9380-008d02156140-kube-api-access-9pckn\") pod \"c7f7fa83-d343-489e-9380-008d02156140\" (UID: \"c7f7fa83-d343-489e-9380-008d02156140\") " Jan 26 13:20:11 crc kubenswrapper[4844]: I0126 13:20:11.963514 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7f7fa83-d343-489e-9380-008d02156140-combined-ca-bundle\") pod \"c7f7fa83-d343-489e-9380-008d02156140\" (UID: \"c7f7fa83-d343-489e-9380-008d02156140\") " Jan 26 13:20:11 crc kubenswrapper[4844]: I0126 13:20:11.963548 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7f7fa83-d343-489e-9380-008d02156140-config-data\") pod \"c7f7fa83-d343-489e-9380-008d02156140\" (UID: \"c7f7fa83-d343-489e-9380-008d02156140\") " Jan 26 13:20:11 crc kubenswrapper[4844]: I0126 13:20:11.965205 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7f7fa83-d343-489e-9380-008d02156140-logs" (OuterVolumeSpecName: "logs") pod "c7f7fa83-d343-489e-9380-008d02156140" (UID: "c7f7fa83-d343-489e-9380-008d02156140"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:20:11 crc kubenswrapper[4844]: I0126 13:20:11.973316 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7f7fa83-d343-489e-9380-008d02156140-kube-api-access-9pckn" (OuterVolumeSpecName: "kube-api-access-9pckn") pod "c7f7fa83-d343-489e-9380-008d02156140" (UID: "c7f7fa83-d343-489e-9380-008d02156140"). 
InnerVolumeSpecName "kube-api-access-9pckn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:20:11 crc kubenswrapper[4844]: I0126 13:20:11.973718 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7f7fa83-d343-489e-9380-008d02156140-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "c7f7fa83-d343-489e-9380-008d02156140" (UID: "c7f7fa83-d343-489e-9380-008d02156140"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:20:11 crc kubenswrapper[4844]: I0126 13:20:11.977850 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7f7fa83-d343-489e-9380-008d02156140-scripts" (OuterVolumeSpecName: "scripts") pod "c7f7fa83-d343-489e-9380-008d02156140" (UID: "c7f7fa83-d343-489e-9380-008d02156140"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:20:11 crc kubenswrapper[4844]: I0126 13:20:11.981417 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "glance") pod "c7f7fa83-d343-489e-9380-008d02156140" (UID: "c7f7fa83-d343-489e-9380-008d02156140"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 26 13:20:12 crc kubenswrapper[4844]: I0126 13:20:12.014740 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7f7fa83-d343-489e-9380-008d02156140-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c7f7fa83-d343-489e-9380-008d02156140" (UID: "c7f7fa83-d343-489e-9380-008d02156140"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:20:12 crc kubenswrapper[4844]: I0126 13:20:12.016685 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Jan 26 13:20:12 crc kubenswrapper[4844]: I0126 13:20:12.016728 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Jan 26 13:20:12 crc kubenswrapper[4844]: I0126 13:20:12.017350 4844 scope.go:117] "RemoveContainer" containerID="f40661e9cae1344ff8df85b9eb11c5a53401a5c8932da25e88f55fc3d9a6f8f8" Jan 26 13:20:12 crc kubenswrapper[4844]: E0126 13:20:12.017671 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 20s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(ed782618-8b69-4456-9aec-5184e765960f)\"" pod="openstack/watcher-decision-engine-0" podUID="ed782618-8b69-4456-9aec-5184e765960f" Jan 26 13:20:12 crc kubenswrapper[4844]: I0126 13:20:12.061327 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7f7fa83-d343-489e-9380-008d02156140-config-data" (OuterVolumeSpecName: "config-data") pod "c7f7fa83-d343-489e-9380-008d02156140" (UID: "c7f7fa83-d343-489e-9380-008d02156140"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:20:12 crc kubenswrapper[4844]: I0126 13:20:12.065899 4844 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7f7fa83-d343-489e-9380-008d02156140-logs\") on node \"crc\" DevicePath \"\"" Jan 26 13:20:12 crc kubenswrapper[4844]: I0126 13:20:12.065922 4844 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7f7fa83-d343-489e-9380-008d02156140-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 13:20:12 crc kubenswrapper[4844]: I0126 13:20:12.065931 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9pckn\" (UniqueName: \"kubernetes.io/projected/c7f7fa83-d343-489e-9380-008d02156140-kube-api-access-9pckn\") on node \"crc\" DevicePath \"\"" Jan 26 13:20:12 crc kubenswrapper[4844]: I0126 13:20:12.065943 4844 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7f7fa83-d343-489e-9380-008d02156140-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 13:20:12 crc kubenswrapper[4844]: I0126 13:20:12.065952 4844 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7f7fa83-d343-489e-9380-008d02156140-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 13:20:12 crc kubenswrapper[4844]: I0126 13:20:12.065971 4844 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Jan 26 13:20:12 crc kubenswrapper[4844]: I0126 13:20:12.065980 4844 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c7f7fa83-d343-489e-9380-008d02156140-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 26 13:20:12 crc kubenswrapper[4844]: I0126 13:20:12.106399 4844 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Jan 26 13:20:12 crc kubenswrapper[4844]: I0126 13:20:12.112117 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 26 13:20:12 crc kubenswrapper[4844]: I0126 13:20:12.148188 4844 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6666d497b6-ksrz2" podUID="d36d4c6a-dac1-4d35-bd0b-597c8e5ffaf5" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.171:9311/healthcheck\": read tcp 10.217.0.2:54728->10.217.0.171:9311: read: connection reset by peer" Jan 26 13:20:12 crc kubenswrapper[4844]: I0126 13:20:12.148249 4844 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6666d497b6-ksrz2" podUID="d36d4c6a-dac1-4d35-bd0b-597c8e5ffaf5" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.171:9311/healthcheck\": read tcp 10.217.0.2:54736->10.217.0.171:9311: read: connection reset by peer" Jan 26 13:20:12 crc kubenswrapper[4844]: I0126 13:20:12.148727 4844 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6666d497b6-ksrz2" podUID="d36d4c6a-dac1-4d35-bd0b-597c8e5ffaf5" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.171:9311/healthcheck\": dial tcp 10.217.0.171:9311: connect: connection refused" Jan 26 13:20:12 crc kubenswrapper[4844]: I0126 13:20:12.169511 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4488efbb-d7e7-42cc-a9bc-18e471c5ac31-logs\") pod \"4488efbb-d7e7-42cc-a9bc-18e471c5ac31\" (UID: \"4488efbb-d7e7-42cc-a9bc-18e471c5ac31\") " Jan 26 13:20:12 crc kubenswrapper[4844]: I0126 13:20:12.169552 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"4488efbb-d7e7-42cc-a9bc-18e471c5ac31\" (UID: \"4488efbb-d7e7-42cc-a9bc-18e471c5ac31\") " Jan 26 13:20:12 crc kubenswrapper[4844]: I0126 13:20:12.169672 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4488efbb-d7e7-42cc-a9bc-18e471c5ac31-config-data\") pod \"4488efbb-d7e7-42cc-a9bc-18e471c5ac31\" (UID: \"4488efbb-d7e7-42cc-a9bc-18e471c5ac31\") " Jan 26 13:20:12 crc kubenswrapper[4844]: I0126 13:20:12.169745 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4488efbb-d7e7-42cc-a9bc-18e471c5ac31-combined-ca-bundle\") pod \"4488efbb-d7e7-42cc-a9bc-18e471c5ac31\" (UID: \"4488efbb-d7e7-42cc-a9bc-18e471c5ac31\") " Jan 26 13:20:12 crc kubenswrapper[4844]: I0126 13:20:12.169838 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4488efbb-d7e7-42cc-a9bc-18e471c5ac31-scripts\") pod \"4488efbb-d7e7-42cc-a9bc-18e471c5ac31\" (UID: \"4488efbb-d7e7-42cc-a9bc-18e471c5ac31\") " Jan 26 13:20:12 crc kubenswrapper[4844]: I0126 13:20:12.169875 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4488efbb-d7e7-42cc-a9bc-18e471c5ac31-httpd-run\") pod \"4488efbb-d7e7-42cc-a9bc-18e471c5ac31\" (UID: \"4488efbb-d7e7-42cc-a9bc-18e471c5ac31\") " Jan 26 13:20:12 crc kubenswrapper[4844]: I0126 13:20:12.169905 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g7njd\" (UniqueName: \"kubernetes.io/projected/4488efbb-d7e7-42cc-a9bc-18e471c5ac31-kube-api-access-g7njd\") 
pod \"4488efbb-d7e7-42cc-a9bc-18e471c5ac31\" (UID: \"4488efbb-d7e7-42cc-a9bc-18e471c5ac31\") " Jan 26 13:20:12 crc kubenswrapper[4844]: I0126 13:20:12.170517 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4488efbb-d7e7-42cc-a9bc-18e471c5ac31-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "4488efbb-d7e7-42cc-a9bc-18e471c5ac31" (UID: "4488efbb-d7e7-42cc-a9bc-18e471c5ac31"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:20:12 crc kubenswrapper[4844]: I0126 13:20:12.170703 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4488efbb-d7e7-42cc-a9bc-18e471c5ac31-logs" (OuterVolumeSpecName: "logs") pod "4488efbb-d7e7-42cc-a9bc-18e471c5ac31" (UID: "4488efbb-d7e7-42cc-a9bc-18e471c5ac31"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:20:12 crc kubenswrapper[4844]: I0126 13:20:12.173014 4844 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4488efbb-d7e7-42cc-a9bc-18e471c5ac31-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 26 13:20:12 crc kubenswrapper[4844]: I0126 13:20:12.173076 4844 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Jan 26 13:20:12 crc kubenswrapper[4844]: I0126 13:20:12.174286 4844 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4488efbb-d7e7-42cc-a9bc-18e471c5ac31-logs\") on node \"crc\" DevicePath \"\"" Jan 26 13:20:12 crc kubenswrapper[4844]: I0126 13:20:12.184861 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4488efbb-d7e7-42cc-a9bc-18e471c5ac31-scripts" (OuterVolumeSpecName: "scripts") pod "4488efbb-d7e7-42cc-a9bc-18e471c5ac31" (UID: "4488efbb-d7e7-42cc-a9bc-18e471c5ac31"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:20:12 crc kubenswrapper[4844]: I0126 13:20:12.187342 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance") pod "4488efbb-d7e7-42cc-a9bc-18e471c5ac31" (UID: "4488efbb-d7e7-42cc-a9bc-18e471c5ac31"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 26 13:20:12 crc kubenswrapper[4844]: I0126 13:20:12.188896 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4488efbb-d7e7-42cc-a9bc-18e471c5ac31-kube-api-access-g7njd" (OuterVolumeSpecName: "kube-api-access-g7njd") pod "4488efbb-d7e7-42cc-a9bc-18e471c5ac31" (UID: "4488efbb-d7e7-42cc-a9bc-18e471c5ac31"). InnerVolumeSpecName "kube-api-access-g7njd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:20:12 crc kubenswrapper[4844]: I0126 13:20:12.217693 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4488efbb-d7e7-42cc-a9bc-18e471c5ac31-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4488efbb-d7e7-42cc-a9bc-18e471c5ac31" (UID: "4488efbb-d7e7-42cc-a9bc-18e471c5ac31"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:20:12 crc kubenswrapper[4844]: I0126 13:20:12.232193 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4488efbb-d7e7-42cc-a9bc-18e471c5ac31-config-data" (OuterVolumeSpecName: "config-data") pod "4488efbb-d7e7-42cc-a9bc-18e471c5ac31" (UID: "4488efbb-d7e7-42cc-a9bc-18e471c5ac31"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:20:12 crc kubenswrapper[4844]: I0126 13:20:12.275871 4844 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4488efbb-d7e7-42cc-a9bc-18e471c5ac31-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 13:20:12 crc kubenswrapper[4844]: I0126 13:20:12.275906 4844 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4488efbb-d7e7-42cc-a9bc-18e471c5ac31-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 13:20:12 crc kubenswrapper[4844]: I0126 13:20:12.275919 4844 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4488efbb-d7e7-42cc-a9bc-18e471c5ac31-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 13:20:12 crc kubenswrapper[4844]: I0126 13:20:12.275931 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g7njd\" (UniqueName: \"kubernetes.io/projected/4488efbb-d7e7-42cc-a9bc-18e471c5ac31-kube-api-access-g7njd\") on node \"crc\" DevicePath \"\"" Jan 26 13:20:12 crc kubenswrapper[4844]: I0126 13:20:12.275964 4844 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Jan 26 13:20:12 crc kubenswrapper[4844]: I0126 13:20:12.297045 4844 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Jan 26 13:20:12 crc kubenswrapper[4844]: I0126 13:20:12.378735 4844 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Jan 26 13:20:12 crc kubenswrapper[4844]: I0126 13:20:12.509716 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6666d497b6-ksrz2" Jan 26 13:20:12 crc kubenswrapper[4844]: I0126 13:20:12.581393 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d36d4c6a-dac1-4d35-bd0b-597c8e5ffaf5-config-data-custom\") pod \"d36d4c6a-dac1-4d35-bd0b-597c8e5ffaf5\" (UID: \"d36d4c6a-dac1-4d35-bd0b-597c8e5ffaf5\") " Jan 26 13:20:12 crc kubenswrapper[4844]: I0126 13:20:12.581460 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d36d4c6a-dac1-4d35-bd0b-597c8e5ffaf5-logs\") pod \"d36d4c6a-dac1-4d35-bd0b-597c8e5ffaf5\" (UID: \"d36d4c6a-dac1-4d35-bd0b-597c8e5ffaf5\") " Jan 26 13:20:12 crc kubenswrapper[4844]: I0126 13:20:12.581582 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d36d4c6a-dac1-4d35-bd0b-597c8e5ffaf5-config-data\") pod \"d36d4c6a-dac1-4d35-bd0b-597c8e5ffaf5\" (UID: \"d36d4c6a-dac1-4d35-bd0b-597c8e5ffaf5\") " Jan 26 13:20:12 crc kubenswrapper[4844]: I0126 13:20:12.581622 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d36d4c6a-dac1-4d35-bd0b-597c8e5ffaf5-combined-ca-bundle\") pod \"d36d4c6a-dac1-4d35-bd0b-597c8e5ffaf5\" (UID: \"d36d4c6a-dac1-4d35-bd0b-597c8e5ffaf5\") " Jan 26 13:20:12 crc kubenswrapper[4844]: I0126 13:20:12.581700 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2qc4v\" (UniqueName: \"kubernetes.io/projected/d36d4c6a-dac1-4d35-bd0b-597c8e5ffaf5-kube-api-access-2qc4v\") pod \"d36d4c6a-dac1-4d35-bd0b-597c8e5ffaf5\" (UID: \"d36d4c6a-dac1-4d35-bd0b-597c8e5ffaf5\") " Jan 26 13:20:12 crc kubenswrapper[4844]: I0126 13:20:12.582380 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d36d4c6a-dac1-4d35-bd0b-597c8e5ffaf5-logs" (OuterVolumeSpecName: "logs") pod "d36d4c6a-dac1-4d35-bd0b-597c8e5ffaf5" (UID: "d36d4c6a-dac1-4d35-bd0b-597c8e5ffaf5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:20:12 crc kubenswrapper[4844]: I0126 13:20:12.585991 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d36d4c6a-dac1-4d35-bd0b-597c8e5ffaf5-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "d36d4c6a-dac1-4d35-bd0b-597c8e5ffaf5" (UID: "d36d4c6a-dac1-4d35-bd0b-597c8e5ffaf5"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:20:12 crc kubenswrapper[4844]: I0126 13:20:12.592793 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d36d4c6a-dac1-4d35-bd0b-597c8e5ffaf5-kube-api-access-2qc4v" (OuterVolumeSpecName: "kube-api-access-2qc4v") pod "d36d4c6a-dac1-4d35-bd0b-597c8e5ffaf5" (UID: "d36d4c6a-dac1-4d35-bd0b-597c8e5ffaf5"). InnerVolumeSpecName "kube-api-access-2qc4v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:20:12 crc kubenswrapper[4844]: I0126 13:20:12.612763 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d36d4c6a-dac1-4d35-bd0b-597c8e5ffaf5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d36d4c6a-dac1-4d35-bd0b-597c8e5ffaf5" (UID: "d36d4c6a-dac1-4d35-bd0b-597c8e5ffaf5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:20:12 crc kubenswrapper[4844]: I0126 13:20:12.630622 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d36d4c6a-dac1-4d35-bd0b-597c8e5ffaf5-config-data" (OuterVolumeSpecName: "config-data") pod "d36d4c6a-dac1-4d35-bd0b-597c8e5ffaf5" (UID: "d36d4c6a-dac1-4d35-bd0b-597c8e5ffaf5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:20:12 crc kubenswrapper[4844]: I0126 13:20:12.684008 4844 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d36d4c6a-dac1-4d35-bd0b-597c8e5ffaf5-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 13:20:12 crc kubenswrapper[4844]: I0126 13:20:12.684049 4844 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d36d4c6a-dac1-4d35-bd0b-597c8e5ffaf5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 13:20:12 crc kubenswrapper[4844]: I0126 13:20:12.684060 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2qc4v\" (UniqueName: \"kubernetes.io/projected/d36d4c6a-dac1-4d35-bd0b-597c8e5ffaf5-kube-api-access-2qc4v\") on node \"crc\" DevicePath \"\"" Jan 26 13:20:12 crc kubenswrapper[4844]: I0126 13:20:12.684070 4844 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d36d4c6a-dac1-4d35-bd0b-597c8e5ffaf5-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 26 13:20:12 crc kubenswrapper[4844]: I0126 13:20:12.684079 4844 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d36d4c6a-dac1-4d35-bd0b-597c8e5ffaf5-logs\") on node \"crc\" DevicePath \"\"" Jan 26 13:20:12 crc kubenswrapper[4844]: I0126 13:20:12.716131 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c7f7fa83-d343-489e-9380-008d02156140","Type":"ContainerDied","Data":"15a19b852561568c355efba6455c20572a80c3d0dbe0574e75d4d54c9ab11302"} Jan 26 13:20:12 crc kubenswrapper[4844]: I0126 13:20:12.716203 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 26 13:20:12 crc kubenswrapper[4844]: I0126 13:20:12.716215 4844 scope.go:117] "RemoveContainer" containerID="7a6ae3e355074b935f547f5066d4f5d3985800acd5495a89d0ebcbfeef7bf21d" Jan 26 13:20:12 crc kubenswrapper[4844]: I0126 13:20:12.718142 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4488efbb-d7e7-42cc-a9bc-18e471c5ac31","Type":"ContainerDied","Data":"11763097bda3d370b65a6c3c63378e6c03d2fbed299e052d42fdd471fd3506d5"} Jan 26 13:20:12 crc kubenswrapper[4844]: I0126 13:20:12.718302 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 26 13:20:12 crc kubenswrapper[4844]: I0126 13:20:12.720308 4844 generic.go:334] "Generic (PLEG): container finished" podID="d36d4c6a-dac1-4d35-bd0b-597c8e5ffaf5" containerID="97cf56503516e458be0772937f512301e87641d6a056953eb68a1e6f1d435a5f" exitCode=0 Jan 26 13:20:12 crc kubenswrapper[4844]: I0126 13:20:12.720336 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6666d497b6-ksrz2" event={"ID":"d36d4c6a-dac1-4d35-bd0b-597c8e5ffaf5","Type":"ContainerDied","Data":"97cf56503516e458be0772937f512301e87641d6a056953eb68a1e6f1d435a5f"} Jan 26 13:20:12 crc kubenswrapper[4844]: I0126 13:20:12.720351 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6666d497b6-ksrz2" event={"ID":"d36d4c6a-dac1-4d35-bd0b-597c8e5ffaf5","Type":"ContainerDied","Data":"4bb4ebfbd66bf4dd4c9673aaeb869174f01842e37953ce2438e54959278afe70"} Jan 26 13:20:12 crc kubenswrapper[4844]: I0126 13:20:12.720390 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6666d497b6-ksrz2" Jan 26 13:20:12 crc kubenswrapper[4844]: I0126 13:20:12.747115 4844 scope.go:117] "RemoveContainer" containerID="dde992974cdf9dc15a033399ff094565c75d2691df5eb070d76b3900336dc959" Jan 26 13:20:12 crc kubenswrapper[4844]: I0126 13:20:12.756886 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 13:20:12 crc kubenswrapper[4844]: I0126 13:20:12.773694 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 13:20:12 crc kubenswrapper[4844]: I0126 13:20:12.792133 4844 scope.go:117] "RemoveContainer" containerID="320ad7eaa6737ed1c9e54108dc927e17c4b2afbc8ffdeb0c1b99f295b0c8c665" Jan 26 13:20:12 crc kubenswrapper[4844]: I0126 13:20:12.795459 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6666d497b6-ksrz2"] Jan 26 13:20:12 crc kubenswrapper[4844]: I0126 13:20:12.811476 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-6666d497b6-ksrz2"] Jan 26 13:20:12 crc kubenswrapper[4844]: I0126 13:20:12.840718 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 13:20:12 crc kubenswrapper[4844]: E0126 13:20:12.841153 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d36d4c6a-dac1-4d35-bd0b-597c8e5ffaf5" containerName="barbican-api" Jan 26 13:20:12 crc kubenswrapper[4844]: I0126 13:20:12.841165 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="d36d4c6a-dac1-4d35-bd0b-597c8e5ffaf5" containerName="barbican-api" Jan 26 13:20:12 crc kubenswrapper[4844]: E0126 13:20:12.841191 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7f7fa83-d343-489e-9380-008d02156140" containerName="glance-httpd" Jan 26 13:20:12 crc kubenswrapper[4844]: I0126 13:20:12.841197 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7f7fa83-d343-489e-9380-008d02156140" containerName="glance-httpd" Jan 26 13:20:12 crc kubenswrapper[4844]: E0126 13:20:12.841211 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d36d4c6a-dac1-4d35-bd0b-597c8e5ffaf5" containerName="barbican-api-log" Jan 26 13:20:12 crc kubenswrapper[4844]: I0126 13:20:12.841217 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="d36d4c6a-dac1-4d35-bd0b-597c8e5ffaf5" containerName="barbican-api-log" Jan 26 13:20:12 crc kubenswrapper[4844]: E0126 
13:20:12.841227 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4488efbb-d7e7-42cc-a9bc-18e471c5ac31" containerName="glance-log" Jan 26 13:20:12 crc kubenswrapper[4844]: I0126 13:20:12.841234 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="4488efbb-d7e7-42cc-a9bc-18e471c5ac31" containerName="glance-log" Jan 26 13:20:12 crc kubenswrapper[4844]: E0126 13:20:12.841247 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4488efbb-d7e7-42cc-a9bc-18e471c5ac31" containerName="glance-httpd" Jan 26 13:20:12 crc kubenswrapper[4844]: I0126 13:20:12.841252 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="4488efbb-d7e7-42cc-a9bc-18e471c5ac31" containerName="glance-httpd" Jan 26 13:20:12 crc kubenswrapper[4844]: E0126 13:20:12.841261 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7f7fa83-d343-489e-9380-008d02156140" containerName="glance-log" Jan 26 13:20:12 crc kubenswrapper[4844]: I0126 13:20:12.841266 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7f7fa83-d343-489e-9380-008d02156140" containerName="glance-log" Jan 26 13:20:12 crc kubenswrapper[4844]: I0126 13:20:12.841426 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="d36d4c6a-dac1-4d35-bd0b-597c8e5ffaf5" containerName="barbican-api" Jan 26 13:20:12 crc kubenswrapper[4844]: I0126 13:20:12.841454 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="4488efbb-d7e7-42cc-a9bc-18e471c5ac31" containerName="glance-log" Jan 26 13:20:12 crc kubenswrapper[4844]: I0126 13:20:12.841471 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7f7fa83-d343-489e-9380-008d02156140" containerName="glance-httpd" Jan 26 13:20:12 crc kubenswrapper[4844]: I0126 13:20:12.841481 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="d36d4c6a-dac1-4d35-bd0b-597c8e5ffaf5" containerName="barbican-api-log" Jan 26 13:20:12 crc kubenswrapper[4844]: I0126 13:20:12.841498 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7f7fa83-d343-489e-9380-008d02156140" containerName="glance-log" Jan 26 13:20:12 crc kubenswrapper[4844]: I0126 13:20:12.841507 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="4488efbb-d7e7-42cc-a9bc-18e471c5ac31" containerName="glance-httpd" Jan 26 13:20:12 crc kubenswrapper[4844]: I0126 13:20:12.842512 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 26 13:20:12 crc kubenswrapper[4844]: I0126 13:20:12.856041 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 26 13:20:12 crc kubenswrapper[4844]: I0126 13:20:12.856194 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 26 13:20:12 crc kubenswrapper[4844]: I0126 13:20:12.856293 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-5tdcs" Jan 26 13:20:12 crc kubenswrapper[4844]: I0126 13:20:12.856428 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 26 13:20:12 crc kubenswrapper[4844]: I0126 13:20:12.864662 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 13:20:12 crc kubenswrapper[4844]: I0126 13:20:12.894716 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 13:20:12 crc kubenswrapper[4844]: I0126 13:20:12.898206 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"f8576337-1537-4b93-8d68-829d6bdb8a44\") " pod="openstack/glance-default-external-api-0" Jan 26 13:20:12 crc kubenswrapper[4844]: I0126 13:20:12.898840 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f8576337-1537-4b93-8d68-829d6bdb8a44-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f8576337-1537-4b93-8d68-829d6bdb8a44\") " pod="openstack/glance-default-external-api-0" Jan 26 13:20:12 crc kubenswrapper[4844]: I0126 13:20:12.898890 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8576337-1537-4b93-8d68-829d6bdb8a44-config-data\") pod \"glance-default-external-api-0\" (UID: \"f8576337-1537-4b93-8d68-829d6bdb8a44\") " pod="openstack/glance-default-external-api-0" Jan 26 13:20:12 crc kubenswrapper[4844]: I0126 13:20:12.898926 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f8576337-1537-4b93-8d68-829d6bdb8a44-logs\") pod \"glance-default-external-api-0\" (UID: \"f8576337-1537-4b93-8d68-829d6bdb8a44\") " pod="openstack/glance-default-external-api-0" Jan 26 13:20:12 crc kubenswrapper[4844]: I0126 13:20:12.898970 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5ftl\" (UniqueName: \"kubernetes.io/projected/f8576337-1537-4b93-8d68-829d6bdb8a44-kube-api-access-v5ftl\") pod \"glance-default-external-api-0\" (UID: \"f8576337-1537-4b93-8d68-829d6bdb8a44\") " pod="openstack/glance-default-external-api-0" Jan 26 13:20:12 crc kubenswrapper[4844]: I0126 13:20:12.899002 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8576337-1537-4b93-8d68-829d6bdb8a44-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f8576337-1537-4b93-8d68-829d6bdb8a44\") " pod="openstack/glance-default-external-api-0" Jan 26 13:20:12 crc kubenswrapper[4844]: I0126 
13:20:12.899019 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f8576337-1537-4b93-8d68-829d6bdb8a44-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"f8576337-1537-4b93-8d68-829d6bdb8a44\") " pod="openstack/glance-default-external-api-0" Jan 26 13:20:12 crc kubenswrapper[4844]: I0126 13:20:12.899049 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f8576337-1537-4b93-8d68-829d6bdb8a44-scripts\") pod \"glance-default-external-api-0\" (UID: \"f8576337-1537-4b93-8d68-829d6bdb8a44\") " pod="openstack/glance-default-external-api-0" Jan 26 13:20:12 crc kubenswrapper[4844]: I0126 13:20:12.906663 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 13:20:12 crc kubenswrapper[4844]: I0126 13:20:12.916378 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 13:20:12 crc kubenswrapper[4844]: I0126 13:20:12.926293 4844 scope.go:117] "RemoveContainer" containerID="6a33e249b4c0b9ea3a322754dfcd3feccdafcbdd0993b8dcc626f0998f566610" Jan 26 13:20:12 crc kubenswrapper[4844]: I0126 13:20:12.929210 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 26 13:20:12 crc kubenswrapper[4844]: I0126 13:20:12.932268 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 26 13:20:12 crc kubenswrapper[4844]: I0126 13:20:12.932776 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 26 13:20:12 crc kubenswrapper[4844]: I0126 13:20:12.952896 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 13:20:12 crc kubenswrapper[4844]: I0126 13:20:12.975178 4844 scope.go:117] "RemoveContainer" containerID="97cf56503516e458be0772937f512301e87641d6a056953eb68a1e6f1d435a5f" Jan 26 13:20:13 crc kubenswrapper[4844]: I0126 13:20:13.001325 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"f8576337-1537-4b93-8d68-829d6bdb8a44\") " pod="openstack/glance-default-external-api-0" Jan 26 13:20:13 crc kubenswrapper[4844]: I0126 13:20:13.001377 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"48790dbd-c7a3-48f0-a3a8-a8685a07f9d2\") " pod="openstack/glance-default-internal-api-0" Jan 26 13:20:13 crc kubenswrapper[4844]: I0126 13:20:13.001412 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttt26\" (UniqueName: \"kubernetes.io/projected/48790dbd-c7a3-48f0-a3a8-a8685a07f9d2-kube-api-access-ttt26\") pod \"glance-default-internal-api-0\" (UID: \"48790dbd-c7a3-48f0-a3a8-a8685a07f9d2\") " pod="openstack/glance-default-internal-api-0" Jan 26 13:20:13 crc kubenswrapper[4844]: I0126 13:20:13.001442 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/48790dbd-c7a3-48f0-a3a8-a8685a07f9d2-scripts\") pod \"glance-default-internal-api-0\" (UID: \"48790dbd-c7a3-48f0-a3a8-a8685a07f9d2\") " pod="openstack/glance-default-internal-api-0" Jan 26 13:20:13 crc kubenswrapper[4844]: I0126 13:20:13.001486 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/48790dbd-c7a3-48f0-a3a8-a8685a07f9d2-logs\") pod \"glance-default-internal-api-0\" (UID: \"48790dbd-c7a3-48f0-a3a8-a8685a07f9d2\") " pod="openstack/glance-default-internal-api-0" Jan 26 13:20:13 crc kubenswrapper[4844]: I0126 13:20:13.001515 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/48790dbd-c7a3-48f0-a3a8-a8685a07f9d2-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"48790dbd-c7a3-48f0-a3a8-a8685a07f9d2\") " pod="openstack/glance-default-internal-api-0" Jan 26 13:20:13 crc kubenswrapper[4844]: I0126 13:20:13.001549 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f8576337-1537-4b93-8d68-829d6bdb8a44-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f8576337-1537-4b93-8d68-829d6bdb8a44\") " pod="openstack/glance-default-external-api-0" Jan 26 13:20:13 crc kubenswrapper[4844]: I0126 13:20:13.001800 4844 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"f8576337-1537-4b93-8d68-829d6bdb8a44\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-external-api-0" Jan 26 13:20:13 crc kubenswrapper[4844]: I0126 13:20:13.001933 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8576337-1537-4b93-8d68-829d6bdb8a44-config-data\") pod \"glance-default-external-api-0\" (UID: \"f8576337-1537-4b93-8d68-829d6bdb8a44\") " pod="openstack/glance-default-external-api-0" Jan 26 13:20:13 crc kubenswrapper[4844]: I0126 13:20:13.001992 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f8576337-1537-4b93-8d68-829d6bdb8a44-logs\") pod \"glance-default-external-api-0\" (UID: \"f8576337-1537-4b93-8d68-829d6bdb8a44\") " pod="openstack/glance-default-external-api-0" Jan 26 13:20:13 crc kubenswrapper[4844]: I0126 13:20:13.002020 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/48790dbd-c7a3-48f0-a3a8-a8685a07f9d2-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"48790dbd-c7a3-48f0-a3a8-a8685a07f9d2\") " pod="openstack/glance-default-internal-api-0" Jan 26 13:20:13 crc kubenswrapper[4844]: I0126 13:20:13.002069 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5ftl\" (UniqueName: \"kubernetes.io/projected/f8576337-1537-4b93-8d68-829d6bdb8a44-kube-api-access-v5ftl\") pod \"glance-default-external-api-0\" (UID: \"f8576337-1537-4b93-8d68-829d6bdb8a44\") " pod="openstack/glance-default-external-api-0" Jan 26 13:20:13 crc kubenswrapper[4844]: I0126 13:20:13.002101 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/48790dbd-c7a3-48f0-a3a8-a8685a07f9d2-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"48790dbd-c7a3-48f0-a3a8-a8685a07f9d2\") " pod="openstack/glance-default-internal-api-0" Jan 26 13:20:13 crc kubenswrapper[4844]: I0126 13:20:13.002125 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8576337-1537-4b93-8d68-829d6bdb8a44-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f8576337-1537-4b93-8d68-829d6bdb8a44\") " pod="openstack/glance-default-external-api-0" Jan 26 13:20:13 crc kubenswrapper[4844]: I0126 13:20:13.002154 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f8576337-1537-4b93-8d68-829d6bdb8a44-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"f8576337-1537-4b93-8d68-829d6bdb8a44\") " pod="openstack/glance-default-external-api-0" Jan 26 13:20:13 crc kubenswrapper[4844]: I0126 13:20:13.002193 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f8576337-1537-4b93-8d68-829d6bdb8a44-scripts\") pod \"glance-default-external-api-0\" (UID: \"f8576337-1537-4b93-8d68-829d6bdb8a44\") " pod="openstack/glance-default-external-api-0" Jan 26 13:20:13 crc kubenswrapper[4844]: I0126 13:20:13.002223 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48790dbd-c7a3-48f0-a3a8-a8685a07f9d2-config-data\") pod \"glance-default-internal-api-0\" (UID: \"48790dbd-c7a3-48f0-a3a8-a8685a07f9d2\") " pod="openstack/glance-default-internal-api-0" Jan 26 13:20:13 crc kubenswrapper[4844]: I0126 13:20:13.002458 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f8576337-1537-4b93-8d68-829d6bdb8a44-logs\") pod \"glance-default-external-api-0\" (UID: \"f8576337-1537-4b93-8d68-829d6bdb8a44\") " pod="openstack/glance-default-external-api-0" Jan 26 13:20:13 crc kubenswrapper[4844]: I0126 13:20:13.003313 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f8576337-1537-4b93-8d68-829d6bdb8a44-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f8576337-1537-4b93-8d68-829d6bdb8a44\") " pod="openstack/glance-default-external-api-0" Jan 26 13:20:13 crc kubenswrapper[4844]: I0126 13:20:13.020631 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8576337-1537-4b93-8d68-829d6bdb8a44-config-data\") pod \"glance-default-external-api-0\" (UID: \"f8576337-1537-4b93-8d68-829d6bdb8a44\") " pod="openstack/glance-default-external-api-0" Jan 26 13:20:13 crc kubenswrapper[4844]: I0126 13:20:13.022488 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8576337-1537-4b93-8d68-829d6bdb8a44-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f8576337-1537-4b93-8d68-829d6bdb8a44\") " pod="openstack/glance-default-external-api-0" Jan 26 13:20:13 crc kubenswrapper[4844]: I0126 13:20:13.024286 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f8576337-1537-4b93-8d68-829d6bdb8a44-scripts\") pod \"glance-default-external-api-0\" (UID: 
\"f8576337-1537-4b93-8d68-829d6bdb8a44\") " pod="openstack/glance-default-external-api-0" Jan 26 13:20:13 crc kubenswrapper[4844]: I0126 13:20:13.032552 4844 scope.go:117] "RemoveContainer" containerID="459349c5b6003c5194b259d9dfe845ec6cbdb10a3b68864239804bd4ba2b2223" Jan 26 13:20:13 crc kubenswrapper[4844]: I0126 13:20:13.036293 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5ftl\" (UniqueName: \"kubernetes.io/projected/f8576337-1537-4b93-8d68-829d6bdb8a44-kube-api-access-v5ftl\") pod \"glance-default-external-api-0\" (UID: \"f8576337-1537-4b93-8d68-829d6bdb8a44\") " pod="openstack/glance-default-external-api-0" Jan 26 13:20:13 crc kubenswrapper[4844]: I0126 13:20:13.047424 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f8576337-1537-4b93-8d68-829d6bdb8a44-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"f8576337-1537-4b93-8d68-829d6bdb8a44\") " pod="openstack/glance-default-external-api-0" Jan 26 13:20:13 crc kubenswrapper[4844]: I0126 13:20:13.064125 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"f8576337-1537-4b93-8d68-829d6bdb8a44\") " pod="openstack/glance-default-external-api-0" Jan 26 13:20:13 crc kubenswrapper[4844]: I0126 13:20:13.103811 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/48790dbd-c7a3-48f0-a3a8-a8685a07f9d2-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"48790dbd-c7a3-48f0-a3a8-a8685a07f9d2\") " pod="openstack/glance-default-internal-api-0" Jan 26 13:20:13 crc kubenswrapper[4844]: I0126 13:20:13.103904 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48790dbd-c7a3-48f0-a3a8-a8685a07f9d2-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"48790dbd-c7a3-48f0-a3a8-a8685a07f9d2\") " pod="openstack/glance-default-internal-api-0" Jan 26 13:20:13 crc kubenswrapper[4844]: I0126 13:20:13.104291 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48790dbd-c7a3-48f0-a3a8-a8685a07f9d2-config-data\") pod \"glance-default-internal-api-0\" (UID: \"48790dbd-c7a3-48f0-a3a8-a8685a07f9d2\") " pod="openstack/glance-default-internal-api-0" Jan 26 13:20:13 crc kubenswrapper[4844]: I0126 13:20:13.104304 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/48790dbd-c7a3-48f0-a3a8-a8685a07f9d2-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"48790dbd-c7a3-48f0-a3a8-a8685a07f9d2\") " pod="openstack/glance-default-internal-api-0" Jan 26 13:20:13 crc kubenswrapper[4844]: I0126 13:20:13.104367 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"48790dbd-c7a3-48f0-a3a8-a8685a07f9d2\") " pod="openstack/glance-default-internal-api-0" Jan 26 13:20:13 crc kubenswrapper[4844]: I0126 13:20:13.104394 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttt26\" (UniqueName: 
\"kubernetes.io/projected/48790dbd-c7a3-48f0-a3a8-a8685a07f9d2-kube-api-access-ttt26\") pod \"glance-default-internal-api-0\" (UID: \"48790dbd-c7a3-48f0-a3a8-a8685a07f9d2\") " pod="openstack/glance-default-internal-api-0" Jan 26 13:20:13 crc kubenswrapper[4844]: I0126 13:20:13.104421 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/48790dbd-c7a3-48f0-a3a8-a8685a07f9d2-scripts\") pod \"glance-default-internal-api-0\" (UID: \"48790dbd-c7a3-48f0-a3a8-a8685a07f9d2\") " pod="openstack/glance-default-internal-api-0" Jan 26 13:20:13 crc kubenswrapper[4844]: I0126 13:20:13.104476 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/48790dbd-c7a3-48f0-a3a8-a8685a07f9d2-logs\") pod \"glance-default-internal-api-0\" (UID: \"48790dbd-c7a3-48f0-a3a8-a8685a07f9d2\") " pod="openstack/glance-default-internal-api-0" Jan 26 13:20:13 crc kubenswrapper[4844]: I0126 13:20:13.104510 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/48790dbd-c7a3-48f0-a3a8-a8685a07f9d2-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"48790dbd-c7a3-48f0-a3a8-a8685a07f9d2\") " pod="openstack/glance-default-internal-api-0" Jan 26 13:20:13 crc kubenswrapper[4844]: I0126 13:20:13.105116 4844 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"48790dbd-c7a3-48f0-a3a8-a8685a07f9d2\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-internal-api-0" Jan 26 13:20:13 crc kubenswrapper[4844]: I0126 13:20:13.105559 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/48790dbd-c7a3-48f0-a3a8-a8685a07f9d2-logs\") pod \"glance-default-internal-api-0\" (UID: \"48790dbd-c7a3-48f0-a3a8-a8685a07f9d2\") " pod="openstack/glance-default-internal-api-0" Jan 26 13:20:13 crc kubenswrapper[4844]: I0126 13:20:13.114458 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/48790dbd-c7a3-48f0-a3a8-a8685a07f9d2-scripts\") pod \"glance-default-internal-api-0\" (UID: \"48790dbd-c7a3-48f0-a3a8-a8685a07f9d2\") " pod="openstack/glance-default-internal-api-0" Jan 26 13:20:13 crc kubenswrapper[4844]: I0126 13:20:13.114833 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48790dbd-c7a3-48f0-a3a8-a8685a07f9d2-config-data\") pod \"glance-default-internal-api-0\" (UID: \"48790dbd-c7a3-48f0-a3a8-a8685a07f9d2\") " pod="openstack/glance-default-internal-api-0" Jan 26 13:20:13 crc kubenswrapper[4844]: I0126 13:20:13.120138 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/48790dbd-c7a3-48f0-a3a8-a8685a07f9d2-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"48790dbd-c7a3-48f0-a3a8-a8685a07f9d2\") " pod="openstack/glance-default-internal-api-0" Jan 26 13:20:13 crc kubenswrapper[4844]: I0126 13:20:13.120790 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48790dbd-c7a3-48f0-a3a8-a8685a07f9d2-combined-ca-bundle\") pod \"glance-default-internal-api-0\" 
(UID: \"48790dbd-c7a3-48f0-a3a8-a8685a07f9d2\") " pod="openstack/glance-default-internal-api-0" Jan 26 13:20:13 crc kubenswrapper[4844]: I0126 13:20:13.128907 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttt26\" (UniqueName: \"kubernetes.io/projected/48790dbd-c7a3-48f0-a3a8-a8685a07f9d2-kube-api-access-ttt26\") pod \"glance-default-internal-api-0\" (UID: \"48790dbd-c7a3-48f0-a3a8-a8685a07f9d2\") " pod="openstack/glance-default-internal-api-0" Jan 26 13:20:13 crc kubenswrapper[4844]: I0126 13:20:13.139519 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"48790dbd-c7a3-48f0-a3a8-a8685a07f9d2\") " pod="openstack/glance-default-internal-api-0" Jan 26 13:20:13 crc kubenswrapper[4844]: I0126 13:20:13.213351 4844 scope.go:117] "RemoveContainer" containerID="97cf56503516e458be0772937f512301e87641d6a056953eb68a1e6f1d435a5f" Jan 26 13:20:13 crc kubenswrapper[4844]: E0126 13:20:13.213846 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"97cf56503516e458be0772937f512301e87641d6a056953eb68a1e6f1d435a5f\": container with ID starting with 97cf56503516e458be0772937f512301e87641d6a056953eb68a1e6f1d435a5f not found: ID does not exist" containerID="97cf56503516e458be0772937f512301e87641d6a056953eb68a1e6f1d435a5f" Jan 26 13:20:13 crc kubenswrapper[4844]: I0126 13:20:13.213907 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97cf56503516e458be0772937f512301e87641d6a056953eb68a1e6f1d435a5f"} err="failed to get container status \"97cf56503516e458be0772937f512301e87641d6a056953eb68a1e6f1d435a5f\": rpc error: code = NotFound desc = could not find container \"97cf56503516e458be0772937f512301e87641d6a056953eb68a1e6f1d435a5f\": container with ID starting with 97cf56503516e458be0772937f512301e87641d6a056953eb68a1e6f1d435a5f not found: ID does not exist" Jan 26 13:20:13 crc kubenswrapper[4844]: I0126 13:20:13.213942 4844 scope.go:117] "RemoveContainer" containerID="459349c5b6003c5194b259d9dfe845ec6cbdb10a3b68864239804bd4ba2b2223" Jan 26 13:20:13 crc kubenswrapper[4844]: E0126 13:20:13.214272 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"459349c5b6003c5194b259d9dfe845ec6cbdb10a3b68864239804bd4ba2b2223\": container with ID starting with 459349c5b6003c5194b259d9dfe845ec6cbdb10a3b68864239804bd4ba2b2223 not found: ID does not exist" containerID="459349c5b6003c5194b259d9dfe845ec6cbdb10a3b68864239804bd4ba2b2223" Jan 26 13:20:13 crc kubenswrapper[4844]: I0126 13:20:13.214301 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"459349c5b6003c5194b259d9dfe845ec6cbdb10a3b68864239804bd4ba2b2223"} err="failed to get container status \"459349c5b6003c5194b259d9dfe845ec6cbdb10a3b68864239804bd4ba2b2223\": rpc error: code = NotFound desc = could not find container \"459349c5b6003c5194b259d9dfe845ec6cbdb10a3b68864239804bd4ba2b2223\": container with ID starting with 459349c5b6003c5194b259d9dfe845ec6cbdb10a3b68864239804bd4ba2b2223 not found: ID does not exist" Jan 26 13:20:13 crc kubenswrapper[4844]: I0126 13:20:13.234778 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 26 13:20:13 crc kubenswrapper[4844]: I0126 13:20:13.304750 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 26 13:20:13 crc kubenswrapper[4844]: I0126 13:20:13.347194 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4488efbb-d7e7-42cc-a9bc-18e471c5ac31" path="/var/lib/kubelet/pods/4488efbb-d7e7-42cc-a9bc-18e471c5ac31/volumes" Jan 26 13:20:13 crc kubenswrapper[4844]: I0126 13:20:13.348250 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7f7fa83-d343-489e-9380-008d02156140" path="/var/lib/kubelet/pods/c7f7fa83-d343-489e-9380-008d02156140/volumes" Jan 26 13:20:13 crc kubenswrapper[4844]: I0126 13:20:13.349048 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d36d4c6a-dac1-4d35-bd0b-597c8e5ffaf5" path="/var/lib/kubelet/pods/d36d4c6a-dac1-4d35-bd0b-597c8e5ffaf5/volumes" Jan 26 13:20:13 crc kubenswrapper[4844]: I0126 13:20:13.777704 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 13:20:13 crc kubenswrapper[4844]: W0126 13:20:13.785232 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf8576337_1537_4b93_8d68_829d6bdb8a44.slice/crio-3f22adfdf18e56267abb2f14d250290a91d23159c76e670a12967cfa0fdb0496 WatchSource:0}: Error finding container 3f22adfdf18e56267abb2f14d250290a91d23159c76e670a12967cfa0fdb0496: Status 404 returned error can't find the container with id 3f22adfdf18e56267abb2f14d250290a91d23159c76e670a12967cfa0fdb0496 Jan 26 13:20:13 crc kubenswrapper[4844]: I0126 13:20:13.928297 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 13:20:13 crc kubenswrapper[4844]: W0126 13:20:13.928917 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod48790dbd_c7a3_48f0_a3a8_a8685a07f9d2.slice/crio-e130fa15b1789560ff24f2e05e66fa5b4cb3716ad44fc8cf1aa9a22de574661a WatchSource:0}: Error finding container e130fa15b1789560ff24f2e05e66fa5b4cb3716ad44fc8cf1aa9a22de574661a: Status 404 returned error can't find the container with id e130fa15b1789560ff24f2e05e66fa5b4cb3716ad44fc8cf1aa9a22de574661a Jan 26 13:20:14 crc kubenswrapper[4844]: I0126 13:20:14.379786 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-584dfd9675-8wzdw" Jan 26 13:20:14 crc kubenswrapper[4844]: I0126 13:20:14.458767 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c7c497879-k82c9"] Jan 26 13:20:14 crc kubenswrapper[4844]: I0126 13:20:14.458997 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c7c497879-k82c9" podUID="188e9259-51a6-4775-a1a5-ccf2f736513c" containerName="dnsmasq-dns" containerID="cri-o://4d8eab6c984410c439f9b97b7a03a8145b09a746e9069a5e1302b5095013402c" gracePeriod=10 Jan 26 13:20:14 crc kubenswrapper[4844]: I0126 13:20:14.761239 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"48790dbd-c7a3-48f0-a3a8-a8685a07f9d2","Type":"ContainerStarted","Data":"e130fa15b1789560ff24f2e05e66fa5b4cb3716ad44fc8cf1aa9a22de574661a"} Jan 26 13:20:14 crc kubenswrapper[4844]: I0126 13:20:14.781906 4844 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/glance-default-external-api-0" event={"ID":"f8576337-1537-4b93-8d68-829d6bdb8a44","Type":"ContainerStarted","Data":"f146d5e904901fde282f115e9db4329dac1a4682ce347aee3a96bc73d6a73f52"} Jan 26 13:20:14 crc kubenswrapper[4844]: I0126 13:20:14.781949 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f8576337-1537-4b93-8d68-829d6bdb8a44","Type":"ContainerStarted","Data":"3f22adfdf18e56267abb2f14d250290a91d23159c76e670a12967cfa0fdb0496"} Jan 26 13:20:14 crc kubenswrapper[4844]: I0126 13:20:14.791582 4844 generic.go:334] "Generic (PLEG): container finished" podID="188e9259-51a6-4775-a1a5-ccf2f736513c" containerID="4d8eab6c984410c439f9b97b7a03a8145b09a746e9069a5e1302b5095013402c" exitCode=0 Jan 26 13:20:14 crc kubenswrapper[4844]: I0126 13:20:14.791637 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c7c497879-k82c9" event={"ID":"188e9259-51a6-4775-a1a5-ccf2f736513c","Type":"ContainerDied","Data":"4d8eab6c984410c439f9b97b7a03a8145b09a746e9069a5e1302b5095013402c"} Jan 26 13:20:14 crc kubenswrapper[4844]: I0126 13:20:14.983891 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 26 13:20:15 crc kubenswrapper[4844]: I0126 13:20:15.034610 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 26 13:20:15 crc kubenswrapper[4844]: I0126 13:20:15.143764 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c7c497879-k82c9" Jan 26 13:20:15 crc kubenswrapper[4844]: I0126 13:20:15.256098 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-659hs\" (UniqueName: \"kubernetes.io/projected/188e9259-51a6-4775-a1a5-ccf2f736513c-kube-api-access-659hs\") pod \"188e9259-51a6-4775-a1a5-ccf2f736513c\" (UID: \"188e9259-51a6-4775-a1a5-ccf2f736513c\") " Jan 26 13:20:15 crc kubenswrapper[4844]: I0126 13:20:15.256454 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/188e9259-51a6-4775-a1a5-ccf2f736513c-ovsdbserver-nb\") pod \"188e9259-51a6-4775-a1a5-ccf2f736513c\" (UID: \"188e9259-51a6-4775-a1a5-ccf2f736513c\") " Jan 26 13:20:15 crc kubenswrapper[4844]: I0126 13:20:15.256536 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/188e9259-51a6-4775-a1a5-ccf2f736513c-ovsdbserver-sb\") pod \"188e9259-51a6-4775-a1a5-ccf2f736513c\" (UID: \"188e9259-51a6-4775-a1a5-ccf2f736513c\") " Jan 26 13:20:15 crc kubenswrapper[4844]: I0126 13:20:15.256712 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/188e9259-51a6-4775-a1a5-ccf2f736513c-dns-swift-storage-0\") pod \"188e9259-51a6-4775-a1a5-ccf2f736513c\" (UID: \"188e9259-51a6-4775-a1a5-ccf2f736513c\") " Jan 26 13:20:15 crc kubenswrapper[4844]: I0126 13:20:15.256890 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/188e9259-51a6-4775-a1a5-ccf2f736513c-config\") pod \"188e9259-51a6-4775-a1a5-ccf2f736513c\" (UID: \"188e9259-51a6-4775-a1a5-ccf2f736513c\") " Jan 26 13:20:15 crc kubenswrapper[4844]: I0126 13:20:15.256950 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/188e9259-51a6-4775-a1a5-ccf2f736513c-dns-svc\") pod \"188e9259-51a6-4775-a1a5-ccf2f736513c\" (UID: \"188e9259-51a6-4775-a1a5-ccf2f736513c\") " Jan 26 13:20:15 crc kubenswrapper[4844]: I0126 13:20:15.261980 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/188e9259-51a6-4775-a1a5-ccf2f736513c-kube-api-access-659hs" (OuterVolumeSpecName: "kube-api-access-659hs") pod "188e9259-51a6-4775-a1a5-ccf2f736513c" (UID: "188e9259-51a6-4775-a1a5-ccf2f736513c"). InnerVolumeSpecName "kube-api-access-659hs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:20:15 crc kubenswrapper[4844]: I0126 13:20:15.330318 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/188e9259-51a6-4775-a1a5-ccf2f736513c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "188e9259-51a6-4775-a1a5-ccf2f736513c" (UID: "188e9259-51a6-4775-a1a5-ccf2f736513c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:20:15 crc kubenswrapper[4844]: I0126 13:20:15.343271 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/188e9259-51a6-4775-a1a5-ccf2f736513c-config" (OuterVolumeSpecName: "config") pod "188e9259-51a6-4775-a1a5-ccf2f736513c" (UID: "188e9259-51a6-4775-a1a5-ccf2f736513c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:20:15 crc kubenswrapper[4844]: I0126 13:20:15.346927 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/188e9259-51a6-4775-a1a5-ccf2f736513c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "188e9259-51a6-4775-a1a5-ccf2f736513c" (UID: "188e9259-51a6-4775-a1a5-ccf2f736513c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:20:15 crc kubenswrapper[4844]: I0126 13:20:15.359244 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/188e9259-51a6-4775-a1a5-ccf2f736513c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "188e9259-51a6-4775-a1a5-ccf2f736513c" (UID: "188e9259-51a6-4775-a1a5-ccf2f736513c"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:20:15 crc kubenswrapper[4844]: I0126 13:20:15.359496 4844 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/188e9259-51a6-4775-a1a5-ccf2f736513c-config\") on node \"crc\" DevicePath \"\"" Jan 26 13:20:15 crc kubenswrapper[4844]: I0126 13:20:15.359534 4844 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/188e9259-51a6-4775-a1a5-ccf2f736513c-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 13:20:15 crc kubenswrapper[4844]: I0126 13:20:15.359543 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-659hs\" (UniqueName: \"kubernetes.io/projected/188e9259-51a6-4775-a1a5-ccf2f736513c-kube-api-access-659hs\") on node \"crc\" DevicePath \"\"" Jan 26 13:20:15 crc kubenswrapper[4844]: I0126 13:20:15.359552 4844 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/188e9259-51a6-4775-a1a5-ccf2f736513c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 13:20:15 crc kubenswrapper[4844]: I0126 13:20:15.359563 4844 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/188e9259-51a6-4775-a1a5-ccf2f736513c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 13:20:15 crc kubenswrapper[4844]: I0126 13:20:15.412294 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/188e9259-51a6-4775-a1a5-ccf2f736513c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "188e9259-51a6-4775-a1a5-ccf2f736513c" (UID: "188e9259-51a6-4775-a1a5-ccf2f736513c"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:20:15 crc kubenswrapper[4844]: I0126 13:20:15.477240 4844 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/188e9259-51a6-4775-a1a5-ccf2f736513c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 26 13:20:15 crc kubenswrapper[4844]: I0126 13:20:15.804882 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f8576337-1537-4b93-8d68-829d6bdb8a44","Type":"ContainerStarted","Data":"76d186df983a4311c39bb131d05901edbae1caed5ec77d7ca63e96878e3cee73"} Jan 26 13:20:15 crc kubenswrapper[4844]: I0126 13:20:15.808541 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c7c497879-k82c9" event={"ID":"188e9259-51a6-4775-a1a5-ccf2f736513c","Type":"ContainerDied","Data":"c2e6ac6ed2a15df6482bed47e0194e18748e40725ed08e4d7662a28b16bcb4cb"} Jan 26 13:20:15 crc kubenswrapper[4844]: I0126 13:20:15.808587 4844 scope.go:117] "RemoveContainer" containerID="4d8eab6c984410c439f9b97b7a03a8145b09a746e9069a5e1302b5095013402c" Jan 26 13:20:15 crc kubenswrapper[4844]: I0126 13:20:15.808793 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c7c497879-k82c9" Jan 26 13:20:15 crc kubenswrapper[4844]: I0126 13:20:15.813816 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"48790dbd-c7a3-48f0-a3a8-a8685a07f9d2","Type":"ContainerStarted","Data":"df354ad5061d63d79ae83713c4429531193c9b599281b212b18b5aa951055455"} Jan 26 13:20:15 crc kubenswrapper[4844]: I0126 13:20:15.813907 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="b28528a5-6d16-4775-89eb-5f0e00b4afd1" containerName="cinder-scheduler" containerID="cri-o://b42db70088eb86e462bac2a7ee6c35302dc3fe653ff83e9ba986edda37f0ff80" gracePeriod=30 Jan 26 13:20:15 crc kubenswrapper[4844]: I0126 13:20:15.813915 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="b28528a5-6d16-4775-89eb-5f0e00b4afd1" containerName="probe" containerID="cri-o://0f9544d3a9e10d95d303c6ebb7e711a8e223f4070572b994ed7bda09d182e8d3" gracePeriod=30 Jan 26 13:20:15 crc kubenswrapper[4844]: I0126 13:20:15.857163 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.856940552 podStartE2EDuration="3.856940552s" podCreationTimestamp="2026-01-26 13:20:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 13:20:15.821030013 +0000 UTC m=+2192.754397625" watchObservedRunningTime="2026-01-26 13:20:15.856940552 +0000 UTC m=+2192.790308174" Jan 26 13:20:15 crc kubenswrapper[4844]: I0126 13:20:15.885200 4844 scope.go:117] "RemoveContainer" containerID="af21c2810e4044591f086410d0124cdae8e8a36091592c3abcf685476f14e128" Jan 26 13:20:15 crc kubenswrapper[4844]: I0126 13:20:15.936542 4844 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-f984df9c6-m8lct" podUID="2f336c66-c9c1-4764-8f55-a6fd70f01790" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.159:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.159:8443: connect: connection refused" Jan 26 13:20:15 crc kubenswrapper[4844]: I0126 13:20:15.939669 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c7c497879-k82c9"] Jan 26 13:20:15 crc kubenswrapper[4844]: I0126 13:20:15.952306 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c7c497879-k82c9"] Jan 26 13:20:15 crc kubenswrapper[4844]: I0126 13:20:15.959899 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-nw6hp" Jan 26 13:20:16 crc kubenswrapper[4844]: I0126 13:20:16.063907 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-nw6hp" Jan 26 13:20:16 crc kubenswrapper[4844]: I0126 13:20:16.209289 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nw6hp"] Jan 26 13:20:16 crc kubenswrapper[4844]: I0126 13:20:16.880810 4844 generic.go:334] "Generic (PLEG): container finished" podID="b28528a5-6d16-4775-89eb-5f0e00b4afd1" containerID="0f9544d3a9e10d95d303c6ebb7e711a8e223f4070572b994ed7bda09d182e8d3" exitCode=0 Jan 26 13:20:16 crc kubenswrapper[4844]: I0126 13:20:16.881657 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"b28528a5-6d16-4775-89eb-5f0e00b4afd1","Type":"ContainerDied","Data":"0f9544d3a9e10d95d303c6ebb7e711a8e223f4070572b994ed7bda09d182e8d3"} Jan 26 13:20:16 crc kubenswrapper[4844]: I0126 13:20:16.910878 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"48790dbd-c7a3-48f0-a3a8-a8685a07f9d2","Type":"ContainerStarted","Data":"7f1ea68571e5a9daeb4dc8f7339cd361453f01ff144f8bb1af3c8316968318f8"} Jan 26 13:20:16 crc kubenswrapper[4844]: I0126 13:20:16.962304 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.962285164 podStartE2EDuration="4.962285164s" podCreationTimestamp="2026-01-26 13:20:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 13:20:16.93936287 +0000 UTC m=+2193.872730482" watchObservedRunningTime="2026-01-26 13:20:16.962285164 +0000 UTC m=+2193.895652776" Jan 26 13:20:17 crc kubenswrapper[4844]: E0126 13:20:17.059336 4844 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podef403703_395e_4db1_a9f5_a8e011e39ff2.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb28528a5_6d16_4775_89eb_5f0e00b4afd1.slice/crio-b42db70088eb86e462bac2a7ee6c35302dc3fe653ff83e9ba986edda37f0ff80.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb28528a5_6d16_4775_89eb_5f0e00b4afd1.slice/crio-conmon-b42db70088eb86e462bac2a7ee6c35302dc3fe653ff83e9ba986edda37f0ff80.scope\": RecentStats: unable to find data in memory cache]" Jan 26 13:20:17 crc kubenswrapper[4844]: I0126 13:20:17.223849 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-5db4cb7f67-85gvs" Jan 26 13:20:17 crc kubenswrapper[4844]: I0126 13:20:17.326363 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="188e9259-51a6-4775-a1a5-ccf2f736513c" path="/var/lib/kubelet/pods/188e9259-51a6-4775-a1a5-ccf2f736513c/volumes" Jan 26 13:20:17 crc kubenswrapper[4844]: I0126 13:20:17.412461 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-7ff9fb4f5b-dz4mq" Jan 26 13:20:17 crc kubenswrapper[4844]: I0126 13:20:17.426154 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-7ff9fb4f5b-dz4mq" Jan 26 13:20:17 crc kubenswrapper[4844]: I0126 13:20:17.546556 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 26 13:20:17 crc kubenswrapper[4844]: I0126 13:20:17.600021 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 26 13:20:17 crc kubenswrapper[4844]: I0126 13:20:17.634026 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b28528a5-6d16-4775-89eb-5f0e00b4afd1-combined-ca-bundle\") pod \"b28528a5-6d16-4775-89eb-5f0e00b4afd1\" (UID: \"b28528a5-6d16-4775-89eb-5f0e00b4afd1\") " Jan 26 13:20:17 crc kubenswrapper[4844]: I0126 13:20:17.634094 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b28528a5-6d16-4775-89eb-5f0e00b4afd1-etc-machine-id\") pod \"b28528a5-6d16-4775-89eb-5f0e00b4afd1\" (UID: \"b28528a5-6d16-4775-89eb-5f0e00b4afd1\") " Jan 26 13:20:17 crc kubenswrapper[4844]: I0126 13:20:17.634175 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b28528a5-6d16-4775-89eb-5f0e00b4afd1-scripts\") pod \"b28528a5-6d16-4775-89eb-5f0e00b4afd1\" (UID: \"b28528a5-6d16-4775-89eb-5f0e00b4afd1\") " Jan 26 13:20:17 crc kubenswrapper[4844]: I0126 13:20:17.634316 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b28528a5-6d16-4775-89eb-5f0e00b4afd1-config-data-custom\") pod \"b28528a5-6d16-4775-89eb-5f0e00b4afd1\" (UID: \"b28528a5-6d16-4775-89eb-5f0e00b4afd1\") " Jan 26 13:20:17 crc kubenswrapper[4844]: I0126 13:20:17.634461 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b28528a5-6d16-4775-89eb-5f0e00b4afd1-config-data\") pod \"b28528a5-6d16-4775-89eb-5f0e00b4afd1\" (UID: \"b28528a5-6d16-4775-89eb-5f0e00b4afd1\") " Jan 26 13:20:17 crc kubenswrapper[4844]: I0126 13:20:17.634496 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-284bj\" (UniqueName: \"kubernetes.io/projected/b28528a5-6d16-4775-89eb-5f0e00b4afd1-kube-api-access-284bj\") pod \"b28528a5-6d16-4775-89eb-5f0e00b4afd1\" (UID: \"b28528a5-6d16-4775-89eb-5f0e00b4afd1\") " Jan 26 13:20:17 crc kubenswrapper[4844]: I0126 13:20:17.636708 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b28528a5-6d16-4775-89eb-5f0e00b4afd1-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "b28528a5-6d16-4775-89eb-5f0e00b4afd1" (UID: "b28528a5-6d16-4775-89eb-5f0e00b4afd1"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 13:20:17 crc kubenswrapper[4844]: I0126 13:20:17.640980 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b28528a5-6d16-4775-89eb-5f0e00b4afd1-scripts" (OuterVolumeSpecName: "scripts") pod "b28528a5-6d16-4775-89eb-5f0e00b4afd1" (UID: "b28528a5-6d16-4775-89eb-5f0e00b4afd1"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:20:17 crc kubenswrapper[4844]: I0126 13:20:17.646776 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b28528a5-6d16-4775-89eb-5f0e00b4afd1-kube-api-access-284bj" (OuterVolumeSpecName: "kube-api-access-284bj") pod "b28528a5-6d16-4775-89eb-5f0e00b4afd1" (UID: "b28528a5-6d16-4775-89eb-5f0e00b4afd1"). InnerVolumeSpecName "kube-api-access-284bj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:20:17 crc kubenswrapper[4844]: I0126 13:20:17.646929 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b28528a5-6d16-4775-89eb-5f0e00b4afd1-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "b28528a5-6d16-4775-89eb-5f0e00b4afd1" (UID: "b28528a5-6d16-4775-89eb-5f0e00b4afd1"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:20:17 crc kubenswrapper[4844]: I0126 13:20:17.736732 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b28528a5-6d16-4775-89eb-5f0e00b4afd1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b28528a5-6d16-4775-89eb-5f0e00b4afd1" (UID: "b28528a5-6d16-4775-89eb-5f0e00b4afd1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:20:17 crc kubenswrapper[4844]: I0126 13:20:17.737321 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-284bj\" (UniqueName: \"kubernetes.io/projected/b28528a5-6d16-4775-89eb-5f0e00b4afd1-kube-api-access-284bj\") on node \"crc\" DevicePath \"\"" Jan 26 13:20:17 crc kubenswrapper[4844]: I0126 13:20:17.737340 4844 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b28528a5-6d16-4775-89eb-5f0e00b4afd1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 13:20:17 crc kubenswrapper[4844]: I0126 13:20:17.737350 4844 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b28528a5-6d16-4775-89eb-5f0e00b4afd1-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 26 13:20:17 crc kubenswrapper[4844]: I0126 13:20:17.737360 4844 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b28528a5-6d16-4775-89eb-5f0e00b4afd1-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 13:20:17 crc kubenswrapper[4844]: I0126 13:20:17.737368 4844 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b28528a5-6d16-4775-89eb-5f0e00b4afd1-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 26 13:20:17 crc kubenswrapper[4844]: I0126 13:20:17.754418 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b28528a5-6d16-4775-89eb-5f0e00b4afd1-config-data" (OuterVolumeSpecName: "config-data") pod "b28528a5-6d16-4775-89eb-5f0e00b4afd1" (UID: "b28528a5-6d16-4775-89eb-5f0e00b4afd1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:20:17 crc kubenswrapper[4844]: I0126 13:20:17.838763 4844 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b28528a5-6d16-4775-89eb-5f0e00b4afd1-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 13:20:17 crc kubenswrapper[4844]: I0126 13:20:17.933739 4844 generic.go:334] "Generic (PLEG): container finished" podID="b28528a5-6d16-4775-89eb-5f0e00b4afd1" containerID="b42db70088eb86e462bac2a7ee6c35302dc3fe653ff83e9ba986edda37f0ff80" exitCode=0 Jan 26 13:20:17 crc kubenswrapper[4844]: I0126 13:20:17.933807 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 26 13:20:17 crc kubenswrapper[4844]: I0126 13:20:17.933858 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b28528a5-6d16-4775-89eb-5f0e00b4afd1","Type":"ContainerDied","Data":"b42db70088eb86e462bac2a7ee6c35302dc3fe653ff83e9ba986edda37f0ff80"} Jan 26 13:20:17 crc kubenswrapper[4844]: I0126 13:20:17.933904 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b28528a5-6d16-4775-89eb-5f0e00b4afd1","Type":"ContainerDied","Data":"aa700d4a4bb2c55d72a39a9367a812b8e0f35bcd3e8692e49743697d7d1b7b4a"} Jan 26 13:20:17 crc kubenswrapper[4844]: I0126 13:20:17.933929 4844 scope.go:117] "RemoveContainer" containerID="0f9544d3a9e10d95d303c6ebb7e711a8e223f4070572b994ed7bda09d182e8d3" Jan 26 13:20:17 crc kubenswrapper[4844]: I0126 13:20:17.934044 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-nw6hp" podUID="0b53d3b2-56e9-427c-8dcd-e5487cecc4f9" containerName="registry-server" containerID="cri-o://bcf0c9a16d391de4b7cd2111db5acf9a272fc80e3f14b95cef9bee0159b8ed9d" gracePeriod=2 Jan 26 13:20:17 crc kubenswrapper[4844]: I0126 13:20:17.995033 4844 scope.go:117] "RemoveContainer" containerID="b42db70088eb86e462bac2a7ee6c35302dc3fe653ff83e9ba986edda37f0ff80" Jan 26 13:20:18 crc kubenswrapper[4844]: I0126 13:20:18.009933 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 26 13:20:18 crc kubenswrapper[4844]: I0126 13:20:18.022871 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 26 13:20:18 crc kubenswrapper[4844]: I0126 13:20:18.038749 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 26 13:20:18 crc kubenswrapper[4844]: E0126 13:20:18.039173 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b28528a5-6d16-4775-89eb-5f0e00b4afd1" containerName="cinder-scheduler" Jan 26 13:20:18 crc kubenswrapper[4844]: I0126 13:20:18.039195 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="b28528a5-6d16-4775-89eb-5f0e00b4afd1" containerName="cinder-scheduler" Jan 26 13:20:18 crc kubenswrapper[4844]: E0126 13:20:18.039207 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b28528a5-6d16-4775-89eb-5f0e00b4afd1" containerName="probe" Jan 26 13:20:18 crc kubenswrapper[4844]: I0126 13:20:18.039214 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="b28528a5-6d16-4775-89eb-5f0e00b4afd1" containerName="probe" Jan 26 13:20:18 crc kubenswrapper[4844]: E0126 13:20:18.039236 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="188e9259-51a6-4775-a1a5-ccf2f736513c" containerName="init" Jan 26 13:20:18 crc kubenswrapper[4844]: 
I0126 13:20:18.039243 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="188e9259-51a6-4775-a1a5-ccf2f736513c" containerName="init" Jan 26 13:20:18 crc kubenswrapper[4844]: E0126 13:20:18.039259 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="188e9259-51a6-4775-a1a5-ccf2f736513c" containerName="dnsmasq-dns" Jan 26 13:20:18 crc kubenswrapper[4844]: I0126 13:20:18.039267 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="188e9259-51a6-4775-a1a5-ccf2f736513c" containerName="dnsmasq-dns" Jan 26 13:20:18 crc kubenswrapper[4844]: I0126 13:20:18.039445 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="b28528a5-6d16-4775-89eb-5f0e00b4afd1" containerName="probe" Jan 26 13:20:18 crc kubenswrapper[4844]: I0126 13:20:18.039473 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="b28528a5-6d16-4775-89eb-5f0e00b4afd1" containerName="cinder-scheduler" Jan 26 13:20:18 crc kubenswrapper[4844]: I0126 13:20:18.039490 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="188e9259-51a6-4775-a1a5-ccf2f736513c" containerName="dnsmasq-dns" Jan 26 13:20:18 crc kubenswrapper[4844]: I0126 13:20:18.044972 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 26 13:20:18 crc kubenswrapper[4844]: I0126 13:20:18.051961 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 26 13:20:18 crc kubenswrapper[4844]: I0126 13:20:18.089453 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 26 13:20:18 crc kubenswrapper[4844]: I0126 13:20:18.129295 4844 scope.go:117] "RemoveContainer" containerID="0f9544d3a9e10d95d303c6ebb7e711a8e223f4070572b994ed7bda09d182e8d3" Jan 26 13:20:18 crc kubenswrapper[4844]: E0126 13:20:18.130140 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f9544d3a9e10d95d303c6ebb7e711a8e223f4070572b994ed7bda09d182e8d3\": container with ID starting with 0f9544d3a9e10d95d303c6ebb7e711a8e223f4070572b994ed7bda09d182e8d3 not found: ID does not exist" containerID="0f9544d3a9e10d95d303c6ebb7e711a8e223f4070572b994ed7bda09d182e8d3" Jan 26 13:20:18 crc kubenswrapper[4844]: I0126 13:20:18.130177 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f9544d3a9e10d95d303c6ebb7e711a8e223f4070572b994ed7bda09d182e8d3"} err="failed to get container status \"0f9544d3a9e10d95d303c6ebb7e711a8e223f4070572b994ed7bda09d182e8d3\": rpc error: code = NotFound desc = could not find container \"0f9544d3a9e10d95d303c6ebb7e711a8e223f4070572b994ed7bda09d182e8d3\": container with ID starting with 0f9544d3a9e10d95d303c6ebb7e711a8e223f4070572b994ed7bda09d182e8d3 not found: ID does not exist" Jan 26 13:20:18 crc kubenswrapper[4844]: I0126 13:20:18.130204 4844 scope.go:117] "RemoveContainer" containerID="b42db70088eb86e462bac2a7ee6c35302dc3fe653ff83e9ba986edda37f0ff80" Jan 26 13:20:18 crc kubenswrapper[4844]: E0126 13:20:18.130551 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b42db70088eb86e462bac2a7ee6c35302dc3fe653ff83e9ba986edda37f0ff80\": container with ID starting with b42db70088eb86e462bac2a7ee6c35302dc3fe653ff83e9ba986edda37f0ff80 not found: ID does not exist" containerID="b42db70088eb86e462bac2a7ee6c35302dc3fe653ff83e9ba986edda37f0ff80" Jan 26 13:20:18 crc kubenswrapper[4844]: I0126 
13:20:18.130570 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b42db70088eb86e462bac2a7ee6c35302dc3fe653ff83e9ba986edda37f0ff80"} err="failed to get container status \"b42db70088eb86e462bac2a7ee6c35302dc3fe653ff83e9ba986edda37f0ff80\": rpc error: code = NotFound desc = could not find container \"b42db70088eb86e462bac2a7ee6c35302dc3fe653ff83e9ba986edda37f0ff80\": container with ID starting with b42db70088eb86e462bac2a7ee6c35302dc3fe653ff83e9ba986edda37f0ff80 not found: ID does not exist" Jan 26 13:20:18 crc kubenswrapper[4844]: I0126 13:20:18.160177 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47c752dd-0b96-464c-9cb4-3251fc31556a-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"47c752dd-0b96-464c-9cb4-3251fc31556a\") " pod="openstack/cinder-scheduler-0" Jan 26 13:20:18 crc kubenswrapper[4844]: I0126 13:20:18.160261 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqcqr\" (UniqueName: \"kubernetes.io/projected/47c752dd-0b96-464c-9cb4-3251fc31556a-kube-api-access-rqcqr\") pod \"cinder-scheduler-0\" (UID: \"47c752dd-0b96-464c-9cb4-3251fc31556a\") " pod="openstack/cinder-scheduler-0" Jan 26 13:20:18 crc kubenswrapper[4844]: I0126 13:20:18.160302 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/47c752dd-0b96-464c-9cb4-3251fc31556a-scripts\") pod \"cinder-scheduler-0\" (UID: \"47c752dd-0b96-464c-9cb4-3251fc31556a\") " pod="openstack/cinder-scheduler-0" Jan 26 13:20:18 crc kubenswrapper[4844]: I0126 13:20:18.160341 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47c752dd-0b96-464c-9cb4-3251fc31556a-config-data\") pod \"cinder-scheduler-0\" (UID: \"47c752dd-0b96-464c-9cb4-3251fc31556a\") " pod="openstack/cinder-scheduler-0" Jan 26 13:20:18 crc kubenswrapper[4844]: I0126 13:20:18.160359 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/47c752dd-0b96-464c-9cb4-3251fc31556a-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"47c752dd-0b96-464c-9cb4-3251fc31556a\") " pod="openstack/cinder-scheduler-0" Jan 26 13:20:18 crc kubenswrapper[4844]: I0126 13:20:18.160403 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/47c752dd-0b96-464c-9cb4-3251fc31556a-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"47c752dd-0b96-464c-9cb4-3251fc31556a\") " pod="openstack/cinder-scheduler-0" Jan 26 13:20:18 crc kubenswrapper[4844]: I0126 13:20:18.261842 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqcqr\" (UniqueName: \"kubernetes.io/projected/47c752dd-0b96-464c-9cb4-3251fc31556a-kube-api-access-rqcqr\") pod \"cinder-scheduler-0\" (UID: \"47c752dd-0b96-464c-9cb4-3251fc31556a\") " pod="openstack/cinder-scheduler-0" Jan 26 13:20:18 crc kubenswrapper[4844]: I0126 13:20:18.261905 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/47c752dd-0b96-464c-9cb4-3251fc31556a-scripts\") pod \"cinder-scheduler-0\" (UID: 
\"47c752dd-0b96-464c-9cb4-3251fc31556a\") " pod="openstack/cinder-scheduler-0" Jan 26 13:20:18 crc kubenswrapper[4844]: I0126 13:20:18.261962 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47c752dd-0b96-464c-9cb4-3251fc31556a-config-data\") pod \"cinder-scheduler-0\" (UID: \"47c752dd-0b96-464c-9cb4-3251fc31556a\") " pod="openstack/cinder-scheduler-0" Jan 26 13:20:18 crc kubenswrapper[4844]: I0126 13:20:18.261987 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/47c752dd-0b96-464c-9cb4-3251fc31556a-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"47c752dd-0b96-464c-9cb4-3251fc31556a\") " pod="openstack/cinder-scheduler-0" Jan 26 13:20:18 crc kubenswrapper[4844]: I0126 13:20:18.262050 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/47c752dd-0b96-464c-9cb4-3251fc31556a-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"47c752dd-0b96-464c-9cb4-3251fc31556a\") " pod="openstack/cinder-scheduler-0" Jan 26 13:20:18 crc kubenswrapper[4844]: I0126 13:20:18.262133 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47c752dd-0b96-464c-9cb4-3251fc31556a-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"47c752dd-0b96-464c-9cb4-3251fc31556a\") " pod="openstack/cinder-scheduler-0" Jan 26 13:20:18 crc kubenswrapper[4844]: I0126 13:20:18.265731 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/47c752dd-0b96-464c-9cb4-3251fc31556a-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"47c752dd-0b96-464c-9cb4-3251fc31556a\") " pod="openstack/cinder-scheduler-0" Jan 26 13:20:18 crc kubenswrapper[4844]: I0126 13:20:18.269459 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/47c752dd-0b96-464c-9cb4-3251fc31556a-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"47c752dd-0b96-464c-9cb4-3251fc31556a\") " pod="openstack/cinder-scheduler-0" Jan 26 13:20:18 crc kubenswrapper[4844]: I0126 13:20:18.274237 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/47c752dd-0b96-464c-9cb4-3251fc31556a-scripts\") pod \"cinder-scheduler-0\" (UID: \"47c752dd-0b96-464c-9cb4-3251fc31556a\") " pod="openstack/cinder-scheduler-0" Jan 26 13:20:18 crc kubenswrapper[4844]: I0126 13:20:18.274239 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47c752dd-0b96-464c-9cb4-3251fc31556a-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"47c752dd-0b96-464c-9cb4-3251fc31556a\") " pod="openstack/cinder-scheduler-0" Jan 26 13:20:18 crc kubenswrapper[4844]: I0126 13:20:18.274906 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47c752dd-0b96-464c-9cb4-3251fc31556a-config-data\") pod \"cinder-scheduler-0\" (UID: \"47c752dd-0b96-464c-9cb4-3251fc31556a\") " pod="openstack/cinder-scheduler-0" Jan 26 13:20:18 crc kubenswrapper[4844]: I0126 13:20:18.292248 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqcqr\" (UniqueName: 
\"kubernetes.io/projected/47c752dd-0b96-464c-9cb4-3251fc31556a-kube-api-access-rqcqr\") pod \"cinder-scheduler-0\" (UID: \"47c752dd-0b96-464c-9cb4-3251fc31556a\") " pod="openstack/cinder-scheduler-0" Jan 26 13:20:18 crc kubenswrapper[4844]: I0126 13:20:18.415546 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 26 13:20:18 crc kubenswrapper[4844]: I0126 13:20:18.428140 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nw6hp" Jan 26 13:20:18 crc kubenswrapper[4844]: I0126 13:20:18.569103 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-69b6s\" (UniqueName: \"kubernetes.io/projected/0b53d3b2-56e9-427c-8dcd-e5487cecc4f9-kube-api-access-69b6s\") pod \"0b53d3b2-56e9-427c-8dcd-e5487cecc4f9\" (UID: \"0b53d3b2-56e9-427c-8dcd-e5487cecc4f9\") " Jan 26 13:20:18 crc kubenswrapper[4844]: I0126 13:20:18.569441 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b53d3b2-56e9-427c-8dcd-e5487cecc4f9-catalog-content\") pod \"0b53d3b2-56e9-427c-8dcd-e5487cecc4f9\" (UID: \"0b53d3b2-56e9-427c-8dcd-e5487cecc4f9\") " Jan 26 13:20:18 crc kubenswrapper[4844]: I0126 13:20:18.569497 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b53d3b2-56e9-427c-8dcd-e5487cecc4f9-utilities\") pod \"0b53d3b2-56e9-427c-8dcd-e5487cecc4f9\" (UID: \"0b53d3b2-56e9-427c-8dcd-e5487cecc4f9\") " Jan 26 13:20:18 crc kubenswrapper[4844]: I0126 13:20:18.570439 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b53d3b2-56e9-427c-8dcd-e5487cecc4f9-utilities" (OuterVolumeSpecName: "utilities") pod "0b53d3b2-56e9-427c-8dcd-e5487cecc4f9" (UID: "0b53d3b2-56e9-427c-8dcd-e5487cecc4f9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:20:18 crc kubenswrapper[4844]: I0126 13:20:18.580170 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b53d3b2-56e9-427c-8dcd-e5487cecc4f9-kube-api-access-69b6s" (OuterVolumeSpecName: "kube-api-access-69b6s") pod "0b53d3b2-56e9-427c-8dcd-e5487cecc4f9" (UID: "0b53d3b2-56e9-427c-8dcd-e5487cecc4f9"). InnerVolumeSpecName "kube-api-access-69b6s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:20:18 crc kubenswrapper[4844]: I0126 13:20:18.599576 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b53d3b2-56e9-427c-8dcd-e5487cecc4f9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0b53d3b2-56e9-427c-8dcd-e5487cecc4f9" (UID: "0b53d3b2-56e9-427c-8dcd-e5487cecc4f9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:20:18 crc kubenswrapper[4844]: I0126 13:20:18.670944 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-69b6s\" (UniqueName: \"kubernetes.io/projected/0b53d3b2-56e9-427c-8dcd-e5487cecc4f9-kube-api-access-69b6s\") on node \"crc\" DevicePath \"\"" Jan 26 13:20:18 crc kubenswrapper[4844]: I0126 13:20:18.670967 4844 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b53d3b2-56e9-427c-8dcd-e5487cecc4f9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 13:20:18 crc kubenswrapper[4844]: I0126 13:20:18.670976 4844 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b53d3b2-56e9-427c-8dcd-e5487cecc4f9-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 13:20:18 crc kubenswrapper[4844]: W0126 13:20:18.900233 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod47c752dd_0b96_464c_9cb4_3251fc31556a.slice/crio-12049ca3d63f4ebd6da55e3b10a5df2c96c9b4534bf8dc2c701c5cb45921edb8 WatchSource:0}: Error finding container 12049ca3d63f4ebd6da55e3b10a5df2c96c9b4534bf8dc2c701c5cb45921edb8: Status 404 returned error can't find the container with id 12049ca3d63f4ebd6da55e3b10a5df2c96c9b4534bf8dc2c701c5cb45921edb8 Jan 26 13:20:18 crc kubenswrapper[4844]: I0126 13:20:18.907996 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 26 13:20:18 crc kubenswrapper[4844]: I0126 13:20:18.942571 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"47c752dd-0b96-464c-9cb4-3251fc31556a","Type":"ContainerStarted","Data":"12049ca3d63f4ebd6da55e3b10a5df2c96c9b4534bf8dc2c701c5cb45921edb8"} Jan 26 13:20:18 crc kubenswrapper[4844]: I0126 13:20:18.944995 4844 generic.go:334] "Generic (PLEG): container finished" podID="0b53d3b2-56e9-427c-8dcd-e5487cecc4f9" containerID="bcf0c9a16d391de4b7cd2111db5acf9a272fc80e3f14b95cef9bee0159b8ed9d" exitCode=0 Jan 26 13:20:18 crc kubenswrapper[4844]: I0126 13:20:18.945019 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nw6hp" event={"ID":"0b53d3b2-56e9-427c-8dcd-e5487cecc4f9","Type":"ContainerDied","Data":"bcf0c9a16d391de4b7cd2111db5acf9a272fc80e3f14b95cef9bee0159b8ed9d"} Jan 26 13:20:18 crc kubenswrapper[4844]: I0126 13:20:18.945035 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nw6hp" event={"ID":"0b53d3b2-56e9-427c-8dcd-e5487cecc4f9","Type":"ContainerDied","Data":"9677f88cf6b0406282608b08bc0f7c519bd58eee9ea0ec470895cecad1953d8b"} Jan 26 13:20:18 crc kubenswrapper[4844]: I0126 13:20:18.945052 4844 scope.go:117] "RemoveContainer" containerID="bcf0c9a16d391de4b7cd2111db5acf9a272fc80e3f14b95cef9bee0159b8ed9d" Jan 26 13:20:18 crc kubenswrapper[4844]: I0126 13:20:18.945160 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nw6hp" Jan 26 13:20:19 crc kubenswrapper[4844]: I0126 13:20:19.097397 4844 scope.go:117] "RemoveContainer" containerID="b5e373fffb5472440ecabbaaffb0660f5b1e6bfe3a354b171f3436e9f8a16ba5" Jan 26 13:20:19 crc kubenswrapper[4844]: I0126 13:20:19.113104 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nw6hp"] Jan 26 13:20:19 crc kubenswrapper[4844]: I0126 13:20:19.119568 4844 scope.go:117] "RemoveContainer" containerID="9bdda0f2b4232779dc7c4dc8a126055439e68d05d737b326c6bcb69cd3f3a1b2" Jan 26 13:20:19 crc kubenswrapper[4844]: I0126 13:20:19.123027 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-nw6hp"] Jan 26 13:20:19 crc kubenswrapper[4844]: I0126 13:20:19.170164 4844 scope.go:117] "RemoveContainer" containerID="bcf0c9a16d391de4b7cd2111db5acf9a272fc80e3f14b95cef9bee0159b8ed9d" Jan 26 13:20:19 crc kubenswrapper[4844]: E0126 13:20:19.171004 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bcf0c9a16d391de4b7cd2111db5acf9a272fc80e3f14b95cef9bee0159b8ed9d\": container with ID starting with bcf0c9a16d391de4b7cd2111db5acf9a272fc80e3f14b95cef9bee0159b8ed9d not found: ID does not exist" containerID="bcf0c9a16d391de4b7cd2111db5acf9a272fc80e3f14b95cef9bee0159b8ed9d" Jan 26 13:20:19 crc kubenswrapper[4844]: I0126 13:20:19.171049 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bcf0c9a16d391de4b7cd2111db5acf9a272fc80e3f14b95cef9bee0159b8ed9d"} err="failed to get container status \"bcf0c9a16d391de4b7cd2111db5acf9a272fc80e3f14b95cef9bee0159b8ed9d\": rpc error: code = NotFound desc = could not find container \"bcf0c9a16d391de4b7cd2111db5acf9a272fc80e3f14b95cef9bee0159b8ed9d\": container with ID starting with bcf0c9a16d391de4b7cd2111db5acf9a272fc80e3f14b95cef9bee0159b8ed9d not found: ID does not exist" Jan 26 13:20:19 crc kubenswrapper[4844]: I0126 13:20:19.171074 4844 scope.go:117] "RemoveContainer" containerID="b5e373fffb5472440ecabbaaffb0660f5b1e6bfe3a354b171f3436e9f8a16ba5" Jan 26 13:20:19 crc kubenswrapper[4844]: E0126 13:20:19.171748 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5e373fffb5472440ecabbaaffb0660f5b1e6bfe3a354b171f3436e9f8a16ba5\": container with ID starting with b5e373fffb5472440ecabbaaffb0660f5b1e6bfe3a354b171f3436e9f8a16ba5 not found: ID does not exist" containerID="b5e373fffb5472440ecabbaaffb0660f5b1e6bfe3a354b171f3436e9f8a16ba5" Jan 26 13:20:19 crc kubenswrapper[4844]: I0126 13:20:19.171787 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5e373fffb5472440ecabbaaffb0660f5b1e6bfe3a354b171f3436e9f8a16ba5"} err="failed to get container status \"b5e373fffb5472440ecabbaaffb0660f5b1e6bfe3a354b171f3436e9f8a16ba5\": rpc error: code = NotFound desc = could not find container \"b5e373fffb5472440ecabbaaffb0660f5b1e6bfe3a354b171f3436e9f8a16ba5\": container with ID starting with b5e373fffb5472440ecabbaaffb0660f5b1e6bfe3a354b171f3436e9f8a16ba5 not found: ID does not exist" Jan 26 13:20:19 crc kubenswrapper[4844]: I0126 13:20:19.171808 4844 scope.go:117] "RemoveContainer" containerID="9bdda0f2b4232779dc7c4dc8a126055439e68d05d737b326c6bcb69cd3f3a1b2" Jan 26 13:20:19 crc kubenswrapper[4844]: E0126 13:20:19.172176 4844 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"9bdda0f2b4232779dc7c4dc8a126055439e68d05d737b326c6bcb69cd3f3a1b2\": container with ID starting with 9bdda0f2b4232779dc7c4dc8a126055439e68d05d737b326c6bcb69cd3f3a1b2 not found: ID does not exist" containerID="9bdda0f2b4232779dc7c4dc8a126055439e68d05d737b326c6bcb69cd3f3a1b2" Jan 26 13:20:19 crc kubenswrapper[4844]: I0126 13:20:19.172209 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9bdda0f2b4232779dc7c4dc8a126055439e68d05d737b326c6bcb69cd3f3a1b2"} err="failed to get container status \"9bdda0f2b4232779dc7c4dc8a126055439e68d05d737b326c6bcb69cd3f3a1b2\": rpc error: code = NotFound desc = could not find container \"9bdda0f2b4232779dc7c4dc8a126055439e68d05d737b326c6bcb69cd3f3a1b2\": container with ID starting with 9bdda0f2b4232779dc7c4dc8a126055439e68d05d737b326c6bcb69cd3f3a1b2 not found: ID does not exist" Jan 26 13:20:19 crc kubenswrapper[4844]: I0126 13:20:19.268446 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 26 13:20:19 crc kubenswrapper[4844]: E0126 13:20:19.269928 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b53d3b2-56e9-427c-8dcd-e5487cecc4f9" containerName="registry-server" Jan 26 13:20:19 crc kubenswrapper[4844]: I0126 13:20:19.269975 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b53d3b2-56e9-427c-8dcd-e5487cecc4f9" containerName="registry-server" Jan 26 13:20:19 crc kubenswrapper[4844]: E0126 13:20:19.270005 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b53d3b2-56e9-427c-8dcd-e5487cecc4f9" containerName="extract-content" Jan 26 13:20:19 crc kubenswrapper[4844]: I0126 13:20:19.270014 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b53d3b2-56e9-427c-8dcd-e5487cecc4f9" containerName="extract-content" Jan 26 13:20:19 crc kubenswrapper[4844]: E0126 13:20:19.270065 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b53d3b2-56e9-427c-8dcd-e5487cecc4f9" containerName="extract-utilities" Jan 26 13:20:19 crc kubenswrapper[4844]: I0126 13:20:19.270073 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b53d3b2-56e9-427c-8dcd-e5487cecc4f9" containerName="extract-utilities" Jan 26 13:20:19 crc kubenswrapper[4844]: I0126 13:20:19.270546 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b53d3b2-56e9-427c-8dcd-e5487cecc4f9" containerName="registry-server" Jan 26 13:20:19 crc kubenswrapper[4844]: I0126 13:20:19.271763 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 26 13:20:19 crc kubenswrapper[4844]: I0126 13:20:19.275946 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 26 13:20:19 crc kubenswrapper[4844]: I0126 13:20:19.276105 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 26 13:20:19 crc kubenswrapper[4844]: I0126 13:20:19.276260 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-k2xjg" Jan 26 13:20:19 crc kubenswrapper[4844]: I0126 13:20:19.283202 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 26 13:20:19 crc kubenswrapper[4844]: I0126 13:20:19.327030 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b53d3b2-56e9-427c-8dcd-e5487cecc4f9" path="/var/lib/kubelet/pods/0b53d3b2-56e9-427c-8dcd-e5487cecc4f9/volumes" Jan 26 13:20:19 crc kubenswrapper[4844]: I0126 13:20:19.328208 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b28528a5-6d16-4775-89eb-5f0e00b4afd1" path="/var/lib/kubelet/pods/b28528a5-6d16-4775-89eb-5f0e00b4afd1/volumes" Jan 26 13:20:19 crc kubenswrapper[4844]: I0126 13:20:19.391368 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d831cf25-12e3-4375-88ae-4ce13c139248-openstack-config\") pod \"openstackclient\" (UID: \"d831cf25-12e3-4375-88ae-4ce13c139248\") " pod="openstack/openstackclient" Jan 26 13:20:19 crc kubenswrapper[4844]: I0126 13:20:19.391440 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d831cf25-12e3-4375-88ae-4ce13c139248-openstack-config-secret\") pod \"openstackclient\" (UID: \"d831cf25-12e3-4375-88ae-4ce13c139248\") " pod="openstack/openstackclient" Jan 26 13:20:19 crc kubenswrapper[4844]: I0126 13:20:19.391521 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d831cf25-12e3-4375-88ae-4ce13c139248-combined-ca-bundle\") pod \"openstackclient\" (UID: \"d831cf25-12e3-4375-88ae-4ce13c139248\") " pod="openstack/openstackclient" Jan 26 13:20:19 crc kubenswrapper[4844]: I0126 13:20:19.391653 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22jnr\" (UniqueName: \"kubernetes.io/projected/d831cf25-12e3-4375-88ae-4ce13c139248-kube-api-access-22jnr\") pod \"openstackclient\" (UID: \"d831cf25-12e3-4375-88ae-4ce13c139248\") " pod="openstack/openstackclient" Jan 26 13:20:19 crc kubenswrapper[4844]: I0126 13:20:19.493023 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22jnr\" (UniqueName: \"kubernetes.io/projected/d831cf25-12e3-4375-88ae-4ce13c139248-kube-api-access-22jnr\") pod \"openstackclient\" (UID: \"d831cf25-12e3-4375-88ae-4ce13c139248\") " pod="openstack/openstackclient" Jan 26 13:20:19 crc kubenswrapper[4844]: I0126 13:20:19.493109 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d831cf25-12e3-4375-88ae-4ce13c139248-openstack-config\") pod \"openstackclient\" (UID: \"d831cf25-12e3-4375-88ae-4ce13c139248\") " pod="openstack/openstackclient" Jan 26 
13:20:19 crc kubenswrapper[4844]: I0126 13:20:19.493156 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d831cf25-12e3-4375-88ae-4ce13c139248-openstack-config-secret\") pod \"openstackclient\" (UID: \"d831cf25-12e3-4375-88ae-4ce13c139248\") " pod="openstack/openstackclient" Jan 26 13:20:19 crc kubenswrapper[4844]: I0126 13:20:19.493244 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d831cf25-12e3-4375-88ae-4ce13c139248-combined-ca-bundle\") pod \"openstackclient\" (UID: \"d831cf25-12e3-4375-88ae-4ce13c139248\") " pod="openstack/openstackclient" Jan 26 13:20:19 crc kubenswrapper[4844]: I0126 13:20:19.494569 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d831cf25-12e3-4375-88ae-4ce13c139248-openstack-config\") pod \"openstackclient\" (UID: \"d831cf25-12e3-4375-88ae-4ce13c139248\") " pod="openstack/openstackclient" Jan 26 13:20:19 crc kubenswrapper[4844]: I0126 13:20:19.499025 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d831cf25-12e3-4375-88ae-4ce13c139248-openstack-config-secret\") pod \"openstackclient\" (UID: \"d831cf25-12e3-4375-88ae-4ce13c139248\") " pod="openstack/openstackclient" Jan 26 13:20:19 crc kubenswrapper[4844]: I0126 13:20:19.499109 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d831cf25-12e3-4375-88ae-4ce13c139248-combined-ca-bundle\") pod \"openstackclient\" (UID: \"d831cf25-12e3-4375-88ae-4ce13c139248\") " pod="openstack/openstackclient" Jan 26 13:20:19 crc kubenswrapper[4844]: I0126 13:20:19.511931 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22jnr\" (UniqueName: \"kubernetes.io/projected/d831cf25-12e3-4375-88ae-4ce13c139248-kube-api-access-22jnr\") pod \"openstackclient\" (UID: \"d831cf25-12e3-4375-88ae-4ce13c139248\") " pod="openstack/openstackclient" Jan 26 13:20:19 crc kubenswrapper[4844]: I0126 13:20:19.597011 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 26 13:20:19 crc kubenswrapper[4844]: I0126 13:20:19.958694 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"47c752dd-0b96-464c-9cb4-3251fc31556a","Type":"ContainerStarted","Data":"98d61eb2193d3e6b959085c047c38a9f9f2aff50d494b91e53cf801c0e11362e"} Jan 26 13:20:20 crc kubenswrapper[4844]: I0126 13:20:20.049222 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 26 13:20:20 crc kubenswrapper[4844]: W0126 13:20:20.060998 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd831cf25_12e3_4375_88ae_4ce13c139248.slice/crio-1c2f5b077d739f2e200d841888d821584b8cbd5d453fe789556f1c3a3a375db0 WatchSource:0}: Error finding container 1c2f5b077d739f2e200d841888d821584b8cbd5d453fe789556f1c3a3a375db0: Status 404 returned error can't find the container with id 1c2f5b077d739f2e200d841888d821584b8cbd5d453fe789556f1c3a3a375db0 Jan 26 13:20:20 crc kubenswrapper[4844]: I0126 13:20:20.980018 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"d831cf25-12e3-4375-88ae-4ce13c139248","Type":"ContainerStarted","Data":"1c2f5b077d739f2e200d841888d821584b8cbd5d453fe789556f1c3a3a375db0"} Jan 26 13:20:20 crc kubenswrapper[4844]: I0126 13:20:20.985364 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"47c752dd-0b96-464c-9cb4-3251fc31556a","Type":"ContainerStarted","Data":"557ee0dc1140efede93027993a6a3708c5d4271ef3832480867a635119ce3444"} Jan 26 13:20:21 crc kubenswrapper[4844]: I0126 13:20:21.014496 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.014469488 podStartE2EDuration="3.014469488s" podCreationTimestamp="2026-01-26 13:20:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 13:20:21.002666133 +0000 UTC m=+2197.936033805" watchObservedRunningTime="2026-01-26 13:20:21.014469488 +0000 UTC m=+2197.947837130" Jan 26 13:20:23 crc kubenswrapper[4844]: I0126 13:20:23.235900 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 26 13:20:23 crc kubenswrapper[4844]: I0126 13:20:23.236929 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 26 13:20:23 crc kubenswrapper[4844]: I0126 13:20:23.266201 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 26 13:20:23 crc kubenswrapper[4844]: I0126 13:20:23.282036 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 26 13:20:23 crc kubenswrapper[4844]: I0126 13:20:23.305061 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 26 13:20:23 crc kubenswrapper[4844]: I0126 13:20:23.305354 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 26 13:20:23 crc kubenswrapper[4844]: I0126 13:20:23.341883 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 26 13:20:23 crc kubenswrapper[4844]: 
I0126 13:20:23.365506 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 26 13:20:23 crc kubenswrapper[4844]: I0126 13:20:23.415844 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 26 13:20:24 crc kubenswrapper[4844]: I0126 13:20:24.018943 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 26 13:20:24 crc kubenswrapper[4844]: I0126 13:20:24.019356 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 26 13:20:24 crc kubenswrapper[4844]: I0126 13:20:24.019379 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 26 13:20:24 crc kubenswrapper[4844]: I0126 13:20:24.019393 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 26 13:20:24 crc kubenswrapper[4844]: I0126 13:20:24.313083 4844 scope.go:117] "RemoveContainer" containerID="f40661e9cae1344ff8df85b9eb11c5a53401a5c8932da25e88f55fc3d9a6f8f8" Jan 26 13:20:24 crc kubenswrapper[4844]: E0126 13:20:24.313283 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 20s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(ed782618-8b69-4456-9aec-5184e765960f)\"" pod="openstack/watcher-decision-engine-0" podUID="ed782618-8b69-4456-9aec-5184e765960f" Jan 26 13:20:25 crc kubenswrapper[4844]: I0126 13:20:25.912880 4844 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-f984df9c6-m8lct" podUID="2f336c66-c9c1-4764-8f55-a6fd70f01790" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.159:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.159:8443: connect: connection refused" Jan 26 13:20:25 crc kubenswrapper[4844]: I0126 13:20:25.913565 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-f984df9c6-m8lct" Jan 26 13:20:26 crc kubenswrapper[4844]: I0126 13:20:26.234755 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-5d969b7b55-l9p8p"] Jan 26 13:20:26 crc kubenswrapper[4844]: I0126 13:20:26.236623 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-5d969b7b55-l9p8p" Jan 26 13:20:26 crc kubenswrapper[4844]: I0126 13:20:26.238874 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Jan 26 13:20:26 crc kubenswrapper[4844]: I0126 13:20:26.244784 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 26 13:20:26 crc kubenswrapper[4844]: I0126 13:20:26.247219 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-5d969b7b55-l9p8p"] Jan 26 13:20:26 crc kubenswrapper[4844]: I0126 13:20:26.248511 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Jan 26 13:20:26 crc kubenswrapper[4844]: I0126 13:20:26.337138 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbzfh\" (UniqueName: \"kubernetes.io/projected/e8e7e0c6-a150-4957-8e36-2f75d269e203-kube-api-access-zbzfh\") pod \"swift-proxy-5d969b7b55-l9p8p\" (UID: \"e8e7e0c6-a150-4957-8e36-2f75d269e203\") " pod="openstack/swift-proxy-5d969b7b55-l9p8p" Jan 26 13:20:26 crc kubenswrapper[4844]: I0126 13:20:26.337413 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8e7e0c6-a150-4957-8e36-2f75d269e203-combined-ca-bundle\") pod \"swift-proxy-5d969b7b55-l9p8p\" (UID: \"e8e7e0c6-a150-4957-8e36-2f75d269e203\") " pod="openstack/swift-proxy-5d969b7b55-l9p8p" Jan 26 13:20:26 crc kubenswrapper[4844]: I0126 13:20:26.337451 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e8e7e0c6-a150-4957-8e36-2f75d269e203-etc-swift\") pod \"swift-proxy-5d969b7b55-l9p8p\" (UID: \"e8e7e0c6-a150-4957-8e36-2f75d269e203\") " pod="openstack/swift-proxy-5d969b7b55-l9p8p" Jan 26 13:20:26 crc kubenswrapper[4844]: I0126 13:20:26.337497 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e8e7e0c6-a150-4957-8e36-2f75d269e203-log-httpd\") pod \"swift-proxy-5d969b7b55-l9p8p\" (UID: \"e8e7e0c6-a150-4957-8e36-2f75d269e203\") " pod="openstack/swift-proxy-5d969b7b55-l9p8p" Jan 26 13:20:26 crc kubenswrapper[4844]: I0126 13:20:26.337574 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8e7e0c6-a150-4957-8e36-2f75d269e203-internal-tls-certs\") pod \"swift-proxy-5d969b7b55-l9p8p\" (UID: \"e8e7e0c6-a150-4957-8e36-2f75d269e203\") " pod="openstack/swift-proxy-5d969b7b55-l9p8p" Jan 26 13:20:26 crc kubenswrapper[4844]: I0126 13:20:26.337633 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8e7e0c6-a150-4957-8e36-2f75d269e203-config-data\") pod \"swift-proxy-5d969b7b55-l9p8p\" (UID: \"e8e7e0c6-a150-4957-8e36-2f75d269e203\") " pod="openstack/swift-proxy-5d969b7b55-l9p8p" Jan 26 13:20:26 crc kubenswrapper[4844]: I0126 13:20:26.337665 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8e7e0c6-a150-4957-8e36-2f75d269e203-public-tls-certs\") pod \"swift-proxy-5d969b7b55-l9p8p\" (UID: \"e8e7e0c6-a150-4957-8e36-2f75d269e203\") " 
pod="openstack/swift-proxy-5d969b7b55-l9p8p" Jan 26 13:20:26 crc kubenswrapper[4844]: I0126 13:20:26.337699 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e8e7e0c6-a150-4957-8e36-2f75d269e203-run-httpd\") pod \"swift-proxy-5d969b7b55-l9p8p\" (UID: \"e8e7e0c6-a150-4957-8e36-2f75d269e203\") " pod="openstack/swift-proxy-5d969b7b55-l9p8p" Jan 26 13:20:26 crc kubenswrapper[4844]: I0126 13:20:26.438561 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8e7e0c6-a150-4957-8e36-2f75d269e203-public-tls-certs\") pod \"swift-proxy-5d969b7b55-l9p8p\" (UID: \"e8e7e0c6-a150-4957-8e36-2f75d269e203\") " pod="openstack/swift-proxy-5d969b7b55-l9p8p" Jan 26 13:20:26 crc kubenswrapper[4844]: I0126 13:20:26.438681 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e8e7e0c6-a150-4957-8e36-2f75d269e203-run-httpd\") pod \"swift-proxy-5d969b7b55-l9p8p\" (UID: \"e8e7e0c6-a150-4957-8e36-2f75d269e203\") " pod="openstack/swift-proxy-5d969b7b55-l9p8p" Jan 26 13:20:26 crc kubenswrapper[4844]: I0126 13:20:26.438717 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbzfh\" (UniqueName: \"kubernetes.io/projected/e8e7e0c6-a150-4957-8e36-2f75d269e203-kube-api-access-zbzfh\") pod \"swift-proxy-5d969b7b55-l9p8p\" (UID: \"e8e7e0c6-a150-4957-8e36-2f75d269e203\") " pod="openstack/swift-proxy-5d969b7b55-l9p8p" Jan 26 13:20:26 crc kubenswrapper[4844]: I0126 13:20:26.438750 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8e7e0c6-a150-4957-8e36-2f75d269e203-combined-ca-bundle\") pod \"swift-proxy-5d969b7b55-l9p8p\" (UID: \"e8e7e0c6-a150-4957-8e36-2f75d269e203\") " pod="openstack/swift-proxy-5d969b7b55-l9p8p" Jan 26 13:20:26 crc kubenswrapper[4844]: I0126 13:20:26.438774 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e8e7e0c6-a150-4957-8e36-2f75d269e203-etc-swift\") pod \"swift-proxy-5d969b7b55-l9p8p\" (UID: \"e8e7e0c6-a150-4957-8e36-2f75d269e203\") " pod="openstack/swift-proxy-5d969b7b55-l9p8p" Jan 26 13:20:26 crc kubenswrapper[4844]: I0126 13:20:26.438813 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e8e7e0c6-a150-4957-8e36-2f75d269e203-log-httpd\") pod \"swift-proxy-5d969b7b55-l9p8p\" (UID: \"e8e7e0c6-a150-4957-8e36-2f75d269e203\") " pod="openstack/swift-proxy-5d969b7b55-l9p8p" Jan 26 13:20:26 crc kubenswrapper[4844]: I0126 13:20:26.438866 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8e7e0c6-a150-4957-8e36-2f75d269e203-internal-tls-certs\") pod \"swift-proxy-5d969b7b55-l9p8p\" (UID: \"e8e7e0c6-a150-4957-8e36-2f75d269e203\") " pod="openstack/swift-proxy-5d969b7b55-l9p8p" Jan 26 13:20:26 crc kubenswrapper[4844]: I0126 13:20:26.438899 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8e7e0c6-a150-4957-8e36-2f75d269e203-config-data\") pod \"swift-proxy-5d969b7b55-l9p8p\" (UID: \"e8e7e0c6-a150-4957-8e36-2f75d269e203\") " pod="openstack/swift-proxy-5d969b7b55-l9p8p" Jan 26 
13:20:26 crc kubenswrapper[4844]: I0126 13:20:26.443036 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e8e7e0c6-a150-4957-8e36-2f75d269e203-log-httpd\") pod \"swift-proxy-5d969b7b55-l9p8p\" (UID: \"e8e7e0c6-a150-4957-8e36-2f75d269e203\") " pod="openstack/swift-proxy-5d969b7b55-l9p8p" Jan 26 13:20:26 crc kubenswrapper[4844]: I0126 13:20:26.443087 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e8e7e0c6-a150-4957-8e36-2f75d269e203-run-httpd\") pod \"swift-proxy-5d969b7b55-l9p8p\" (UID: \"e8e7e0c6-a150-4957-8e36-2f75d269e203\") " pod="openstack/swift-proxy-5d969b7b55-l9p8p" Jan 26 13:20:26 crc kubenswrapper[4844]: I0126 13:20:26.445397 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8e7e0c6-a150-4957-8e36-2f75d269e203-combined-ca-bundle\") pod \"swift-proxy-5d969b7b55-l9p8p\" (UID: \"e8e7e0c6-a150-4957-8e36-2f75d269e203\") " pod="openstack/swift-proxy-5d969b7b55-l9p8p" Jan 26 13:20:26 crc kubenswrapper[4844]: I0126 13:20:26.450033 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8e7e0c6-a150-4957-8e36-2f75d269e203-public-tls-certs\") pod \"swift-proxy-5d969b7b55-l9p8p\" (UID: \"e8e7e0c6-a150-4957-8e36-2f75d269e203\") " pod="openstack/swift-proxy-5d969b7b55-l9p8p" Jan 26 13:20:26 crc kubenswrapper[4844]: I0126 13:20:26.450233 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8e7e0c6-a150-4957-8e36-2f75d269e203-internal-tls-certs\") pod \"swift-proxy-5d969b7b55-l9p8p\" (UID: \"e8e7e0c6-a150-4957-8e36-2f75d269e203\") " pod="openstack/swift-proxy-5d969b7b55-l9p8p" Jan 26 13:20:26 crc kubenswrapper[4844]: I0126 13:20:26.450472 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8e7e0c6-a150-4957-8e36-2f75d269e203-config-data\") pod \"swift-proxy-5d969b7b55-l9p8p\" (UID: \"e8e7e0c6-a150-4957-8e36-2f75d269e203\") " pod="openstack/swift-proxy-5d969b7b55-l9p8p" Jan 26 13:20:26 crc kubenswrapper[4844]: I0126 13:20:26.454528 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e8e7e0c6-a150-4957-8e36-2f75d269e203-etc-swift\") pod \"swift-proxy-5d969b7b55-l9p8p\" (UID: \"e8e7e0c6-a150-4957-8e36-2f75d269e203\") " pod="openstack/swift-proxy-5d969b7b55-l9p8p" Jan 26 13:20:26 crc kubenswrapper[4844]: I0126 13:20:26.467113 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbzfh\" (UniqueName: \"kubernetes.io/projected/e8e7e0c6-a150-4957-8e36-2f75d269e203-kube-api-access-zbzfh\") pod \"swift-proxy-5d969b7b55-l9p8p\" (UID: \"e8e7e0c6-a150-4957-8e36-2f75d269e203\") " pod="openstack/swift-proxy-5d969b7b55-l9p8p" Jan 26 13:20:26 crc kubenswrapper[4844]: I0126 13:20:26.554161 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-5d969b7b55-l9p8p" Jan 26 13:20:27 crc kubenswrapper[4844]: I0126 13:20:27.011774 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 26 13:20:27 crc kubenswrapper[4844]: I0126 13:20:27.011864 4844 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 13:20:27 crc kubenswrapper[4844]: I0126 13:20:27.025267 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 26 13:20:27 crc kubenswrapper[4844]: I0126 13:20:27.025368 4844 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 13:20:27 crc kubenswrapper[4844]: I0126 13:20:27.079180 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 26 13:20:27 crc kubenswrapper[4844]: I0126 13:20:27.154811 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 26 13:20:27 crc kubenswrapper[4844]: E0126 13:20:27.439308 4844 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podef403703_395e_4db1_a9f5_a8e011e39ff2.slice\": RecentStats: unable to find data in memory cache]" Jan 26 13:20:28 crc kubenswrapper[4844]: I0126 13:20:28.640263 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 26 13:20:30 crc kubenswrapper[4844]: I0126 13:20:30.612168 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 26 13:20:30 crc kubenswrapper[4844]: I0126 13:20:30.617508 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-bb4bbcbbd-hnxlf" Jan 26 13:20:31 crc kubenswrapper[4844]: W0126 13:20:31.397877 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode8e7e0c6_a150_4957_8e36_2f75d269e203.slice/crio-4d73442e9b6e0c1a416af4ccd77c6cf6b938c548efe658166f6770b24f65985c WatchSource:0}: Error finding container 4d73442e9b6e0c1a416af4ccd77c6cf6b938c548efe658166f6770b24f65985c: Status 404 returned error can't find the container with id 4d73442e9b6e0c1a416af4ccd77c6cf6b938c548efe658166f6770b24f65985c Jan 26 13:20:31 crc kubenswrapper[4844]: I0126 13:20:31.407256 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-5d969b7b55-l9p8p"] Jan 26 13:20:31 crc kubenswrapper[4844]: I0126 13:20:31.428662 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 13:20:31 crc kubenswrapper[4844]: I0126 13:20:31.428896 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="f8576337-1537-4b93-8d68-829d6bdb8a44" containerName="glance-log" containerID="cri-o://f146d5e904901fde282f115e9db4329dac1a4682ce347aee3a96bc73d6a73f52" gracePeriod=30 Jan 26 13:20:31 crc kubenswrapper[4844]: I0126 13:20:31.429341 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="f8576337-1537-4b93-8d68-829d6bdb8a44" containerName="glance-httpd" containerID="cri-o://76d186df983a4311c39bb131d05901edbae1caed5ec77d7ca63e96878e3cee73" gracePeriod=30 Jan 26 13:20:31 crc 
kubenswrapper[4844]: I0126 13:20:31.833137 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 13:20:31 crc kubenswrapper[4844]: I0126 13:20:31.833371 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="388147f6-5b13-4111-9d1f-fe317038852d" containerName="ceilometer-central-agent" containerID="cri-o://42495dda29d29e62f3e3e9573d76c490c019e49f761a8cb521a79411ec5a1ac3" gracePeriod=30 Jan 26 13:20:31 crc kubenswrapper[4844]: I0126 13:20:31.833766 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="388147f6-5b13-4111-9d1f-fe317038852d" containerName="proxy-httpd" containerID="cri-o://652384a1a113107dec6c823ed50bdaaad3c621f614e3593b9879c6365df3e8c0" gracePeriod=30 Jan 26 13:20:31 crc kubenswrapper[4844]: I0126 13:20:31.833807 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="388147f6-5b13-4111-9d1f-fe317038852d" containerName="sg-core" containerID="cri-o://c63f9648a87ce352c12d0c8a5c8ab3586be5e8ccaa9d12b3be1eb58e72199be6" gracePeriod=30 Jan 26 13:20:31 crc kubenswrapper[4844]: I0126 13:20:31.833843 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="388147f6-5b13-4111-9d1f-fe317038852d" containerName="ceilometer-notification-agent" containerID="cri-o://622c5f1bda16149b59c0bc280898bd71a037aa92dba8fea1e55f57776f6eaa73" gracePeriod=30 Jan 26 13:20:32 crc kubenswrapper[4844]: I0126 13:20:32.016424 4844 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/watcher-decision-engine-0" Jan 26 13:20:32 crc kubenswrapper[4844]: I0126 13:20:32.016480 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Jan 26 13:20:32 crc kubenswrapper[4844]: I0126 13:20:32.017151 4844 scope.go:117] "RemoveContainer" containerID="f40661e9cae1344ff8df85b9eb11c5a53401a5c8932da25e88f55fc3d9a6f8f8" Jan 26 13:20:32 crc kubenswrapper[4844]: I0126 13:20:32.129028 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"d831cf25-12e3-4375-88ae-4ce13c139248","Type":"ContainerStarted","Data":"5a57568469634c050ea411a579898eecd98c528fc3070ac32e8f23a718c3004b"} Jan 26 13:20:32 crc kubenswrapper[4844]: I0126 13:20:32.134586 4844 generic.go:334] "Generic (PLEG): container finished" podID="388147f6-5b13-4111-9d1f-fe317038852d" containerID="c63f9648a87ce352c12d0c8a5c8ab3586be5e8ccaa9d12b3be1eb58e72199be6" exitCode=2 Jan 26 13:20:32 crc kubenswrapper[4844]: I0126 13:20:32.134676 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"388147f6-5b13-4111-9d1f-fe317038852d","Type":"ContainerDied","Data":"c63f9648a87ce352c12d0c8a5c8ab3586be5e8ccaa9d12b3be1eb58e72199be6"} Jan 26 13:20:32 crc kubenswrapper[4844]: I0126 13:20:32.137324 4844 generic.go:334] "Generic (PLEG): container finished" podID="f8576337-1537-4b93-8d68-829d6bdb8a44" containerID="f146d5e904901fde282f115e9db4329dac1a4682ce347aee3a96bc73d6a73f52" exitCode=143 Jan 26 13:20:32 crc kubenswrapper[4844]: I0126 13:20:32.137398 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f8576337-1537-4b93-8d68-829d6bdb8a44","Type":"ContainerDied","Data":"f146d5e904901fde282f115e9db4329dac1a4682ce347aee3a96bc73d6a73f52"} Jan 26 13:20:32 crc kubenswrapper[4844]: I0126 13:20:32.139514 4844 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5d969b7b55-l9p8p" event={"ID":"e8e7e0c6-a150-4957-8e36-2f75d269e203","Type":"ContainerStarted","Data":"29c941cd6e218ebc1849400796042705e4ae81afb510dc3141a09f96103d5687"} Jan 26 13:20:32 crc kubenswrapper[4844]: I0126 13:20:32.139647 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5d969b7b55-l9p8p" event={"ID":"e8e7e0c6-a150-4957-8e36-2f75d269e203","Type":"ContainerStarted","Data":"604272705ec62f996de604d6bf4cfe59ba0c3ea3c3b9795f41e492e4c6448051"} Jan 26 13:20:32 crc kubenswrapper[4844]: I0126 13:20:32.139724 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5d969b7b55-l9p8p" event={"ID":"e8e7e0c6-a150-4957-8e36-2f75d269e203","Type":"ContainerStarted","Data":"4d73442e9b6e0c1a416af4ccd77c6cf6b938c548efe658166f6770b24f65985c"} Jan 26 13:20:32 crc kubenswrapper[4844]: I0126 13:20:32.139803 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-5d969b7b55-l9p8p" Jan 26 13:20:32 crc kubenswrapper[4844]: I0126 13:20:32.147519 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.386691388 podStartE2EDuration="13.147502448s" podCreationTimestamp="2026-01-26 13:20:19 +0000 UTC" firstStartedPulling="2026-01-26 13:20:20.065753483 +0000 UTC m=+2196.999121095" lastFinishedPulling="2026-01-26 13:20:30.826564543 +0000 UTC m=+2207.759932155" observedRunningTime="2026-01-26 13:20:32.144351471 +0000 UTC m=+2209.077719103" watchObservedRunningTime="2026-01-26 13:20:32.147502448 +0000 UTC m=+2209.080870080" Jan 26 13:20:32 crc kubenswrapper[4844]: I0126 13:20:32.178253 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-5d969b7b55-l9p8p" podStartSLOduration=6.17822916 podStartE2EDuration="6.17822916s" podCreationTimestamp="2026-01-26 13:20:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 13:20:32.166462816 +0000 UTC m=+2209.099830438" watchObservedRunningTime="2026-01-26 13:20:32.17822916 +0000 UTC m=+2209.111596782" Jan 26 13:20:33 crc kubenswrapper[4844]: I0126 13:20:33.207071 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"ed782618-8b69-4456-9aec-5184e765960f","Type":"ContainerStarted","Data":"f778593c77f19cd971369cd93f107ce9557b6ff677fcdb7bf966fe9cde611212"} Jan 26 13:20:33 crc kubenswrapper[4844]: I0126 13:20:33.227045 4844 generic.go:334] "Generic (PLEG): container finished" podID="f8576337-1537-4b93-8d68-829d6bdb8a44" containerID="76d186df983a4311c39bb131d05901edbae1caed5ec77d7ca63e96878e3cee73" exitCode=0 Jan 26 13:20:33 crc kubenswrapper[4844]: I0126 13:20:33.227107 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f8576337-1537-4b93-8d68-829d6bdb8a44","Type":"ContainerDied","Data":"76d186df983a4311c39bb131d05901edbae1caed5ec77d7ca63e96878e3cee73"} Jan 26 13:20:33 crc kubenswrapper[4844]: I0126 13:20:33.234891 4844 generic.go:334] "Generic (PLEG): container finished" podID="2f336c66-c9c1-4764-8f55-a6fd70f01790" containerID="b4a28fc027238c2c642ef160a8fb190c22d5b2b5a5c62897b96d66146b947b9e" exitCode=137 Jan 26 13:20:33 crc kubenswrapper[4844]: I0126 13:20:33.234945 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-f984df9c6-m8lct" 
event={"ID":"2f336c66-c9c1-4764-8f55-a6fd70f01790","Type":"ContainerDied","Data":"b4a28fc027238c2c642ef160a8fb190c22d5b2b5a5c62897b96d66146b947b9e"} Jan 26 13:20:33 crc kubenswrapper[4844]: I0126 13:20:33.260101 4844 generic.go:334] "Generic (PLEG): container finished" podID="388147f6-5b13-4111-9d1f-fe317038852d" containerID="652384a1a113107dec6c823ed50bdaaad3c621f614e3593b9879c6365df3e8c0" exitCode=0 Jan 26 13:20:33 crc kubenswrapper[4844]: I0126 13:20:33.260143 4844 generic.go:334] "Generic (PLEG): container finished" podID="388147f6-5b13-4111-9d1f-fe317038852d" containerID="42495dda29d29e62f3e3e9573d76c490c019e49f761a8cb521a79411ec5a1ac3" exitCode=0 Jan 26 13:20:33 crc kubenswrapper[4844]: I0126 13:20:33.260939 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"388147f6-5b13-4111-9d1f-fe317038852d","Type":"ContainerDied","Data":"652384a1a113107dec6c823ed50bdaaad3c621f614e3593b9879c6365df3e8c0"} Jan 26 13:20:33 crc kubenswrapper[4844]: I0126 13:20:33.260967 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"388147f6-5b13-4111-9d1f-fe317038852d","Type":"ContainerDied","Data":"42495dda29d29e62f3e3e9573d76c490c019e49f761a8cb521a79411ec5a1ac3"} Jan 26 13:20:33 crc kubenswrapper[4844]: I0126 13:20:33.261397 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-5d969b7b55-l9p8p" Jan 26 13:20:33 crc kubenswrapper[4844]: I0126 13:20:33.376816 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-f984df9c6-m8lct" Jan 26 13:20:33 crc kubenswrapper[4844]: I0126 13:20:33.492220 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2f336c66-c9c1-4764-8f55-a6fd70f01790-config-data\") pod \"2f336c66-c9c1-4764-8f55-a6fd70f01790\" (UID: \"2f336c66-c9c1-4764-8f55-a6fd70f01790\") " Jan 26 13:20:33 crc kubenswrapper[4844]: I0126 13:20:33.492569 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qz8sp\" (UniqueName: \"kubernetes.io/projected/2f336c66-c9c1-4764-8f55-a6fd70f01790-kube-api-access-qz8sp\") pod \"2f336c66-c9c1-4764-8f55-a6fd70f01790\" (UID: \"2f336c66-c9c1-4764-8f55-a6fd70f01790\") " Jan 26 13:20:33 crc kubenswrapper[4844]: I0126 13:20:33.492637 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2f336c66-c9c1-4764-8f55-a6fd70f01790-logs\") pod \"2f336c66-c9c1-4764-8f55-a6fd70f01790\" (UID: \"2f336c66-c9c1-4764-8f55-a6fd70f01790\") " Jan 26 13:20:33 crc kubenswrapper[4844]: I0126 13:20:33.492767 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f336c66-c9c1-4764-8f55-a6fd70f01790-horizon-tls-certs\") pod \"2f336c66-c9c1-4764-8f55-a6fd70f01790\" (UID: \"2f336c66-c9c1-4764-8f55-a6fd70f01790\") " Jan 26 13:20:33 crc kubenswrapper[4844]: I0126 13:20:33.492894 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2f336c66-c9c1-4764-8f55-a6fd70f01790-scripts\") pod \"2f336c66-c9c1-4764-8f55-a6fd70f01790\" (UID: \"2f336c66-c9c1-4764-8f55-a6fd70f01790\") " Jan 26 13:20:33 crc kubenswrapper[4844]: I0126 13:20:33.492925 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/2f336c66-c9c1-4764-8f55-a6fd70f01790-horizon-secret-key\") pod \"2f336c66-c9c1-4764-8f55-a6fd70f01790\" (UID: \"2f336c66-c9c1-4764-8f55-a6fd70f01790\") " Jan 26 13:20:33 crc kubenswrapper[4844]: I0126 13:20:33.492975 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f336c66-c9c1-4764-8f55-a6fd70f01790-combined-ca-bundle\") pod \"2f336c66-c9c1-4764-8f55-a6fd70f01790\" (UID: \"2f336c66-c9c1-4764-8f55-a6fd70f01790\") " Jan 26 13:20:33 crc kubenswrapper[4844]: I0126 13:20:33.494144 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f336c66-c9c1-4764-8f55-a6fd70f01790-logs" (OuterVolumeSpecName: "logs") pod "2f336c66-c9c1-4764-8f55-a6fd70f01790" (UID: "2f336c66-c9c1-4764-8f55-a6fd70f01790"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:20:33 crc kubenswrapper[4844]: I0126 13:20:33.499796 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f336c66-c9c1-4764-8f55-a6fd70f01790-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "2f336c66-c9c1-4764-8f55-a6fd70f01790" (UID: "2f336c66-c9c1-4764-8f55-a6fd70f01790"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:20:33 crc kubenswrapper[4844]: I0126 13:20:33.499968 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f336c66-c9c1-4764-8f55-a6fd70f01790-kube-api-access-qz8sp" (OuterVolumeSpecName: "kube-api-access-qz8sp") pod "2f336c66-c9c1-4764-8f55-a6fd70f01790" (UID: "2f336c66-c9c1-4764-8f55-a6fd70f01790"). InnerVolumeSpecName "kube-api-access-qz8sp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:20:33 crc kubenswrapper[4844]: I0126 13:20:33.521489 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f336c66-c9c1-4764-8f55-a6fd70f01790-config-data" (OuterVolumeSpecName: "config-data") pod "2f336c66-c9c1-4764-8f55-a6fd70f01790" (UID: "2f336c66-c9c1-4764-8f55-a6fd70f01790"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:20:33 crc kubenswrapper[4844]: I0126 13:20:33.522641 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f336c66-c9c1-4764-8f55-a6fd70f01790-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2f336c66-c9c1-4764-8f55-a6fd70f01790" (UID: "2f336c66-c9c1-4764-8f55-a6fd70f01790"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:20:33 crc kubenswrapper[4844]: I0126 13:20:33.541545 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f336c66-c9c1-4764-8f55-a6fd70f01790-scripts" (OuterVolumeSpecName: "scripts") pod "2f336c66-c9c1-4764-8f55-a6fd70f01790" (UID: "2f336c66-c9c1-4764-8f55-a6fd70f01790"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:20:33 crc kubenswrapper[4844]: I0126 13:20:33.561537 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f336c66-c9c1-4764-8f55-a6fd70f01790-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "2f336c66-c9c1-4764-8f55-a6fd70f01790" (UID: "2f336c66-c9c1-4764-8f55-a6fd70f01790"). InnerVolumeSpecName "horizon-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:20:33 crc kubenswrapper[4844]: I0126 13:20:33.573445 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 26 13:20:33 crc kubenswrapper[4844]: I0126 13:20:33.595036 4844 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2f336c66-c9c1-4764-8f55-a6fd70f01790-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 13:20:33 crc kubenswrapper[4844]: I0126 13:20:33.595063 4844 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2f336c66-c9c1-4764-8f55-a6fd70f01790-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 26 13:20:33 crc kubenswrapper[4844]: I0126 13:20:33.595073 4844 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f336c66-c9c1-4764-8f55-a6fd70f01790-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 13:20:33 crc kubenswrapper[4844]: I0126 13:20:33.595081 4844 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2f336c66-c9c1-4764-8f55-a6fd70f01790-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 13:20:33 crc kubenswrapper[4844]: I0126 13:20:33.595091 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qz8sp\" (UniqueName: \"kubernetes.io/projected/2f336c66-c9c1-4764-8f55-a6fd70f01790-kube-api-access-qz8sp\") on node \"crc\" DevicePath \"\"" Jan 26 13:20:33 crc kubenswrapper[4844]: I0126 13:20:33.595102 4844 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2f336c66-c9c1-4764-8f55-a6fd70f01790-logs\") on node \"crc\" DevicePath \"\"" Jan 26 13:20:33 crc kubenswrapper[4844]: I0126 13:20:33.595111 4844 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f336c66-c9c1-4764-8f55-a6fd70f01790-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 13:20:33 crc kubenswrapper[4844]: I0126 13:20:33.696060 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f8576337-1537-4b93-8d68-829d6bdb8a44-logs\") pod \"f8576337-1537-4b93-8d68-829d6bdb8a44\" (UID: \"f8576337-1537-4b93-8d68-829d6bdb8a44\") " Jan 26 13:20:33 crc kubenswrapper[4844]: I0126 13:20:33.696100 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f8576337-1537-4b93-8d68-829d6bdb8a44-httpd-run\") pod \"f8576337-1537-4b93-8d68-829d6bdb8a44\" (UID: \"f8576337-1537-4b93-8d68-829d6bdb8a44\") " Jan 26 13:20:33 crc kubenswrapper[4844]: I0126 13:20:33.696168 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f8576337-1537-4b93-8d68-829d6bdb8a44-public-tls-certs\") pod \"f8576337-1537-4b93-8d68-829d6bdb8a44\" (UID: \"f8576337-1537-4b93-8d68-829d6bdb8a44\") " Jan 26 13:20:33 crc kubenswrapper[4844]: I0126 13:20:33.696526 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f8576337-1537-4b93-8d68-829d6bdb8a44-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "f8576337-1537-4b93-8d68-829d6bdb8a44" (UID: "f8576337-1537-4b93-8d68-829d6bdb8a44"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:20:33 crc kubenswrapper[4844]: I0126 13:20:33.696642 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f8576337-1537-4b93-8d68-829d6bdb8a44-logs" (OuterVolumeSpecName: "logs") pod "f8576337-1537-4b93-8d68-829d6bdb8a44" (UID: "f8576337-1537-4b93-8d68-829d6bdb8a44"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:20:33 crc kubenswrapper[4844]: I0126 13:20:33.696589 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8576337-1537-4b93-8d68-829d6bdb8a44-combined-ca-bundle\") pod \"f8576337-1537-4b93-8d68-829d6bdb8a44\" (UID: \"f8576337-1537-4b93-8d68-829d6bdb8a44\") " Jan 26 13:20:33 crc kubenswrapper[4844]: I0126 13:20:33.696891 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8576337-1537-4b93-8d68-829d6bdb8a44-config-data\") pod \"f8576337-1537-4b93-8d68-829d6bdb8a44\" (UID: \"f8576337-1537-4b93-8d68-829d6bdb8a44\") " Jan 26 13:20:33 crc kubenswrapper[4844]: I0126 13:20:33.696924 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v5ftl\" (UniqueName: \"kubernetes.io/projected/f8576337-1537-4b93-8d68-829d6bdb8a44-kube-api-access-v5ftl\") pod \"f8576337-1537-4b93-8d68-829d6bdb8a44\" (UID: \"f8576337-1537-4b93-8d68-829d6bdb8a44\") " Jan 26 13:20:33 crc kubenswrapper[4844]: I0126 13:20:33.696957 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f8576337-1537-4b93-8d68-829d6bdb8a44-scripts\") pod \"f8576337-1537-4b93-8d68-829d6bdb8a44\" (UID: \"f8576337-1537-4b93-8d68-829d6bdb8a44\") " Jan 26 13:20:33 crc kubenswrapper[4844]: I0126 13:20:33.696977 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"f8576337-1537-4b93-8d68-829d6bdb8a44\" (UID: \"f8576337-1537-4b93-8d68-829d6bdb8a44\") " Jan 26 13:20:33 crc kubenswrapper[4844]: I0126 13:20:33.697417 4844 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f8576337-1537-4b93-8d68-829d6bdb8a44-logs\") on node \"crc\" DevicePath \"\"" Jan 26 13:20:33 crc kubenswrapper[4844]: I0126 13:20:33.697432 4844 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f8576337-1537-4b93-8d68-829d6bdb8a44-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 26 13:20:33 crc kubenswrapper[4844]: I0126 13:20:33.724467 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8576337-1537-4b93-8d68-829d6bdb8a44-scripts" (OuterVolumeSpecName: "scripts") pod "f8576337-1537-4b93-8d68-829d6bdb8a44" (UID: "f8576337-1537-4b93-8d68-829d6bdb8a44"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:20:33 crc kubenswrapper[4844]: I0126 13:20:33.724751 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance") pod "f8576337-1537-4b93-8d68-829d6bdb8a44" (UID: "f8576337-1537-4b93-8d68-829d6bdb8a44"). InnerVolumeSpecName "local-storage03-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 26 13:20:33 crc kubenswrapper[4844]: I0126 13:20:33.727795 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8576337-1537-4b93-8d68-829d6bdb8a44-kube-api-access-v5ftl" (OuterVolumeSpecName: "kube-api-access-v5ftl") pod "f8576337-1537-4b93-8d68-829d6bdb8a44" (UID: "f8576337-1537-4b93-8d68-829d6bdb8a44"). InnerVolumeSpecName "kube-api-access-v5ftl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:20:33 crc kubenswrapper[4844]: I0126 13:20:33.798781 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v5ftl\" (UniqueName: \"kubernetes.io/projected/f8576337-1537-4b93-8d68-829d6bdb8a44-kube-api-access-v5ftl\") on node \"crc\" DevicePath \"\"" Jan 26 13:20:33 crc kubenswrapper[4844]: I0126 13:20:33.798816 4844 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f8576337-1537-4b93-8d68-829d6bdb8a44-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 13:20:33 crc kubenswrapper[4844]: I0126 13:20:33.798836 4844 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Jan 26 13:20:33 crc kubenswrapper[4844]: I0126 13:20:33.818486 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8576337-1537-4b93-8d68-829d6bdb8a44-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f8576337-1537-4b93-8d68-829d6bdb8a44" (UID: "f8576337-1537-4b93-8d68-829d6bdb8a44"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:20:33 crc kubenswrapper[4844]: I0126 13:20:33.854449 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8576337-1537-4b93-8d68-829d6bdb8a44-config-data" (OuterVolumeSpecName: "config-data") pod "f8576337-1537-4b93-8d68-829d6bdb8a44" (UID: "f8576337-1537-4b93-8d68-829d6bdb8a44"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:20:33 crc kubenswrapper[4844]: I0126 13:20:33.856169 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8576337-1537-4b93-8d68-829d6bdb8a44-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "f8576337-1537-4b93-8d68-829d6bdb8a44" (UID: "f8576337-1537-4b93-8d68-829d6bdb8a44"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:20:33 crc kubenswrapper[4844]: I0126 13:20:33.856410 4844 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Jan 26 13:20:33 crc kubenswrapper[4844]: I0126 13:20:33.901147 4844 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8576337-1537-4b93-8d68-829d6bdb8a44-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 13:20:33 crc kubenswrapper[4844]: I0126 13:20:33.901246 4844 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Jan 26 13:20:33 crc kubenswrapper[4844]: I0126 13:20:33.901315 4844 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f8576337-1537-4b93-8d68-829d6bdb8a44-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 13:20:33 crc kubenswrapper[4844]: I0126 13:20:33.901374 4844 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8576337-1537-4b93-8d68-829d6bdb8a44-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 13:20:34 crc kubenswrapper[4844]: I0126 13:20:34.270045 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f8576337-1537-4b93-8d68-829d6bdb8a44","Type":"ContainerDied","Data":"3f22adfdf18e56267abb2f14d250290a91d23159c76e670a12967cfa0fdb0496"} Jan 26 13:20:34 crc kubenswrapper[4844]: I0126 13:20:34.270359 4844 scope.go:117] "RemoveContainer" containerID="76d186df983a4311c39bb131d05901edbae1caed5ec77d7ca63e96878e3cee73" Jan 26 13:20:34 crc kubenswrapper[4844]: I0126 13:20:34.270487 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 26 13:20:34 crc kubenswrapper[4844]: I0126 13:20:34.296287 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-f984df9c6-m8lct" event={"ID":"2f336c66-c9c1-4764-8f55-a6fd70f01790","Type":"ContainerDied","Data":"8cbeeabeda98d6efd19df33bdbcb67b60c23ab160c94c8324901cc866386fc92"} Jan 26 13:20:34 crc kubenswrapper[4844]: I0126 13:20:34.296315 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-f984df9c6-m8lct" Jan 26 13:20:34 crc kubenswrapper[4844]: I0126 13:20:34.313267 4844 scope.go:117] "RemoveContainer" containerID="f146d5e904901fde282f115e9db4329dac1a4682ce347aee3a96bc73d6a73f52" Jan 26 13:20:34 crc kubenswrapper[4844]: I0126 13:20:34.324911 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 13:20:34 crc kubenswrapper[4844]: I0126 13:20:34.340109 4844 scope.go:117] "RemoveContainer" containerID="c6ebce027282a49648d65f221d8df430e516930ebe722a6821d99749d3838a00" Jan 26 13:20:34 crc kubenswrapper[4844]: I0126 13:20:34.355805 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 13:20:34 crc kubenswrapper[4844]: I0126 13:20:34.371922 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-f984df9c6-m8lct"] Jan 26 13:20:34 crc kubenswrapper[4844]: I0126 13:20:34.382737 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-f984df9c6-m8lct"] Jan 26 13:20:34 crc kubenswrapper[4844]: I0126 13:20:34.395879 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 13:20:34 crc kubenswrapper[4844]: E0126 13:20:34.396350 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8576337-1537-4b93-8d68-829d6bdb8a44" containerName="glance-httpd" Jan 26 13:20:34 crc kubenswrapper[4844]: I0126 13:20:34.396365 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8576337-1537-4b93-8d68-829d6bdb8a44" containerName="glance-httpd" Jan 26 13:20:34 crc kubenswrapper[4844]: E0126 13:20:34.396381 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8576337-1537-4b93-8d68-829d6bdb8a44" containerName="glance-log" Jan 26 13:20:34 crc kubenswrapper[4844]: I0126 13:20:34.396388 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8576337-1537-4b93-8d68-829d6bdb8a44" containerName="glance-log" Jan 26 13:20:34 crc kubenswrapper[4844]: E0126 13:20:34.396418 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f336c66-c9c1-4764-8f55-a6fd70f01790" containerName="horizon" Jan 26 13:20:34 crc kubenswrapper[4844]: I0126 13:20:34.396425 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f336c66-c9c1-4764-8f55-a6fd70f01790" containerName="horizon" Jan 26 13:20:34 crc kubenswrapper[4844]: E0126 13:20:34.396438 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f336c66-c9c1-4764-8f55-a6fd70f01790" containerName="horizon-log" Jan 26 13:20:34 crc kubenswrapper[4844]: I0126 13:20:34.396443 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f336c66-c9c1-4764-8f55-a6fd70f01790" containerName="horizon-log" Jan 26 13:20:34 crc kubenswrapper[4844]: I0126 13:20:34.396625 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f336c66-c9c1-4764-8f55-a6fd70f01790" containerName="horizon-log" Jan 26 13:20:34 crc kubenswrapper[4844]: I0126 13:20:34.396646 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f336c66-c9c1-4764-8f55-a6fd70f01790" containerName="horizon" Jan 26 13:20:34 crc kubenswrapper[4844]: I0126 13:20:34.396658 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8576337-1537-4b93-8d68-829d6bdb8a44" containerName="glance-httpd" Jan 26 13:20:34 crc kubenswrapper[4844]: I0126 13:20:34.396668 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8576337-1537-4b93-8d68-829d6bdb8a44" 
containerName="glance-log" Jan 26 13:20:34 crc kubenswrapper[4844]: I0126 13:20:34.397994 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 26 13:20:34 crc kubenswrapper[4844]: I0126 13:20:34.400839 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 13:20:34 crc kubenswrapper[4844]: I0126 13:20:34.404630 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 26 13:20:34 crc kubenswrapper[4844]: I0126 13:20:34.404699 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 26 13:20:34 crc kubenswrapper[4844]: I0126 13:20:34.524702 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65fceb02-1fd4-4b60-a767-f2d232539d43-config-data\") pod \"glance-default-external-api-0\" (UID: \"65fceb02-1fd4-4b60-a767-f2d232539d43\") " pod="openstack/glance-default-external-api-0" Jan 26 13:20:34 crc kubenswrapper[4844]: I0126 13:20:34.524851 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65fceb02-1fd4-4b60-a767-f2d232539d43-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"65fceb02-1fd4-4b60-a767-f2d232539d43\") " pod="openstack/glance-default-external-api-0" Jan 26 13:20:34 crc kubenswrapper[4844]: I0126 13:20:34.524892 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"65fceb02-1fd4-4b60-a767-f2d232539d43\") " pod="openstack/glance-default-external-api-0" Jan 26 13:20:34 crc kubenswrapper[4844]: I0126 13:20:34.525001 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dr8n\" (UniqueName: \"kubernetes.io/projected/65fceb02-1fd4-4b60-a767-f2d232539d43-kube-api-access-9dr8n\") pod \"glance-default-external-api-0\" (UID: \"65fceb02-1fd4-4b60-a767-f2d232539d43\") " pod="openstack/glance-default-external-api-0" Jan 26 13:20:34 crc kubenswrapper[4844]: I0126 13:20:34.525297 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/65fceb02-1fd4-4b60-a767-f2d232539d43-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"65fceb02-1fd4-4b60-a767-f2d232539d43\") " pod="openstack/glance-default-external-api-0" Jan 26 13:20:34 crc kubenswrapper[4844]: I0126 13:20:34.525461 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65fceb02-1fd4-4b60-a767-f2d232539d43-scripts\") pod \"glance-default-external-api-0\" (UID: \"65fceb02-1fd4-4b60-a767-f2d232539d43\") " pod="openstack/glance-default-external-api-0" Jan 26 13:20:34 crc kubenswrapper[4844]: I0126 13:20:34.525540 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/65fceb02-1fd4-4b60-a767-f2d232539d43-logs\") pod \"glance-default-external-api-0\" (UID: \"65fceb02-1fd4-4b60-a767-f2d232539d43\") " pod="openstack/glance-default-external-api-0" Jan 26 
13:20:34 crc kubenswrapper[4844]: I0126 13:20:34.525638 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/65fceb02-1fd4-4b60-a767-f2d232539d43-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"65fceb02-1fd4-4b60-a767-f2d232539d43\") " pod="openstack/glance-default-external-api-0" Jan 26 13:20:34 crc kubenswrapper[4844]: I0126 13:20:34.564543 4844 scope.go:117] "RemoveContainer" containerID="b4a28fc027238c2c642ef160a8fb190c22d5b2b5a5c62897b96d66146b947b9e" Jan 26 13:20:34 crc kubenswrapper[4844]: I0126 13:20:34.627469 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65fceb02-1fd4-4b60-a767-f2d232539d43-config-data\") pod \"glance-default-external-api-0\" (UID: \"65fceb02-1fd4-4b60-a767-f2d232539d43\") " pod="openstack/glance-default-external-api-0" Jan 26 13:20:34 crc kubenswrapper[4844]: I0126 13:20:34.627779 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65fceb02-1fd4-4b60-a767-f2d232539d43-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"65fceb02-1fd4-4b60-a767-f2d232539d43\") " pod="openstack/glance-default-external-api-0" Jan 26 13:20:34 crc kubenswrapper[4844]: I0126 13:20:34.627900 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"65fceb02-1fd4-4b60-a767-f2d232539d43\") " pod="openstack/glance-default-external-api-0" Jan 26 13:20:34 crc kubenswrapper[4844]: I0126 13:20:34.628003 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dr8n\" (UniqueName: \"kubernetes.io/projected/65fceb02-1fd4-4b60-a767-f2d232539d43-kube-api-access-9dr8n\") pod \"glance-default-external-api-0\" (UID: \"65fceb02-1fd4-4b60-a767-f2d232539d43\") " pod="openstack/glance-default-external-api-0" Jan 26 13:20:34 crc kubenswrapper[4844]: I0126 13:20:34.628103 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/65fceb02-1fd4-4b60-a767-f2d232539d43-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"65fceb02-1fd4-4b60-a767-f2d232539d43\") " pod="openstack/glance-default-external-api-0" Jan 26 13:20:34 crc kubenswrapper[4844]: I0126 13:20:34.628236 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65fceb02-1fd4-4b60-a767-f2d232539d43-scripts\") pod \"glance-default-external-api-0\" (UID: \"65fceb02-1fd4-4b60-a767-f2d232539d43\") " pod="openstack/glance-default-external-api-0" Jan 26 13:20:34 crc kubenswrapper[4844]: I0126 13:20:34.628341 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/65fceb02-1fd4-4b60-a767-f2d232539d43-logs\") pod \"glance-default-external-api-0\" (UID: \"65fceb02-1fd4-4b60-a767-f2d232539d43\") " pod="openstack/glance-default-external-api-0" Jan 26 13:20:34 crc kubenswrapper[4844]: I0126 13:20:34.628444 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/65fceb02-1fd4-4b60-a767-f2d232539d43-public-tls-certs\") pod \"glance-default-external-api-0\" 
(UID: \"65fceb02-1fd4-4b60-a767-f2d232539d43\") " pod="openstack/glance-default-external-api-0" Jan 26 13:20:34 crc kubenswrapper[4844]: I0126 13:20:34.629041 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/65fceb02-1fd4-4b60-a767-f2d232539d43-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"65fceb02-1fd4-4b60-a767-f2d232539d43\") " pod="openstack/glance-default-external-api-0" Jan 26 13:20:34 crc kubenswrapper[4844]: I0126 13:20:34.629213 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/65fceb02-1fd4-4b60-a767-f2d232539d43-logs\") pod \"glance-default-external-api-0\" (UID: \"65fceb02-1fd4-4b60-a767-f2d232539d43\") " pod="openstack/glance-default-external-api-0" Jan 26 13:20:34 crc kubenswrapper[4844]: I0126 13:20:34.629443 4844 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"65fceb02-1fd4-4b60-a767-f2d232539d43\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-external-api-0" Jan 26 13:20:34 crc kubenswrapper[4844]: I0126 13:20:34.635011 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/65fceb02-1fd4-4b60-a767-f2d232539d43-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"65fceb02-1fd4-4b60-a767-f2d232539d43\") " pod="openstack/glance-default-external-api-0" Jan 26 13:20:34 crc kubenswrapper[4844]: I0126 13:20:34.635545 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65fceb02-1fd4-4b60-a767-f2d232539d43-scripts\") pod \"glance-default-external-api-0\" (UID: \"65fceb02-1fd4-4b60-a767-f2d232539d43\") " pod="openstack/glance-default-external-api-0" Jan 26 13:20:34 crc kubenswrapper[4844]: I0126 13:20:34.644276 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65fceb02-1fd4-4b60-a767-f2d232539d43-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"65fceb02-1fd4-4b60-a767-f2d232539d43\") " pod="openstack/glance-default-external-api-0" Jan 26 13:20:34 crc kubenswrapper[4844]: I0126 13:20:34.647915 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dr8n\" (UniqueName: \"kubernetes.io/projected/65fceb02-1fd4-4b60-a767-f2d232539d43-kube-api-access-9dr8n\") pod \"glance-default-external-api-0\" (UID: \"65fceb02-1fd4-4b60-a767-f2d232539d43\") " pod="openstack/glance-default-external-api-0" Jan 26 13:20:34 crc kubenswrapper[4844]: I0126 13:20:34.656056 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65fceb02-1fd4-4b60-a767-f2d232539d43-config-data\") pod \"glance-default-external-api-0\" (UID: \"65fceb02-1fd4-4b60-a767-f2d232539d43\") " pod="openstack/glance-default-external-api-0" Jan 26 13:20:34 crc kubenswrapper[4844]: I0126 13:20:34.666240 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"65fceb02-1fd4-4b60-a767-f2d232539d43\") " pod="openstack/glance-default-external-api-0" Jan 26 13:20:34 crc kubenswrapper[4844]: I0126 
13:20:34.724778 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 26 13:20:35 crc kubenswrapper[4844]: I0126 13:20:35.273642 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 13:20:35 crc kubenswrapper[4844]: W0126 13:20:35.284028 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod65fceb02_1fd4_4b60_a767_f2d232539d43.slice/crio-b5b63faf1fa72b39febee28af3d77bfd91c95d7a2ecc4b9148cb855cf5f67a92 WatchSource:0}: Error finding container b5b63faf1fa72b39febee28af3d77bfd91c95d7a2ecc4b9148cb855cf5f67a92: Status 404 returned error can't find the container with id b5b63faf1fa72b39febee28af3d77bfd91c95d7a2ecc4b9148cb855cf5f67a92 Jan 26 13:20:35 crc kubenswrapper[4844]: I0126 13:20:35.307717 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"65fceb02-1fd4-4b60-a767-f2d232539d43","Type":"ContainerStarted","Data":"b5b63faf1fa72b39febee28af3d77bfd91c95d7a2ecc4b9148cb855cf5f67a92"} Jan 26 13:20:35 crc kubenswrapper[4844]: I0126 13:20:35.327117 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f336c66-c9c1-4764-8f55-a6fd70f01790" path="/var/lib/kubelet/pods/2f336c66-c9c1-4764-8f55-a6fd70f01790/volumes" Jan 26 13:20:35 crc kubenswrapper[4844]: I0126 13:20:35.328848 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8576337-1537-4b93-8d68-829d6bdb8a44" path="/var/lib/kubelet/pods/f8576337-1537-4b93-8d68-829d6bdb8a44/volumes" Jan 26 13:20:35 crc kubenswrapper[4844]: I0126 13:20:35.796747 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 13:20:35 crc kubenswrapper[4844]: I0126 13:20:35.860712 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/388147f6-5b13-4111-9d1f-fe317038852d-run-httpd\") pod \"388147f6-5b13-4111-9d1f-fe317038852d\" (UID: \"388147f6-5b13-4111-9d1f-fe317038852d\") " Jan 26 13:20:35 crc kubenswrapper[4844]: I0126 13:20:35.860776 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/388147f6-5b13-4111-9d1f-fe317038852d-combined-ca-bundle\") pod \"388147f6-5b13-4111-9d1f-fe317038852d\" (UID: \"388147f6-5b13-4111-9d1f-fe317038852d\") " Jan 26 13:20:35 crc kubenswrapper[4844]: I0126 13:20:35.860830 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/388147f6-5b13-4111-9d1f-fe317038852d-config-data\") pod \"388147f6-5b13-4111-9d1f-fe317038852d\" (UID: \"388147f6-5b13-4111-9d1f-fe317038852d\") " Jan 26 13:20:35 crc kubenswrapper[4844]: I0126 13:20:35.860920 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/388147f6-5b13-4111-9d1f-fe317038852d-scripts\") pod \"388147f6-5b13-4111-9d1f-fe317038852d\" (UID: \"388147f6-5b13-4111-9d1f-fe317038852d\") " Jan 26 13:20:35 crc kubenswrapper[4844]: I0126 13:20:35.860985 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/388147f6-5b13-4111-9d1f-fe317038852d-log-httpd\") pod \"388147f6-5b13-4111-9d1f-fe317038852d\" (UID: \"388147f6-5b13-4111-9d1f-fe317038852d\") " Jan 26 13:20:35 crc kubenswrapper[4844]: I0126 13:20:35.861028 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t6st5\" (UniqueName: \"kubernetes.io/projected/388147f6-5b13-4111-9d1f-fe317038852d-kube-api-access-t6st5\") pod \"388147f6-5b13-4111-9d1f-fe317038852d\" (UID: \"388147f6-5b13-4111-9d1f-fe317038852d\") " Jan 26 13:20:35 crc kubenswrapper[4844]: I0126 13:20:35.861161 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/388147f6-5b13-4111-9d1f-fe317038852d-sg-core-conf-yaml\") pod \"388147f6-5b13-4111-9d1f-fe317038852d\" (UID: \"388147f6-5b13-4111-9d1f-fe317038852d\") " Jan 26 13:20:35 crc kubenswrapper[4844]: I0126 13:20:35.861724 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/388147f6-5b13-4111-9d1f-fe317038852d-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "388147f6-5b13-4111-9d1f-fe317038852d" (UID: "388147f6-5b13-4111-9d1f-fe317038852d"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:20:35 crc kubenswrapper[4844]: I0126 13:20:35.861852 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/388147f6-5b13-4111-9d1f-fe317038852d-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "388147f6-5b13-4111-9d1f-fe317038852d" (UID: "388147f6-5b13-4111-9d1f-fe317038852d"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:20:35 crc kubenswrapper[4844]: I0126 13:20:35.876821 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/388147f6-5b13-4111-9d1f-fe317038852d-kube-api-access-t6st5" (OuterVolumeSpecName: "kube-api-access-t6st5") pod "388147f6-5b13-4111-9d1f-fe317038852d" (UID: "388147f6-5b13-4111-9d1f-fe317038852d"). InnerVolumeSpecName "kube-api-access-t6st5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:20:35 crc kubenswrapper[4844]: I0126 13:20:35.878856 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/388147f6-5b13-4111-9d1f-fe317038852d-scripts" (OuterVolumeSpecName: "scripts") pod "388147f6-5b13-4111-9d1f-fe317038852d" (UID: "388147f6-5b13-4111-9d1f-fe317038852d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:20:35 crc kubenswrapper[4844]: I0126 13:20:35.903758 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-tmtvd"] Jan 26 13:20:35 crc kubenswrapper[4844]: E0126 13:20:35.904293 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="388147f6-5b13-4111-9d1f-fe317038852d" containerName="ceilometer-notification-agent" Jan 26 13:20:35 crc kubenswrapper[4844]: I0126 13:20:35.904310 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="388147f6-5b13-4111-9d1f-fe317038852d" containerName="ceilometer-notification-agent" Jan 26 13:20:35 crc kubenswrapper[4844]: E0126 13:20:35.904335 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="388147f6-5b13-4111-9d1f-fe317038852d" containerName="ceilometer-central-agent" Jan 26 13:20:35 crc kubenswrapper[4844]: I0126 13:20:35.904343 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="388147f6-5b13-4111-9d1f-fe317038852d" containerName="ceilometer-central-agent" Jan 26 13:20:35 crc kubenswrapper[4844]: E0126 13:20:35.904358 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="388147f6-5b13-4111-9d1f-fe317038852d" containerName="sg-core" Jan 26 13:20:35 crc kubenswrapper[4844]: I0126 13:20:35.904365 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="388147f6-5b13-4111-9d1f-fe317038852d" containerName="sg-core" Jan 26 13:20:35 crc kubenswrapper[4844]: E0126 13:20:35.904379 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="388147f6-5b13-4111-9d1f-fe317038852d" containerName="proxy-httpd" Jan 26 13:20:35 crc kubenswrapper[4844]: I0126 13:20:35.904388 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="388147f6-5b13-4111-9d1f-fe317038852d" containerName="proxy-httpd" Jan 26 13:20:35 crc kubenswrapper[4844]: I0126 13:20:35.904643 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="388147f6-5b13-4111-9d1f-fe317038852d" containerName="ceilometer-central-agent" Jan 26 13:20:35 crc kubenswrapper[4844]: I0126 13:20:35.904671 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="388147f6-5b13-4111-9d1f-fe317038852d" containerName="proxy-httpd" Jan 26 13:20:35 crc kubenswrapper[4844]: I0126 13:20:35.904681 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="388147f6-5b13-4111-9d1f-fe317038852d" containerName="sg-core" Jan 26 13:20:35 crc kubenswrapper[4844]: I0126 13:20:35.904694 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="388147f6-5b13-4111-9d1f-fe317038852d" containerName="ceilometer-notification-agent" Jan 26 13:20:35 crc 
kubenswrapper[4844]: I0126 13:20:35.917500 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-tmtvd" Jan 26 13:20:35 crc kubenswrapper[4844]: I0126 13:20:35.928150 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/388147f6-5b13-4111-9d1f-fe317038852d-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "388147f6-5b13-4111-9d1f-fe317038852d" (UID: "388147f6-5b13-4111-9d1f-fe317038852d"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:20:35 crc kubenswrapper[4844]: I0126 13:20:35.952491 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-tmtvd"] Jan 26 13:20:35 crc kubenswrapper[4844]: I0126 13:20:35.966428 4844 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/388147f6-5b13-4111-9d1f-fe317038852d-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 26 13:20:35 crc kubenswrapper[4844]: I0126 13:20:35.966458 4844 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/388147f6-5b13-4111-9d1f-fe317038852d-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 13:20:35 crc kubenswrapper[4844]: I0126 13:20:35.966467 4844 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/388147f6-5b13-4111-9d1f-fe317038852d-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 13:20:35 crc kubenswrapper[4844]: I0126 13:20:35.966476 4844 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/388147f6-5b13-4111-9d1f-fe317038852d-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 13:20:35 crc kubenswrapper[4844]: I0126 13:20:35.966485 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t6st5\" (UniqueName: \"kubernetes.io/projected/388147f6-5b13-4111-9d1f-fe317038852d-kube-api-access-t6st5\") on node \"crc\" DevicePath \"\"" Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 13:20:36.067775 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/388147f6-5b13-4111-9d1f-fe317038852d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "388147f6-5b13-4111-9d1f-fe317038852d" (UID: "388147f6-5b13-4111-9d1f-fe317038852d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 13:20:36.069285 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/128a7603-8c83-4c8f-8484-031abaa6bc9a-operator-scripts\") pod \"nova-api-db-create-tmtvd\" (UID: \"128a7603-8c83-4c8f-8484-031abaa6bc9a\") " pod="openstack/nova-api-db-create-tmtvd" Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 13:20:36.069360 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnd29\" (UniqueName: \"kubernetes.io/projected/128a7603-8c83-4c8f-8484-031abaa6bc9a-kube-api-access-cnd29\") pod \"nova-api-db-create-tmtvd\" (UID: \"128a7603-8c83-4c8f-8484-031abaa6bc9a\") " pod="openstack/nova-api-db-create-tmtvd" Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 13:20:36.069446 4844 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/388147f6-5b13-4111-9d1f-fe317038852d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 13:20:36.073618 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-b7qvz"] Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 13:20:36.078976 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-b7qvz" Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 13:20:36.087672 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/388147f6-5b13-4111-9d1f-fe317038852d-config-data" (OuterVolumeSpecName: "config-data") pod "388147f6-5b13-4111-9d1f-fe317038852d" (UID: "388147f6-5b13-4111-9d1f-fe317038852d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 13:20:36.095142 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0030-account-create-update-7kfzr"] Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 13:20:36.096247 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0030-account-create-update-7kfzr" Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 13:20:36.098960 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 13:20:36.120575 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-b7qvz"] Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 13:20:36.149748 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0030-account-create-update-7kfzr"] Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 13:20:36.174303 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-th5q8\" (UniqueName: \"kubernetes.io/projected/f2f773df-1a60-4d98-aaf9-25edd517e2e7-kube-api-access-th5q8\") pod \"nova-api-0030-account-create-update-7kfzr\" (UID: \"f2f773df-1a60-4d98-aaf9-25edd517e2e7\") " pod="openstack/nova-api-0030-account-create-update-7kfzr" Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 13:20:36.174360 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/128a7603-8c83-4c8f-8484-031abaa6bc9a-operator-scripts\") pod \"nova-api-db-create-tmtvd\" (UID: \"128a7603-8c83-4c8f-8484-031abaa6bc9a\") " pod="openstack/nova-api-db-create-tmtvd" Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 13:20:36.174408 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cnd29\" (UniqueName: \"kubernetes.io/projected/128a7603-8c83-4c8f-8484-031abaa6bc9a-kube-api-access-cnd29\") pod \"nova-api-db-create-tmtvd\" (UID: \"128a7603-8c83-4c8f-8484-031abaa6bc9a\") " pod="openstack/nova-api-db-create-tmtvd" Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 13:20:36.174429 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6w67z\" (UniqueName: \"kubernetes.io/projected/d3c75b85-b9e8-4d45-93de-018fa9e10eb8-kube-api-access-6w67z\") pod \"nova-cell0-db-create-b7qvz\" (UID: \"d3c75b85-b9e8-4d45-93de-018fa9e10eb8\") " pod="openstack/nova-cell0-db-create-b7qvz" Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 13:20:36.174478 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f2f773df-1a60-4d98-aaf9-25edd517e2e7-operator-scripts\") pod \"nova-api-0030-account-create-update-7kfzr\" (UID: \"f2f773df-1a60-4d98-aaf9-25edd517e2e7\") " pod="openstack/nova-api-0030-account-create-update-7kfzr" Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 13:20:36.174496 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d3c75b85-b9e8-4d45-93de-018fa9e10eb8-operator-scripts\") pod \"nova-cell0-db-create-b7qvz\" (UID: \"d3c75b85-b9e8-4d45-93de-018fa9e10eb8\") " pod="openstack/nova-cell0-db-create-b7qvz" Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 13:20:36.174588 4844 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/388147f6-5b13-4111-9d1f-fe317038852d-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 13:20:36.176151 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/128a7603-8c83-4c8f-8484-031abaa6bc9a-operator-scripts\") pod \"nova-api-db-create-tmtvd\" (UID: \"128a7603-8c83-4c8f-8484-031abaa6bc9a\") " pod="openstack/nova-api-db-create-tmtvd" Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 13:20:36.195673 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cnd29\" (UniqueName: \"kubernetes.io/projected/128a7603-8c83-4c8f-8484-031abaa6bc9a-kube-api-access-cnd29\") pod \"nova-api-db-create-tmtvd\" (UID: \"128a7603-8c83-4c8f-8484-031abaa6bc9a\") " pod="openstack/nova-api-db-create-tmtvd" Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 13:20:36.201698 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-xg8dt"] Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 13:20:36.202930 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-xg8dt" Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 13:20:36.209946 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-xg8dt"] Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 13:20:36.285883 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkg2s\" (UniqueName: \"kubernetes.io/projected/e1e6f9c3-de48-4504-9b94-bbabcc87fc45-kube-api-access-bkg2s\") pod \"nova-cell1-db-create-xg8dt\" (UID: \"e1e6f9c3-de48-4504-9b94-bbabcc87fc45\") " pod="openstack/nova-cell1-db-create-xg8dt" Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 13:20:36.286000 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1e6f9c3-de48-4504-9b94-bbabcc87fc45-operator-scripts\") pod \"nova-cell1-db-create-xg8dt\" (UID: \"e1e6f9c3-de48-4504-9b94-bbabcc87fc45\") " pod="openstack/nova-cell1-db-create-xg8dt" Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 13:20:36.286073 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-th5q8\" (UniqueName: \"kubernetes.io/projected/f2f773df-1a60-4d98-aaf9-25edd517e2e7-kube-api-access-th5q8\") pod \"nova-api-0030-account-create-update-7kfzr\" (UID: \"f2f773df-1a60-4d98-aaf9-25edd517e2e7\") " pod="openstack/nova-api-0030-account-create-update-7kfzr" Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 13:20:36.286334 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6w67z\" (UniqueName: \"kubernetes.io/projected/d3c75b85-b9e8-4d45-93de-018fa9e10eb8-kube-api-access-6w67z\") pod \"nova-cell0-db-create-b7qvz\" (UID: \"d3c75b85-b9e8-4d45-93de-018fa9e10eb8\") " pod="openstack/nova-cell0-db-create-b7qvz" Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 13:20:36.286465 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f2f773df-1a60-4d98-aaf9-25edd517e2e7-operator-scripts\") pod \"nova-api-0030-account-create-update-7kfzr\" (UID: \"f2f773df-1a60-4d98-aaf9-25edd517e2e7\") " pod="openstack/nova-api-0030-account-create-update-7kfzr" Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 13:20:36.286503 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d3c75b85-b9e8-4d45-93de-018fa9e10eb8-operator-scripts\") pod \"nova-cell0-db-create-b7qvz\" (UID: \"d3c75b85-b9e8-4d45-93de-018fa9e10eb8\") " 
pod="openstack/nova-cell0-db-create-b7qvz" Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 13:20:36.287304 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d3c75b85-b9e8-4d45-93de-018fa9e10eb8-operator-scripts\") pod \"nova-cell0-db-create-b7qvz\" (UID: \"d3c75b85-b9e8-4d45-93de-018fa9e10eb8\") " pod="openstack/nova-cell0-db-create-b7qvz" Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 13:20:36.287556 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f2f773df-1a60-4d98-aaf9-25edd517e2e7-operator-scripts\") pod \"nova-api-0030-account-create-update-7kfzr\" (UID: \"f2f773df-1a60-4d98-aaf9-25edd517e2e7\") " pod="openstack/nova-api-0030-account-create-update-7kfzr" Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 13:20:36.288847 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-283b-account-create-update-9wvm2"] Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 13:20:36.291557 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-283b-account-create-update-9wvm2" Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 13:20:36.294151 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 13:20:36.302305 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-283b-account-create-update-9wvm2"] Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 13:20:36.305896 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-th5q8\" (UniqueName: \"kubernetes.io/projected/f2f773df-1a60-4d98-aaf9-25edd517e2e7-kube-api-access-th5q8\") pod \"nova-api-0030-account-create-update-7kfzr\" (UID: \"f2f773df-1a60-4d98-aaf9-25edd517e2e7\") " pod="openstack/nova-api-0030-account-create-update-7kfzr" Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 13:20:36.310132 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6w67z\" (UniqueName: \"kubernetes.io/projected/d3c75b85-b9e8-4d45-93de-018fa9e10eb8-kube-api-access-6w67z\") pod \"nova-cell0-db-create-b7qvz\" (UID: \"d3c75b85-b9e8-4d45-93de-018fa9e10eb8\") " pod="openstack/nova-cell0-db-create-b7qvz" Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 13:20:36.330771 4844 generic.go:334] "Generic (PLEG): container finished" podID="388147f6-5b13-4111-9d1f-fe317038852d" containerID="622c5f1bda16149b59c0bc280898bd71a037aa92dba8fea1e55f57776f6eaa73" exitCode=0 Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 13:20:36.330830 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"388147f6-5b13-4111-9d1f-fe317038852d","Type":"ContainerDied","Data":"622c5f1bda16149b59c0bc280898bd71a037aa92dba8fea1e55f57776f6eaa73"} Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 13:20:36.330856 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"388147f6-5b13-4111-9d1f-fe317038852d","Type":"ContainerDied","Data":"ead8dd568acb56fbe1ac9a2fca2c811eb7df1382bd012a4c178aeb1a84b46908"} Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 13:20:36.330872 4844 scope.go:117] "RemoveContainer" containerID="652384a1a113107dec6c823ed50bdaaad3c621f614e3593b9879c6365df3e8c0" Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 13:20:36.331017 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 13:20:36.338325 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"65fceb02-1fd4-4b60-a767-f2d232539d43","Type":"ContainerStarted","Data":"f63d4f62ff0de951c58fadd848faba46bff3101d0bf6bcf2e2d52cb13b225d18"} Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 13:20:36.364967 4844 generic.go:334] "Generic (PLEG): container finished" podID="d2ba6a95-767f-4589-8dc9-e124e9be4fb4" containerID="e403c6da4f46560f63044b5094a09e99cdbaff09ff677f8628b111b283d9b670" exitCode=137 Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 13:20:36.365018 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"d2ba6a95-767f-4589-8dc9-e124e9be4fb4","Type":"ContainerDied","Data":"e403c6da4f46560f63044b5094a09e99cdbaff09ff677f8628b111b283d9b670"} Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 13:20:36.367510 4844 scope.go:117] "RemoveContainer" containerID="c63f9648a87ce352c12d0c8a5c8ab3586be5e8ccaa9d12b3be1eb58e72199be6" Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 13:20:36.368116 4844 patch_prober.go:28] interesting pod/machine-config-daemon-j7r9j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 13:20:36.368156 4844 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 13:20:36.378863 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 13:20:36.388659 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 13:20:36.391994 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/45ab14b0-33a9-4364-a552-16b57b9826c5-operator-scripts\") pod \"nova-cell0-283b-account-create-update-9wvm2\" (UID: \"45ab14b0-33a9-4364-a552-16b57b9826c5\") " pod="openstack/nova-cell0-283b-account-create-update-9wvm2" Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 13:20:36.392740 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mn9c4\" (UniqueName: \"kubernetes.io/projected/45ab14b0-33a9-4364-a552-16b57b9826c5-kube-api-access-mn9c4\") pod \"nova-cell0-283b-account-create-update-9wvm2\" (UID: \"45ab14b0-33a9-4364-a552-16b57b9826c5\") " pod="openstack/nova-cell0-283b-account-create-update-9wvm2" Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 13:20:36.393008 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkg2s\" (UniqueName: \"kubernetes.io/projected/e1e6f9c3-de48-4504-9b94-bbabcc87fc45-kube-api-access-bkg2s\") pod \"nova-cell1-db-create-xg8dt\" (UID: \"e1e6f9c3-de48-4504-9b94-bbabcc87fc45\") " pod="openstack/nova-cell1-db-create-xg8dt" Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 13:20:36.393051 4844 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1e6f9c3-de48-4504-9b94-bbabcc87fc45-operator-scripts\") pod \"nova-cell1-db-create-xg8dt\" (UID: \"e1e6f9c3-de48-4504-9b94-bbabcc87fc45\") " pod="openstack/nova-cell1-db-create-xg8dt" Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 13:20:36.396908 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1e6f9c3-de48-4504-9b94-bbabcc87fc45-operator-scripts\") pod \"nova-cell1-db-create-xg8dt\" (UID: \"e1e6f9c3-de48-4504-9b94-bbabcc87fc45\") " pod="openstack/nova-cell1-db-create-xg8dt" Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 13:20:36.414002 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkg2s\" (UniqueName: \"kubernetes.io/projected/e1e6f9c3-de48-4504-9b94-bbabcc87fc45-kube-api-access-bkg2s\") pod \"nova-cell1-db-create-xg8dt\" (UID: \"e1e6f9c3-de48-4504-9b94-bbabcc87fc45\") " pod="openstack/nova-cell1-db-create-xg8dt" Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 13:20:36.418529 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 13:20:36.431211 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-tmtvd" Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 13:20:36.442161 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 13:20:36.445648 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 13:20:36.450168 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 13:20:36.480162 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 13:20:36.487221 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-b7qvz" Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 13:20:36.497118 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0030-account-create-update-7kfzr" Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 13:20:36.502725 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mn9c4\" (UniqueName: \"kubernetes.io/projected/45ab14b0-33a9-4364-a552-16b57b9826c5-kube-api-access-mn9c4\") pod \"nova-cell0-283b-account-create-update-9wvm2\" (UID: \"45ab14b0-33a9-4364-a552-16b57b9826c5\") " pod="openstack/nova-cell0-283b-account-create-update-9wvm2" Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 13:20:36.503017 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/45ab14b0-33a9-4364-a552-16b57b9826c5-operator-scripts\") pod \"nova-cell0-283b-account-create-update-9wvm2\" (UID: \"45ab14b0-33a9-4364-a552-16b57b9826c5\") " pod="openstack/nova-cell0-283b-account-create-update-9wvm2" Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 13:20:36.503244 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-d54f-account-create-update-vkjxw"] Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 13:20:36.505539 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-d54f-account-create-update-vkjxw" Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 13:20:36.509976 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/45ab14b0-33a9-4364-a552-16b57b9826c5-operator-scripts\") pod \"nova-cell0-283b-account-create-update-9wvm2\" (UID: \"45ab14b0-33a9-4364-a552-16b57b9826c5\") " pod="openstack/nova-cell0-283b-account-create-update-9wvm2" Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 13:20:36.518109 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 13:20:36.522896 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-xg8dt" Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 13:20:36.523701 4844 scope.go:117] "RemoveContainer" containerID="622c5f1bda16149b59c0bc280898bd71a037aa92dba8fea1e55f57776f6eaa73" Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 13:20:36.524172 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mn9c4\" (UniqueName: \"kubernetes.io/projected/45ab14b0-33a9-4364-a552-16b57b9826c5-kube-api-access-mn9c4\") pod \"nova-cell0-283b-account-create-update-9wvm2\" (UID: \"45ab14b0-33a9-4364-a552-16b57b9826c5\") " pod="openstack/nova-cell0-283b-account-create-update-9wvm2" Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 13:20:36.562856 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-d54f-account-create-update-vkjxw"] Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 13:20:36.582114 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-5d969b7b55-l9p8p" Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 13:20:36.583087 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-5d969b7b55-l9p8p" Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 13:20:36.604310 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6trlj\" (UniqueName: \"kubernetes.io/projected/80aa004c-98f5-4265-8321-daf6d8132c24-kube-api-access-6trlj\") pod \"ceilometer-0\" (UID: \"80aa004c-98f5-4265-8321-daf6d8132c24\") " pod="openstack/ceilometer-0" Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 13:20:36.604369 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/80aa004c-98f5-4265-8321-daf6d8132c24-run-httpd\") pod \"ceilometer-0\" (UID: \"80aa004c-98f5-4265-8321-daf6d8132c24\") " pod="openstack/ceilometer-0" Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 13:20:36.604389 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/80aa004c-98f5-4265-8321-daf6d8132c24-log-httpd\") pod \"ceilometer-0\" (UID: \"80aa004c-98f5-4265-8321-daf6d8132c24\") " pod="openstack/ceilometer-0" Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 13:20:36.604411 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80aa004c-98f5-4265-8321-daf6d8132c24-config-data\") pod \"ceilometer-0\" (UID: \"80aa004c-98f5-4265-8321-daf6d8132c24\") " pod="openstack/ceilometer-0" Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 13:20:36.604441 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/80aa004c-98f5-4265-8321-daf6d8132c24-scripts\") pod \"ceilometer-0\" (UID: \"80aa004c-98f5-4265-8321-daf6d8132c24\") " pod="openstack/ceilometer-0" Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 13:20:36.604469 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80aa004c-98f5-4265-8321-daf6d8132c24-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"80aa004c-98f5-4265-8321-daf6d8132c24\") " pod="openstack/ceilometer-0" Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 13:20:36.604502 4844 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/80aa004c-98f5-4265-8321-daf6d8132c24-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"80aa004c-98f5-4265-8321-daf6d8132c24\") " pod="openstack/ceilometer-0" Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 13:20:36.604523 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/350afd25-a535-4c5c-9b45-85b457255769-operator-scripts\") pod \"nova-cell1-d54f-account-create-update-vkjxw\" (UID: \"350afd25-a535-4c5c-9b45-85b457255769\") " pod="openstack/nova-cell1-d54f-account-create-update-vkjxw" Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 13:20:36.604581 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4xqm\" (UniqueName: \"kubernetes.io/projected/350afd25-a535-4c5c-9b45-85b457255769-kube-api-access-j4xqm\") pod \"nova-cell1-d54f-account-create-update-vkjxw\" (UID: \"350afd25-a535-4c5c-9b45-85b457255769\") " pod="openstack/nova-cell1-d54f-account-create-update-vkjxw" Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 13:20:36.615363 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-283b-account-create-update-9wvm2" Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 13:20:36.706117 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6trlj\" (UniqueName: \"kubernetes.io/projected/80aa004c-98f5-4265-8321-daf6d8132c24-kube-api-access-6trlj\") pod \"ceilometer-0\" (UID: \"80aa004c-98f5-4265-8321-daf6d8132c24\") " pod="openstack/ceilometer-0" Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 13:20:36.706202 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/80aa004c-98f5-4265-8321-daf6d8132c24-run-httpd\") pod \"ceilometer-0\" (UID: \"80aa004c-98f5-4265-8321-daf6d8132c24\") " pod="openstack/ceilometer-0" Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 13:20:36.706247 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/80aa004c-98f5-4265-8321-daf6d8132c24-log-httpd\") pod \"ceilometer-0\" (UID: \"80aa004c-98f5-4265-8321-daf6d8132c24\") " pod="openstack/ceilometer-0" Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 13:20:36.706276 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80aa004c-98f5-4265-8321-daf6d8132c24-config-data\") pod \"ceilometer-0\" (UID: \"80aa004c-98f5-4265-8321-daf6d8132c24\") " pod="openstack/ceilometer-0" Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 13:20:36.706321 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/80aa004c-98f5-4265-8321-daf6d8132c24-scripts\") pod \"ceilometer-0\" (UID: \"80aa004c-98f5-4265-8321-daf6d8132c24\") " pod="openstack/ceilometer-0" Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 13:20:36.706363 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80aa004c-98f5-4265-8321-daf6d8132c24-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"80aa004c-98f5-4265-8321-daf6d8132c24\") " pod="openstack/ceilometer-0" Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 
13:20:36.706416 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/80aa004c-98f5-4265-8321-daf6d8132c24-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"80aa004c-98f5-4265-8321-daf6d8132c24\") " pod="openstack/ceilometer-0" Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 13:20:36.706445 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/350afd25-a535-4c5c-9b45-85b457255769-operator-scripts\") pod \"nova-cell1-d54f-account-create-update-vkjxw\" (UID: \"350afd25-a535-4c5c-9b45-85b457255769\") " pod="openstack/nova-cell1-d54f-account-create-update-vkjxw" Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 13:20:36.706940 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/80aa004c-98f5-4265-8321-daf6d8132c24-log-httpd\") pod \"ceilometer-0\" (UID: \"80aa004c-98f5-4265-8321-daf6d8132c24\") " pod="openstack/ceilometer-0" Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 13:20:36.708438 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/80aa004c-98f5-4265-8321-daf6d8132c24-run-httpd\") pod \"ceilometer-0\" (UID: \"80aa004c-98f5-4265-8321-daf6d8132c24\") " pod="openstack/ceilometer-0" Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 13:20:36.708557 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4xqm\" (UniqueName: \"kubernetes.io/projected/350afd25-a535-4c5c-9b45-85b457255769-kube-api-access-j4xqm\") pod \"nova-cell1-d54f-account-create-update-vkjxw\" (UID: \"350afd25-a535-4c5c-9b45-85b457255769\") " pod="openstack/nova-cell1-d54f-account-create-update-vkjxw" Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 13:20:36.709663 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/350afd25-a535-4c5c-9b45-85b457255769-operator-scripts\") pod \"nova-cell1-d54f-account-create-update-vkjxw\" (UID: \"350afd25-a535-4c5c-9b45-85b457255769\") " pod="openstack/nova-cell1-d54f-account-create-update-vkjxw" Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 13:20:36.717586 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/80aa004c-98f5-4265-8321-daf6d8132c24-scripts\") pod \"ceilometer-0\" (UID: \"80aa004c-98f5-4265-8321-daf6d8132c24\") " pod="openstack/ceilometer-0" Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 13:20:36.718151 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80aa004c-98f5-4265-8321-daf6d8132c24-config-data\") pod \"ceilometer-0\" (UID: \"80aa004c-98f5-4265-8321-daf6d8132c24\") " pod="openstack/ceilometer-0" Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 13:20:36.719400 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80aa004c-98f5-4265-8321-daf6d8132c24-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"80aa004c-98f5-4265-8321-daf6d8132c24\") " pod="openstack/ceilometer-0" Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 13:20:36.723086 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/80aa004c-98f5-4265-8321-daf6d8132c24-sg-core-conf-yaml\") pod 
\"ceilometer-0\" (UID: \"80aa004c-98f5-4265-8321-daf6d8132c24\") " pod="openstack/ceilometer-0" Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 13:20:36.724860 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6trlj\" (UniqueName: \"kubernetes.io/projected/80aa004c-98f5-4265-8321-daf6d8132c24-kube-api-access-6trlj\") pod \"ceilometer-0\" (UID: \"80aa004c-98f5-4265-8321-daf6d8132c24\") " pod="openstack/ceilometer-0" Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 13:20:36.727839 4844 scope.go:117] "RemoveContainer" containerID="42495dda29d29e62f3e3e9573d76c490c019e49f761a8cb521a79411ec5a1ac3" Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 13:20:36.733236 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4xqm\" (UniqueName: \"kubernetes.io/projected/350afd25-a535-4c5c-9b45-85b457255769-kube-api-access-j4xqm\") pod \"nova-cell1-d54f-account-create-update-vkjxw\" (UID: \"350afd25-a535-4c5c-9b45-85b457255769\") " pod="openstack/nova-cell1-d54f-account-create-update-vkjxw" Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 13:20:36.765476 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 13:20:36.797789 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 13:20:36.843634 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-d54f-account-create-update-vkjxw" Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 13:20:36.886457 4844 scope.go:117] "RemoveContainer" containerID="652384a1a113107dec6c823ed50bdaaad3c621f614e3593b9879c6365df3e8c0" Jan 26 13:20:36 crc kubenswrapper[4844]: E0126 13:20:36.890862 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"652384a1a113107dec6c823ed50bdaaad3c621f614e3593b9879c6365df3e8c0\": container with ID starting with 652384a1a113107dec6c823ed50bdaaad3c621f614e3593b9879c6365df3e8c0 not found: ID does not exist" containerID="652384a1a113107dec6c823ed50bdaaad3c621f614e3593b9879c6365df3e8c0" Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 13:20:36.890911 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"652384a1a113107dec6c823ed50bdaaad3c621f614e3593b9879c6365df3e8c0"} err="failed to get container status \"652384a1a113107dec6c823ed50bdaaad3c621f614e3593b9879c6365df3e8c0\": rpc error: code = NotFound desc = could not find container \"652384a1a113107dec6c823ed50bdaaad3c621f614e3593b9879c6365df3e8c0\": container with ID starting with 652384a1a113107dec6c823ed50bdaaad3c621f614e3593b9879c6365df3e8c0 not found: ID does not exist" Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 13:20:36.890943 4844 scope.go:117] "RemoveContainer" containerID="c63f9648a87ce352c12d0c8a5c8ab3586be5e8ccaa9d12b3be1eb58e72199be6" Jan 26 13:20:36 crc kubenswrapper[4844]: E0126 13:20:36.891526 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c63f9648a87ce352c12d0c8a5c8ab3586be5e8ccaa9d12b3be1eb58e72199be6\": container with ID starting with c63f9648a87ce352c12d0c8a5c8ab3586be5e8ccaa9d12b3be1eb58e72199be6 not found: ID does not exist" containerID="c63f9648a87ce352c12d0c8a5c8ab3586be5e8ccaa9d12b3be1eb58e72199be6" Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 13:20:36.891554 4844 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c63f9648a87ce352c12d0c8a5c8ab3586be5e8ccaa9d12b3be1eb58e72199be6"} err="failed to get container status \"c63f9648a87ce352c12d0c8a5c8ab3586be5e8ccaa9d12b3be1eb58e72199be6\": rpc error: code = NotFound desc = could not find container \"c63f9648a87ce352c12d0c8a5c8ab3586be5e8ccaa9d12b3be1eb58e72199be6\": container with ID starting with c63f9648a87ce352c12d0c8a5c8ab3586be5e8ccaa9d12b3be1eb58e72199be6 not found: ID does not exist" Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 13:20:36.891588 4844 scope.go:117] "RemoveContainer" containerID="622c5f1bda16149b59c0bc280898bd71a037aa92dba8fea1e55f57776f6eaa73" Jan 26 13:20:36 crc kubenswrapper[4844]: E0126 13:20:36.893216 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"622c5f1bda16149b59c0bc280898bd71a037aa92dba8fea1e55f57776f6eaa73\": container with ID starting with 622c5f1bda16149b59c0bc280898bd71a037aa92dba8fea1e55f57776f6eaa73 not found: ID does not exist" containerID="622c5f1bda16149b59c0bc280898bd71a037aa92dba8fea1e55f57776f6eaa73" Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 13:20:36.893255 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"622c5f1bda16149b59c0bc280898bd71a037aa92dba8fea1e55f57776f6eaa73"} err="failed to get container status \"622c5f1bda16149b59c0bc280898bd71a037aa92dba8fea1e55f57776f6eaa73\": rpc error: code = NotFound desc = could not find container \"622c5f1bda16149b59c0bc280898bd71a037aa92dba8fea1e55f57776f6eaa73\": container with ID starting with 622c5f1bda16149b59c0bc280898bd71a037aa92dba8fea1e55f57776f6eaa73 not found: ID does not exist" Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 13:20:36.893280 4844 scope.go:117] "RemoveContainer" containerID="42495dda29d29e62f3e3e9573d76c490c019e49f761a8cb521a79411ec5a1ac3" Jan 26 13:20:36 crc kubenswrapper[4844]: E0126 13:20:36.893715 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42495dda29d29e62f3e3e9573d76c490c019e49f761a8cb521a79411ec5a1ac3\": container with ID starting with 42495dda29d29e62f3e3e9573d76c490c019e49f761a8cb521a79411ec5a1ac3 not found: ID does not exist" containerID="42495dda29d29e62f3e3e9573d76c490c019e49f761a8cb521a79411ec5a1ac3" Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 13:20:36.893891 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42495dda29d29e62f3e3e9573d76c490c019e49f761a8cb521a79411ec5a1ac3"} err="failed to get container status \"42495dda29d29e62f3e3e9573d76c490c019e49f761a8cb521a79411ec5a1ac3\": rpc error: code = NotFound desc = could not find container \"42495dda29d29e62f3e3e9573d76c490c019e49f761a8cb521a79411ec5a1ac3\": container with ID starting with 42495dda29d29e62f3e3e9573d76c490c019e49f761a8cb521a79411ec5a1ac3 not found: ID does not exist" Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 13:20:36.911125 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d2ba6a95-767f-4589-8dc9-e124e9be4fb4-logs\") pod \"d2ba6a95-767f-4589-8dc9-e124e9be4fb4\" (UID: \"d2ba6a95-767f-4589-8dc9-e124e9be4fb4\") " Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 13:20:36.911201 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/d2ba6a95-767f-4589-8dc9-e124e9be4fb4-config-data-custom\") pod \"d2ba6a95-767f-4589-8dc9-e124e9be4fb4\" (UID: \"d2ba6a95-767f-4589-8dc9-e124e9be4fb4\") " Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 13:20:36.911218 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2ba6a95-767f-4589-8dc9-e124e9be4fb4-config-data\") pod \"d2ba6a95-767f-4589-8dc9-e124e9be4fb4\" (UID: \"d2ba6a95-767f-4589-8dc9-e124e9be4fb4\") " Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 13:20:36.911271 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2ba6a95-767f-4589-8dc9-e124e9be4fb4-scripts\") pod \"d2ba6a95-767f-4589-8dc9-e124e9be4fb4\" (UID: \"d2ba6a95-767f-4589-8dc9-e124e9be4fb4\") " Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 13:20:36.911382 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d2ba6a95-767f-4589-8dc9-e124e9be4fb4-etc-machine-id\") pod \"d2ba6a95-767f-4589-8dc9-e124e9be4fb4\" (UID: \"d2ba6a95-767f-4589-8dc9-e124e9be4fb4\") " Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 13:20:36.911488 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcjns\" (UniqueName: \"kubernetes.io/projected/d2ba6a95-767f-4589-8dc9-e124e9be4fb4-kube-api-access-xcjns\") pod \"d2ba6a95-767f-4589-8dc9-e124e9be4fb4\" (UID: \"d2ba6a95-767f-4589-8dc9-e124e9be4fb4\") " Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 13:20:36.911636 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d2ba6a95-767f-4589-8dc9-e124e9be4fb4-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "d2ba6a95-767f-4589-8dc9-e124e9be4fb4" (UID: "d2ba6a95-767f-4589-8dc9-e124e9be4fb4"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 13:20:36.911547 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2ba6a95-767f-4589-8dc9-e124e9be4fb4-combined-ca-bundle\") pod \"d2ba6a95-767f-4589-8dc9-e124e9be4fb4\" (UID: \"d2ba6a95-767f-4589-8dc9-e124e9be4fb4\") " Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 13:20:36.912549 4844 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d2ba6a95-767f-4589-8dc9-e124e9be4fb4-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 13:20:36.912964 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d2ba6a95-767f-4589-8dc9-e124e9be4fb4-logs" (OuterVolumeSpecName: "logs") pod "d2ba6a95-767f-4589-8dc9-e124e9be4fb4" (UID: "d2ba6a95-767f-4589-8dc9-e124e9be4fb4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 13:20:36.930438 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2ba6a95-767f-4589-8dc9-e124e9be4fb4-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "d2ba6a95-767f-4589-8dc9-e124e9be4fb4" (UID: "d2ba6a95-767f-4589-8dc9-e124e9be4fb4"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 13:20:36.932173 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2ba6a95-767f-4589-8dc9-e124e9be4fb4-scripts" (OuterVolumeSpecName: "scripts") pod "d2ba6a95-767f-4589-8dc9-e124e9be4fb4" (UID: "d2ba6a95-767f-4589-8dc9-e124e9be4fb4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 13:20:36.932217 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2ba6a95-767f-4589-8dc9-e124e9be4fb4-kube-api-access-xcjns" (OuterVolumeSpecName: "kube-api-access-xcjns") pod "d2ba6a95-767f-4589-8dc9-e124e9be4fb4" (UID: "d2ba6a95-767f-4589-8dc9-e124e9be4fb4"). InnerVolumeSpecName "kube-api-access-xcjns". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:20:36 crc kubenswrapper[4844]: I0126 13:20:36.986560 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2ba6a95-767f-4589-8dc9-e124e9be4fb4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d2ba6a95-767f-4589-8dc9-e124e9be4fb4" (UID: "d2ba6a95-767f-4589-8dc9-e124e9be4fb4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:20:37 crc kubenswrapper[4844]: I0126 13:20:37.017274 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcjns\" (UniqueName: \"kubernetes.io/projected/d2ba6a95-767f-4589-8dc9-e124e9be4fb4-kube-api-access-xcjns\") on node \"crc\" DevicePath \"\"" Jan 26 13:20:37 crc kubenswrapper[4844]: I0126 13:20:37.017313 4844 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2ba6a95-767f-4589-8dc9-e124e9be4fb4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 13:20:37 crc kubenswrapper[4844]: I0126 13:20:37.017328 4844 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d2ba6a95-767f-4589-8dc9-e124e9be4fb4-logs\") on node \"crc\" DevicePath \"\"" Jan 26 13:20:37 crc kubenswrapper[4844]: I0126 13:20:37.017338 4844 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d2ba6a95-767f-4589-8dc9-e124e9be4fb4-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 26 13:20:37 crc kubenswrapper[4844]: I0126 13:20:37.017348 4844 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2ba6a95-767f-4589-8dc9-e124e9be4fb4-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 13:20:37 crc kubenswrapper[4844]: I0126 13:20:37.054476 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2ba6a95-767f-4589-8dc9-e124e9be4fb4-config-data" (OuterVolumeSpecName: "config-data") pod "d2ba6a95-767f-4589-8dc9-e124e9be4fb4" (UID: "d2ba6a95-767f-4589-8dc9-e124e9be4fb4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:20:37 crc kubenswrapper[4844]: I0126 13:20:37.122947 4844 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2ba6a95-767f-4589-8dc9-e124e9be4fb4-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 13:20:37 crc kubenswrapper[4844]: I0126 13:20:37.346569 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="388147f6-5b13-4111-9d1f-fe317038852d" path="/var/lib/kubelet/pods/388147f6-5b13-4111-9d1f-fe317038852d/volumes" Jan 26 13:20:37 crc kubenswrapper[4844]: I0126 13:20:37.391958 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0030-account-create-update-7kfzr"] Jan 26 13:20:37 crc kubenswrapper[4844]: I0126 13:20:37.393034 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 26 13:20:37 crc kubenswrapper[4844]: I0126 13:20:37.392631 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"d2ba6a95-767f-4589-8dc9-e124e9be4fb4","Type":"ContainerDied","Data":"90fdc14f94b3ca76fec2faca7e2ed23b2a7ce47c6c2d8e140256ea69e8892a5b"} Jan 26 13:20:37 crc kubenswrapper[4844]: I0126 13:20:37.394251 4844 scope.go:117] "RemoveContainer" containerID="e403c6da4f46560f63044b5094a09e99cdbaff09ff677f8628b111b283d9b670" Jan 26 13:20:37 crc kubenswrapper[4844]: I0126 13:20:37.411999 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-tmtvd"] Jan 26 13:20:37 crc kubenswrapper[4844]: I0126 13:20:37.429721 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-5fcff84d65-flkjh" Jan 26 13:20:37 crc kubenswrapper[4844]: W0126 13:20:37.447033 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf2f773df_1a60_4d98_aaf9_25edd517e2e7.slice/crio-87046097fc14b75aa70fab1b430c62b9d4855acda0d46d65aa82f7b096efd057 WatchSource:0}: Error finding container 87046097fc14b75aa70fab1b430c62b9d4855acda0d46d65aa82f7b096efd057: Status 404 returned error can't find the container with id 87046097fc14b75aa70fab1b430c62b9d4855acda0d46d65aa82f7b096efd057 Jan 26 13:20:37 crc kubenswrapper[4844]: I0126 13:20:37.456983 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dhzj8"] Jan 26 13:20:37 crc kubenswrapper[4844]: E0126 13:20:37.457400 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2ba6a95-767f-4589-8dc9-e124e9be4fb4" containerName="cinder-api-log" Jan 26 13:20:37 crc kubenswrapper[4844]: I0126 13:20:37.457412 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2ba6a95-767f-4589-8dc9-e124e9be4fb4" containerName="cinder-api-log" Jan 26 13:20:37 crc kubenswrapper[4844]: E0126 13:20:37.457437 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2ba6a95-767f-4589-8dc9-e124e9be4fb4" containerName="cinder-api" Jan 26 13:20:37 crc kubenswrapper[4844]: I0126 13:20:37.457443 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2ba6a95-767f-4589-8dc9-e124e9be4fb4" containerName="cinder-api" Jan 26 13:20:37 crc kubenswrapper[4844]: I0126 13:20:37.457643 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2ba6a95-767f-4589-8dc9-e124e9be4fb4" containerName="cinder-api" Jan 26 13:20:37 crc kubenswrapper[4844]: I0126 13:20:37.457666 4844 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="d2ba6a95-767f-4589-8dc9-e124e9be4fb4" containerName="cinder-api-log" Jan 26 13:20:37 crc kubenswrapper[4844]: I0126 13:20:37.459028 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dhzj8" Jan 26 13:20:37 crc kubenswrapper[4844]: I0126 13:20:37.492845 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dhzj8"] Jan 26 13:20:37 crc kubenswrapper[4844]: I0126 13:20:37.503258 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.503238997 podStartE2EDuration="3.503238997s" podCreationTimestamp="2026-01-26 13:20:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 13:20:37.446586457 +0000 UTC m=+2214.379954069" watchObservedRunningTime="2026-01-26 13:20:37.503238997 +0000 UTC m=+2214.436606619" Jan 26 13:20:37 crc kubenswrapper[4844]: I0126 13:20:37.535616 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ac79d59-b04a-45d5-baa7-8370e8c54045-catalog-content\") pod \"redhat-operators-dhzj8\" (UID: \"2ac79d59-b04a-45d5-baa7-8370e8c54045\") " pod="openshift-marketplace/redhat-operators-dhzj8" Jan 26 13:20:37 crc kubenswrapper[4844]: I0126 13:20:37.539490 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4j6vf\" (UniqueName: \"kubernetes.io/projected/2ac79d59-b04a-45d5-baa7-8370e8c54045-kube-api-access-4j6vf\") pod \"redhat-operators-dhzj8\" (UID: \"2ac79d59-b04a-45d5-baa7-8370e8c54045\") " pod="openshift-marketplace/redhat-operators-dhzj8" Jan 26 13:20:37 crc kubenswrapper[4844]: I0126 13:20:37.543231 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ac79d59-b04a-45d5-baa7-8370e8c54045-utilities\") pod \"redhat-operators-dhzj8\" (UID: \"2ac79d59-b04a-45d5-baa7-8370e8c54045\") " pod="openshift-marketplace/redhat-operators-dhzj8" Jan 26 13:20:37 crc kubenswrapper[4844]: I0126 13:20:37.599161 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-bb4bbcbbd-hnxlf"] Jan 26 13:20:37 crc kubenswrapper[4844]: I0126 13:20:37.599423 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-bb4bbcbbd-hnxlf" podUID="013c2624-05ec-49ef-85e2-5f5e155ee687" containerName="neutron-api" containerID="cri-o://1fe654e73b7eef70afb3917f09a7f33adb2e4eb9d2acce93cdafb3fd839abab1" gracePeriod=30 Jan 26 13:20:37 crc kubenswrapper[4844]: I0126 13:20:37.599880 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-bb4bbcbbd-hnxlf" podUID="013c2624-05ec-49ef-85e2-5f5e155ee687" containerName="neutron-httpd" containerID="cri-o://a52d32b289a4c0e01a4126b1de9ef0e5e6fb384989b7d29538c65dbf1bb6f966" gracePeriod=30 Jan 26 13:20:37 crc kubenswrapper[4844]: I0126 13:20:37.620758 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 26 13:20:37 crc kubenswrapper[4844]: I0126 13:20:37.625489 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 26 13:20:37 crc kubenswrapper[4844]: I0126 13:20:37.634780 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 26 
13:20:37 crc kubenswrapper[4844]: I0126 13:20:37.636408 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 26 13:20:37 crc kubenswrapper[4844]: I0126 13:20:37.636807 4844 scope.go:117] "RemoveContainer" containerID="3f9a2b8bf982ab015fb60f7c7f785bf62b1cca0b666990a9b68377f548735595" Jan 26 13:20:37 crc kubenswrapper[4844]: I0126 13:20:37.639161 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 26 13:20:37 crc kubenswrapper[4844]: I0126 13:20:37.640205 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 26 13:20:37 crc kubenswrapper[4844]: I0126 13:20:37.640653 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 26 13:20:37 crc kubenswrapper[4844]: I0126 13:20:37.641904 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 26 13:20:37 crc kubenswrapper[4844]: I0126 13:20:37.645455 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ac79d59-b04a-45d5-baa7-8370e8c54045-catalog-content\") pod \"redhat-operators-dhzj8\" (UID: \"2ac79d59-b04a-45d5-baa7-8370e8c54045\") " pod="openshift-marketplace/redhat-operators-dhzj8" Jan 26 13:20:37 crc kubenswrapper[4844]: I0126 13:20:37.645633 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4j6vf\" (UniqueName: \"kubernetes.io/projected/2ac79d59-b04a-45d5-baa7-8370e8c54045-kube-api-access-4j6vf\") pod \"redhat-operators-dhzj8\" (UID: \"2ac79d59-b04a-45d5-baa7-8370e8c54045\") " pod="openshift-marketplace/redhat-operators-dhzj8" Jan 26 13:20:37 crc kubenswrapper[4844]: I0126 13:20:37.645663 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ac79d59-b04a-45d5-baa7-8370e8c54045-utilities\") pod \"redhat-operators-dhzj8\" (UID: \"2ac79d59-b04a-45d5-baa7-8370e8c54045\") " pod="openshift-marketplace/redhat-operators-dhzj8" Jan 26 13:20:37 crc kubenswrapper[4844]: I0126 13:20:37.646717 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ac79d59-b04a-45d5-baa7-8370e8c54045-utilities\") pod \"redhat-operators-dhzj8\" (UID: \"2ac79d59-b04a-45d5-baa7-8370e8c54045\") " pod="openshift-marketplace/redhat-operators-dhzj8" Jan 26 13:20:37 crc kubenswrapper[4844]: I0126 13:20:37.647104 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ac79d59-b04a-45d5-baa7-8370e8c54045-catalog-content\") pod \"redhat-operators-dhzj8\" (UID: \"2ac79d59-b04a-45d5-baa7-8370e8c54045\") " pod="openshift-marketplace/redhat-operators-dhzj8" Jan 26 13:20:37 crc kubenswrapper[4844]: I0126 13:20:37.651395 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 13:20:37 crc kubenswrapper[4844]: I0126 13:20:37.661142 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-283b-account-create-update-9wvm2"] Jan 26 13:20:37 crc kubenswrapper[4844]: I0126 13:20:37.669457 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-b7qvz"] Jan 26 13:20:37 crc kubenswrapper[4844]: W0126 13:20:37.672653 4844 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod45ab14b0_33a9_4364_a552_16b57b9826c5.slice/crio-88ce8259bb58a83a8ad667729b9229f110a184e4e756584eb67dd53fa867c6f9 WatchSource:0}: Error finding container 88ce8259bb58a83a8ad667729b9229f110a184e4e756584eb67dd53fa867c6f9: Status 404 returned error can't find the container with id 88ce8259bb58a83a8ad667729b9229f110a184e4e756584eb67dd53fa867c6f9 Jan 26 13:20:37 crc kubenswrapper[4844]: I0126 13:20:37.677026 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-xg8dt"] Jan 26 13:20:37 crc kubenswrapper[4844]: I0126 13:20:37.678235 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4j6vf\" (UniqueName: \"kubernetes.io/projected/2ac79d59-b04a-45d5-baa7-8370e8c54045-kube-api-access-4j6vf\") pod \"redhat-operators-dhzj8\" (UID: \"2ac79d59-b04a-45d5-baa7-8370e8c54045\") " pod="openshift-marketplace/redhat-operators-dhzj8" Jan 26 13:20:37 crc kubenswrapper[4844]: I0126 13:20:37.747924 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a34d9864-c377-4ca1-a4fe-512bf9292130-etc-machine-id\") pod \"cinder-api-0\" (UID: \"a34d9864-c377-4ca1-a4fe-512bf9292130\") " pod="openstack/cinder-api-0" Jan 26 13:20:37 crc kubenswrapper[4844]: I0126 13:20:37.747971 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqx95\" (UniqueName: \"kubernetes.io/projected/a34d9864-c377-4ca1-a4fe-512bf9292130-kube-api-access-rqx95\") pod \"cinder-api-0\" (UID: \"a34d9864-c377-4ca1-a4fe-512bf9292130\") " pod="openstack/cinder-api-0" Jan 26 13:20:37 crc kubenswrapper[4844]: I0126 13:20:37.747996 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a34d9864-c377-4ca1-a4fe-512bf9292130-scripts\") pod \"cinder-api-0\" (UID: \"a34d9864-c377-4ca1-a4fe-512bf9292130\") " pod="openstack/cinder-api-0" Jan 26 13:20:37 crc kubenswrapper[4844]: I0126 13:20:37.748017 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a34d9864-c377-4ca1-a4fe-512bf9292130-public-tls-certs\") pod \"cinder-api-0\" (UID: \"a34d9864-c377-4ca1-a4fe-512bf9292130\") " pod="openstack/cinder-api-0" Jan 26 13:20:37 crc kubenswrapper[4844]: I0126 13:20:37.748108 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a34d9864-c377-4ca1-a4fe-512bf9292130-config-data\") pod \"cinder-api-0\" (UID: \"a34d9864-c377-4ca1-a4fe-512bf9292130\") " pod="openstack/cinder-api-0" Jan 26 13:20:37 crc kubenswrapper[4844]: I0126 13:20:37.748132 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a34d9864-c377-4ca1-a4fe-512bf9292130-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"a34d9864-c377-4ca1-a4fe-512bf9292130\") " pod="openstack/cinder-api-0" Jan 26 13:20:37 crc kubenswrapper[4844]: I0126 13:20:37.748491 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a34d9864-c377-4ca1-a4fe-512bf9292130-logs\") pod \"cinder-api-0\" (UID: \"a34d9864-c377-4ca1-a4fe-512bf9292130\") " 
pod="openstack/cinder-api-0" Jan 26 13:20:37 crc kubenswrapper[4844]: I0126 13:20:37.748715 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a34d9864-c377-4ca1-a4fe-512bf9292130-config-data-custom\") pod \"cinder-api-0\" (UID: \"a34d9864-c377-4ca1-a4fe-512bf9292130\") " pod="openstack/cinder-api-0" Jan 26 13:20:37 crc kubenswrapper[4844]: I0126 13:20:37.748757 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a34d9864-c377-4ca1-a4fe-512bf9292130-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"a34d9864-c377-4ca1-a4fe-512bf9292130\") " pod="openstack/cinder-api-0" Jan 26 13:20:37 crc kubenswrapper[4844]: I0126 13:20:37.853742 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a34d9864-c377-4ca1-a4fe-512bf9292130-etc-machine-id\") pod \"cinder-api-0\" (UID: \"a34d9864-c377-4ca1-a4fe-512bf9292130\") " pod="openstack/cinder-api-0" Jan 26 13:20:37 crc kubenswrapper[4844]: I0126 13:20:37.853784 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqx95\" (UniqueName: \"kubernetes.io/projected/a34d9864-c377-4ca1-a4fe-512bf9292130-kube-api-access-rqx95\") pod \"cinder-api-0\" (UID: \"a34d9864-c377-4ca1-a4fe-512bf9292130\") " pod="openstack/cinder-api-0" Jan 26 13:20:37 crc kubenswrapper[4844]: I0126 13:20:37.853806 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a34d9864-c377-4ca1-a4fe-512bf9292130-scripts\") pod \"cinder-api-0\" (UID: \"a34d9864-c377-4ca1-a4fe-512bf9292130\") " pod="openstack/cinder-api-0" Jan 26 13:20:37 crc kubenswrapper[4844]: I0126 13:20:37.853822 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a34d9864-c377-4ca1-a4fe-512bf9292130-public-tls-certs\") pod \"cinder-api-0\" (UID: \"a34d9864-c377-4ca1-a4fe-512bf9292130\") " pod="openstack/cinder-api-0" Jan 26 13:20:37 crc kubenswrapper[4844]: I0126 13:20:37.853880 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a34d9864-c377-4ca1-a4fe-512bf9292130-config-data\") pod \"cinder-api-0\" (UID: \"a34d9864-c377-4ca1-a4fe-512bf9292130\") " pod="openstack/cinder-api-0" Jan 26 13:20:37 crc kubenswrapper[4844]: I0126 13:20:37.853896 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a34d9864-c377-4ca1-a4fe-512bf9292130-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"a34d9864-c377-4ca1-a4fe-512bf9292130\") " pod="openstack/cinder-api-0" Jan 26 13:20:37 crc kubenswrapper[4844]: I0126 13:20:37.853911 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a34d9864-c377-4ca1-a4fe-512bf9292130-logs\") pod \"cinder-api-0\" (UID: \"a34d9864-c377-4ca1-a4fe-512bf9292130\") " pod="openstack/cinder-api-0" Jan 26 13:20:37 crc kubenswrapper[4844]: I0126 13:20:37.853988 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a34d9864-c377-4ca1-a4fe-512bf9292130-config-data-custom\") pod \"cinder-api-0\" (UID: 
\"a34d9864-c377-4ca1-a4fe-512bf9292130\") " pod="openstack/cinder-api-0" Jan 26 13:20:37 crc kubenswrapper[4844]: I0126 13:20:37.854014 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a34d9864-c377-4ca1-a4fe-512bf9292130-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"a34d9864-c377-4ca1-a4fe-512bf9292130\") " pod="openstack/cinder-api-0" Jan 26 13:20:37 crc kubenswrapper[4844]: I0126 13:20:37.861257 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a34d9864-c377-4ca1-a4fe-512bf9292130-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"a34d9864-c377-4ca1-a4fe-512bf9292130\") " pod="openstack/cinder-api-0" Jan 26 13:20:37 crc kubenswrapper[4844]: I0126 13:20:37.861510 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a34d9864-c377-4ca1-a4fe-512bf9292130-logs\") pod \"cinder-api-0\" (UID: \"a34d9864-c377-4ca1-a4fe-512bf9292130\") " pod="openstack/cinder-api-0" Jan 26 13:20:37 crc kubenswrapper[4844]: I0126 13:20:37.862974 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a34d9864-c377-4ca1-a4fe-512bf9292130-etc-machine-id\") pod \"cinder-api-0\" (UID: \"a34d9864-c377-4ca1-a4fe-512bf9292130\") " pod="openstack/cinder-api-0" Jan 26 13:20:37 crc kubenswrapper[4844]: I0126 13:20:37.868168 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a34d9864-c377-4ca1-a4fe-512bf9292130-config-data\") pod \"cinder-api-0\" (UID: \"a34d9864-c377-4ca1-a4fe-512bf9292130\") " pod="openstack/cinder-api-0" Jan 26 13:20:37 crc kubenswrapper[4844]: I0126 13:20:37.872456 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a34d9864-c377-4ca1-a4fe-512bf9292130-config-data-custom\") pod \"cinder-api-0\" (UID: \"a34d9864-c377-4ca1-a4fe-512bf9292130\") " pod="openstack/cinder-api-0" Jan 26 13:20:37 crc kubenswrapper[4844]: I0126 13:20:37.872941 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dhzj8" Jan 26 13:20:37 crc kubenswrapper[4844]: I0126 13:20:37.873811 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a34d9864-c377-4ca1-a4fe-512bf9292130-public-tls-certs\") pod \"cinder-api-0\" (UID: \"a34d9864-c377-4ca1-a4fe-512bf9292130\") " pod="openstack/cinder-api-0" Jan 26 13:20:37 crc kubenswrapper[4844]: I0126 13:20:37.874142 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a34d9864-c377-4ca1-a4fe-512bf9292130-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"a34d9864-c377-4ca1-a4fe-512bf9292130\") " pod="openstack/cinder-api-0" Jan 26 13:20:37 crc kubenswrapper[4844]: I0126 13:20:37.876206 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a34d9864-c377-4ca1-a4fe-512bf9292130-scripts\") pod \"cinder-api-0\" (UID: \"a34d9864-c377-4ca1-a4fe-512bf9292130\") " pod="openstack/cinder-api-0" Jan 26 13:20:37 crc kubenswrapper[4844]: I0126 13:20:37.879948 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-d54f-account-create-update-vkjxw"] Jan 26 13:20:37 crc kubenswrapper[4844]: I0126 13:20:37.885269 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqx95\" (UniqueName: \"kubernetes.io/projected/a34d9864-c377-4ca1-a4fe-512bf9292130-kube-api-access-rqx95\") pod \"cinder-api-0\" (UID: \"a34d9864-c377-4ca1-a4fe-512bf9292130\") " pod="openstack/cinder-api-0" Jan 26 13:20:37 crc kubenswrapper[4844]: E0126 13:20:37.982539 4844 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podef403703_395e_4db1_a9f5_a8e011e39ff2.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod013c2624_05ec_49ef_85e2_5f5e155ee687.slice/crio-a52d32b289a4c0e01a4126b1de9ef0e5e6fb384989b7d29538c65dbf1bb6f966.scope\": RecentStats: unable to find data in memory cache]" Jan 26 13:20:38 crc kubenswrapper[4844]: I0126 13:20:38.033058 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 26 13:20:38 crc kubenswrapper[4844]: I0126 13:20:38.445156 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dhzj8"] Jan 26 13:20:38 crc kubenswrapper[4844]: I0126 13:20:38.455760 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-b7qvz" event={"ID":"d3c75b85-b9e8-4d45-93de-018fa9e10eb8","Type":"ContainerStarted","Data":"98d82500f5f5d35cb1e33d72ae4c377d3c893c4d283294b99b52ff4293ba1253"} Jan 26 13:20:38 crc kubenswrapper[4844]: I0126 13:20:38.469339 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"65fceb02-1fd4-4b60-a767-f2d232539d43","Type":"ContainerStarted","Data":"930537477be7ed0db67fa08040586b418fc016931a1ca7cb0101169d2fd11432"} Jan 26 13:20:38 crc kubenswrapper[4844]: I0126 13:20:38.472817 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0030-account-create-update-7kfzr" event={"ID":"f2f773df-1a60-4d98-aaf9-25edd517e2e7","Type":"ContainerStarted","Data":"e0262519b155b73755b64de131f5e0324b481c529587ff763040d7d536c1b239"} Jan 26 13:20:38 crc kubenswrapper[4844]: I0126 13:20:38.472851 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0030-account-create-update-7kfzr" event={"ID":"f2f773df-1a60-4d98-aaf9-25edd517e2e7","Type":"ContainerStarted","Data":"87046097fc14b75aa70fab1b430c62b9d4855acda0d46d65aa82f7b096efd057"} Jan 26 13:20:38 crc kubenswrapper[4844]: I0126 13:20:38.476121 4844 generic.go:334] "Generic (PLEG): container finished" podID="128a7603-8c83-4c8f-8484-031abaa6bc9a" containerID="980d12d51bd9c2c7f0ccf62a8c48bfc35a9dd560ca475a82fbf79ddc4c794690" exitCode=0 Jan 26 13:20:38 crc kubenswrapper[4844]: I0126 13:20:38.476189 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-tmtvd" event={"ID":"128a7603-8c83-4c8f-8484-031abaa6bc9a","Type":"ContainerDied","Data":"980d12d51bd9c2c7f0ccf62a8c48bfc35a9dd560ca475a82fbf79ddc4c794690"} Jan 26 13:20:38 crc kubenswrapper[4844]: I0126 13:20:38.476230 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-tmtvd" event={"ID":"128a7603-8c83-4c8f-8484-031abaa6bc9a","Type":"ContainerStarted","Data":"1b928a0116e4079ddab5b5ee1655a779b67a6d9ecf279977d0eb43a1f8fd68a6"} Jan 26 13:20:38 crc kubenswrapper[4844]: I0126 13:20:38.478908 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"80aa004c-98f5-4265-8321-daf6d8132c24","Type":"ContainerStarted","Data":"669258674e4afb9b0e149304e281930cd20ccf2447568c5e32158c5aa25c7284"} Jan 26 13:20:38 crc kubenswrapper[4844]: I0126 13:20:38.493319 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0030-account-create-update-7kfzr" podStartSLOduration=2.493293422 podStartE2EDuration="2.493293422s" podCreationTimestamp="2026-01-26 13:20:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 13:20:38.490818622 +0000 UTC m=+2215.424186234" watchObservedRunningTime="2026-01-26 13:20:38.493293422 +0000 UTC m=+2215.426661034" Jan 26 13:20:38 crc kubenswrapper[4844]: I0126 13:20:38.494811 4844 generic.go:334] "Generic (PLEG): container finished" podID="013c2624-05ec-49ef-85e2-5f5e155ee687" containerID="a52d32b289a4c0e01a4126b1de9ef0e5e6fb384989b7d29538c65dbf1bb6f966" exitCode=0 Jan 26 13:20:38 crc 
kubenswrapper[4844]: I0126 13:20:38.494902 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-bb4bbcbbd-hnxlf" event={"ID":"013c2624-05ec-49ef-85e2-5f5e155ee687","Type":"ContainerDied","Data":"a52d32b289a4c0e01a4126b1de9ef0e5e6fb384989b7d29538c65dbf1bb6f966"} Jan 26 13:20:38 crc kubenswrapper[4844]: I0126 13:20:38.537145 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-d54f-account-create-update-vkjxw" event={"ID":"350afd25-a535-4c5c-9b45-85b457255769","Type":"ContainerStarted","Data":"6c7b03e86844b459b44e8486544be870ded31af9c0b80856aeb6e609961f8293"} Jan 26 13:20:38 crc kubenswrapper[4844]: I0126 13:20:38.537396 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-d54f-account-create-update-vkjxw" event={"ID":"350afd25-a535-4c5c-9b45-85b457255769","Type":"ContainerStarted","Data":"077e33057ca9b09a306f94b56ef5280fac4d7fb4f21b5518884b9a1b0495bc60"} Jan 26 13:20:38 crc kubenswrapper[4844]: I0126 13:20:38.567125 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-xg8dt" event={"ID":"e1e6f9c3-de48-4504-9b94-bbabcc87fc45","Type":"ContainerStarted","Data":"53d31a38f20640160be24f81f12b860c4fb49014a90855c2be74dd8c724ca30f"} Jan 26 13:20:38 crc kubenswrapper[4844]: I0126 13:20:38.567186 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-xg8dt" event={"ID":"e1e6f9c3-de48-4504-9b94-bbabcc87fc45","Type":"ContainerStarted","Data":"6d74c474c9386c148e81d28a55e5018c6132f4258731200fc4d2afffe83adcc8"} Jan 26 13:20:38 crc kubenswrapper[4844]: I0126 13:20:38.614750 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-283b-account-create-update-9wvm2" event={"ID":"45ab14b0-33a9-4364-a552-16b57b9826c5","Type":"ContainerStarted","Data":"f3799052a6007ff7000f2f5af51fbb50a7629f3e69822b58170bbe78e47f1778"} Jan 26 13:20:38 crc kubenswrapper[4844]: I0126 13:20:38.614803 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-283b-account-create-update-9wvm2" event={"ID":"45ab14b0-33a9-4364-a552-16b57b9826c5","Type":"ContainerStarted","Data":"88ce8259bb58a83a8ad667729b9229f110a184e4e756584eb67dd53fa867c6f9"} Jan 26 13:20:38 crc kubenswrapper[4844]: I0126 13:20:38.615628 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-xg8dt" podStartSLOduration=2.615610749 podStartE2EDuration="2.615610749s" podCreationTimestamp="2026-01-26 13:20:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 13:20:38.606115919 +0000 UTC m=+2215.539483531" watchObservedRunningTime="2026-01-26 13:20:38.615610749 +0000 UTC m=+2215.548978361" Jan 26 13:20:38 crc kubenswrapper[4844]: I0126 13:20:38.620115 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-d54f-account-create-update-vkjxw" podStartSLOduration=2.620096917 podStartE2EDuration="2.620096917s" podCreationTimestamp="2026-01-26 13:20:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 13:20:38.566710777 +0000 UTC m=+2215.500078389" watchObservedRunningTime="2026-01-26 13:20:38.620096917 +0000 UTC m=+2215.553464529" Jan 26 13:20:38 crc kubenswrapper[4844]: I0126 13:20:38.667916 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 26 13:20:38 crc 
kubenswrapper[4844]: I0126 13:20:38.685448 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-283b-account-create-update-9wvm2" podStartSLOduration=2.685431387 podStartE2EDuration="2.685431387s" podCreationTimestamp="2026-01-26 13:20:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 13:20:38.626246317 +0000 UTC m=+2215.559613929" watchObservedRunningTime="2026-01-26 13:20:38.685431387 +0000 UTC m=+2215.618798999" Jan 26 13:20:39 crc kubenswrapper[4844]: I0126 13:20:39.325898 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2ba6a95-767f-4589-8dc9-e124e9be4fb4" path="/var/lib/kubelet/pods/d2ba6a95-767f-4589-8dc9-e124e9be4fb4/volumes" Jan 26 13:20:39 crc kubenswrapper[4844]: I0126 13:20:39.622666 4844 generic.go:334] "Generic (PLEG): container finished" podID="f2f773df-1a60-4d98-aaf9-25edd517e2e7" containerID="e0262519b155b73755b64de131f5e0324b481c529587ff763040d7d536c1b239" exitCode=0 Jan 26 13:20:39 crc kubenswrapper[4844]: I0126 13:20:39.623018 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0030-account-create-update-7kfzr" event={"ID":"f2f773df-1a60-4d98-aaf9-25edd517e2e7","Type":"ContainerDied","Data":"e0262519b155b73755b64de131f5e0324b481c529587ff763040d7d536c1b239"} Jan 26 13:20:39 crc kubenswrapper[4844]: I0126 13:20:39.631675 4844 generic.go:334] "Generic (PLEG): container finished" podID="e1e6f9c3-de48-4504-9b94-bbabcc87fc45" containerID="53d31a38f20640160be24f81f12b860c4fb49014a90855c2be74dd8c724ca30f" exitCode=0 Jan 26 13:20:39 crc kubenswrapper[4844]: I0126 13:20:39.631756 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-xg8dt" event={"ID":"e1e6f9c3-de48-4504-9b94-bbabcc87fc45","Type":"ContainerDied","Data":"53d31a38f20640160be24f81f12b860c4fb49014a90855c2be74dd8c724ca30f"} Jan 26 13:20:39 crc kubenswrapper[4844]: I0126 13:20:39.641741 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 13:20:39 crc kubenswrapper[4844]: I0126 13:20:39.650921 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a34d9864-c377-4ca1-a4fe-512bf9292130","Type":"ContainerStarted","Data":"3c2b2acf3e4efb8a0cf54f0d6dad5d76762e0069b10561455b61634b95bba029"} Jan 26 13:20:39 crc kubenswrapper[4844]: I0126 13:20:39.650967 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a34d9864-c377-4ca1-a4fe-512bf9292130","Type":"ContainerStarted","Data":"5ab58b112777aa996199652b096922b8f6cd3abfc63d35240fbf76986d2b05b6"} Jan 26 13:20:39 crc kubenswrapper[4844]: I0126 13:20:39.670459 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"80aa004c-98f5-4265-8321-daf6d8132c24","Type":"ContainerStarted","Data":"f4ed873d07844e5d8877f033b1347e4e2cd4b447cf390ba46d048b6bd2c7028f"} Jan 26 13:20:39 crc kubenswrapper[4844]: I0126 13:20:39.670514 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"80aa004c-98f5-4265-8321-daf6d8132c24","Type":"ContainerStarted","Data":"08185805e86068bdcb89060f5bf0ed51e131aa2a717b2d82d6b647ab1a7895fd"} Jan 26 13:20:39 crc kubenswrapper[4844]: I0126 13:20:39.674502 4844 generic.go:334] "Generic (PLEG): container finished" podID="45ab14b0-33a9-4364-a552-16b57b9826c5" containerID="f3799052a6007ff7000f2f5af51fbb50a7629f3e69822b58170bbe78e47f1778" 
exitCode=0 Jan 26 13:20:39 crc kubenswrapper[4844]: I0126 13:20:39.674672 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-283b-account-create-update-9wvm2" event={"ID":"45ab14b0-33a9-4364-a552-16b57b9826c5","Type":"ContainerDied","Data":"f3799052a6007ff7000f2f5af51fbb50a7629f3e69822b58170bbe78e47f1778"} Jan 26 13:20:39 crc kubenswrapper[4844]: I0126 13:20:39.679249 4844 generic.go:334] "Generic (PLEG): container finished" podID="d3c75b85-b9e8-4d45-93de-018fa9e10eb8" containerID="6986d618c1b78ec057f4069c455ebf61fee56a5b5cea6f809543eb33afd56ea3" exitCode=0 Jan 26 13:20:39 crc kubenswrapper[4844]: I0126 13:20:39.679311 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-b7qvz" event={"ID":"d3c75b85-b9e8-4d45-93de-018fa9e10eb8","Type":"ContainerDied","Data":"6986d618c1b78ec057f4069c455ebf61fee56a5b5cea6f809543eb33afd56ea3"} Jan 26 13:20:39 crc kubenswrapper[4844]: I0126 13:20:39.687209 4844 generic.go:334] "Generic (PLEG): container finished" podID="2ac79d59-b04a-45d5-baa7-8370e8c54045" containerID="52be658d096cb5853ae063f965d2cc3a619b06a04614edb455486aa0f0bceced" exitCode=0 Jan 26 13:20:39 crc kubenswrapper[4844]: I0126 13:20:39.687513 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dhzj8" event={"ID":"2ac79d59-b04a-45d5-baa7-8370e8c54045","Type":"ContainerDied","Data":"52be658d096cb5853ae063f965d2cc3a619b06a04614edb455486aa0f0bceced"} Jan 26 13:20:39 crc kubenswrapper[4844]: I0126 13:20:39.687572 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dhzj8" event={"ID":"2ac79d59-b04a-45d5-baa7-8370e8c54045","Type":"ContainerStarted","Data":"9a0ec2170a2f35d753d91e9880770717e4fbe268ccbb569c7649ec5ab8f3fd20"} Jan 26 13:20:39 crc kubenswrapper[4844]: I0126 13:20:39.693915 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-d54f-account-create-update-vkjxw" event={"ID":"350afd25-a535-4c5c-9b45-85b457255769","Type":"ContainerDied","Data":"6c7b03e86844b459b44e8486544be870ded31af9c0b80856aeb6e609961f8293"} Jan 26 13:20:39 crc kubenswrapper[4844]: I0126 13:20:39.694447 4844 generic.go:334] "Generic (PLEG): container finished" podID="350afd25-a535-4c5c-9b45-85b457255769" containerID="6c7b03e86844b459b44e8486544be870ded31af9c0b80856aeb6e609961f8293" exitCode=0 Jan 26 13:20:40 crc kubenswrapper[4844]: I0126 13:20:40.187985 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-tmtvd" Jan 26 13:20:40 crc kubenswrapper[4844]: I0126 13:20:40.218732 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cnd29\" (UniqueName: \"kubernetes.io/projected/128a7603-8c83-4c8f-8484-031abaa6bc9a-kube-api-access-cnd29\") pod \"128a7603-8c83-4c8f-8484-031abaa6bc9a\" (UID: \"128a7603-8c83-4c8f-8484-031abaa6bc9a\") " Jan 26 13:20:40 crc kubenswrapper[4844]: I0126 13:20:40.218778 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/128a7603-8c83-4c8f-8484-031abaa6bc9a-operator-scripts\") pod \"128a7603-8c83-4c8f-8484-031abaa6bc9a\" (UID: \"128a7603-8c83-4c8f-8484-031abaa6bc9a\") " Jan 26 13:20:40 crc kubenswrapper[4844]: I0126 13:20:40.219321 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/128a7603-8c83-4c8f-8484-031abaa6bc9a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "128a7603-8c83-4c8f-8484-031abaa6bc9a" (UID: "128a7603-8c83-4c8f-8484-031abaa6bc9a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:20:40 crc kubenswrapper[4844]: I0126 13:20:40.223295 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/128a7603-8c83-4c8f-8484-031abaa6bc9a-kube-api-access-cnd29" (OuterVolumeSpecName: "kube-api-access-cnd29") pod "128a7603-8c83-4c8f-8484-031abaa6bc9a" (UID: "128a7603-8c83-4c8f-8484-031abaa6bc9a"). InnerVolumeSpecName "kube-api-access-cnd29". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:20:40 crc kubenswrapper[4844]: I0126 13:20:40.321196 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cnd29\" (UniqueName: \"kubernetes.io/projected/128a7603-8c83-4c8f-8484-031abaa6bc9a-kube-api-access-cnd29\") on node \"crc\" DevicePath \"\"" Jan 26 13:20:40 crc kubenswrapper[4844]: I0126 13:20:40.321504 4844 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/128a7603-8c83-4c8f-8484-031abaa6bc9a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 13:20:40 crc kubenswrapper[4844]: I0126 13:20:40.713741 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-tmtvd" event={"ID":"128a7603-8c83-4c8f-8484-031abaa6bc9a","Type":"ContainerDied","Data":"1b928a0116e4079ddab5b5ee1655a779b67a6d9ecf279977d0eb43a1f8fd68a6"} Jan 26 13:20:40 crc kubenswrapper[4844]: I0126 13:20:40.713782 4844 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1b928a0116e4079ddab5b5ee1655a779b67a6d9ecf279977d0eb43a1f8fd68a6" Jan 26 13:20:40 crc kubenswrapper[4844]: I0126 13:20:40.714837 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-tmtvd" Jan 26 13:20:40 crc kubenswrapper[4844]: I0126 13:20:40.716411 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a34d9864-c377-4ca1-a4fe-512bf9292130","Type":"ContainerStarted","Data":"fb126b873ce298762512ec0f3bce9bb36c671738844abf5a87d6f15660144655"} Jan 26 13:20:40 crc kubenswrapper[4844]: I0126 13:20:40.716559 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 26 13:20:40 crc kubenswrapper[4844]: I0126 13:20:40.721681 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"80aa004c-98f5-4265-8321-daf6d8132c24","Type":"ContainerStarted","Data":"d56047903967d5cce23e20c92cae8ddad5f39ac4f2cd51ecde31da6e601d1ff6"} Jan 26 13:20:40 crc kubenswrapper[4844]: I0126 13:20:40.757381 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.757366318 podStartE2EDuration="3.757366318s" podCreationTimestamp="2026-01-26 13:20:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 13:20:40.754331775 +0000 UTC m=+2217.687699377" watchObservedRunningTime="2026-01-26 13:20:40.757366318 +0000 UTC m=+2217.690733930" Jan 26 13:20:41 crc kubenswrapper[4844]: I0126 13:20:41.143039 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0030-account-create-update-7kfzr" Jan 26 13:20:41 crc kubenswrapper[4844]: I0126 13:20:41.238152 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-th5q8\" (UniqueName: \"kubernetes.io/projected/f2f773df-1a60-4d98-aaf9-25edd517e2e7-kube-api-access-th5q8\") pod \"f2f773df-1a60-4d98-aaf9-25edd517e2e7\" (UID: \"f2f773df-1a60-4d98-aaf9-25edd517e2e7\") " Jan 26 13:20:41 crc kubenswrapper[4844]: I0126 13:20:41.238214 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f2f773df-1a60-4d98-aaf9-25edd517e2e7-operator-scripts\") pod \"f2f773df-1a60-4d98-aaf9-25edd517e2e7\" (UID: \"f2f773df-1a60-4d98-aaf9-25edd517e2e7\") " Jan 26 13:20:41 crc kubenswrapper[4844]: I0126 13:20:41.238819 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2f773df-1a60-4d98-aaf9-25edd517e2e7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f2f773df-1a60-4d98-aaf9-25edd517e2e7" (UID: "f2f773df-1a60-4d98-aaf9-25edd517e2e7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:20:41 crc kubenswrapper[4844]: I0126 13:20:41.239019 4844 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f2f773df-1a60-4d98-aaf9-25edd517e2e7-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 13:20:41 crc kubenswrapper[4844]: I0126 13:20:41.243215 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2f773df-1a60-4d98-aaf9-25edd517e2e7-kube-api-access-th5q8" (OuterVolumeSpecName: "kube-api-access-th5q8") pod "f2f773df-1a60-4d98-aaf9-25edd517e2e7" (UID: "f2f773df-1a60-4d98-aaf9-25edd517e2e7"). InnerVolumeSpecName "kube-api-access-th5q8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:20:41 crc kubenswrapper[4844]: I0126 13:20:41.347990 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-th5q8\" (UniqueName: \"kubernetes.io/projected/f2f773df-1a60-4d98-aaf9-25edd517e2e7-kube-api-access-th5q8\") on node \"crc\" DevicePath \"\"" Jan 26 13:20:41 crc kubenswrapper[4844]: I0126 13:20:41.400514 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-xg8dt" Jan 26 13:20:41 crc kubenswrapper[4844]: I0126 13:20:41.413323 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-283b-account-create-update-9wvm2" Jan 26 13:20:41 crc kubenswrapper[4844]: I0126 13:20:41.434629 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-b7qvz" Jan 26 13:20:41 crc kubenswrapper[4844]: I0126 13:20:41.447284 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-d54f-account-create-update-vkjxw" Jan 26 13:20:41 crc kubenswrapper[4844]: I0126 13:20:41.453270 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bkg2s\" (UniqueName: \"kubernetes.io/projected/e1e6f9c3-de48-4504-9b94-bbabcc87fc45-kube-api-access-bkg2s\") pod \"e1e6f9c3-de48-4504-9b94-bbabcc87fc45\" (UID: \"e1e6f9c3-de48-4504-9b94-bbabcc87fc45\") " Jan 26 13:20:41 crc kubenswrapper[4844]: I0126 13:20:41.453322 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mn9c4\" (UniqueName: \"kubernetes.io/projected/45ab14b0-33a9-4364-a552-16b57b9826c5-kube-api-access-mn9c4\") pod \"45ab14b0-33a9-4364-a552-16b57b9826c5\" (UID: \"45ab14b0-33a9-4364-a552-16b57b9826c5\") " Jan 26 13:20:41 crc kubenswrapper[4844]: I0126 13:20:41.453392 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/45ab14b0-33a9-4364-a552-16b57b9826c5-operator-scripts\") pod \"45ab14b0-33a9-4364-a552-16b57b9826c5\" (UID: \"45ab14b0-33a9-4364-a552-16b57b9826c5\") " Jan 26 13:20:41 crc kubenswrapper[4844]: I0126 13:20:41.454341 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1e6f9c3-de48-4504-9b94-bbabcc87fc45-operator-scripts\") pod \"e1e6f9c3-de48-4504-9b94-bbabcc87fc45\" (UID: \"e1e6f9c3-de48-4504-9b94-bbabcc87fc45\") " Jan 26 13:20:41 crc kubenswrapper[4844]: I0126 13:20:41.455642 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1e6f9c3-de48-4504-9b94-bbabcc87fc45-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e1e6f9c3-de48-4504-9b94-bbabcc87fc45" (UID: "e1e6f9c3-de48-4504-9b94-bbabcc87fc45"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:20:41 crc kubenswrapper[4844]: I0126 13:20:41.455936 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45ab14b0-33a9-4364-a552-16b57b9826c5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "45ab14b0-33a9-4364-a552-16b57b9826c5" (UID: "45ab14b0-33a9-4364-a552-16b57b9826c5"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:20:41 crc kubenswrapper[4844]: I0126 13:20:41.457562 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45ab14b0-33a9-4364-a552-16b57b9826c5-kube-api-access-mn9c4" (OuterVolumeSpecName: "kube-api-access-mn9c4") pod "45ab14b0-33a9-4364-a552-16b57b9826c5" (UID: "45ab14b0-33a9-4364-a552-16b57b9826c5"). InnerVolumeSpecName "kube-api-access-mn9c4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:20:41 crc kubenswrapper[4844]: I0126 13:20:41.458778 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1e6f9c3-de48-4504-9b94-bbabcc87fc45-kube-api-access-bkg2s" (OuterVolumeSpecName: "kube-api-access-bkg2s") pod "e1e6f9c3-de48-4504-9b94-bbabcc87fc45" (UID: "e1e6f9c3-de48-4504-9b94-bbabcc87fc45"). InnerVolumeSpecName "kube-api-access-bkg2s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:20:41 crc kubenswrapper[4844]: I0126 13:20:41.556111 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6w67z\" (UniqueName: \"kubernetes.io/projected/d3c75b85-b9e8-4d45-93de-018fa9e10eb8-kube-api-access-6w67z\") pod \"d3c75b85-b9e8-4d45-93de-018fa9e10eb8\" (UID: \"d3c75b85-b9e8-4d45-93de-018fa9e10eb8\") " Jan 26 13:20:41 crc kubenswrapper[4844]: I0126 13:20:41.556304 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j4xqm\" (UniqueName: \"kubernetes.io/projected/350afd25-a535-4c5c-9b45-85b457255769-kube-api-access-j4xqm\") pod \"350afd25-a535-4c5c-9b45-85b457255769\" (UID: \"350afd25-a535-4c5c-9b45-85b457255769\") " Jan 26 13:20:41 crc kubenswrapper[4844]: I0126 13:20:41.556382 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/350afd25-a535-4c5c-9b45-85b457255769-operator-scripts\") pod \"350afd25-a535-4c5c-9b45-85b457255769\" (UID: \"350afd25-a535-4c5c-9b45-85b457255769\") " Jan 26 13:20:41 crc kubenswrapper[4844]: I0126 13:20:41.556407 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d3c75b85-b9e8-4d45-93de-018fa9e10eb8-operator-scripts\") pod \"d3c75b85-b9e8-4d45-93de-018fa9e10eb8\" (UID: \"d3c75b85-b9e8-4d45-93de-018fa9e10eb8\") " Jan 26 13:20:41 crc kubenswrapper[4844]: I0126 13:20:41.556796 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/350afd25-a535-4c5c-9b45-85b457255769-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "350afd25-a535-4c5c-9b45-85b457255769" (UID: "350afd25-a535-4c5c-9b45-85b457255769"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:20:41 crc kubenswrapper[4844]: I0126 13:20:41.557041 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3c75b85-b9e8-4d45-93de-018fa9e10eb8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d3c75b85-b9e8-4d45-93de-018fa9e10eb8" (UID: "d3c75b85-b9e8-4d45-93de-018fa9e10eb8"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:20:41 crc kubenswrapper[4844]: I0126 13:20:41.557084 4844 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/45ab14b0-33a9-4364-a552-16b57b9826c5-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 13:20:41 crc kubenswrapper[4844]: I0126 13:20:41.557099 4844 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1e6f9c3-de48-4504-9b94-bbabcc87fc45-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 13:20:41 crc kubenswrapper[4844]: I0126 13:20:41.557108 4844 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/350afd25-a535-4c5c-9b45-85b457255769-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 13:20:41 crc kubenswrapper[4844]: I0126 13:20:41.557118 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bkg2s\" (UniqueName: \"kubernetes.io/projected/e1e6f9c3-de48-4504-9b94-bbabcc87fc45-kube-api-access-bkg2s\") on node \"crc\" DevicePath \"\"" Jan 26 13:20:41 crc kubenswrapper[4844]: I0126 13:20:41.557129 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mn9c4\" (UniqueName: \"kubernetes.io/projected/45ab14b0-33a9-4364-a552-16b57b9826c5-kube-api-access-mn9c4\") on node \"crc\" DevicePath \"\"" Jan 26 13:20:41 crc kubenswrapper[4844]: I0126 13:20:41.560037 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3c75b85-b9e8-4d45-93de-018fa9e10eb8-kube-api-access-6w67z" (OuterVolumeSpecName: "kube-api-access-6w67z") pod "d3c75b85-b9e8-4d45-93de-018fa9e10eb8" (UID: "d3c75b85-b9e8-4d45-93de-018fa9e10eb8"). InnerVolumeSpecName "kube-api-access-6w67z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:20:41 crc kubenswrapper[4844]: I0126 13:20:41.560077 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/350afd25-a535-4c5c-9b45-85b457255769-kube-api-access-j4xqm" (OuterVolumeSpecName: "kube-api-access-j4xqm") pod "350afd25-a535-4c5c-9b45-85b457255769" (UID: "350afd25-a535-4c5c-9b45-85b457255769"). InnerVolumeSpecName "kube-api-access-j4xqm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:20:41 crc kubenswrapper[4844]: I0126 13:20:41.658639 4844 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d3c75b85-b9e8-4d45-93de-018fa9e10eb8-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 13:20:41 crc kubenswrapper[4844]: I0126 13:20:41.658666 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6w67z\" (UniqueName: \"kubernetes.io/projected/d3c75b85-b9e8-4d45-93de-018fa9e10eb8-kube-api-access-6w67z\") on node \"crc\" DevicePath \"\"" Jan 26 13:20:41 crc kubenswrapper[4844]: I0126 13:20:41.658677 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j4xqm\" (UniqueName: \"kubernetes.io/projected/350afd25-a535-4c5c-9b45-85b457255769-kube-api-access-j4xqm\") on node \"crc\" DevicePath \"\"" Jan 26 13:20:41 crc kubenswrapper[4844]: I0126 13:20:41.730937 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-xg8dt" event={"ID":"e1e6f9c3-de48-4504-9b94-bbabcc87fc45","Type":"ContainerDied","Data":"6d74c474c9386c148e81d28a55e5018c6132f4258731200fc4d2afffe83adcc8"} Jan 26 13:20:41 crc kubenswrapper[4844]: I0126 13:20:41.730982 4844 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6d74c474c9386c148e81d28a55e5018c6132f4258731200fc4d2afffe83adcc8" Jan 26 13:20:41 crc kubenswrapper[4844]: I0126 13:20:41.730949 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-xg8dt" Jan 26 13:20:41 crc kubenswrapper[4844]: I0126 13:20:41.732860 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-283b-account-create-update-9wvm2" event={"ID":"45ab14b0-33a9-4364-a552-16b57b9826c5","Type":"ContainerDied","Data":"88ce8259bb58a83a8ad667729b9229f110a184e4e756584eb67dd53fa867c6f9"} Jan 26 13:20:41 crc kubenswrapper[4844]: I0126 13:20:41.732907 4844 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="88ce8259bb58a83a8ad667729b9229f110a184e4e756584eb67dd53fa867c6f9" Jan 26 13:20:41 crc kubenswrapper[4844]: I0126 13:20:41.732873 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-283b-account-create-update-9wvm2" Jan 26 13:20:41 crc kubenswrapper[4844]: I0126 13:20:41.735099 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-b7qvz" event={"ID":"d3c75b85-b9e8-4d45-93de-018fa9e10eb8","Type":"ContainerDied","Data":"98d82500f5f5d35cb1e33d72ae4c377d3c893c4d283294b99b52ff4293ba1253"} Jan 26 13:20:41 crc kubenswrapper[4844]: I0126 13:20:41.735123 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-b7qvz" Jan 26 13:20:41 crc kubenswrapper[4844]: I0126 13:20:41.735140 4844 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98d82500f5f5d35cb1e33d72ae4c377d3c893c4d283294b99b52ff4293ba1253" Jan 26 13:20:41 crc kubenswrapper[4844]: I0126 13:20:41.737896 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dhzj8" event={"ID":"2ac79d59-b04a-45d5-baa7-8370e8c54045","Type":"ContainerStarted","Data":"8abb7e1a40fd3ef7f24db644caebe67b8179d7754deefcc2450d4ccf17a98cc9"} Jan 26 13:20:41 crc kubenswrapper[4844]: I0126 13:20:41.740950 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-d54f-account-create-update-vkjxw" Jan 26 13:20:41 crc kubenswrapper[4844]: I0126 13:20:41.740965 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-d54f-account-create-update-vkjxw" event={"ID":"350afd25-a535-4c5c-9b45-85b457255769","Type":"ContainerDied","Data":"077e33057ca9b09a306f94b56ef5280fac4d7fb4f21b5518884b9a1b0495bc60"} Jan 26 13:20:41 crc kubenswrapper[4844]: I0126 13:20:41.741006 4844 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="077e33057ca9b09a306f94b56ef5280fac4d7fb4f21b5518884b9a1b0495bc60" Jan 26 13:20:41 crc kubenswrapper[4844]: I0126 13:20:41.743664 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0030-account-create-update-7kfzr" event={"ID":"f2f773df-1a60-4d98-aaf9-25edd517e2e7","Type":"ContainerDied","Data":"87046097fc14b75aa70fab1b430c62b9d4855acda0d46d65aa82f7b096efd057"} Jan 26 13:20:41 crc kubenswrapper[4844]: I0126 13:20:41.743703 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0030-account-create-update-7kfzr" Jan 26 13:20:41 crc kubenswrapper[4844]: I0126 13:20:41.743715 4844 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="87046097fc14b75aa70fab1b430c62b9d4855acda0d46d65aa82f7b096efd057" Jan 26 13:20:42 crc kubenswrapper[4844]: I0126 13:20:42.016237 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Jan 26 13:20:42 crc kubenswrapper[4844]: I0126 13:20:42.072554 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Jan 26 13:20:42 crc kubenswrapper[4844]: I0126 13:20:42.459662 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 13:20:42 crc kubenswrapper[4844]: I0126 13:20:42.460252 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="48790dbd-c7a3-48f0-a3a8-a8685a07f9d2" containerName="glance-log" containerID="cri-o://df354ad5061d63d79ae83713c4429531193c9b599281b212b18b5aa951055455" gracePeriod=30 Jan 26 13:20:42 crc kubenswrapper[4844]: I0126 13:20:42.460397 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="48790dbd-c7a3-48f0-a3a8-a8685a07f9d2" containerName="glance-httpd" containerID="cri-o://7f1ea68571e5a9daeb4dc8f7339cd361453f01ff144f8bb1af3c8316968318f8" gracePeriod=30 Jan 26 13:20:42 crc kubenswrapper[4844]: I0126 13:20:42.514075 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-bb4bbcbbd-hnxlf" Jan 26 13:20:42 crc kubenswrapper[4844]: I0126 13:20:42.583551 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fvbvf\" (UniqueName: \"kubernetes.io/projected/013c2624-05ec-49ef-85e2-5f5e155ee687-kube-api-access-fvbvf\") pod \"013c2624-05ec-49ef-85e2-5f5e155ee687\" (UID: \"013c2624-05ec-49ef-85e2-5f5e155ee687\") " Jan 26 13:20:42 crc kubenswrapper[4844]: I0126 13:20:42.583623 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/013c2624-05ec-49ef-85e2-5f5e155ee687-config\") pod \"013c2624-05ec-49ef-85e2-5f5e155ee687\" (UID: \"013c2624-05ec-49ef-85e2-5f5e155ee687\") " Jan 26 13:20:42 crc kubenswrapper[4844]: I0126 13:20:42.583685 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/013c2624-05ec-49ef-85e2-5f5e155ee687-ovndb-tls-certs\") pod \"013c2624-05ec-49ef-85e2-5f5e155ee687\" (UID: \"013c2624-05ec-49ef-85e2-5f5e155ee687\") " Jan 26 13:20:42 crc kubenswrapper[4844]: I0126 13:20:42.583839 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/013c2624-05ec-49ef-85e2-5f5e155ee687-httpd-config\") pod \"013c2624-05ec-49ef-85e2-5f5e155ee687\" (UID: \"013c2624-05ec-49ef-85e2-5f5e155ee687\") " Jan 26 13:20:42 crc kubenswrapper[4844]: I0126 13:20:42.583878 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/013c2624-05ec-49ef-85e2-5f5e155ee687-combined-ca-bundle\") pod \"013c2624-05ec-49ef-85e2-5f5e155ee687\" (UID: \"013c2624-05ec-49ef-85e2-5f5e155ee687\") " Jan 26 13:20:42 crc kubenswrapper[4844]: I0126 13:20:42.589890 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/013c2624-05ec-49ef-85e2-5f5e155ee687-kube-api-access-fvbvf" (OuterVolumeSpecName: "kube-api-access-fvbvf") pod "013c2624-05ec-49ef-85e2-5f5e155ee687" (UID: "013c2624-05ec-49ef-85e2-5f5e155ee687"). InnerVolumeSpecName "kube-api-access-fvbvf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:20:42 crc kubenswrapper[4844]: I0126 13:20:42.594651 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/013c2624-05ec-49ef-85e2-5f5e155ee687-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "013c2624-05ec-49ef-85e2-5f5e155ee687" (UID: "013c2624-05ec-49ef-85e2-5f5e155ee687"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:20:42 crc kubenswrapper[4844]: I0126 13:20:42.647064 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/013c2624-05ec-49ef-85e2-5f5e155ee687-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "013c2624-05ec-49ef-85e2-5f5e155ee687" (UID: "013c2624-05ec-49ef-85e2-5f5e155ee687"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:20:42 crc kubenswrapper[4844]: I0126 13:20:42.665771 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/013c2624-05ec-49ef-85e2-5f5e155ee687-config" (OuterVolumeSpecName: "config") pod "013c2624-05ec-49ef-85e2-5f5e155ee687" (UID: "013c2624-05ec-49ef-85e2-5f5e155ee687"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:20:42 crc kubenswrapper[4844]: I0126 13:20:42.687557 4844 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/013c2624-05ec-49ef-85e2-5f5e155ee687-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 26 13:20:42 crc kubenswrapper[4844]: I0126 13:20:42.687615 4844 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/013c2624-05ec-49ef-85e2-5f5e155ee687-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 13:20:42 crc kubenswrapper[4844]: I0126 13:20:42.687629 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fvbvf\" (UniqueName: \"kubernetes.io/projected/013c2624-05ec-49ef-85e2-5f5e155ee687-kube-api-access-fvbvf\") on node \"crc\" DevicePath \"\"" Jan 26 13:20:42 crc kubenswrapper[4844]: I0126 13:20:42.687639 4844 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/013c2624-05ec-49ef-85e2-5f5e155ee687-config\") on node \"crc\" DevicePath \"\"" Jan 26 13:20:42 crc kubenswrapper[4844]: I0126 13:20:42.691903 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/013c2624-05ec-49ef-85e2-5f5e155ee687-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "013c2624-05ec-49ef-85e2-5f5e155ee687" (UID: "013c2624-05ec-49ef-85e2-5f5e155ee687"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:20:42 crc kubenswrapper[4844]: I0126 13:20:42.756797 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"80aa004c-98f5-4265-8321-daf6d8132c24","Type":"ContainerStarted","Data":"74f7af7c9d5379d337106062b055dd88f5a20191180577a90d2a22c5d34c333c"} Jan 26 13:20:42 crc kubenswrapper[4844]: I0126 13:20:42.756945 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="80aa004c-98f5-4265-8321-daf6d8132c24" containerName="ceilometer-central-agent" containerID="cri-o://08185805e86068bdcb89060f5bf0ed51e131aa2a717b2d82d6b647ab1a7895fd" gracePeriod=30 Jan 26 13:20:42 crc kubenswrapper[4844]: I0126 13:20:42.757205 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 26 13:20:42 crc kubenswrapper[4844]: I0126 13:20:42.757521 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="80aa004c-98f5-4265-8321-daf6d8132c24" containerName="proxy-httpd" containerID="cri-o://74f7af7c9d5379d337106062b055dd88f5a20191180577a90d2a22c5d34c333c" gracePeriod=30 Jan 26 13:20:42 crc kubenswrapper[4844]: I0126 13:20:42.757581 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="80aa004c-98f5-4265-8321-daf6d8132c24" containerName="sg-core" containerID="cri-o://d56047903967d5cce23e20c92cae8ddad5f39ac4f2cd51ecde31da6e601d1ff6" gracePeriod=30 Jan 26 13:20:42 crc kubenswrapper[4844]: I0126 13:20:42.757650 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="80aa004c-98f5-4265-8321-daf6d8132c24" containerName="ceilometer-notification-agent" containerID="cri-o://f4ed873d07844e5d8877f033b1347e4e2cd4b447cf390ba46d048b6bd2c7028f" gracePeriod=30 Jan 26 13:20:42 crc kubenswrapper[4844]: I0126 13:20:42.766064 4844 generic.go:334] "Generic (PLEG): container finished" 
podID="013c2624-05ec-49ef-85e2-5f5e155ee687" containerID="1fe654e73b7eef70afb3917f09a7f33adb2e4eb9d2acce93cdafb3fd839abab1" exitCode=0 Jan 26 13:20:42 crc kubenswrapper[4844]: I0126 13:20:42.766131 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-bb4bbcbbd-hnxlf" event={"ID":"013c2624-05ec-49ef-85e2-5f5e155ee687","Type":"ContainerDied","Data":"1fe654e73b7eef70afb3917f09a7f33adb2e4eb9d2acce93cdafb3fd839abab1"} Jan 26 13:20:42 crc kubenswrapper[4844]: I0126 13:20:42.766160 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-bb4bbcbbd-hnxlf" event={"ID":"013c2624-05ec-49ef-85e2-5f5e155ee687","Type":"ContainerDied","Data":"eaad95c642169e35ebde226ca77e36758de2c55054c58a78dd59703d93a31192"} Jan 26 13:20:42 crc kubenswrapper[4844]: I0126 13:20:42.766180 4844 scope.go:117] "RemoveContainer" containerID="a52d32b289a4c0e01a4126b1de9ef0e5e6fb384989b7d29538c65dbf1bb6f966" Jan 26 13:20:42 crc kubenswrapper[4844]: I0126 13:20:42.766330 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-bb4bbcbbd-hnxlf" Jan 26 13:20:42 crc kubenswrapper[4844]: I0126 13:20:42.778825 4844 generic.go:334] "Generic (PLEG): container finished" podID="2ac79d59-b04a-45d5-baa7-8370e8c54045" containerID="8abb7e1a40fd3ef7f24db644caebe67b8179d7754deefcc2450d4ccf17a98cc9" exitCode=0 Jan 26 13:20:42 crc kubenswrapper[4844]: I0126 13:20:42.778886 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dhzj8" event={"ID":"2ac79d59-b04a-45d5-baa7-8370e8c54045","Type":"ContainerDied","Data":"8abb7e1a40fd3ef7f24db644caebe67b8179d7754deefcc2450d4ccf17a98cc9"} Jan 26 13:20:42 crc kubenswrapper[4844]: I0126 13:20:42.789183 4844 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/013c2624-05ec-49ef-85e2-5f5e155ee687-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 13:20:42 crc kubenswrapper[4844]: I0126 13:20:42.794504 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.589569269 podStartE2EDuration="6.794484057s" podCreationTimestamp="2026-01-26 13:20:36 +0000 UTC" firstStartedPulling="2026-01-26 13:20:37.68497708 +0000 UTC m=+2214.618344692" lastFinishedPulling="2026-01-26 13:20:41.889891858 +0000 UTC m=+2218.823259480" observedRunningTime="2026-01-26 13:20:42.787353144 +0000 UTC m=+2219.720720766" watchObservedRunningTime="2026-01-26 13:20:42.794484057 +0000 UTC m=+2219.727851669" Jan 26 13:20:42 crc kubenswrapper[4844]: I0126 13:20:42.799395 4844 generic.go:334] "Generic (PLEG): container finished" podID="48790dbd-c7a3-48f0-a3a8-a8685a07f9d2" containerID="df354ad5061d63d79ae83713c4429531193c9b599281b212b18b5aa951055455" exitCode=143 Jan 26 13:20:42 crc kubenswrapper[4844]: I0126 13:20:42.799674 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"48790dbd-c7a3-48f0-a3a8-a8685a07f9d2","Type":"ContainerDied","Data":"df354ad5061d63d79ae83713c4429531193c9b599281b212b18b5aa951055455"} Jan 26 13:20:42 crc kubenswrapper[4844]: I0126 13:20:42.800378 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Jan 26 13:20:42 crc kubenswrapper[4844]: I0126 13:20:42.833728 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-bb4bbcbbd-hnxlf"] Jan 26 13:20:42 crc kubenswrapper[4844]: I0126 13:20:42.841357 4844 scope.go:117] 
"RemoveContainer" containerID="1fe654e73b7eef70afb3917f09a7f33adb2e4eb9d2acce93cdafb3fd839abab1" Jan 26 13:20:42 crc kubenswrapper[4844]: I0126 13:20:42.850770 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-bb4bbcbbd-hnxlf"] Jan 26 13:20:42 crc kubenswrapper[4844]: I0126 13:20:42.875505 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0" Jan 26 13:20:42 crc kubenswrapper[4844]: I0126 13:20:42.899655 4844 scope.go:117] "RemoveContainer" containerID="a52d32b289a4c0e01a4126b1de9ef0e5e6fb384989b7d29538c65dbf1bb6f966" Jan 26 13:20:42 crc kubenswrapper[4844]: E0126 13:20:42.902750 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a52d32b289a4c0e01a4126b1de9ef0e5e6fb384989b7d29538c65dbf1bb6f966\": container with ID starting with a52d32b289a4c0e01a4126b1de9ef0e5e6fb384989b7d29538c65dbf1bb6f966 not found: ID does not exist" containerID="a52d32b289a4c0e01a4126b1de9ef0e5e6fb384989b7d29538c65dbf1bb6f966" Jan 26 13:20:42 crc kubenswrapper[4844]: I0126 13:20:42.902796 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a52d32b289a4c0e01a4126b1de9ef0e5e6fb384989b7d29538c65dbf1bb6f966"} err="failed to get container status \"a52d32b289a4c0e01a4126b1de9ef0e5e6fb384989b7d29538c65dbf1bb6f966\": rpc error: code = NotFound desc = could not find container \"a52d32b289a4c0e01a4126b1de9ef0e5e6fb384989b7d29538c65dbf1bb6f966\": container with ID starting with a52d32b289a4c0e01a4126b1de9ef0e5e6fb384989b7d29538c65dbf1bb6f966 not found: ID does not exist" Jan 26 13:20:42 crc kubenswrapper[4844]: I0126 13:20:42.902826 4844 scope.go:117] "RemoveContainer" containerID="1fe654e73b7eef70afb3917f09a7f33adb2e4eb9d2acce93cdafb3fd839abab1" Jan 26 13:20:42 crc kubenswrapper[4844]: E0126 13:20:42.903966 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1fe654e73b7eef70afb3917f09a7f33adb2e4eb9d2acce93cdafb3fd839abab1\": container with ID starting with 1fe654e73b7eef70afb3917f09a7f33adb2e4eb9d2acce93cdafb3fd839abab1 not found: ID does not exist" containerID="1fe654e73b7eef70afb3917f09a7f33adb2e4eb9d2acce93cdafb3fd839abab1" Jan 26 13:20:42 crc kubenswrapper[4844]: I0126 13:20:42.904004 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1fe654e73b7eef70afb3917f09a7f33adb2e4eb9d2acce93cdafb3fd839abab1"} err="failed to get container status \"1fe654e73b7eef70afb3917f09a7f33adb2e4eb9d2acce93cdafb3fd839abab1\": rpc error: code = NotFound desc = could not find container \"1fe654e73b7eef70afb3917f09a7f33adb2e4eb9d2acce93cdafb3fd839abab1\": container with ID starting with 1fe654e73b7eef70afb3917f09a7f33adb2e4eb9d2acce93cdafb3fd839abab1 not found: ID does not exist" Jan 26 13:20:42 crc kubenswrapper[4844]: I0126 13:20:42.936902 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 26 13:20:43 crc kubenswrapper[4844]: I0126 13:20:43.338948 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="013c2624-05ec-49ef-85e2-5f5e155ee687" path="/var/lib/kubelet/pods/013c2624-05ec-49ef-85e2-5f5e155ee687/volumes" Jan 26 13:20:43 crc kubenswrapper[4844]: I0126 13:20:43.362252 4844 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="48790dbd-c7a3-48f0-a3a8-a8685a07f9d2" 
containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.185:9292/healthcheck\": read tcp 10.217.0.2:41050->10.217.0.185:9292: read: connection reset by peer" Jan 26 13:20:43 crc kubenswrapper[4844]: I0126 13:20:43.362273 4844 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="48790dbd-c7a3-48f0-a3a8-a8685a07f9d2" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.185:9292/healthcheck\": read tcp 10.217.0.2:41052->10.217.0.185:9292: read: connection reset by peer" Jan 26 13:20:43 crc kubenswrapper[4844]: I0126 13:20:43.771237 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 26 13:20:43 crc kubenswrapper[4844]: I0126 13:20:43.812773 4844 generic.go:334] "Generic (PLEG): container finished" podID="80aa004c-98f5-4265-8321-daf6d8132c24" containerID="74f7af7c9d5379d337106062b055dd88f5a20191180577a90d2a22c5d34c333c" exitCode=0 Jan 26 13:20:43 crc kubenswrapper[4844]: I0126 13:20:43.812807 4844 generic.go:334] "Generic (PLEG): container finished" podID="80aa004c-98f5-4265-8321-daf6d8132c24" containerID="d56047903967d5cce23e20c92cae8ddad5f39ac4f2cd51ecde31da6e601d1ff6" exitCode=2 Jan 26 13:20:43 crc kubenswrapper[4844]: I0126 13:20:43.812815 4844 generic.go:334] "Generic (PLEG): container finished" podID="80aa004c-98f5-4265-8321-daf6d8132c24" containerID="f4ed873d07844e5d8877f033b1347e4e2cd4b447cf390ba46d048b6bd2c7028f" exitCode=0 Jan 26 13:20:43 crc kubenswrapper[4844]: I0126 13:20:43.812878 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"80aa004c-98f5-4265-8321-daf6d8132c24","Type":"ContainerDied","Data":"74f7af7c9d5379d337106062b055dd88f5a20191180577a90d2a22c5d34c333c"} Jan 26 13:20:43 crc kubenswrapper[4844]: I0126 13:20:43.812907 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"80aa004c-98f5-4265-8321-daf6d8132c24","Type":"ContainerDied","Data":"d56047903967d5cce23e20c92cae8ddad5f39ac4f2cd51ecde31da6e601d1ff6"} Jan 26 13:20:43 crc kubenswrapper[4844]: I0126 13:20:43.812917 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"80aa004c-98f5-4265-8321-daf6d8132c24","Type":"ContainerDied","Data":"f4ed873d07844e5d8877f033b1347e4e2cd4b447cf390ba46d048b6bd2c7028f"} Jan 26 13:20:43 crc kubenswrapper[4844]: I0126 13:20:43.813459 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/48790dbd-c7a3-48f0-a3a8-a8685a07f9d2-logs\") pod \"48790dbd-c7a3-48f0-a3a8-a8685a07f9d2\" (UID: \"48790dbd-c7a3-48f0-a3a8-a8685a07f9d2\") " Jan 26 13:20:43 crc kubenswrapper[4844]: I0126 13:20:43.813587 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48790dbd-c7a3-48f0-a3a8-a8685a07f9d2-config-data\") pod \"48790dbd-c7a3-48f0-a3a8-a8685a07f9d2\" (UID: \"48790dbd-c7a3-48f0-a3a8-a8685a07f9d2\") " Jan 26 13:20:43 crc kubenswrapper[4844]: I0126 13:20:43.813785 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/48790dbd-c7a3-48f0-a3a8-a8685a07f9d2-httpd-run\") pod \"48790dbd-c7a3-48f0-a3a8-a8685a07f9d2\" (UID: \"48790dbd-c7a3-48f0-a3a8-a8685a07f9d2\") " Jan 26 13:20:43 crc kubenswrapper[4844]: I0126 13:20:43.813874 4844 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/48790dbd-c7a3-48f0-a3a8-a8685a07f9d2-scripts\") pod \"48790dbd-c7a3-48f0-a3a8-a8685a07f9d2\" (UID: \"48790dbd-c7a3-48f0-a3a8-a8685a07f9d2\") " Jan 26 13:20:43 crc kubenswrapper[4844]: I0126 13:20:43.813948 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48790dbd-c7a3-48f0-a3a8-a8685a07f9d2-combined-ca-bundle\") pod \"48790dbd-c7a3-48f0-a3a8-a8685a07f9d2\" (UID: \"48790dbd-c7a3-48f0-a3a8-a8685a07f9d2\") " Jan 26 13:20:43 crc kubenswrapper[4844]: I0126 13:20:43.814035 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ttt26\" (UniqueName: \"kubernetes.io/projected/48790dbd-c7a3-48f0-a3a8-a8685a07f9d2-kube-api-access-ttt26\") pod \"48790dbd-c7a3-48f0-a3a8-a8685a07f9d2\" (UID: \"48790dbd-c7a3-48f0-a3a8-a8685a07f9d2\") " Jan 26 13:20:43 crc kubenswrapper[4844]: I0126 13:20:43.814138 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/48790dbd-c7a3-48f0-a3a8-a8685a07f9d2-internal-tls-certs\") pod \"48790dbd-c7a3-48f0-a3a8-a8685a07f9d2\" (UID: \"48790dbd-c7a3-48f0-a3a8-a8685a07f9d2\") " Jan 26 13:20:43 crc kubenswrapper[4844]: I0126 13:20:43.814332 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"48790dbd-c7a3-48f0-a3a8-a8685a07f9d2\" (UID: \"48790dbd-c7a3-48f0-a3a8-a8685a07f9d2\") " Jan 26 13:20:43 crc kubenswrapper[4844]: I0126 13:20:43.813951 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48790dbd-c7a3-48f0-a3a8-a8685a07f9d2-logs" (OuterVolumeSpecName: "logs") pod "48790dbd-c7a3-48f0-a3a8-a8685a07f9d2" (UID: "48790dbd-c7a3-48f0-a3a8-a8685a07f9d2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:20:43 crc kubenswrapper[4844]: I0126 13:20:43.814262 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48790dbd-c7a3-48f0-a3a8-a8685a07f9d2-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "48790dbd-c7a3-48f0-a3a8-a8685a07f9d2" (UID: "48790dbd-c7a3-48f0-a3a8-a8685a07f9d2"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:20:43 crc kubenswrapper[4844]: I0126 13:20:43.825758 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48790dbd-c7a3-48f0-a3a8-a8685a07f9d2-scripts" (OuterVolumeSpecName: "scripts") pod "48790dbd-c7a3-48f0-a3a8-a8685a07f9d2" (UID: "48790dbd-c7a3-48f0-a3a8-a8685a07f9d2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:20:43 crc kubenswrapper[4844]: I0126 13:20:43.826244 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48790dbd-c7a3-48f0-a3a8-a8685a07f9d2-kube-api-access-ttt26" (OuterVolumeSpecName: "kube-api-access-ttt26") pod "48790dbd-c7a3-48f0-a3a8-a8685a07f9d2" (UID: "48790dbd-c7a3-48f0-a3a8-a8685a07f9d2"). InnerVolumeSpecName "kube-api-access-ttt26". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:20:43 crc kubenswrapper[4844]: I0126 13:20:43.829197 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "glance") pod "48790dbd-c7a3-48f0-a3a8-a8685a07f9d2" (UID: "48790dbd-c7a3-48f0-a3a8-a8685a07f9d2"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 26 13:20:43 crc kubenswrapper[4844]: I0126 13:20:43.830669 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dhzj8" event={"ID":"2ac79d59-b04a-45d5-baa7-8370e8c54045","Type":"ContainerStarted","Data":"2ccddd78124473e03eec5846d28f30a57dd412904350fef4f0d21741323705f4"} Jan 26 13:20:43 crc kubenswrapper[4844]: I0126 13:20:43.833315 4844 generic.go:334] "Generic (PLEG): container finished" podID="48790dbd-c7a3-48f0-a3a8-a8685a07f9d2" containerID="7f1ea68571e5a9daeb4dc8f7339cd361453f01ff144f8bb1af3c8316968318f8" exitCode=0 Jan 26 13:20:43 crc kubenswrapper[4844]: I0126 13:20:43.833386 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 26 13:20:43 crc kubenswrapper[4844]: I0126 13:20:43.833429 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"48790dbd-c7a3-48f0-a3a8-a8685a07f9d2","Type":"ContainerDied","Data":"7f1ea68571e5a9daeb4dc8f7339cd361453f01ff144f8bb1af3c8316968318f8"} Jan 26 13:20:43 crc kubenswrapper[4844]: I0126 13:20:43.833450 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"48790dbd-c7a3-48f0-a3a8-a8685a07f9d2","Type":"ContainerDied","Data":"e130fa15b1789560ff24f2e05e66fa5b4cb3716ad44fc8cf1aa9a22de574661a"} Jan 26 13:20:43 crc kubenswrapper[4844]: I0126 13:20:43.833465 4844 scope.go:117] "RemoveContainer" containerID="7f1ea68571e5a9daeb4dc8f7339cd361453f01ff144f8bb1af3c8316968318f8" Jan 26 13:20:43 crc kubenswrapper[4844]: I0126 13:20:43.879527 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48790dbd-c7a3-48f0-a3a8-a8685a07f9d2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "48790dbd-c7a3-48f0-a3a8-a8685a07f9d2" (UID: "48790dbd-c7a3-48f0-a3a8-a8685a07f9d2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:20:43 crc kubenswrapper[4844]: I0126 13:20:43.880985 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dhzj8" podStartSLOduration=3.263987921 podStartE2EDuration="6.880967163s" podCreationTimestamp="2026-01-26 13:20:37 +0000 UTC" firstStartedPulling="2026-01-26 13:20:39.697874604 +0000 UTC m=+2216.631242206" lastFinishedPulling="2026-01-26 13:20:43.314853836 +0000 UTC m=+2220.248221448" observedRunningTime="2026-01-26 13:20:43.852774832 +0000 UTC m=+2220.786142454" watchObservedRunningTime="2026-01-26 13:20:43.880967163 +0000 UTC m=+2220.814334775" Jan 26 13:20:43 crc kubenswrapper[4844]: I0126 13:20:43.882890 4844 scope.go:117] "RemoveContainer" containerID="df354ad5061d63d79ae83713c4429531193c9b599281b212b18b5aa951055455" Jan 26 13:20:43 crc kubenswrapper[4844]: I0126 13:20:43.918847 4844 scope.go:117] "RemoveContainer" containerID="7f1ea68571e5a9daeb4dc8f7339cd361453f01ff144f8bb1af3c8316968318f8" Jan 26 13:20:43 crc kubenswrapper[4844]: I0126 13:20:43.919034 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48790dbd-c7a3-48f0-a3a8-a8685a07f9d2-config-data" (OuterVolumeSpecName: "config-data") pod "48790dbd-c7a3-48f0-a3a8-a8685a07f9d2" (UID: "48790dbd-c7a3-48f0-a3a8-a8685a07f9d2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:20:43 crc kubenswrapper[4844]: I0126 13:20:43.919995 4844 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Jan 26 13:20:43 crc kubenswrapper[4844]: I0126 13:20:43.920012 4844 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/48790dbd-c7a3-48f0-a3a8-a8685a07f9d2-logs\") on node \"crc\" DevicePath \"\"" Jan 26 13:20:43 crc kubenswrapper[4844]: I0126 13:20:43.920021 4844 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48790dbd-c7a3-48f0-a3a8-a8685a07f9d2-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 13:20:43 crc kubenswrapper[4844]: E0126 13:20:43.920014 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f1ea68571e5a9daeb4dc8f7339cd361453f01ff144f8bb1af3c8316968318f8\": container with ID starting with 7f1ea68571e5a9daeb4dc8f7339cd361453f01ff144f8bb1af3c8316968318f8 not found: ID does not exist" containerID="7f1ea68571e5a9daeb4dc8f7339cd361453f01ff144f8bb1af3c8316968318f8" Jan 26 13:20:43 crc kubenswrapper[4844]: I0126 13:20:43.920046 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f1ea68571e5a9daeb4dc8f7339cd361453f01ff144f8bb1af3c8316968318f8"} err="failed to get container status \"7f1ea68571e5a9daeb4dc8f7339cd361453f01ff144f8bb1af3c8316968318f8\": rpc error: code = NotFound desc = could not find container \"7f1ea68571e5a9daeb4dc8f7339cd361453f01ff144f8bb1af3c8316968318f8\": container with ID starting with 7f1ea68571e5a9daeb4dc8f7339cd361453f01ff144f8bb1af3c8316968318f8 not found: ID does not exist" Jan 26 13:20:43 crc kubenswrapper[4844]: I0126 13:20:43.920070 4844 scope.go:117] "RemoveContainer" containerID="df354ad5061d63d79ae83713c4429531193c9b599281b212b18b5aa951055455" Jan 26 13:20:43 crc kubenswrapper[4844]: I0126 13:20:43.920029 4844 
reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/48790dbd-c7a3-48f0-a3a8-a8685a07f9d2-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 26 13:20:43 crc kubenswrapper[4844]: I0126 13:20:43.920120 4844 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/48790dbd-c7a3-48f0-a3a8-a8685a07f9d2-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 13:20:43 crc kubenswrapper[4844]: I0126 13:20:43.920131 4844 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48790dbd-c7a3-48f0-a3a8-a8685a07f9d2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 13:20:43 crc kubenswrapper[4844]: I0126 13:20:43.920184 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ttt26\" (UniqueName: \"kubernetes.io/projected/48790dbd-c7a3-48f0-a3a8-a8685a07f9d2-kube-api-access-ttt26\") on node \"crc\" DevicePath \"\"" Jan 26 13:20:43 crc kubenswrapper[4844]: E0126 13:20:43.921422 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df354ad5061d63d79ae83713c4429531193c9b599281b212b18b5aa951055455\": container with ID starting with df354ad5061d63d79ae83713c4429531193c9b599281b212b18b5aa951055455 not found: ID does not exist" containerID="df354ad5061d63d79ae83713c4429531193c9b599281b212b18b5aa951055455" Jan 26 13:20:43 crc kubenswrapper[4844]: I0126 13:20:43.921448 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df354ad5061d63d79ae83713c4429531193c9b599281b212b18b5aa951055455"} err="failed to get container status \"df354ad5061d63d79ae83713c4429531193c9b599281b212b18b5aa951055455\": rpc error: code = NotFound desc = could not find container \"df354ad5061d63d79ae83713c4429531193c9b599281b212b18b5aa951055455\": container with ID starting with df354ad5061d63d79ae83713c4429531193c9b599281b212b18b5aa951055455 not found: ID does not exist" Jan 26 13:20:43 crc kubenswrapper[4844]: I0126 13:20:43.936289 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48790dbd-c7a3-48f0-a3a8-a8685a07f9d2-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "48790dbd-c7a3-48f0-a3a8-a8685a07f9d2" (UID: "48790dbd-c7a3-48f0-a3a8-a8685a07f9d2"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:20:43 crc kubenswrapper[4844]: I0126 13:20:43.942507 4844 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Jan 26 13:20:44 crc kubenswrapper[4844]: I0126 13:20:44.021646 4844 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/48790dbd-c7a3-48f0-a3a8-a8685a07f9d2-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 13:20:44 crc kubenswrapper[4844]: I0126 13:20:44.021678 4844 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Jan 26 13:20:44 crc kubenswrapper[4844]: I0126 13:20:44.165260 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 13:20:44 crc kubenswrapper[4844]: I0126 13:20:44.174289 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 13:20:44 crc kubenswrapper[4844]: I0126 13:20:44.198183 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 13:20:44 crc kubenswrapper[4844]: E0126 13:20:44.198532 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48790dbd-c7a3-48f0-a3a8-a8685a07f9d2" containerName="glance-log" Jan 26 13:20:44 crc kubenswrapper[4844]: I0126 13:20:44.198548 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="48790dbd-c7a3-48f0-a3a8-a8685a07f9d2" containerName="glance-log" Jan 26 13:20:44 crc kubenswrapper[4844]: E0126 13:20:44.198560 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45ab14b0-33a9-4364-a552-16b57b9826c5" containerName="mariadb-account-create-update" Jan 26 13:20:44 crc kubenswrapper[4844]: I0126 13:20:44.198567 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="45ab14b0-33a9-4364-a552-16b57b9826c5" containerName="mariadb-account-create-update" Jan 26 13:20:44 crc kubenswrapper[4844]: E0126 13:20:44.198577 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="013c2624-05ec-49ef-85e2-5f5e155ee687" containerName="neutron-api" Jan 26 13:20:44 crc kubenswrapper[4844]: I0126 13:20:44.198583 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="013c2624-05ec-49ef-85e2-5f5e155ee687" containerName="neutron-api" Jan 26 13:20:44 crc kubenswrapper[4844]: E0126 13:20:44.198605 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="128a7603-8c83-4c8f-8484-031abaa6bc9a" containerName="mariadb-database-create" Jan 26 13:20:44 crc kubenswrapper[4844]: I0126 13:20:44.198611 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="128a7603-8c83-4c8f-8484-031abaa6bc9a" containerName="mariadb-database-create" Jan 26 13:20:44 crc kubenswrapper[4844]: E0126 13:20:44.198620 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3c75b85-b9e8-4d45-93de-018fa9e10eb8" containerName="mariadb-database-create" Jan 26 13:20:44 crc kubenswrapper[4844]: I0126 13:20:44.198625 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3c75b85-b9e8-4d45-93de-018fa9e10eb8" containerName="mariadb-database-create" Jan 26 13:20:44 crc kubenswrapper[4844]: E0126 13:20:44.198634 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1e6f9c3-de48-4504-9b94-bbabcc87fc45" containerName="mariadb-database-create" Jan 26 13:20:44 crc 
kubenswrapper[4844]: I0126 13:20:44.198639 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1e6f9c3-de48-4504-9b94-bbabcc87fc45" containerName="mariadb-database-create" Jan 26 13:20:44 crc kubenswrapper[4844]: E0126 13:20:44.198650 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48790dbd-c7a3-48f0-a3a8-a8685a07f9d2" containerName="glance-httpd" Jan 26 13:20:44 crc kubenswrapper[4844]: I0126 13:20:44.198657 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="48790dbd-c7a3-48f0-a3a8-a8685a07f9d2" containerName="glance-httpd" Jan 26 13:20:44 crc kubenswrapper[4844]: E0126 13:20:44.198674 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="013c2624-05ec-49ef-85e2-5f5e155ee687" containerName="neutron-httpd" Jan 26 13:20:44 crc kubenswrapper[4844]: I0126 13:20:44.198679 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="013c2624-05ec-49ef-85e2-5f5e155ee687" containerName="neutron-httpd" Jan 26 13:20:44 crc kubenswrapper[4844]: E0126 13:20:44.198697 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2f773df-1a60-4d98-aaf9-25edd517e2e7" containerName="mariadb-account-create-update" Jan 26 13:20:44 crc kubenswrapper[4844]: I0126 13:20:44.198705 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2f773df-1a60-4d98-aaf9-25edd517e2e7" containerName="mariadb-account-create-update" Jan 26 13:20:44 crc kubenswrapper[4844]: E0126 13:20:44.198722 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="350afd25-a535-4c5c-9b45-85b457255769" containerName="mariadb-account-create-update" Jan 26 13:20:44 crc kubenswrapper[4844]: I0126 13:20:44.198728 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="350afd25-a535-4c5c-9b45-85b457255769" containerName="mariadb-account-create-update" Jan 26 13:20:44 crc kubenswrapper[4844]: I0126 13:20:44.198887 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="45ab14b0-33a9-4364-a552-16b57b9826c5" containerName="mariadb-account-create-update" Jan 26 13:20:44 crc kubenswrapper[4844]: I0126 13:20:44.198897 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="013c2624-05ec-49ef-85e2-5f5e155ee687" containerName="neutron-api" Jan 26 13:20:44 crc kubenswrapper[4844]: I0126 13:20:44.198907 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="128a7603-8c83-4c8f-8484-031abaa6bc9a" containerName="mariadb-database-create" Jan 26 13:20:44 crc kubenswrapper[4844]: I0126 13:20:44.198922 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1e6f9c3-de48-4504-9b94-bbabcc87fc45" containerName="mariadb-database-create" Jan 26 13:20:44 crc kubenswrapper[4844]: I0126 13:20:44.198930 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="48790dbd-c7a3-48f0-a3a8-a8685a07f9d2" containerName="glance-log" Jan 26 13:20:44 crc kubenswrapper[4844]: I0126 13:20:44.198939 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2f773df-1a60-4d98-aaf9-25edd517e2e7" containerName="mariadb-account-create-update" Jan 26 13:20:44 crc kubenswrapper[4844]: I0126 13:20:44.198950 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="48790dbd-c7a3-48f0-a3a8-a8685a07f9d2" containerName="glance-httpd" Jan 26 13:20:44 crc kubenswrapper[4844]: I0126 13:20:44.198960 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="013c2624-05ec-49ef-85e2-5f5e155ee687" containerName="neutron-httpd" Jan 26 13:20:44 crc kubenswrapper[4844]: I0126 13:20:44.198975 4844 
memory_manager.go:354] "RemoveStaleState removing state" podUID="d3c75b85-b9e8-4d45-93de-018fa9e10eb8" containerName="mariadb-database-create" Jan 26 13:20:44 crc kubenswrapper[4844]: I0126 13:20:44.198981 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="350afd25-a535-4c5c-9b45-85b457255769" containerName="mariadb-account-create-update" Jan 26 13:20:44 crc kubenswrapper[4844]: I0126 13:20:44.199900 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 26 13:20:44 crc kubenswrapper[4844]: I0126 13:20:44.202246 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 26 13:20:44 crc kubenswrapper[4844]: I0126 13:20:44.205067 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 26 13:20:44 crc kubenswrapper[4844]: I0126 13:20:44.214612 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 13:20:44 crc kubenswrapper[4844]: I0126 13:20:44.327812 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqc78\" (UniqueName: \"kubernetes.io/projected/403b5928-19b1-4dfd-97c9-75079d7de60e-kube-api-access-bqc78\") pod \"glance-default-internal-api-0\" (UID: \"403b5928-19b1-4dfd-97c9-75079d7de60e\") " pod="openstack/glance-default-internal-api-0" Jan 26 13:20:44 crc kubenswrapper[4844]: I0126 13:20:44.327942 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/403b5928-19b1-4dfd-97c9-75079d7de60e-logs\") pod \"glance-default-internal-api-0\" (UID: \"403b5928-19b1-4dfd-97c9-75079d7de60e\") " pod="openstack/glance-default-internal-api-0" Jan 26 13:20:44 crc kubenswrapper[4844]: I0126 13:20:44.328016 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/403b5928-19b1-4dfd-97c9-75079d7de60e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"403b5928-19b1-4dfd-97c9-75079d7de60e\") " pod="openstack/glance-default-internal-api-0" Jan 26 13:20:44 crc kubenswrapper[4844]: I0126 13:20:44.328199 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"403b5928-19b1-4dfd-97c9-75079d7de60e\") " pod="openstack/glance-default-internal-api-0" Jan 26 13:20:44 crc kubenswrapper[4844]: I0126 13:20:44.328341 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/403b5928-19b1-4dfd-97c9-75079d7de60e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"403b5928-19b1-4dfd-97c9-75079d7de60e\") " pod="openstack/glance-default-internal-api-0" Jan 26 13:20:44 crc kubenswrapper[4844]: I0126 13:20:44.328464 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/403b5928-19b1-4dfd-97c9-75079d7de60e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"403b5928-19b1-4dfd-97c9-75079d7de60e\") " pod="openstack/glance-default-internal-api-0" Jan 26 13:20:44 crc kubenswrapper[4844]: I0126 13:20:44.328570 4844 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/403b5928-19b1-4dfd-97c9-75079d7de60e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"403b5928-19b1-4dfd-97c9-75079d7de60e\") " pod="openstack/glance-default-internal-api-0" Jan 26 13:20:44 crc kubenswrapper[4844]: I0126 13:20:44.328668 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/403b5928-19b1-4dfd-97c9-75079d7de60e-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"403b5928-19b1-4dfd-97c9-75079d7de60e\") " pod="openstack/glance-default-internal-api-0" Jan 26 13:20:44 crc kubenswrapper[4844]: I0126 13:20:44.431769 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/403b5928-19b1-4dfd-97c9-75079d7de60e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"403b5928-19b1-4dfd-97c9-75079d7de60e\") " pod="openstack/glance-default-internal-api-0" Jan 26 13:20:44 crc kubenswrapper[4844]: I0126 13:20:44.431835 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/403b5928-19b1-4dfd-97c9-75079d7de60e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"403b5928-19b1-4dfd-97c9-75079d7de60e\") " pod="openstack/glance-default-internal-api-0" Jan 26 13:20:44 crc kubenswrapper[4844]: I0126 13:20:44.431873 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/403b5928-19b1-4dfd-97c9-75079d7de60e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"403b5928-19b1-4dfd-97c9-75079d7de60e\") " pod="openstack/glance-default-internal-api-0" Jan 26 13:20:44 crc kubenswrapper[4844]: I0126 13:20:44.431900 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/403b5928-19b1-4dfd-97c9-75079d7de60e-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"403b5928-19b1-4dfd-97c9-75079d7de60e\") " pod="openstack/glance-default-internal-api-0" Jan 26 13:20:44 crc kubenswrapper[4844]: I0126 13:20:44.431929 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqc78\" (UniqueName: \"kubernetes.io/projected/403b5928-19b1-4dfd-97c9-75079d7de60e-kube-api-access-bqc78\") pod \"glance-default-internal-api-0\" (UID: \"403b5928-19b1-4dfd-97c9-75079d7de60e\") " pod="openstack/glance-default-internal-api-0" Jan 26 13:20:44 crc kubenswrapper[4844]: I0126 13:20:44.431965 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/403b5928-19b1-4dfd-97c9-75079d7de60e-logs\") pod \"glance-default-internal-api-0\" (UID: \"403b5928-19b1-4dfd-97c9-75079d7de60e\") " pod="openstack/glance-default-internal-api-0" Jan 26 13:20:44 crc kubenswrapper[4844]: I0126 13:20:44.431982 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/403b5928-19b1-4dfd-97c9-75079d7de60e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"403b5928-19b1-4dfd-97c9-75079d7de60e\") " pod="openstack/glance-default-internal-api-0" Jan 26 13:20:44 crc kubenswrapper[4844]: I0126 13:20:44.432046 4844 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"403b5928-19b1-4dfd-97c9-75079d7de60e\") " pod="openstack/glance-default-internal-api-0" Jan 26 13:20:44 crc kubenswrapper[4844]: I0126 13:20:44.432439 4844 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"403b5928-19b1-4dfd-97c9-75079d7de60e\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-internal-api-0" Jan 26 13:20:44 crc kubenswrapper[4844]: I0126 13:20:44.432443 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/403b5928-19b1-4dfd-97c9-75079d7de60e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"403b5928-19b1-4dfd-97c9-75079d7de60e\") " pod="openstack/glance-default-internal-api-0" Jan 26 13:20:44 crc kubenswrapper[4844]: I0126 13:20:44.434676 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/403b5928-19b1-4dfd-97c9-75079d7de60e-logs\") pod \"glance-default-internal-api-0\" (UID: \"403b5928-19b1-4dfd-97c9-75079d7de60e\") " pod="openstack/glance-default-internal-api-0" Jan 26 13:20:44 crc kubenswrapper[4844]: I0126 13:20:44.438067 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/403b5928-19b1-4dfd-97c9-75079d7de60e-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"403b5928-19b1-4dfd-97c9-75079d7de60e\") " pod="openstack/glance-default-internal-api-0" Jan 26 13:20:44 crc kubenswrapper[4844]: I0126 13:20:44.438435 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/403b5928-19b1-4dfd-97c9-75079d7de60e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"403b5928-19b1-4dfd-97c9-75079d7de60e\") " pod="openstack/glance-default-internal-api-0" Jan 26 13:20:44 crc kubenswrapper[4844]: I0126 13:20:44.438496 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/403b5928-19b1-4dfd-97c9-75079d7de60e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"403b5928-19b1-4dfd-97c9-75079d7de60e\") " pod="openstack/glance-default-internal-api-0" Jan 26 13:20:44 crc kubenswrapper[4844]: I0126 13:20:44.439428 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/403b5928-19b1-4dfd-97c9-75079d7de60e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"403b5928-19b1-4dfd-97c9-75079d7de60e\") " pod="openstack/glance-default-internal-api-0" Jan 26 13:20:44 crc kubenswrapper[4844]: I0126 13:20:44.463285 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqc78\" (UniqueName: \"kubernetes.io/projected/403b5928-19b1-4dfd-97c9-75079d7de60e-kube-api-access-bqc78\") pod \"glance-default-internal-api-0\" (UID: \"403b5928-19b1-4dfd-97c9-75079d7de60e\") " pod="openstack/glance-default-internal-api-0" Jan 26 13:20:44 crc kubenswrapper[4844]: I0126 13:20:44.497211 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") 
pod \"glance-default-internal-api-0\" (UID: \"403b5928-19b1-4dfd-97c9-75079d7de60e\") " pod="openstack/glance-default-internal-api-0" Jan 26 13:20:44 crc kubenswrapper[4844]: I0126 13:20:44.531081 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 26 13:20:45 crc kubenswrapper[4844]: I0126 13:20:45.487361 4844 generic.go:334] "Generic (PLEG): container finished" podID="80aa004c-98f5-4265-8321-daf6d8132c24" containerID="08185805e86068bdcb89060f5bf0ed51e131aa2a717b2d82d6b647ab1a7895fd" exitCode=0 Jan 26 13:20:45 crc kubenswrapper[4844]: I0126 13:20:45.501731 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-decision-engine-0" podUID="ed782618-8b69-4456-9aec-5184e765960f" containerName="watcher-decision-engine" containerID="cri-o://f778593c77f19cd971369cd93f107ce9557b6ff677fcdb7bf966fe9cde611212" gracePeriod=30 Jan 26 13:20:45 crc kubenswrapper[4844]: I0126 13:20:45.489879 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48790dbd-c7a3-48f0-a3a8-a8685a07f9d2" path="/var/lib/kubelet/pods/48790dbd-c7a3-48f0-a3a8-a8685a07f9d2/volumes" Jan 26 13:20:45 crc kubenswrapper[4844]: I0126 13:20:45.502787 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 26 13:20:45 crc kubenswrapper[4844]: I0126 13:20:45.502815 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"80aa004c-98f5-4265-8321-daf6d8132c24","Type":"ContainerDied","Data":"08185805e86068bdcb89060f5bf0ed51e131aa2a717b2d82d6b647ab1a7895fd"} Jan 26 13:20:45 crc kubenswrapper[4844]: I0126 13:20:45.502835 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 26 13:20:45 crc kubenswrapper[4844]: I0126 13:20:45.502845 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"80aa004c-98f5-4265-8321-daf6d8132c24","Type":"ContainerDied","Data":"669258674e4afb9b0e149304e281930cd20ccf2447568c5e32158c5aa25c7284"} Jan 26 13:20:45 crc kubenswrapper[4844]: I0126 13:20:45.502856 4844 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="669258674e4afb9b0e149304e281930cd20ccf2447568c5e32158c5aa25c7284" Jan 26 13:20:45 crc kubenswrapper[4844]: I0126 13:20:45.518139 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 26 13:20:45 crc kubenswrapper[4844]: I0126 13:20:45.539659 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 26 13:20:45 crc kubenswrapper[4844]: I0126 13:20:45.568413 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 13:20:45 crc kubenswrapper[4844]: I0126 13:20:45.666330 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80aa004c-98f5-4265-8321-daf6d8132c24-config-data\") pod \"80aa004c-98f5-4265-8321-daf6d8132c24\" (UID: \"80aa004c-98f5-4265-8321-daf6d8132c24\") " Jan 26 13:20:45 crc kubenswrapper[4844]: I0126 13:20:45.666432 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80aa004c-98f5-4265-8321-daf6d8132c24-combined-ca-bundle\") pod \"80aa004c-98f5-4265-8321-daf6d8132c24\" (UID: \"80aa004c-98f5-4265-8321-daf6d8132c24\") " Jan 26 13:20:45 crc kubenswrapper[4844]: I0126 13:20:45.666459 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6trlj\" (UniqueName: \"kubernetes.io/projected/80aa004c-98f5-4265-8321-daf6d8132c24-kube-api-access-6trlj\") pod \"80aa004c-98f5-4265-8321-daf6d8132c24\" (UID: \"80aa004c-98f5-4265-8321-daf6d8132c24\") " Jan 26 13:20:45 crc kubenswrapper[4844]: I0126 13:20:45.666546 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/80aa004c-98f5-4265-8321-daf6d8132c24-sg-core-conf-yaml\") pod \"80aa004c-98f5-4265-8321-daf6d8132c24\" (UID: \"80aa004c-98f5-4265-8321-daf6d8132c24\") " Jan 26 13:20:45 crc kubenswrapper[4844]: I0126 13:20:45.666706 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/80aa004c-98f5-4265-8321-daf6d8132c24-scripts\") pod \"80aa004c-98f5-4265-8321-daf6d8132c24\" (UID: \"80aa004c-98f5-4265-8321-daf6d8132c24\") " Jan 26 13:20:45 crc kubenswrapper[4844]: I0126 13:20:45.666760 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/80aa004c-98f5-4265-8321-daf6d8132c24-run-httpd\") pod \"80aa004c-98f5-4265-8321-daf6d8132c24\" (UID: \"80aa004c-98f5-4265-8321-daf6d8132c24\") " Jan 26 13:20:45 crc kubenswrapper[4844]: I0126 13:20:45.666802 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/80aa004c-98f5-4265-8321-daf6d8132c24-log-httpd\") pod \"80aa004c-98f5-4265-8321-daf6d8132c24\" (UID: \"80aa004c-98f5-4265-8321-daf6d8132c24\") " Jan 26 13:20:45 crc kubenswrapper[4844]: I0126 13:20:45.668350 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/80aa004c-98f5-4265-8321-daf6d8132c24-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "80aa004c-98f5-4265-8321-daf6d8132c24" (UID: "80aa004c-98f5-4265-8321-daf6d8132c24"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:20:45 crc kubenswrapper[4844]: I0126 13:20:45.669070 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/80aa004c-98f5-4265-8321-daf6d8132c24-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "80aa004c-98f5-4265-8321-daf6d8132c24" (UID: "80aa004c-98f5-4265-8321-daf6d8132c24"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:20:45 crc kubenswrapper[4844]: I0126 13:20:45.672474 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80aa004c-98f5-4265-8321-daf6d8132c24-scripts" (OuterVolumeSpecName: "scripts") pod "80aa004c-98f5-4265-8321-daf6d8132c24" (UID: "80aa004c-98f5-4265-8321-daf6d8132c24"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:20:45 crc kubenswrapper[4844]: I0126 13:20:45.672827 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80aa004c-98f5-4265-8321-daf6d8132c24-kube-api-access-6trlj" (OuterVolumeSpecName: "kube-api-access-6trlj") pod "80aa004c-98f5-4265-8321-daf6d8132c24" (UID: "80aa004c-98f5-4265-8321-daf6d8132c24"). InnerVolumeSpecName "kube-api-access-6trlj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:20:45 crc kubenswrapper[4844]: I0126 13:20:45.699369 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80aa004c-98f5-4265-8321-daf6d8132c24-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "80aa004c-98f5-4265-8321-daf6d8132c24" (UID: "80aa004c-98f5-4265-8321-daf6d8132c24"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:20:45 crc kubenswrapper[4844]: I0126 13:20:45.763672 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80aa004c-98f5-4265-8321-daf6d8132c24-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "80aa004c-98f5-4265-8321-daf6d8132c24" (UID: "80aa004c-98f5-4265-8321-daf6d8132c24"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:20:45 crc kubenswrapper[4844]: I0126 13:20:45.768706 4844 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/80aa004c-98f5-4265-8321-daf6d8132c24-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 13:20:45 crc kubenswrapper[4844]: I0126 13:20:45.768741 4844 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/80aa004c-98f5-4265-8321-daf6d8132c24-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 13:20:45 crc kubenswrapper[4844]: I0126 13:20:45.768750 4844 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/80aa004c-98f5-4265-8321-daf6d8132c24-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 13:20:45 crc kubenswrapper[4844]: I0126 13:20:45.768761 4844 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80aa004c-98f5-4265-8321-daf6d8132c24-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 13:20:45 crc kubenswrapper[4844]: I0126 13:20:45.768771 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6trlj\" (UniqueName: \"kubernetes.io/projected/80aa004c-98f5-4265-8321-daf6d8132c24-kube-api-access-6trlj\") on node \"crc\" DevicePath \"\"" Jan 26 13:20:45 crc kubenswrapper[4844]: I0126 13:20:45.768781 4844 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/80aa004c-98f5-4265-8321-daf6d8132c24-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 26 13:20:45 crc kubenswrapper[4844]: I0126 13:20:45.803470 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/secret/80aa004c-98f5-4265-8321-daf6d8132c24-config-data" (OuterVolumeSpecName: "config-data") pod "80aa004c-98f5-4265-8321-daf6d8132c24" (UID: "80aa004c-98f5-4265-8321-daf6d8132c24"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:20:45 crc kubenswrapper[4844]: I0126 13:20:45.883459 4844 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80aa004c-98f5-4265-8321-daf6d8132c24-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 13:20:46 crc kubenswrapper[4844]: W0126 13:20:46.007660 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod403b5928_19b1_4dfd_97c9_75079d7de60e.slice/crio-fc9d62326dd05c219911034183d149d76d3c47584847acd8da5344a1749533ec WatchSource:0}: Error finding container fc9d62326dd05c219911034183d149d76d3c47584847acd8da5344a1749533ec: Status 404 returned error can't find the container with id fc9d62326dd05c219911034183d149d76d3c47584847acd8da5344a1749533ec Jan 26 13:20:46 crc kubenswrapper[4844]: I0126 13:20:46.015343 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 13:20:46 crc kubenswrapper[4844]: I0126 13:20:46.512665 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 13:20:46 crc kubenswrapper[4844]: I0126 13:20:46.512929 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"403b5928-19b1-4dfd-97c9-75079d7de60e","Type":"ContainerStarted","Data":"fc9d62326dd05c219911034183d149d76d3c47584847acd8da5344a1749533ec"} Jan 26 13:20:46 crc kubenswrapper[4844]: I0126 13:20:46.512973 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 26 13:20:46 crc kubenswrapper[4844]: I0126 13:20:46.512987 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 26 13:20:46 crc kubenswrapper[4844]: I0126 13:20:46.537351 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-zzp9q"] Jan 26 13:20:46 crc kubenswrapper[4844]: E0126 13:20:46.537806 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80aa004c-98f5-4265-8321-daf6d8132c24" containerName="proxy-httpd" Jan 26 13:20:46 crc kubenswrapper[4844]: I0126 13:20:46.537829 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="80aa004c-98f5-4265-8321-daf6d8132c24" containerName="proxy-httpd" Jan 26 13:20:46 crc kubenswrapper[4844]: E0126 13:20:46.537845 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80aa004c-98f5-4265-8321-daf6d8132c24" containerName="ceilometer-notification-agent" Jan 26 13:20:46 crc kubenswrapper[4844]: I0126 13:20:46.537853 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="80aa004c-98f5-4265-8321-daf6d8132c24" containerName="ceilometer-notification-agent" Jan 26 13:20:46 crc kubenswrapper[4844]: E0126 13:20:46.537874 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80aa004c-98f5-4265-8321-daf6d8132c24" containerName="ceilometer-central-agent" Jan 26 13:20:46 crc kubenswrapper[4844]: I0126 13:20:46.537881 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="80aa004c-98f5-4265-8321-daf6d8132c24" containerName="ceilometer-central-agent" Jan 26 13:20:46 crc kubenswrapper[4844]: E0126 
13:20:46.537897 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80aa004c-98f5-4265-8321-daf6d8132c24" containerName="sg-core" Jan 26 13:20:46 crc kubenswrapper[4844]: I0126 13:20:46.537904 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="80aa004c-98f5-4265-8321-daf6d8132c24" containerName="sg-core" Jan 26 13:20:46 crc kubenswrapper[4844]: I0126 13:20:46.538104 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="80aa004c-98f5-4265-8321-daf6d8132c24" containerName="sg-core" Jan 26 13:20:46 crc kubenswrapper[4844]: I0126 13:20:46.538114 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="80aa004c-98f5-4265-8321-daf6d8132c24" containerName="ceilometer-notification-agent" Jan 26 13:20:46 crc kubenswrapper[4844]: I0126 13:20:46.538127 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="80aa004c-98f5-4265-8321-daf6d8132c24" containerName="proxy-httpd" Jan 26 13:20:46 crc kubenswrapper[4844]: I0126 13:20:46.538147 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="80aa004c-98f5-4265-8321-daf6d8132c24" containerName="ceilometer-central-agent" Jan 26 13:20:46 crc kubenswrapper[4844]: I0126 13:20:46.538784 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-zzp9q" Jan 26 13:20:46 crc kubenswrapper[4844]: I0126 13:20:46.545635 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-zzp9q"] Jan 26 13:20:46 crc kubenswrapper[4844]: I0126 13:20:46.548723 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 26 13:20:46 crc kubenswrapper[4844]: I0126 13:20:46.548733 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 26 13:20:46 crc kubenswrapper[4844]: I0126 13:20:46.548980 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-gbg8x" Jan 26 13:20:46 crc kubenswrapper[4844]: I0126 13:20:46.563991 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 13:20:46 crc kubenswrapper[4844]: I0126 13:20:46.591932 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 26 13:20:46 crc kubenswrapper[4844]: I0126 13:20:46.613560 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 26 13:20:46 crc kubenswrapper[4844]: I0126 13:20:46.618625 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 13:20:46 crc kubenswrapper[4844]: I0126 13:20:46.621973 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 26 13:20:46 crc kubenswrapper[4844]: I0126 13:20:46.632077 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 26 13:20:46 crc kubenswrapper[4844]: I0126 13:20:46.634628 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 13:20:46 crc kubenswrapper[4844]: I0126 13:20:46.698882 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe51c360-570b-4e53-9594-271a306efe47-scripts\") pod \"nova-cell0-conductor-db-sync-zzp9q\" (UID: \"fe51c360-570b-4e53-9594-271a306efe47\") " pod="openstack/nova-cell0-conductor-db-sync-zzp9q" Jan 26 13:20:46 crc kubenswrapper[4844]: I0126 13:20:46.698973 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe51c360-570b-4e53-9594-271a306efe47-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-zzp9q\" (UID: \"fe51c360-570b-4e53-9594-271a306efe47\") " pod="openstack/nova-cell0-conductor-db-sync-zzp9q" Jan 26 13:20:46 crc kubenswrapper[4844]: I0126 13:20:46.699020 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ng94v\" (UniqueName: \"kubernetes.io/projected/fe51c360-570b-4e53-9594-271a306efe47-kube-api-access-ng94v\") pod \"nova-cell0-conductor-db-sync-zzp9q\" (UID: \"fe51c360-570b-4e53-9594-271a306efe47\") " pod="openstack/nova-cell0-conductor-db-sync-zzp9q" Jan 26 13:20:46 crc kubenswrapper[4844]: I0126 13:20:46.699093 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe51c360-570b-4e53-9594-271a306efe47-config-data\") pod \"nova-cell0-conductor-db-sync-zzp9q\" (UID: \"fe51c360-570b-4e53-9594-271a306efe47\") " pod="openstack/nova-cell0-conductor-db-sync-zzp9q" Jan 26 13:20:46 crc kubenswrapper[4844]: I0126 13:20:46.800926 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/576441ec-c5e3-4312-88c8-b256308a1490-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"576441ec-c5e3-4312-88c8-b256308a1490\") " pod="openstack/ceilometer-0" Jan 26 13:20:46 crc kubenswrapper[4844]: I0126 13:20:46.800971 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe51c360-570b-4e53-9594-271a306efe47-config-data\") pod \"nova-cell0-conductor-db-sync-zzp9q\" (UID: \"fe51c360-570b-4e53-9594-271a306efe47\") " pod="openstack/nova-cell0-conductor-db-sync-zzp9q" Jan 26 13:20:46 crc kubenswrapper[4844]: I0126 13:20:46.800992 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/576441ec-c5e3-4312-88c8-b256308a1490-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"576441ec-c5e3-4312-88c8-b256308a1490\") " pod="openstack/ceilometer-0" Jan 26 13:20:46 crc kubenswrapper[4844]: I0126 13:20:46.801047 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/576441ec-c5e3-4312-88c8-b256308a1490-log-httpd\") pod \"ceilometer-0\" (UID: \"576441ec-c5e3-4312-88c8-b256308a1490\") " pod="openstack/ceilometer-0" Jan 26 13:20:46 crc kubenswrapper[4844]: I0126 13:20:46.801115 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/576441ec-c5e3-4312-88c8-b256308a1490-config-data\") pod \"ceilometer-0\" (UID: \"576441ec-c5e3-4312-88c8-b256308a1490\") " pod="openstack/ceilometer-0" Jan 26 13:20:46 crc kubenswrapper[4844]: I0126 13:20:46.801146 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe51c360-570b-4e53-9594-271a306efe47-scripts\") pod \"nova-cell0-conductor-db-sync-zzp9q\" (UID: \"fe51c360-570b-4e53-9594-271a306efe47\") " pod="openstack/nova-cell0-conductor-db-sync-zzp9q" Jan 26 13:20:46 crc kubenswrapper[4844]: I0126 13:20:46.801176 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vcnb\" (UniqueName: \"kubernetes.io/projected/576441ec-c5e3-4312-88c8-b256308a1490-kube-api-access-2vcnb\") pod \"ceilometer-0\" (UID: \"576441ec-c5e3-4312-88c8-b256308a1490\") " pod="openstack/ceilometer-0" Jan 26 13:20:46 crc kubenswrapper[4844]: I0126 13:20:46.801195 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/576441ec-c5e3-4312-88c8-b256308a1490-scripts\") pod \"ceilometer-0\" (UID: \"576441ec-c5e3-4312-88c8-b256308a1490\") " pod="openstack/ceilometer-0" Jan 26 13:20:46 crc kubenswrapper[4844]: I0126 13:20:46.801381 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe51c360-570b-4e53-9594-271a306efe47-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-zzp9q\" (UID: \"fe51c360-570b-4e53-9594-271a306efe47\") " pod="openstack/nova-cell0-conductor-db-sync-zzp9q" Jan 26 13:20:46 crc kubenswrapper[4844]: I0126 13:20:46.801511 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ng94v\" (UniqueName: \"kubernetes.io/projected/fe51c360-570b-4e53-9594-271a306efe47-kube-api-access-ng94v\") pod \"nova-cell0-conductor-db-sync-zzp9q\" (UID: \"fe51c360-570b-4e53-9594-271a306efe47\") " pod="openstack/nova-cell0-conductor-db-sync-zzp9q" Jan 26 13:20:46 crc kubenswrapper[4844]: I0126 13:20:46.801547 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/576441ec-c5e3-4312-88c8-b256308a1490-run-httpd\") pod \"ceilometer-0\" (UID: \"576441ec-c5e3-4312-88c8-b256308a1490\") " pod="openstack/ceilometer-0" Jan 26 13:20:46 crc kubenswrapper[4844]: I0126 13:20:46.812095 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe51c360-570b-4e53-9594-271a306efe47-scripts\") pod \"nova-cell0-conductor-db-sync-zzp9q\" (UID: \"fe51c360-570b-4e53-9594-271a306efe47\") " pod="openstack/nova-cell0-conductor-db-sync-zzp9q" Jan 26 13:20:46 crc kubenswrapper[4844]: I0126 13:20:46.812438 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe51c360-570b-4e53-9594-271a306efe47-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-zzp9q\" (UID: 
\"fe51c360-570b-4e53-9594-271a306efe47\") " pod="openstack/nova-cell0-conductor-db-sync-zzp9q" Jan 26 13:20:46 crc kubenswrapper[4844]: I0126 13:20:46.815042 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe51c360-570b-4e53-9594-271a306efe47-config-data\") pod \"nova-cell0-conductor-db-sync-zzp9q\" (UID: \"fe51c360-570b-4e53-9594-271a306efe47\") " pod="openstack/nova-cell0-conductor-db-sync-zzp9q" Jan 26 13:20:46 crc kubenswrapper[4844]: I0126 13:20:46.819005 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ng94v\" (UniqueName: \"kubernetes.io/projected/fe51c360-570b-4e53-9594-271a306efe47-kube-api-access-ng94v\") pod \"nova-cell0-conductor-db-sync-zzp9q\" (UID: \"fe51c360-570b-4e53-9594-271a306efe47\") " pod="openstack/nova-cell0-conductor-db-sync-zzp9q" Jan 26 13:20:46 crc kubenswrapper[4844]: I0126 13:20:46.857532 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-zzp9q" Jan 26 13:20:46 crc kubenswrapper[4844]: I0126 13:20:46.903229 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/576441ec-c5e3-4312-88c8-b256308a1490-scripts\") pod \"ceilometer-0\" (UID: \"576441ec-c5e3-4312-88c8-b256308a1490\") " pod="openstack/ceilometer-0" Jan 26 13:20:46 crc kubenswrapper[4844]: I0126 13:20:46.903373 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/576441ec-c5e3-4312-88c8-b256308a1490-run-httpd\") pod \"ceilometer-0\" (UID: \"576441ec-c5e3-4312-88c8-b256308a1490\") " pod="openstack/ceilometer-0" Jan 26 13:20:46 crc kubenswrapper[4844]: I0126 13:20:46.903438 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/576441ec-c5e3-4312-88c8-b256308a1490-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"576441ec-c5e3-4312-88c8-b256308a1490\") " pod="openstack/ceilometer-0" Jan 26 13:20:46 crc kubenswrapper[4844]: I0126 13:20:46.903467 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/576441ec-c5e3-4312-88c8-b256308a1490-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"576441ec-c5e3-4312-88c8-b256308a1490\") " pod="openstack/ceilometer-0" Jan 26 13:20:46 crc kubenswrapper[4844]: I0126 13:20:46.903536 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/576441ec-c5e3-4312-88c8-b256308a1490-log-httpd\") pod \"ceilometer-0\" (UID: \"576441ec-c5e3-4312-88c8-b256308a1490\") " pod="openstack/ceilometer-0" Jan 26 13:20:46 crc kubenswrapper[4844]: I0126 13:20:46.903657 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/576441ec-c5e3-4312-88c8-b256308a1490-config-data\") pod \"ceilometer-0\" (UID: \"576441ec-c5e3-4312-88c8-b256308a1490\") " pod="openstack/ceilometer-0" Jan 26 13:20:46 crc kubenswrapper[4844]: I0126 13:20:46.903744 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vcnb\" (UniqueName: \"kubernetes.io/projected/576441ec-c5e3-4312-88c8-b256308a1490-kube-api-access-2vcnb\") pod \"ceilometer-0\" (UID: \"576441ec-c5e3-4312-88c8-b256308a1490\") " pod="openstack/ceilometer-0" Jan 
26 13:20:46 crc kubenswrapper[4844]: I0126 13:20:46.906889 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/576441ec-c5e3-4312-88c8-b256308a1490-log-httpd\") pod \"ceilometer-0\" (UID: \"576441ec-c5e3-4312-88c8-b256308a1490\") " pod="openstack/ceilometer-0" Jan 26 13:20:46 crc kubenswrapper[4844]: I0126 13:20:46.907988 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/576441ec-c5e3-4312-88c8-b256308a1490-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"576441ec-c5e3-4312-88c8-b256308a1490\") " pod="openstack/ceilometer-0" Jan 26 13:20:46 crc kubenswrapper[4844]: I0126 13:20:46.908311 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/576441ec-c5e3-4312-88c8-b256308a1490-run-httpd\") pod \"ceilometer-0\" (UID: \"576441ec-c5e3-4312-88c8-b256308a1490\") " pod="openstack/ceilometer-0" Jan 26 13:20:46 crc kubenswrapper[4844]: I0126 13:20:46.912698 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/576441ec-c5e3-4312-88c8-b256308a1490-config-data\") pod \"ceilometer-0\" (UID: \"576441ec-c5e3-4312-88c8-b256308a1490\") " pod="openstack/ceilometer-0" Jan 26 13:20:46 crc kubenswrapper[4844]: I0126 13:20:46.912815 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/576441ec-c5e3-4312-88c8-b256308a1490-scripts\") pod \"ceilometer-0\" (UID: \"576441ec-c5e3-4312-88c8-b256308a1490\") " pod="openstack/ceilometer-0" Jan 26 13:20:46 crc kubenswrapper[4844]: I0126 13:20:46.915032 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/576441ec-c5e3-4312-88c8-b256308a1490-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"576441ec-c5e3-4312-88c8-b256308a1490\") " pod="openstack/ceilometer-0" Jan 26 13:20:46 crc kubenswrapper[4844]: I0126 13:20:46.926339 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vcnb\" (UniqueName: \"kubernetes.io/projected/576441ec-c5e3-4312-88c8-b256308a1490-kube-api-access-2vcnb\") pod \"ceilometer-0\" (UID: \"576441ec-c5e3-4312-88c8-b256308a1490\") " pod="openstack/ceilometer-0" Jan 26 13:20:46 crc kubenswrapper[4844]: I0126 13:20:46.950338 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 13:20:47 crc kubenswrapper[4844]: I0126 13:20:47.333445 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80aa004c-98f5-4265-8321-daf6d8132c24" path="/var/lib/kubelet/pods/80aa004c-98f5-4265-8321-daf6d8132c24/volumes" Jan 26 13:20:47 crc kubenswrapper[4844]: I0126 13:20:47.373324 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-zzp9q"] Jan 26 13:20:47 crc kubenswrapper[4844]: W0126 13:20:47.373765 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfe51c360_570b_4e53_9594_271a306efe47.slice/crio-d4744f9760f0f8aa4503b2ea1313753de6f3bd1e34e13f6335be7171a7a7c6f6 WatchSource:0}: Error finding container d4744f9760f0f8aa4503b2ea1313753de6f3bd1e34e13f6335be7171a7a7c6f6: Status 404 returned error can't find the container with id d4744f9760f0f8aa4503b2ea1313753de6f3bd1e34e13f6335be7171a7a7c6f6 Jan 26 13:20:47 crc kubenswrapper[4844]: I0126 13:20:47.513897 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 13:20:47 crc kubenswrapper[4844]: I0126 13:20:47.528481 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"403b5928-19b1-4dfd-97c9-75079d7de60e","Type":"ContainerStarted","Data":"2f2c2483b5158ded517f083e02aa28c1df37b88e4796c7d6a6e4bc415a05bbb5"} Jan 26 13:20:47 crc kubenswrapper[4844]: I0126 13:20:47.532004 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-zzp9q" event={"ID":"fe51c360-570b-4e53-9594-271a306efe47","Type":"ContainerStarted","Data":"d4744f9760f0f8aa4503b2ea1313753de6f3bd1e34e13f6335be7171a7a7c6f6"} Jan 26 13:20:47 crc kubenswrapper[4844]: I0126 13:20:47.533852 4844 generic.go:334] "Generic (PLEG): container finished" podID="ed782618-8b69-4456-9aec-5184e765960f" containerID="f778593c77f19cd971369cd93f107ce9557b6ff677fcdb7bf966fe9cde611212" exitCode=0 Jan 26 13:20:47 crc kubenswrapper[4844]: I0126 13:20:47.533932 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"ed782618-8b69-4456-9aec-5184e765960f","Type":"ContainerDied","Data":"f778593c77f19cd971369cd93f107ce9557b6ff677fcdb7bf966fe9cde611212"} Jan 26 13:20:47 crc kubenswrapper[4844]: I0126 13:20:47.534003 4844 scope.go:117] "RemoveContainer" containerID="f40661e9cae1344ff8df85b9eb11c5a53401a5c8932da25e88f55fc3d9a6f8f8" Jan 26 13:20:47 crc kubenswrapper[4844]: I0126 13:20:47.873841 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dhzj8" Jan 26 13:20:47 crc kubenswrapper[4844]: I0126 13:20:47.874107 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-dhzj8" Jan 26 13:20:48 crc kubenswrapper[4844]: I0126 13:20:48.549317 4844 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 13:20:48 crc kubenswrapper[4844]: I0126 13:20:48.549616 4844 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 13:20:48 crc kubenswrapper[4844]: I0126 13:20:48.549474 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"576441ec-c5e3-4312-88c8-b256308a1490","Type":"ContainerStarted","Data":"acacf38ee15a562053800146e31b2a4da872ebf1971424f43640881e55beef29"} Jan 26 13:20:48 crc kubenswrapper[4844]: I0126 
13:20:48.944161 4844 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-dhzj8" podUID="2ac79d59-b04a-45d5-baa7-8370e8c54045" containerName="registry-server" probeResult="failure" output=< Jan 26 13:20:48 crc kubenswrapper[4844]: timeout: failed to connect service ":50051" within 1s Jan 26 13:20:48 crc kubenswrapper[4844]: > Jan 26 13:20:49 crc kubenswrapper[4844]: I0126 13:20:49.079323 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 26 13:20:49 crc kubenswrapper[4844]: I0126 13:20:49.598172 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 26 13:20:49 crc kubenswrapper[4844]: I0126 13:20:49.598336 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"ed782618-8b69-4456-9aec-5184e765960f","Type":"ContainerDied","Data":"f6b2df85a64bb107e6bc87c6ada5f34f22972002638f7b5343530151b9f82742"} Jan 26 13:20:49 crc kubenswrapper[4844]: I0126 13:20:49.598355 4844 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f6b2df85a64bb107e6bc87c6ada5f34f22972002638f7b5343530151b9f82742" Jan 26 13:20:49 crc kubenswrapper[4844]: I0126 13:20:49.598364 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"403b5928-19b1-4dfd-97c9-75079d7de60e","Type":"ContainerStarted","Data":"f1aadfc012795ec388389384c9bba4382312bbce054ad7e14a045105c0e93f5c"} Jan 26 13:20:49 crc kubenswrapper[4844]: I0126 13:20:49.598373 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"576441ec-c5e3-4312-88c8-b256308a1490","Type":"ContainerStarted","Data":"8adff110338ed64de77938ca2d6e4f93f4d0e5c41bcddabb38c4e67a7306077d"} Jan 26 13:20:49 crc kubenswrapper[4844]: I0126 13:20:49.637641 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.637622012 podStartE2EDuration="5.637622012s" podCreationTimestamp="2026-01-26 13:20:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 13:20:49.597235165 +0000 UTC m=+2226.530602777" watchObservedRunningTime="2026-01-26 13:20:49.637622012 +0000 UTC m=+2226.570989624" Jan 26 13:20:49 crc kubenswrapper[4844]: I0126 13:20:49.701562 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Jan 26 13:20:49 crc kubenswrapper[4844]: I0126 13:20:49.776917 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed782618-8b69-4456-9aec-5184e765960f-combined-ca-bundle\") pod \"ed782618-8b69-4456-9aec-5184e765960f\" (UID: \"ed782618-8b69-4456-9aec-5184e765960f\") " Jan 26 13:20:49 crc kubenswrapper[4844]: I0126 13:20:49.777011 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ed782618-8b69-4456-9aec-5184e765960f-logs\") pod \"ed782618-8b69-4456-9aec-5184e765960f\" (UID: \"ed782618-8b69-4456-9aec-5184e765960f\") " Jan 26 13:20:49 crc kubenswrapper[4844]: I0126 13:20:49.777070 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8d849\" (UniqueName: \"kubernetes.io/projected/ed782618-8b69-4456-9aec-5184e765960f-kube-api-access-8d849\") pod \"ed782618-8b69-4456-9aec-5184e765960f\" (UID: \"ed782618-8b69-4456-9aec-5184e765960f\") " Jan 26 13:20:49 crc kubenswrapper[4844]: I0126 13:20:49.777154 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/ed782618-8b69-4456-9aec-5184e765960f-custom-prometheus-ca\") pod \"ed782618-8b69-4456-9aec-5184e765960f\" (UID: \"ed782618-8b69-4456-9aec-5184e765960f\") " Jan 26 13:20:49 crc kubenswrapper[4844]: I0126 13:20:49.777233 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed782618-8b69-4456-9aec-5184e765960f-config-data\") pod \"ed782618-8b69-4456-9aec-5184e765960f\" (UID: \"ed782618-8b69-4456-9aec-5184e765960f\") " Jan 26 13:20:49 crc kubenswrapper[4844]: I0126 13:20:49.781470 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed782618-8b69-4456-9aec-5184e765960f-logs" (OuterVolumeSpecName: "logs") pod "ed782618-8b69-4456-9aec-5184e765960f" (UID: "ed782618-8b69-4456-9aec-5184e765960f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:20:49 crc kubenswrapper[4844]: I0126 13:20:49.804652 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed782618-8b69-4456-9aec-5184e765960f-kube-api-access-8d849" (OuterVolumeSpecName: "kube-api-access-8d849") pod "ed782618-8b69-4456-9aec-5184e765960f" (UID: "ed782618-8b69-4456-9aec-5184e765960f"). InnerVolumeSpecName "kube-api-access-8d849". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:20:49 crc kubenswrapper[4844]: I0126 13:20:49.809623 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed782618-8b69-4456-9aec-5184e765960f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ed782618-8b69-4456-9aec-5184e765960f" (UID: "ed782618-8b69-4456-9aec-5184e765960f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:20:49 crc kubenswrapper[4844]: I0126 13:20:49.821699 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed782618-8b69-4456-9aec-5184e765960f-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "ed782618-8b69-4456-9aec-5184e765960f" (UID: "ed782618-8b69-4456-9aec-5184e765960f"). 
InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:20:49 crc kubenswrapper[4844]: I0126 13:20:49.848818 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed782618-8b69-4456-9aec-5184e765960f-config-data" (OuterVolumeSpecName: "config-data") pod "ed782618-8b69-4456-9aec-5184e765960f" (UID: "ed782618-8b69-4456-9aec-5184e765960f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:20:49 crc kubenswrapper[4844]: I0126 13:20:49.879913 4844 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed782618-8b69-4456-9aec-5184e765960f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 13:20:49 crc kubenswrapper[4844]: I0126 13:20:49.879950 4844 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ed782618-8b69-4456-9aec-5184e765960f-logs\") on node \"crc\" DevicePath \"\"" Jan 26 13:20:49 crc kubenswrapper[4844]: I0126 13:20:49.879960 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8d849\" (UniqueName: \"kubernetes.io/projected/ed782618-8b69-4456-9aec-5184e765960f-kube-api-access-8d849\") on node \"crc\" DevicePath \"\"" Jan 26 13:20:49 crc kubenswrapper[4844]: I0126 13:20:49.879970 4844 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/ed782618-8b69-4456-9aec-5184e765960f-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 26 13:20:49 crc kubenswrapper[4844]: I0126 13:20:49.879980 4844 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed782618-8b69-4456-9aec-5184e765960f-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 13:20:50 crc kubenswrapper[4844]: I0126 13:20:50.390989 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 26 13:20:50 crc kubenswrapper[4844]: I0126 13:20:50.610649 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"576441ec-c5e3-4312-88c8-b256308a1490","Type":"ContainerStarted","Data":"1cfce4aa81882dac45e9400a76e9aaf98f3fd1f8d285dfb2920a20446fed938c"} Jan 26 13:20:50 crc kubenswrapper[4844]: I0126 13:20:50.611492 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Jan 26 13:20:50 crc kubenswrapper[4844]: I0126 13:20:50.683523 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 26 13:20:50 crc kubenswrapper[4844]: I0126 13:20:50.696667 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 26 13:20:50 crc kubenswrapper[4844]: I0126 13:20:50.700646 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 26 13:20:50 crc kubenswrapper[4844]: E0126 13:20:50.701036 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed782618-8b69-4456-9aec-5184e765960f" containerName="watcher-decision-engine" Jan 26 13:20:50 crc kubenswrapper[4844]: I0126 13:20:50.701050 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed782618-8b69-4456-9aec-5184e765960f" containerName="watcher-decision-engine" Jan 26 13:20:50 crc kubenswrapper[4844]: E0126 13:20:50.701062 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed782618-8b69-4456-9aec-5184e765960f" containerName="watcher-decision-engine" Jan 26 13:20:50 crc kubenswrapper[4844]: I0126 13:20:50.701068 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed782618-8b69-4456-9aec-5184e765960f" containerName="watcher-decision-engine" Jan 26 13:20:50 crc kubenswrapper[4844]: E0126 13:20:50.701089 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed782618-8b69-4456-9aec-5184e765960f" containerName="watcher-decision-engine" Jan 26 13:20:50 crc kubenswrapper[4844]: I0126 13:20:50.701095 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed782618-8b69-4456-9aec-5184e765960f" containerName="watcher-decision-engine" Jan 26 13:20:50 crc kubenswrapper[4844]: I0126 13:20:50.701300 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed782618-8b69-4456-9aec-5184e765960f" containerName="watcher-decision-engine" Jan 26 13:20:50 crc kubenswrapper[4844]: I0126 13:20:50.701314 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed782618-8b69-4456-9aec-5184e765960f" containerName="watcher-decision-engine" Jan 26 13:20:50 crc kubenswrapper[4844]: I0126 13:20:50.701325 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed782618-8b69-4456-9aec-5184e765960f" containerName="watcher-decision-engine" Jan 26 13:20:50 crc kubenswrapper[4844]: I0126 13:20:50.701978 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Jan 26 13:20:50 crc kubenswrapper[4844]: I0126 13:20:50.719342 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-decision-engine-config-data" Jan 26 13:20:50 crc kubenswrapper[4844]: I0126 13:20:50.719797 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 26 13:20:50 crc kubenswrapper[4844]: I0126 13:20:50.817663 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nppp\" (UniqueName: \"kubernetes.io/projected/fdfa0abd-53cc-4cd5-9dd0-8d6571ba0fea-kube-api-access-9nppp\") pod \"watcher-decision-engine-0\" (UID: \"fdfa0abd-53cc-4cd5-9dd0-8d6571ba0fea\") " pod="openstack/watcher-decision-engine-0" Jan 26 13:20:50 crc kubenswrapper[4844]: I0126 13:20:50.817730 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/fdfa0abd-53cc-4cd5-9dd0-8d6571ba0fea-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"fdfa0abd-53cc-4cd5-9dd0-8d6571ba0fea\") " pod="openstack/watcher-decision-engine-0" Jan 26 13:20:50 crc kubenswrapper[4844]: I0126 13:20:50.817749 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdfa0abd-53cc-4cd5-9dd0-8d6571ba0fea-config-data\") pod \"watcher-decision-engine-0\" (UID: \"fdfa0abd-53cc-4cd5-9dd0-8d6571ba0fea\") " pod="openstack/watcher-decision-engine-0" Jan 26 13:20:50 crc kubenswrapper[4844]: I0126 13:20:50.817790 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fdfa0abd-53cc-4cd5-9dd0-8d6571ba0fea-logs\") pod \"watcher-decision-engine-0\" (UID: \"fdfa0abd-53cc-4cd5-9dd0-8d6571ba0fea\") " pod="openstack/watcher-decision-engine-0" Jan 26 13:20:50 crc kubenswrapper[4844]: I0126 13:20:50.817924 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdfa0abd-53cc-4cd5-9dd0-8d6571ba0fea-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"fdfa0abd-53cc-4cd5-9dd0-8d6571ba0fea\") " pod="openstack/watcher-decision-engine-0" Jan 26 13:20:50 crc kubenswrapper[4844]: I0126 13:20:50.919247 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fdfa0abd-53cc-4cd5-9dd0-8d6571ba0fea-logs\") pod \"watcher-decision-engine-0\" (UID: \"fdfa0abd-53cc-4cd5-9dd0-8d6571ba0fea\") " pod="openstack/watcher-decision-engine-0" Jan 26 13:20:50 crc kubenswrapper[4844]: I0126 13:20:50.919412 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdfa0abd-53cc-4cd5-9dd0-8d6571ba0fea-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"fdfa0abd-53cc-4cd5-9dd0-8d6571ba0fea\") " pod="openstack/watcher-decision-engine-0" Jan 26 13:20:50 crc kubenswrapper[4844]: I0126 13:20:50.919511 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9nppp\" (UniqueName: \"kubernetes.io/projected/fdfa0abd-53cc-4cd5-9dd0-8d6571ba0fea-kube-api-access-9nppp\") pod \"watcher-decision-engine-0\" (UID: \"fdfa0abd-53cc-4cd5-9dd0-8d6571ba0fea\") " 
pod="openstack/watcher-decision-engine-0" Jan 26 13:20:50 crc kubenswrapper[4844]: I0126 13:20:50.919582 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/fdfa0abd-53cc-4cd5-9dd0-8d6571ba0fea-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"fdfa0abd-53cc-4cd5-9dd0-8d6571ba0fea\") " pod="openstack/watcher-decision-engine-0" Jan 26 13:20:50 crc kubenswrapper[4844]: I0126 13:20:50.919631 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdfa0abd-53cc-4cd5-9dd0-8d6571ba0fea-config-data\") pod \"watcher-decision-engine-0\" (UID: \"fdfa0abd-53cc-4cd5-9dd0-8d6571ba0fea\") " pod="openstack/watcher-decision-engine-0" Jan 26 13:20:50 crc kubenswrapper[4844]: I0126 13:20:50.919704 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fdfa0abd-53cc-4cd5-9dd0-8d6571ba0fea-logs\") pod \"watcher-decision-engine-0\" (UID: \"fdfa0abd-53cc-4cd5-9dd0-8d6571ba0fea\") " pod="openstack/watcher-decision-engine-0" Jan 26 13:20:50 crc kubenswrapper[4844]: I0126 13:20:50.927816 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdfa0abd-53cc-4cd5-9dd0-8d6571ba0fea-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"fdfa0abd-53cc-4cd5-9dd0-8d6571ba0fea\") " pod="openstack/watcher-decision-engine-0" Jan 26 13:20:50 crc kubenswrapper[4844]: I0126 13:20:50.938052 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/fdfa0abd-53cc-4cd5-9dd0-8d6571ba0fea-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"fdfa0abd-53cc-4cd5-9dd0-8d6571ba0fea\") " pod="openstack/watcher-decision-engine-0" Jan 26 13:20:50 crc kubenswrapper[4844]: I0126 13:20:50.938420 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdfa0abd-53cc-4cd5-9dd0-8d6571ba0fea-config-data\") pod \"watcher-decision-engine-0\" (UID: \"fdfa0abd-53cc-4cd5-9dd0-8d6571ba0fea\") " pod="openstack/watcher-decision-engine-0" Jan 26 13:20:50 crc kubenswrapper[4844]: I0126 13:20:50.942080 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nppp\" (UniqueName: \"kubernetes.io/projected/fdfa0abd-53cc-4cd5-9dd0-8d6571ba0fea-kube-api-access-9nppp\") pod \"watcher-decision-engine-0\" (UID: \"fdfa0abd-53cc-4cd5-9dd0-8d6571ba0fea\") " pod="openstack/watcher-decision-engine-0" Jan 26 13:20:51 crc kubenswrapper[4844]: I0126 13:20:51.042558 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Jan 26 13:20:51 crc kubenswrapper[4844]: I0126 13:20:51.329866 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed782618-8b69-4456-9aec-5184e765960f" path="/var/lib/kubelet/pods/ed782618-8b69-4456-9aec-5184e765960f/volumes" Jan 26 13:20:51 crc kubenswrapper[4844]: I0126 13:20:51.570016 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 26 13:20:51 crc kubenswrapper[4844]: I0126 13:20:51.684570 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"576441ec-c5e3-4312-88c8-b256308a1490","Type":"ContainerStarted","Data":"8066f4fd6d4eddc4ce590406d77ef732687b5ac53cc6047c3266cefaee2f5011"} Jan 26 13:20:51 crc kubenswrapper[4844]: I0126 13:20:51.687968 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"fdfa0abd-53cc-4cd5-9dd0-8d6571ba0fea","Type":"ContainerStarted","Data":"cb516c91a4229a5652b68a85c5242399e5c4f6dbd3427af6dcfab9367ad3e2ed"} Jan 26 13:20:52 crc kubenswrapper[4844]: I0126 13:20:52.698067 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"fdfa0abd-53cc-4cd5-9dd0-8d6571ba0fea","Type":"ContainerStarted","Data":"6ed256bb6d7526319fef691677337221f4b801576a5db853efa0c031c731453b"} Jan 26 13:20:52 crc kubenswrapper[4844]: I0126 13:20:52.703345 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"576441ec-c5e3-4312-88c8-b256308a1490","Type":"ContainerStarted","Data":"b64ffd321b6c8669efd4f31604acf1922e681189003bb437697553411bde6902"} Jan 26 13:20:52 crc kubenswrapper[4844]: I0126 13:20:52.703502 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 26 13:20:52 crc kubenswrapper[4844]: I0126 13:20:52.740937 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.292989994 podStartE2EDuration="6.740911826s" podCreationTimestamp="2026-01-26 13:20:46 +0000 UTC" firstStartedPulling="2026-01-26 13:20:47.572540387 +0000 UTC m=+2224.505907999" lastFinishedPulling="2026-01-26 13:20:52.020462219 +0000 UTC m=+2228.953829831" observedRunningTime="2026-01-26 13:20:52.733423155 +0000 UTC m=+2229.666790767" watchObservedRunningTime="2026-01-26 13:20:52.740911826 +0000 UTC m=+2229.674279458" Jan 26 13:20:52 crc kubenswrapper[4844]: I0126 13:20:52.741493 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-decision-engine-0" podStartSLOduration=2.74148663 podStartE2EDuration="2.74148663s" podCreationTimestamp="2026-01-26 13:20:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 13:20:52.718418083 +0000 UTC m=+2229.651785695" watchObservedRunningTime="2026-01-26 13:20:52.74148663 +0000 UTC m=+2229.674854242" Jan 26 13:20:54 crc kubenswrapper[4844]: I0126 13:20:54.532495 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 26 13:20:54 crc kubenswrapper[4844]: I0126 13:20:54.532867 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 26 13:20:54 crc kubenswrapper[4844]: I0126 13:20:54.561370 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/glance-default-internal-api-0" Jan 26 13:20:54 crc kubenswrapper[4844]: I0126 13:20:54.572751 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 26 13:20:54 crc kubenswrapper[4844]: I0126 13:20:54.728428 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 26 13:20:54 crc kubenswrapper[4844]: I0126 13:20:54.728642 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 26 13:20:54 crc kubenswrapper[4844]: I0126 13:20:54.904636 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 13:20:54 crc kubenswrapper[4844]: I0126 13:20:54.904976 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="576441ec-c5e3-4312-88c8-b256308a1490" containerName="ceilometer-central-agent" containerID="cri-o://8adff110338ed64de77938ca2d6e4f93f4d0e5c41bcddabb38c4e67a7306077d" gracePeriod=30 Jan 26 13:20:54 crc kubenswrapper[4844]: I0126 13:20:54.905414 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="576441ec-c5e3-4312-88c8-b256308a1490" containerName="sg-core" containerID="cri-o://8066f4fd6d4eddc4ce590406d77ef732687b5ac53cc6047c3266cefaee2f5011" gracePeriod=30 Jan 26 13:20:54 crc kubenswrapper[4844]: I0126 13:20:54.905448 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="576441ec-c5e3-4312-88c8-b256308a1490" containerName="ceilometer-notification-agent" containerID="cri-o://1cfce4aa81882dac45e9400a76e9aaf98f3fd1f8d285dfb2920a20446fed938c" gracePeriod=30 Jan 26 13:20:54 crc kubenswrapper[4844]: I0126 13:20:54.905423 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="576441ec-c5e3-4312-88c8-b256308a1490" containerName="proxy-httpd" containerID="cri-o://b64ffd321b6c8669efd4f31604acf1922e681189003bb437697553411bde6902" gracePeriod=30 Jan 26 13:20:55 crc kubenswrapper[4844]: I0126 13:20:55.741655 4844 generic.go:334] "Generic (PLEG): container finished" podID="576441ec-c5e3-4312-88c8-b256308a1490" containerID="b64ffd321b6c8669efd4f31604acf1922e681189003bb437697553411bde6902" exitCode=0 Jan 26 13:20:55 crc kubenswrapper[4844]: I0126 13:20:55.741695 4844 generic.go:334] "Generic (PLEG): container finished" podID="576441ec-c5e3-4312-88c8-b256308a1490" containerID="8066f4fd6d4eddc4ce590406d77ef732687b5ac53cc6047c3266cefaee2f5011" exitCode=2 Jan 26 13:20:55 crc kubenswrapper[4844]: I0126 13:20:55.741738 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"576441ec-c5e3-4312-88c8-b256308a1490","Type":"ContainerDied","Data":"b64ffd321b6c8669efd4f31604acf1922e681189003bb437697553411bde6902"} Jan 26 13:20:55 crc kubenswrapper[4844]: I0126 13:20:55.741778 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"576441ec-c5e3-4312-88c8-b256308a1490","Type":"ContainerDied","Data":"8066f4fd6d4eddc4ce590406d77ef732687b5ac53cc6047c3266cefaee2f5011"} Jan 26 13:20:56 crc kubenswrapper[4844]: I0126 13:20:56.753980 4844 generic.go:334] "Generic (PLEG): container finished" podID="576441ec-c5e3-4312-88c8-b256308a1490" containerID="1cfce4aa81882dac45e9400a76e9aaf98f3fd1f8d285dfb2920a20446fed938c" exitCode=0 Jan 26 13:20:56 crc kubenswrapper[4844]: I0126 13:20:56.754059 
4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"576441ec-c5e3-4312-88c8-b256308a1490","Type":"ContainerDied","Data":"1cfce4aa81882dac45e9400a76e9aaf98f3fd1f8d285dfb2920a20446fed938c"} Jan 26 13:20:57 crc kubenswrapper[4844]: I0126 13:20:57.096776 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 26 13:20:57 crc kubenswrapper[4844]: I0126 13:20:57.096858 4844 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 13:20:57 crc kubenswrapper[4844]: I0126 13:20:57.102715 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 26 13:20:57 crc kubenswrapper[4844]: I0126 13:20:57.935706 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dhzj8" Jan 26 13:20:57 crc kubenswrapper[4844]: I0126 13:20:57.986011 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dhzj8" Jan 26 13:20:58 crc kubenswrapper[4844]: I0126 13:20:58.171730 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dhzj8"] Jan 26 13:20:59 crc kubenswrapper[4844]: I0126 13:20:59.784250 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-dhzj8" podUID="2ac79d59-b04a-45d5-baa7-8370e8c54045" containerName="registry-server" containerID="cri-o://2ccddd78124473e03eec5846d28f30a57dd412904350fef4f0d21741323705f4" gracePeriod=2 Jan 26 13:21:01 crc kubenswrapper[4844]: I0126 13:21:01.042971 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Jan 26 13:21:01 crc kubenswrapper[4844]: I0126 13:21:01.070892 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Jan 26 13:21:01 crc kubenswrapper[4844]: I0126 13:21:01.741906 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dhzj8" Jan 26 13:21:01 crc kubenswrapper[4844]: I0126 13:21:01.795450 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ac79d59-b04a-45d5-baa7-8370e8c54045-utilities\") pod \"2ac79d59-b04a-45d5-baa7-8370e8c54045\" (UID: \"2ac79d59-b04a-45d5-baa7-8370e8c54045\") " Jan 26 13:21:01 crc kubenswrapper[4844]: I0126 13:21:01.795566 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ac79d59-b04a-45d5-baa7-8370e8c54045-catalog-content\") pod \"2ac79d59-b04a-45d5-baa7-8370e8c54045\" (UID: \"2ac79d59-b04a-45d5-baa7-8370e8c54045\") " Jan 26 13:21:01 crc kubenswrapper[4844]: I0126 13:21:01.795738 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4j6vf\" (UniqueName: \"kubernetes.io/projected/2ac79d59-b04a-45d5-baa7-8370e8c54045-kube-api-access-4j6vf\") pod \"2ac79d59-b04a-45d5-baa7-8370e8c54045\" (UID: \"2ac79d59-b04a-45d5-baa7-8370e8c54045\") " Jan 26 13:21:01 crc kubenswrapper[4844]: I0126 13:21:01.796769 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ac79d59-b04a-45d5-baa7-8370e8c54045-utilities" (OuterVolumeSpecName: "utilities") pod "2ac79d59-b04a-45d5-baa7-8370e8c54045" (UID: "2ac79d59-b04a-45d5-baa7-8370e8c54045"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:21:01 crc kubenswrapper[4844]: I0126 13:21:01.805204 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ac79d59-b04a-45d5-baa7-8370e8c54045-kube-api-access-4j6vf" (OuterVolumeSpecName: "kube-api-access-4j6vf") pod "2ac79d59-b04a-45d5-baa7-8370e8c54045" (UID: "2ac79d59-b04a-45d5-baa7-8370e8c54045"). InnerVolumeSpecName "kube-api-access-4j6vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:21:01 crc kubenswrapper[4844]: I0126 13:21:01.821505 4844 generic.go:334] "Generic (PLEG): container finished" podID="2ac79d59-b04a-45d5-baa7-8370e8c54045" containerID="2ccddd78124473e03eec5846d28f30a57dd412904350fef4f0d21741323705f4" exitCode=0 Jan 26 13:21:01 crc kubenswrapper[4844]: I0126 13:21:01.821620 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dhzj8" Jan 26 13:21:01 crc kubenswrapper[4844]: I0126 13:21:01.821646 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dhzj8" event={"ID":"2ac79d59-b04a-45d5-baa7-8370e8c54045","Type":"ContainerDied","Data":"2ccddd78124473e03eec5846d28f30a57dd412904350fef4f0d21741323705f4"} Jan 26 13:21:01 crc kubenswrapper[4844]: I0126 13:21:01.821703 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dhzj8" event={"ID":"2ac79d59-b04a-45d5-baa7-8370e8c54045","Type":"ContainerDied","Data":"9a0ec2170a2f35d753d91e9880770717e4fbe268ccbb569c7649ec5ab8f3fd20"} Jan 26 13:21:01 crc kubenswrapper[4844]: I0126 13:21:01.821724 4844 scope.go:117] "RemoveContainer" containerID="2ccddd78124473e03eec5846d28f30a57dd412904350fef4f0d21741323705f4" Jan 26 13:21:01 crc kubenswrapper[4844]: I0126 13:21:01.822368 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Jan 26 13:21:01 crc kubenswrapper[4844]: I0126 13:21:01.867137 4844 scope.go:117] "RemoveContainer" containerID="8abb7e1a40fd3ef7f24db644caebe67b8179d7754deefcc2450d4ccf17a98cc9" Jan 26 13:21:01 crc kubenswrapper[4844]: I0126 13:21:01.868282 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0" Jan 26 13:21:01 crc kubenswrapper[4844]: I0126 13:21:01.897900 4844 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ac79d59-b04a-45d5-baa7-8370e8c54045-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 13:21:01 crc kubenswrapper[4844]: I0126 13:21:01.897939 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4j6vf\" (UniqueName: \"kubernetes.io/projected/2ac79d59-b04a-45d5-baa7-8370e8c54045-kube-api-access-4j6vf\") on node \"crc\" DevicePath \"\"" Jan 26 13:21:01 crc kubenswrapper[4844]: I0126 13:21:01.901633 4844 scope.go:117] "RemoveContainer" containerID="52be658d096cb5853ae063f965d2cc3a619b06a04614edb455486aa0f0bceced" Jan 26 13:21:01 crc kubenswrapper[4844]: I0126 13:21:01.907920 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ac79d59-b04a-45d5-baa7-8370e8c54045-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2ac79d59-b04a-45d5-baa7-8370e8c54045" (UID: "2ac79d59-b04a-45d5-baa7-8370e8c54045"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:21:01 crc kubenswrapper[4844]: I0126 13:21:01.923250 4844 scope.go:117] "RemoveContainer" containerID="2ccddd78124473e03eec5846d28f30a57dd412904350fef4f0d21741323705f4" Jan 26 13:21:01 crc kubenswrapper[4844]: E0126 13:21:01.923752 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ccddd78124473e03eec5846d28f30a57dd412904350fef4f0d21741323705f4\": container with ID starting with 2ccddd78124473e03eec5846d28f30a57dd412904350fef4f0d21741323705f4 not found: ID does not exist" containerID="2ccddd78124473e03eec5846d28f30a57dd412904350fef4f0d21741323705f4" Jan 26 13:21:01 crc kubenswrapper[4844]: I0126 13:21:01.923868 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ccddd78124473e03eec5846d28f30a57dd412904350fef4f0d21741323705f4"} err="failed to get container status \"2ccddd78124473e03eec5846d28f30a57dd412904350fef4f0d21741323705f4\": rpc error: code = NotFound desc = could not find container \"2ccddd78124473e03eec5846d28f30a57dd412904350fef4f0d21741323705f4\": container with ID starting with 2ccddd78124473e03eec5846d28f30a57dd412904350fef4f0d21741323705f4 not found: ID does not exist" Jan 26 13:21:01 crc kubenswrapper[4844]: I0126 13:21:01.923949 4844 scope.go:117] "RemoveContainer" containerID="8abb7e1a40fd3ef7f24db644caebe67b8179d7754deefcc2450d4ccf17a98cc9" Jan 26 13:21:01 crc kubenswrapper[4844]: E0126 13:21:01.924842 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8abb7e1a40fd3ef7f24db644caebe67b8179d7754deefcc2450d4ccf17a98cc9\": container with ID starting with 8abb7e1a40fd3ef7f24db644caebe67b8179d7754deefcc2450d4ccf17a98cc9 not found: ID does not exist" containerID="8abb7e1a40fd3ef7f24db644caebe67b8179d7754deefcc2450d4ccf17a98cc9" Jan 26 13:21:01 crc kubenswrapper[4844]: I0126 13:21:01.925393 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8abb7e1a40fd3ef7f24db644caebe67b8179d7754deefcc2450d4ccf17a98cc9"} err="failed to get container status \"8abb7e1a40fd3ef7f24db644caebe67b8179d7754deefcc2450d4ccf17a98cc9\": rpc error: code = NotFound desc = could not find container \"8abb7e1a40fd3ef7f24db644caebe67b8179d7754deefcc2450d4ccf17a98cc9\": container with ID starting with 8abb7e1a40fd3ef7f24db644caebe67b8179d7754deefcc2450d4ccf17a98cc9 not found: ID does not exist" Jan 26 13:21:01 crc kubenswrapper[4844]: I0126 13:21:01.925478 4844 scope.go:117] "RemoveContainer" containerID="52be658d096cb5853ae063f965d2cc3a619b06a04614edb455486aa0f0bceced" Jan 26 13:21:01 crc kubenswrapper[4844]: E0126 13:21:01.925959 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52be658d096cb5853ae063f965d2cc3a619b06a04614edb455486aa0f0bceced\": container with ID starting with 52be658d096cb5853ae063f965d2cc3a619b06a04614edb455486aa0f0bceced not found: ID does not exist" containerID="52be658d096cb5853ae063f965d2cc3a619b06a04614edb455486aa0f0bceced" Jan 26 13:21:01 crc kubenswrapper[4844]: I0126 13:21:01.926068 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52be658d096cb5853ae063f965d2cc3a619b06a04614edb455486aa0f0bceced"} err="failed to get container status \"52be658d096cb5853ae063f965d2cc3a619b06a04614edb455486aa0f0bceced\": rpc error: code = NotFound desc = could not 
find container \"52be658d096cb5853ae063f965d2cc3a619b06a04614edb455486aa0f0bceced\": container with ID starting with 52be658d096cb5853ae063f965d2cc3a619b06a04614edb455486aa0f0bceced not found: ID does not exist" Jan 26 13:21:01 crc kubenswrapper[4844]: I0126 13:21:01.999781 4844 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ac79d59-b04a-45d5-baa7-8370e8c54045-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 13:21:02 crc kubenswrapper[4844]: I0126 13:21:02.159461 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dhzj8"] Jan 26 13:21:02 crc kubenswrapper[4844]: I0126 13:21:02.167586 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-dhzj8"] Jan 26 13:21:02 crc kubenswrapper[4844]: I0126 13:21:02.833176 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-zzp9q" event={"ID":"fe51c360-570b-4e53-9594-271a306efe47","Type":"ContainerStarted","Data":"ceb926bf0aa70465619da3341e9a87d889aa8d8db7ac32233c5911ac147e0e45"} Jan 26 13:21:03 crc kubenswrapper[4844]: I0126 13:21:03.327267 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ac79d59-b04a-45d5-baa7-8370e8c54045" path="/var/lib/kubelet/pods/2ac79d59-b04a-45d5-baa7-8370e8c54045/volumes" Jan 26 13:21:06 crc kubenswrapper[4844]: I0126 13:21:06.365051 4844 patch_prober.go:28] interesting pod/machine-config-daemon-j7r9j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 13:21:06 crc kubenswrapper[4844]: I0126 13:21:06.365731 4844 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 13:21:06 crc kubenswrapper[4844]: I0126 13:21:06.365798 4844 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" Jan 26 13:21:06 crc kubenswrapper[4844]: I0126 13:21:06.366846 4844 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"003e8a783231d0610c61a79899be5429104525b8053b82bc16d011d8b1eff87d"} pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 13:21:06 crc kubenswrapper[4844]: I0126 13:21:06.366924 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" containerName="machine-config-daemon" containerID="cri-o://003e8a783231d0610c61a79899be5429104525b8053b82bc16d011d8b1eff87d" gracePeriod=600 Jan 26 13:21:06 crc kubenswrapper[4844]: E0126 13:21:06.526389 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 13:21:06 crc kubenswrapper[4844]: I0126 13:21:06.879002 4844 generic.go:334] "Generic (PLEG): container finished" podID="e3602fc7-397b-4d73-ab0c-45acc047397b" containerID="003e8a783231d0610c61a79899be5429104525b8053b82bc16d011d8b1eff87d" exitCode=0 Jan 26 13:21:06 crc kubenswrapper[4844]: I0126 13:21:06.879049 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" event={"ID":"e3602fc7-397b-4d73-ab0c-45acc047397b","Type":"ContainerDied","Data":"003e8a783231d0610c61a79899be5429104525b8053b82bc16d011d8b1eff87d"} Jan 26 13:21:06 crc kubenswrapper[4844]: I0126 13:21:06.879084 4844 scope.go:117] "RemoveContainer" containerID="f8d2dd6bfcc6d48828fccc89734d561f1977038b1d62b9cafb05ed3131eb3a4b" Jan 26 13:21:06 crc kubenswrapper[4844]: I0126 13:21:06.879761 4844 scope.go:117] "RemoveContainer" containerID="003e8a783231d0610c61a79899be5429104525b8053b82bc16d011d8b1eff87d" Jan 26 13:21:06 crc kubenswrapper[4844]: E0126 13:21:06.880099 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 13:21:06 crc kubenswrapper[4844]: I0126 13:21:06.902902 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-zzp9q" podStartSLOduration=6.549174021 podStartE2EDuration="20.902882652s" podCreationTimestamp="2026-01-26 13:20:46 +0000 UTC" firstStartedPulling="2026-01-26 13:20:47.376558189 +0000 UTC m=+2224.309925801" lastFinishedPulling="2026-01-26 13:21:01.73026682 +0000 UTC m=+2238.663634432" observedRunningTime="2026-01-26 13:21:02.857515963 +0000 UTC m=+2239.790883605" watchObservedRunningTime="2026-01-26 13:21:06.902882652 +0000 UTC m=+2243.836250264" Jan 26 13:21:07 crc kubenswrapper[4844]: I0126 13:21:07.780468 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 13:21:07 crc kubenswrapper[4844]: I0126 13:21:07.890102 4844 generic.go:334] "Generic (PLEG): container finished" podID="576441ec-c5e3-4312-88c8-b256308a1490" containerID="8adff110338ed64de77938ca2d6e4f93f4d0e5c41bcddabb38c4e67a7306077d" exitCode=0 Jan 26 13:21:07 crc kubenswrapper[4844]: I0126 13:21:07.890166 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 13:21:07 crc kubenswrapper[4844]: I0126 13:21:07.890176 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"576441ec-c5e3-4312-88c8-b256308a1490","Type":"ContainerDied","Data":"8adff110338ed64de77938ca2d6e4f93f4d0e5c41bcddabb38c4e67a7306077d"} Jan 26 13:21:07 crc kubenswrapper[4844]: I0126 13:21:07.890204 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"576441ec-c5e3-4312-88c8-b256308a1490","Type":"ContainerDied","Data":"acacf38ee15a562053800146e31b2a4da872ebf1971424f43640881e55beef29"} Jan 26 13:21:07 crc kubenswrapper[4844]: I0126 13:21:07.890224 4844 scope.go:117] "RemoveContainer" containerID="b64ffd321b6c8669efd4f31604acf1922e681189003bb437697553411bde6902" Jan 26 13:21:07 crc kubenswrapper[4844]: I0126 13:21:07.910545 4844 scope.go:117] "RemoveContainer" containerID="8066f4fd6d4eddc4ce590406d77ef732687b5ac53cc6047c3266cefaee2f5011" Jan 26 13:21:07 crc kubenswrapper[4844]: I0126 13:21:07.919994 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/576441ec-c5e3-4312-88c8-b256308a1490-sg-core-conf-yaml\") pod \"576441ec-c5e3-4312-88c8-b256308a1490\" (UID: \"576441ec-c5e3-4312-88c8-b256308a1490\") " Jan 26 13:21:07 crc kubenswrapper[4844]: I0126 13:21:07.920072 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2vcnb\" (UniqueName: \"kubernetes.io/projected/576441ec-c5e3-4312-88c8-b256308a1490-kube-api-access-2vcnb\") pod \"576441ec-c5e3-4312-88c8-b256308a1490\" (UID: \"576441ec-c5e3-4312-88c8-b256308a1490\") " Jan 26 13:21:07 crc kubenswrapper[4844]: I0126 13:21:07.920156 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/576441ec-c5e3-4312-88c8-b256308a1490-run-httpd\") pod \"576441ec-c5e3-4312-88c8-b256308a1490\" (UID: \"576441ec-c5e3-4312-88c8-b256308a1490\") " Jan 26 13:21:07 crc kubenswrapper[4844]: I0126 13:21:07.920185 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/576441ec-c5e3-4312-88c8-b256308a1490-scripts\") pod \"576441ec-c5e3-4312-88c8-b256308a1490\" (UID: \"576441ec-c5e3-4312-88c8-b256308a1490\") " Jan 26 13:21:07 crc kubenswrapper[4844]: I0126 13:21:07.920219 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/576441ec-c5e3-4312-88c8-b256308a1490-config-data\") pod \"576441ec-c5e3-4312-88c8-b256308a1490\" (UID: \"576441ec-c5e3-4312-88c8-b256308a1490\") " Jan 26 13:21:07 crc kubenswrapper[4844]: I0126 13:21:07.920348 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/576441ec-c5e3-4312-88c8-b256308a1490-log-httpd\") pod \"576441ec-c5e3-4312-88c8-b256308a1490\" (UID: \"576441ec-c5e3-4312-88c8-b256308a1490\") " Jan 26 13:21:07 crc kubenswrapper[4844]: I0126 13:21:07.920377 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/576441ec-c5e3-4312-88c8-b256308a1490-combined-ca-bundle\") pod \"576441ec-c5e3-4312-88c8-b256308a1490\" (UID: \"576441ec-c5e3-4312-88c8-b256308a1490\") " Jan 26 13:21:07 crc kubenswrapper[4844]: I0126 13:21:07.922007 4844 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/576441ec-c5e3-4312-88c8-b256308a1490-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "576441ec-c5e3-4312-88c8-b256308a1490" (UID: "576441ec-c5e3-4312-88c8-b256308a1490"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:21:07 crc kubenswrapper[4844]: I0126 13:21:07.922870 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/576441ec-c5e3-4312-88c8-b256308a1490-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "576441ec-c5e3-4312-88c8-b256308a1490" (UID: "576441ec-c5e3-4312-88c8-b256308a1490"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:21:07 crc kubenswrapper[4844]: I0126 13:21:07.926747 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/576441ec-c5e3-4312-88c8-b256308a1490-scripts" (OuterVolumeSpecName: "scripts") pod "576441ec-c5e3-4312-88c8-b256308a1490" (UID: "576441ec-c5e3-4312-88c8-b256308a1490"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:21:07 crc kubenswrapper[4844]: I0126 13:21:07.926875 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/576441ec-c5e3-4312-88c8-b256308a1490-kube-api-access-2vcnb" (OuterVolumeSpecName: "kube-api-access-2vcnb") pod "576441ec-c5e3-4312-88c8-b256308a1490" (UID: "576441ec-c5e3-4312-88c8-b256308a1490"). InnerVolumeSpecName "kube-api-access-2vcnb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:21:07 crc kubenswrapper[4844]: I0126 13:21:07.936857 4844 scope.go:117] "RemoveContainer" containerID="1cfce4aa81882dac45e9400a76e9aaf98f3fd1f8d285dfb2920a20446fed938c" Jan 26 13:21:07 crc kubenswrapper[4844]: I0126 13:21:07.973910 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/576441ec-c5e3-4312-88c8-b256308a1490-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "576441ec-c5e3-4312-88c8-b256308a1490" (UID: "576441ec-c5e3-4312-88c8-b256308a1490"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:21:08 crc kubenswrapper[4844]: I0126 13:21:08.012966 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/576441ec-c5e3-4312-88c8-b256308a1490-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "576441ec-c5e3-4312-88c8-b256308a1490" (UID: "576441ec-c5e3-4312-88c8-b256308a1490"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:21:08 crc kubenswrapper[4844]: I0126 13:21:08.023534 4844 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/576441ec-c5e3-4312-88c8-b256308a1490-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 26 13:21:08 crc kubenswrapper[4844]: I0126 13:21:08.023569 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2vcnb\" (UniqueName: \"kubernetes.io/projected/576441ec-c5e3-4312-88c8-b256308a1490-kube-api-access-2vcnb\") on node \"crc\" DevicePath \"\"" Jan 26 13:21:08 crc kubenswrapper[4844]: I0126 13:21:08.023586 4844 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/576441ec-c5e3-4312-88c8-b256308a1490-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 13:21:08 crc kubenswrapper[4844]: I0126 13:21:08.023624 4844 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/576441ec-c5e3-4312-88c8-b256308a1490-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 13:21:08 crc kubenswrapper[4844]: I0126 13:21:08.023638 4844 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/576441ec-c5e3-4312-88c8-b256308a1490-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 13:21:08 crc kubenswrapper[4844]: I0126 13:21:08.023649 4844 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/576441ec-c5e3-4312-88c8-b256308a1490-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 13:21:08 crc kubenswrapper[4844]: I0126 13:21:08.061510 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/576441ec-c5e3-4312-88c8-b256308a1490-config-data" (OuterVolumeSpecName: "config-data") pod "576441ec-c5e3-4312-88c8-b256308a1490" (UID: "576441ec-c5e3-4312-88c8-b256308a1490"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:21:08 crc kubenswrapper[4844]: I0126 13:21:08.069822 4844 scope.go:117] "RemoveContainer" containerID="8adff110338ed64de77938ca2d6e4f93f4d0e5c41bcddabb38c4e67a7306077d" Jan 26 13:21:08 crc kubenswrapper[4844]: I0126 13:21:08.088582 4844 scope.go:117] "RemoveContainer" containerID="b64ffd321b6c8669efd4f31604acf1922e681189003bb437697553411bde6902" Jan 26 13:21:08 crc kubenswrapper[4844]: E0126 13:21:08.089094 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b64ffd321b6c8669efd4f31604acf1922e681189003bb437697553411bde6902\": container with ID starting with b64ffd321b6c8669efd4f31604acf1922e681189003bb437697553411bde6902 not found: ID does not exist" containerID="b64ffd321b6c8669efd4f31604acf1922e681189003bb437697553411bde6902" Jan 26 13:21:08 crc kubenswrapper[4844]: I0126 13:21:08.089134 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b64ffd321b6c8669efd4f31604acf1922e681189003bb437697553411bde6902"} err="failed to get container status \"b64ffd321b6c8669efd4f31604acf1922e681189003bb437697553411bde6902\": rpc error: code = NotFound desc = could not find container \"b64ffd321b6c8669efd4f31604acf1922e681189003bb437697553411bde6902\": container with ID starting with b64ffd321b6c8669efd4f31604acf1922e681189003bb437697553411bde6902 not found: ID does not exist" Jan 26 13:21:08 crc kubenswrapper[4844]: I0126 13:21:08.089171 4844 scope.go:117] "RemoveContainer" containerID="8066f4fd6d4eddc4ce590406d77ef732687b5ac53cc6047c3266cefaee2f5011" Jan 26 13:21:08 crc kubenswrapper[4844]: E0126 13:21:08.089534 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8066f4fd6d4eddc4ce590406d77ef732687b5ac53cc6047c3266cefaee2f5011\": container with ID starting with 8066f4fd6d4eddc4ce590406d77ef732687b5ac53cc6047c3266cefaee2f5011 not found: ID does not exist" containerID="8066f4fd6d4eddc4ce590406d77ef732687b5ac53cc6047c3266cefaee2f5011" Jan 26 13:21:08 crc kubenswrapper[4844]: I0126 13:21:08.089609 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8066f4fd6d4eddc4ce590406d77ef732687b5ac53cc6047c3266cefaee2f5011"} err="failed to get container status \"8066f4fd6d4eddc4ce590406d77ef732687b5ac53cc6047c3266cefaee2f5011\": rpc error: code = NotFound desc = could not find container \"8066f4fd6d4eddc4ce590406d77ef732687b5ac53cc6047c3266cefaee2f5011\": container with ID starting with 8066f4fd6d4eddc4ce590406d77ef732687b5ac53cc6047c3266cefaee2f5011 not found: ID does not exist" Jan 26 13:21:08 crc kubenswrapper[4844]: I0126 13:21:08.089645 4844 scope.go:117] "RemoveContainer" containerID="1cfce4aa81882dac45e9400a76e9aaf98f3fd1f8d285dfb2920a20446fed938c" Jan 26 13:21:08 crc kubenswrapper[4844]: E0126 13:21:08.089940 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1cfce4aa81882dac45e9400a76e9aaf98f3fd1f8d285dfb2920a20446fed938c\": container with ID starting with 1cfce4aa81882dac45e9400a76e9aaf98f3fd1f8d285dfb2920a20446fed938c not found: ID does not exist" containerID="1cfce4aa81882dac45e9400a76e9aaf98f3fd1f8d285dfb2920a20446fed938c" Jan 26 13:21:08 crc kubenswrapper[4844]: I0126 13:21:08.089969 4844 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"1cfce4aa81882dac45e9400a76e9aaf98f3fd1f8d285dfb2920a20446fed938c"} err="failed to get container status \"1cfce4aa81882dac45e9400a76e9aaf98f3fd1f8d285dfb2920a20446fed938c\": rpc error: code = NotFound desc = could not find container \"1cfce4aa81882dac45e9400a76e9aaf98f3fd1f8d285dfb2920a20446fed938c\": container with ID starting with 1cfce4aa81882dac45e9400a76e9aaf98f3fd1f8d285dfb2920a20446fed938c not found: ID does not exist" Jan 26 13:21:08 crc kubenswrapper[4844]: I0126 13:21:08.089989 4844 scope.go:117] "RemoveContainer" containerID="8adff110338ed64de77938ca2d6e4f93f4d0e5c41bcddabb38c4e67a7306077d" Jan 26 13:21:08 crc kubenswrapper[4844]: E0126 13:21:08.090181 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8adff110338ed64de77938ca2d6e4f93f4d0e5c41bcddabb38c4e67a7306077d\": container with ID starting with 8adff110338ed64de77938ca2d6e4f93f4d0e5c41bcddabb38c4e67a7306077d not found: ID does not exist" containerID="8adff110338ed64de77938ca2d6e4f93f4d0e5c41bcddabb38c4e67a7306077d" Jan 26 13:21:08 crc kubenswrapper[4844]: I0126 13:21:08.090205 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8adff110338ed64de77938ca2d6e4f93f4d0e5c41bcddabb38c4e67a7306077d"} err="failed to get container status \"8adff110338ed64de77938ca2d6e4f93f4d0e5c41bcddabb38c4e67a7306077d\": rpc error: code = NotFound desc = could not find container \"8adff110338ed64de77938ca2d6e4f93f4d0e5c41bcddabb38c4e67a7306077d\": container with ID starting with 8adff110338ed64de77938ca2d6e4f93f4d0e5c41bcddabb38c4e67a7306077d not found: ID does not exist" Jan 26 13:21:08 crc kubenswrapper[4844]: I0126 13:21:08.125653 4844 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/576441ec-c5e3-4312-88c8-b256308a1490-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 13:21:08 crc kubenswrapper[4844]: I0126 13:21:08.234645 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 13:21:08 crc kubenswrapper[4844]: I0126 13:21:08.239981 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 26 13:21:08 crc kubenswrapper[4844]: I0126 13:21:08.289054 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 26 13:21:08 crc kubenswrapper[4844]: E0126 13:21:08.289672 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="576441ec-c5e3-4312-88c8-b256308a1490" containerName="ceilometer-central-agent" Jan 26 13:21:08 crc kubenswrapper[4844]: I0126 13:21:08.289687 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="576441ec-c5e3-4312-88c8-b256308a1490" containerName="ceilometer-central-agent" Jan 26 13:21:08 crc kubenswrapper[4844]: E0126 13:21:08.289706 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="576441ec-c5e3-4312-88c8-b256308a1490" containerName="ceilometer-notification-agent" Jan 26 13:21:08 crc kubenswrapper[4844]: I0126 13:21:08.289711 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="576441ec-c5e3-4312-88c8-b256308a1490" containerName="ceilometer-notification-agent" Jan 26 13:21:08 crc kubenswrapper[4844]: E0126 13:21:08.289727 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ac79d59-b04a-45d5-baa7-8370e8c54045" containerName="extract-content" Jan 26 13:21:08 crc kubenswrapper[4844]: I0126 13:21:08.289734 4844 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="2ac79d59-b04a-45d5-baa7-8370e8c54045" containerName="extract-content" Jan 26 13:21:08 crc kubenswrapper[4844]: E0126 13:21:08.289746 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed782618-8b69-4456-9aec-5184e765960f" containerName="watcher-decision-engine" Jan 26 13:21:08 crc kubenswrapper[4844]: I0126 13:21:08.289752 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed782618-8b69-4456-9aec-5184e765960f" containerName="watcher-decision-engine" Jan 26 13:21:08 crc kubenswrapper[4844]: E0126 13:21:08.289771 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ac79d59-b04a-45d5-baa7-8370e8c54045" containerName="registry-server" Jan 26 13:21:08 crc kubenswrapper[4844]: I0126 13:21:08.289777 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ac79d59-b04a-45d5-baa7-8370e8c54045" containerName="registry-server" Jan 26 13:21:08 crc kubenswrapper[4844]: E0126 13:21:08.289789 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="576441ec-c5e3-4312-88c8-b256308a1490" containerName="sg-core" Jan 26 13:21:08 crc kubenswrapper[4844]: I0126 13:21:08.289795 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="576441ec-c5e3-4312-88c8-b256308a1490" containerName="sg-core" Jan 26 13:21:08 crc kubenswrapper[4844]: E0126 13:21:08.289807 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ac79d59-b04a-45d5-baa7-8370e8c54045" containerName="extract-utilities" Jan 26 13:21:08 crc kubenswrapper[4844]: I0126 13:21:08.289813 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ac79d59-b04a-45d5-baa7-8370e8c54045" containerName="extract-utilities" Jan 26 13:21:08 crc kubenswrapper[4844]: E0126 13:21:08.289839 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="576441ec-c5e3-4312-88c8-b256308a1490" containerName="proxy-httpd" Jan 26 13:21:08 crc kubenswrapper[4844]: I0126 13:21:08.289848 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="576441ec-c5e3-4312-88c8-b256308a1490" containerName="proxy-httpd" Jan 26 13:21:08 crc kubenswrapper[4844]: I0126 13:21:08.290177 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed782618-8b69-4456-9aec-5184e765960f" containerName="watcher-decision-engine" Jan 26 13:21:08 crc kubenswrapper[4844]: I0126 13:21:08.290201 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="576441ec-c5e3-4312-88c8-b256308a1490" containerName="sg-core" Jan 26 13:21:08 crc kubenswrapper[4844]: I0126 13:21:08.290219 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="576441ec-c5e3-4312-88c8-b256308a1490" containerName="ceilometer-central-agent" Jan 26 13:21:08 crc kubenswrapper[4844]: I0126 13:21:08.290228 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="576441ec-c5e3-4312-88c8-b256308a1490" containerName="ceilometer-notification-agent" Jan 26 13:21:08 crc kubenswrapper[4844]: I0126 13:21:08.290249 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="576441ec-c5e3-4312-88c8-b256308a1490" containerName="proxy-httpd" Jan 26 13:21:08 crc kubenswrapper[4844]: I0126 13:21:08.290260 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ac79d59-b04a-45d5-baa7-8370e8c54045" containerName="registry-server" Jan 26 13:21:08 crc kubenswrapper[4844]: I0126 13:21:08.309540 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 13:21:08 crc kubenswrapper[4844]: I0126 13:21:08.312289 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 26 13:21:08 crc kubenswrapper[4844]: I0126 13:21:08.312368 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 26 13:21:08 crc kubenswrapper[4844]: I0126 13:21:08.315566 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 13:21:08 crc kubenswrapper[4844]: I0126 13:21:08.437869 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trtcb\" (UniqueName: \"kubernetes.io/projected/d78e47b5-12d6-478c-b2a2-d91bc69b8f50-kube-api-access-trtcb\") pod \"ceilometer-0\" (UID: \"d78e47b5-12d6-478c-b2a2-d91bc69b8f50\") " pod="openstack/ceilometer-0" Jan 26 13:21:08 crc kubenswrapper[4844]: I0126 13:21:08.437985 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d78e47b5-12d6-478c-b2a2-d91bc69b8f50-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d78e47b5-12d6-478c-b2a2-d91bc69b8f50\") " pod="openstack/ceilometer-0" Jan 26 13:21:08 crc kubenswrapper[4844]: I0126 13:21:08.438031 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d78e47b5-12d6-478c-b2a2-d91bc69b8f50-config-data\") pod \"ceilometer-0\" (UID: \"d78e47b5-12d6-478c-b2a2-d91bc69b8f50\") " pod="openstack/ceilometer-0" Jan 26 13:21:08 crc kubenswrapper[4844]: I0126 13:21:08.438074 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d78e47b5-12d6-478c-b2a2-d91bc69b8f50-log-httpd\") pod \"ceilometer-0\" (UID: \"d78e47b5-12d6-478c-b2a2-d91bc69b8f50\") " pod="openstack/ceilometer-0" Jan 26 13:21:08 crc kubenswrapper[4844]: I0126 13:21:08.438144 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d78e47b5-12d6-478c-b2a2-d91bc69b8f50-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d78e47b5-12d6-478c-b2a2-d91bc69b8f50\") " pod="openstack/ceilometer-0" Jan 26 13:21:08 crc kubenswrapper[4844]: I0126 13:21:08.438232 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d78e47b5-12d6-478c-b2a2-d91bc69b8f50-scripts\") pod \"ceilometer-0\" (UID: \"d78e47b5-12d6-478c-b2a2-d91bc69b8f50\") " pod="openstack/ceilometer-0" Jan 26 13:21:08 crc kubenswrapper[4844]: I0126 13:21:08.438275 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d78e47b5-12d6-478c-b2a2-d91bc69b8f50-run-httpd\") pod \"ceilometer-0\" (UID: \"d78e47b5-12d6-478c-b2a2-d91bc69b8f50\") " pod="openstack/ceilometer-0" Jan 26 13:21:08 crc kubenswrapper[4844]: I0126 13:21:08.539479 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d78e47b5-12d6-478c-b2a2-d91bc69b8f50-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d78e47b5-12d6-478c-b2a2-d91bc69b8f50\") " pod="openstack/ceilometer-0" Jan 26 13:21:08 crc kubenswrapper[4844]: I0126 
13:21:08.539541 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d78e47b5-12d6-478c-b2a2-d91bc69b8f50-config-data\") pod \"ceilometer-0\" (UID: \"d78e47b5-12d6-478c-b2a2-d91bc69b8f50\") " pod="openstack/ceilometer-0" Jan 26 13:21:08 crc kubenswrapper[4844]: I0126 13:21:08.539591 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d78e47b5-12d6-478c-b2a2-d91bc69b8f50-log-httpd\") pod \"ceilometer-0\" (UID: \"d78e47b5-12d6-478c-b2a2-d91bc69b8f50\") " pod="openstack/ceilometer-0" Jan 26 13:21:08 crc kubenswrapper[4844]: I0126 13:21:08.540148 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d78e47b5-12d6-478c-b2a2-d91bc69b8f50-log-httpd\") pod \"ceilometer-0\" (UID: \"d78e47b5-12d6-478c-b2a2-d91bc69b8f50\") " pod="openstack/ceilometer-0" Jan 26 13:21:08 crc kubenswrapper[4844]: I0126 13:21:08.540183 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d78e47b5-12d6-478c-b2a2-d91bc69b8f50-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d78e47b5-12d6-478c-b2a2-d91bc69b8f50\") " pod="openstack/ceilometer-0" Jan 26 13:21:08 crc kubenswrapper[4844]: I0126 13:21:08.540237 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d78e47b5-12d6-478c-b2a2-d91bc69b8f50-scripts\") pod \"ceilometer-0\" (UID: \"d78e47b5-12d6-478c-b2a2-d91bc69b8f50\") " pod="openstack/ceilometer-0" Jan 26 13:21:08 crc kubenswrapper[4844]: I0126 13:21:08.540270 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d78e47b5-12d6-478c-b2a2-d91bc69b8f50-run-httpd\") pod \"ceilometer-0\" (UID: \"d78e47b5-12d6-478c-b2a2-d91bc69b8f50\") " pod="openstack/ceilometer-0" Jan 26 13:21:08 crc kubenswrapper[4844]: I0126 13:21:08.540335 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trtcb\" (UniqueName: \"kubernetes.io/projected/d78e47b5-12d6-478c-b2a2-d91bc69b8f50-kube-api-access-trtcb\") pod \"ceilometer-0\" (UID: \"d78e47b5-12d6-478c-b2a2-d91bc69b8f50\") " pod="openstack/ceilometer-0" Jan 26 13:21:08 crc kubenswrapper[4844]: I0126 13:21:08.540799 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d78e47b5-12d6-478c-b2a2-d91bc69b8f50-run-httpd\") pod \"ceilometer-0\" (UID: \"d78e47b5-12d6-478c-b2a2-d91bc69b8f50\") " pod="openstack/ceilometer-0" Jan 26 13:21:08 crc kubenswrapper[4844]: I0126 13:21:08.543727 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d78e47b5-12d6-478c-b2a2-d91bc69b8f50-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d78e47b5-12d6-478c-b2a2-d91bc69b8f50\") " pod="openstack/ceilometer-0" Jan 26 13:21:08 crc kubenswrapper[4844]: I0126 13:21:08.544852 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d78e47b5-12d6-478c-b2a2-d91bc69b8f50-scripts\") pod \"ceilometer-0\" (UID: \"d78e47b5-12d6-478c-b2a2-d91bc69b8f50\") " pod="openstack/ceilometer-0" Jan 26 13:21:08 crc kubenswrapper[4844]: I0126 13:21:08.545041 4844 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d78e47b5-12d6-478c-b2a2-d91bc69b8f50-config-data\") pod \"ceilometer-0\" (UID: \"d78e47b5-12d6-478c-b2a2-d91bc69b8f50\") " pod="openstack/ceilometer-0" Jan 26 13:21:08 crc kubenswrapper[4844]: I0126 13:21:08.545352 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d78e47b5-12d6-478c-b2a2-d91bc69b8f50-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d78e47b5-12d6-478c-b2a2-d91bc69b8f50\") " pod="openstack/ceilometer-0" Jan 26 13:21:08 crc kubenswrapper[4844]: I0126 13:21:08.560303 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trtcb\" (UniqueName: \"kubernetes.io/projected/d78e47b5-12d6-478c-b2a2-d91bc69b8f50-kube-api-access-trtcb\") pod \"ceilometer-0\" (UID: \"d78e47b5-12d6-478c-b2a2-d91bc69b8f50\") " pod="openstack/ceilometer-0" Jan 26 13:21:08 crc kubenswrapper[4844]: I0126 13:21:08.651255 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 13:21:09 crc kubenswrapper[4844]: I0126 13:21:09.136646 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 13:21:09 crc kubenswrapper[4844]: W0126 13:21:09.142166 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd78e47b5_12d6_478c_b2a2_d91bc69b8f50.slice/crio-bf55ea96c1fc4902ccc58f56a679e85df64858ee91e57e97656dbfb5859cb081 WatchSource:0}: Error finding container bf55ea96c1fc4902ccc58f56a679e85df64858ee91e57e97656dbfb5859cb081: Status 404 returned error can't find the container with id bf55ea96c1fc4902ccc58f56a679e85df64858ee91e57e97656dbfb5859cb081 Jan 26 13:21:09 crc kubenswrapper[4844]: I0126 13:21:09.323168 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="576441ec-c5e3-4312-88c8-b256308a1490" path="/var/lib/kubelet/pods/576441ec-c5e3-4312-88c8-b256308a1490/volumes" Jan 26 13:21:09 crc kubenswrapper[4844]: I0126 13:21:09.914675 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d78e47b5-12d6-478c-b2a2-d91bc69b8f50","Type":"ContainerStarted","Data":"3118fa943b250b1231471024a908d75694a53b258faef45c35bc2e96267a10b1"} Jan 26 13:21:09 crc kubenswrapper[4844]: I0126 13:21:09.914949 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d78e47b5-12d6-478c-b2a2-d91bc69b8f50","Type":"ContainerStarted","Data":"bf55ea96c1fc4902ccc58f56a679e85df64858ee91e57e97656dbfb5859cb081"} Jan 26 13:21:10 crc kubenswrapper[4844]: I0126 13:21:10.297396 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 13:21:10 crc kubenswrapper[4844]: I0126 13:21:10.925564 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d78e47b5-12d6-478c-b2a2-d91bc69b8f50","Type":"ContainerStarted","Data":"d6df5db3eb2bd9c0aeafe670e4458a3dbee7bfb81820c76afd08e856532649ae"} Jan 26 13:21:11 crc kubenswrapper[4844]: I0126 13:21:11.939181 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d78e47b5-12d6-478c-b2a2-d91bc69b8f50","Type":"ContainerStarted","Data":"4887364b55ba69365ed97852a718002e2c9b79940f52e766545a563f07e2f238"} Jan 26 13:21:12 crc kubenswrapper[4844]: I0126 13:21:12.951343 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"d78e47b5-12d6-478c-b2a2-d91bc69b8f50","Type":"ContainerStarted","Data":"cd06c45bbd3038f725bb531661bcec859316a27827ba75a2599fcfc0b652daad"} Jan 26 13:21:12 crc kubenswrapper[4844]: I0126 13:21:12.951567 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d78e47b5-12d6-478c-b2a2-d91bc69b8f50" containerName="ceilometer-central-agent" containerID="cri-o://3118fa943b250b1231471024a908d75694a53b258faef45c35bc2e96267a10b1" gracePeriod=30 Jan 26 13:21:12 crc kubenswrapper[4844]: I0126 13:21:12.951859 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d78e47b5-12d6-478c-b2a2-d91bc69b8f50" containerName="sg-core" containerID="cri-o://4887364b55ba69365ed97852a718002e2c9b79940f52e766545a563f07e2f238" gracePeriod=30 Jan 26 13:21:12 crc kubenswrapper[4844]: I0126 13:21:12.951978 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d78e47b5-12d6-478c-b2a2-d91bc69b8f50" containerName="proxy-httpd" containerID="cri-o://cd06c45bbd3038f725bb531661bcec859316a27827ba75a2599fcfc0b652daad" gracePeriod=30 Jan 26 13:21:12 crc kubenswrapper[4844]: I0126 13:21:12.952018 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 26 13:21:12 crc kubenswrapper[4844]: I0126 13:21:12.952064 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d78e47b5-12d6-478c-b2a2-d91bc69b8f50" containerName="ceilometer-notification-agent" containerID="cri-o://d6df5db3eb2bd9c0aeafe670e4458a3dbee7bfb81820c76afd08e856532649ae" gracePeriod=30 Jan 26 13:21:13 crc kubenswrapper[4844]: I0126 13:21:13.003084 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.687236266 podStartE2EDuration="5.003063259s" podCreationTimestamp="2026-01-26 13:21:08 +0000 UTC" firstStartedPulling="2026-01-26 13:21:09.145295254 +0000 UTC m=+2246.078662856" lastFinishedPulling="2026-01-26 13:21:12.461122237 +0000 UTC m=+2249.394489849" observedRunningTime="2026-01-26 13:21:12.993382974 +0000 UTC m=+2249.926750586" watchObservedRunningTime="2026-01-26 13:21:13.003063259 +0000 UTC m=+2249.936430871" Jan 26 13:21:13 crc kubenswrapper[4844]: I0126 13:21:13.967063 4844 generic.go:334] "Generic (PLEG): container finished" podID="d78e47b5-12d6-478c-b2a2-d91bc69b8f50" containerID="cd06c45bbd3038f725bb531661bcec859316a27827ba75a2599fcfc0b652daad" exitCode=0 Jan 26 13:21:13 crc kubenswrapper[4844]: I0126 13:21:13.967362 4844 generic.go:334] "Generic (PLEG): container finished" podID="d78e47b5-12d6-478c-b2a2-d91bc69b8f50" containerID="4887364b55ba69365ed97852a718002e2c9b79940f52e766545a563f07e2f238" exitCode=2 Jan 26 13:21:13 crc kubenswrapper[4844]: I0126 13:21:13.967379 4844 generic.go:334] "Generic (PLEG): container finished" podID="d78e47b5-12d6-478c-b2a2-d91bc69b8f50" containerID="d6df5db3eb2bd9c0aeafe670e4458a3dbee7bfb81820c76afd08e856532649ae" exitCode=0 Jan 26 13:21:13 crc kubenswrapper[4844]: I0126 13:21:13.967104 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d78e47b5-12d6-478c-b2a2-d91bc69b8f50","Type":"ContainerDied","Data":"cd06c45bbd3038f725bb531661bcec859316a27827ba75a2599fcfc0b652daad"} Jan 26 13:21:13 crc kubenswrapper[4844]: I0126 13:21:13.967422 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"d78e47b5-12d6-478c-b2a2-d91bc69b8f50","Type":"ContainerDied","Data":"4887364b55ba69365ed97852a718002e2c9b79940f52e766545a563f07e2f238"} Jan 26 13:21:13 crc kubenswrapper[4844]: I0126 13:21:13.967441 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d78e47b5-12d6-478c-b2a2-d91bc69b8f50","Type":"ContainerDied","Data":"d6df5db3eb2bd9c0aeafe670e4458a3dbee7bfb81820c76afd08e856532649ae"} Jan 26 13:21:18 crc kubenswrapper[4844]: I0126 13:21:18.008699 4844 generic.go:334] "Generic (PLEG): container finished" podID="fe51c360-570b-4e53-9594-271a306efe47" containerID="ceb926bf0aa70465619da3341e9a87d889aa8d8db7ac32233c5911ac147e0e45" exitCode=0 Jan 26 13:21:18 crc kubenswrapper[4844]: I0126 13:21:18.008768 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-zzp9q" event={"ID":"fe51c360-570b-4e53-9594-271a306efe47","Type":"ContainerDied","Data":"ceb926bf0aa70465619da3341e9a87d889aa8d8db7ac32233c5911ac147e0e45"} Jan 26 13:21:19 crc kubenswrapper[4844]: I0126 13:21:19.428308 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-zzp9q" Jan 26 13:21:19 crc kubenswrapper[4844]: I0126 13:21:19.460495 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe51c360-570b-4e53-9594-271a306efe47-scripts\") pod \"fe51c360-570b-4e53-9594-271a306efe47\" (UID: \"fe51c360-570b-4e53-9594-271a306efe47\") " Jan 26 13:21:19 crc kubenswrapper[4844]: I0126 13:21:19.460552 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe51c360-570b-4e53-9594-271a306efe47-combined-ca-bundle\") pod \"fe51c360-570b-4e53-9594-271a306efe47\" (UID: \"fe51c360-570b-4e53-9594-271a306efe47\") " Jan 26 13:21:19 crc kubenswrapper[4844]: I0126 13:21:19.460648 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ng94v\" (UniqueName: \"kubernetes.io/projected/fe51c360-570b-4e53-9594-271a306efe47-kube-api-access-ng94v\") pod \"fe51c360-570b-4e53-9594-271a306efe47\" (UID: \"fe51c360-570b-4e53-9594-271a306efe47\") " Jan 26 13:21:19 crc kubenswrapper[4844]: I0126 13:21:19.460776 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe51c360-570b-4e53-9594-271a306efe47-config-data\") pod \"fe51c360-570b-4e53-9594-271a306efe47\" (UID: \"fe51c360-570b-4e53-9594-271a306efe47\") " Jan 26 13:21:19 crc kubenswrapper[4844]: I0126 13:21:19.473789 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe51c360-570b-4e53-9594-271a306efe47-scripts" (OuterVolumeSpecName: "scripts") pod "fe51c360-570b-4e53-9594-271a306efe47" (UID: "fe51c360-570b-4e53-9594-271a306efe47"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:21:19 crc kubenswrapper[4844]: I0126 13:21:19.483691 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe51c360-570b-4e53-9594-271a306efe47-kube-api-access-ng94v" (OuterVolumeSpecName: "kube-api-access-ng94v") pod "fe51c360-570b-4e53-9594-271a306efe47" (UID: "fe51c360-570b-4e53-9594-271a306efe47"). InnerVolumeSpecName "kube-api-access-ng94v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:21:19 crc kubenswrapper[4844]: I0126 13:21:19.508248 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe51c360-570b-4e53-9594-271a306efe47-config-data" (OuterVolumeSpecName: "config-data") pod "fe51c360-570b-4e53-9594-271a306efe47" (UID: "fe51c360-570b-4e53-9594-271a306efe47"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:21:19 crc kubenswrapper[4844]: I0126 13:21:19.510838 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe51c360-570b-4e53-9594-271a306efe47-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fe51c360-570b-4e53-9594-271a306efe47" (UID: "fe51c360-570b-4e53-9594-271a306efe47"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:21:19 crc kubenswrapper[4844]: I0126 13:21:19.573610 4844 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe51c360-570b-4e53-9594-271a306efe47-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 13:21:19 crc kubenswrapper[4844]: I0126 13:21:19.573642 4844 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe51c360-570b-4e53-9594-271a306efe47-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 13:21:19 crc kubenswrapper[4844]: I0126 13:21:19.573650 4844 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe51c360-570b-4e53-9594-271a306efe47-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 13:21:19 crc kubenswrapper[4844]: I0126 13:21:19.573662 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ng94v\" (UniqueName: \"kubernetes.io/projected/fe51c360-570b-4e53-9594-271a306efe47-kube-api-access-ng94v\") on node \"crc\" DevicePath \"\"" Jan 26 13:21:20 crc kubenswrapper[4844]: I0126 13:21:20.035288 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-zzp9q" event={"ID":"fe51c360-570b-4e53-9594-271a306efe47","Type":"ContainerDied","Data":"d4744f9760f0f8aa4503b2ea1313753de6f3bd1e34e13f6335be7171a7a7c6f6"} Jan 26 13:21:20 crc kubenswrapper[4844]: I0126 13:21:20.035342 4844 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d4744f9760f0f8aa4503b2ea1313753de6f3bd1e34e13f6335be7171a7a7c6f6" Jan 26 13:21:20 crc kubenswrapper[4844]: I0126 13:21:20.035400 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-zzp9q" Jan 26 13:21:20 crc kubenswrapper[4844]: I0126 13:21:20.614161 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 26 13:21:20 crc kubenswrapper[4844]: E0126 13:21:20.614956 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe51c360-570b-4e53-9594-271a306efe47" containerName="nova-cell0-conductor-db-sync" Jan 26 13:21:20 crc kubenswrapper[4844]: I0126 13:21:20.614970 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe51c360-570b-4e53-9594-271a306efe47" containerName="nova-cell0-conductor-db-sync" Jan 26 13:21:20 crc kubenswrapper[4844]: I0126 13:21:20.615235 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe51c360-570b-4e53-9594-271a306efe47" containerName="nova-cell0-conductor-db-sync" Jan 26 13:21:20 crc kubenswrapper[4844]: I0126 13:21:20.615972 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 26 13:21:20 crc kubenswrapper[4844]: I0126 13:21:20.618528 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 26 13:21:20 crc kubenswrapper[4844]: I0126 13:21:20.620174 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-gbg8x" Jan 26 13:21:20 crc kubenswrapper[4844]: I0126 13:21:20.632032 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 26 13:21:20 crc kubenswrapper[4844]: I0126 13:21:20.693364 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1aa738a6-8d60-4c39-aa86-dc27720dc883-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"1aa738a6-8d60-4c39-aa86-dc27720dc883\") " pod="openstack/nova-cell0-conductor-0" Jan 26 13:21:20 crc kubenswrapper[4844]: I0126 13:21:20.693586 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1aa738a6-8d60-4c39-aa86-dc27720dc883-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"1aa738a6-8d60-4c39-aa86-dc27720dc883\") " pod="openstack/nova-cell0-conductor-0" Jan 26 13:21:20 crc kubenswrapper[4844]: I0126 13:21:20.693649 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggsb2\" (UniqueName: \"kubernetes.io/projected/1aa738a6-8d60-4c39-aa86-dc27720dc883-kube-api-access-ggsb2\") pod \"nova-cell0-conductor-0\" (UID: \"1aa738a6-8d60-4c39-aa86-dc27720dc883\") " pod="openstack/nova-cell0-conductor-0" Jan 26 13:21:20 crc kubenswrapper[4844]: I0126 13:21:20.795179 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1aa738a6-8d60-4c39-aa86-dc27720dc883-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"1aa738a6-8d60-4c39-aa86-dc27720dc883\") " pod="openstack/nova-cell0-conductor-0" Jan 26 13:21:20 crc kubenswrapper[4844]: I0126 13:21:20.795275 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggsb2\" (UniqueName: \"kubernetes.io/projected/1aa738a6-8d60-4c39-aa86-dc27720dc883-kube-api-access-ggsb2\") pod \"nova-cell0-conductor-0\" (UID: \"1aa738a6-8d60-4c39-aa86-dc27720dc883\") " pod="openstack/nova-cell0-conductor-0" Jan 26 13:21:20 crc kubenswrapper[4844]: 
I0126 13:21:20.795386 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1aa738a6-8d60-4c39-aa86-dc27720dc883-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"1aa738a6-8d60-4c39-aa86-dc27720dc883\") " pod="openstack/nova-cell0-conductor-0" Jan 26 13:21:20 crc kubenswrapper[4844]: I0126 13:21:20.800519 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1aa738a6-8d60-4c39-aa86-dc27720dc883-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"1aa738a6-8d60-4c39-aa86-dc27720dc883\") " pod="openstack/nova-cell0-conductor-0" Jan 26 13:21:20 crc kubenswrapper[4844]: I0126 13:21:20.801103 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1aa738a6-8d60-4c39-aa86-dc27720dc883-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"1aa738a6-8d60-4c39-aa86-dc27720dc883\") " pod="openstack/nova-cell0-conductor-0" Jan 26 13:21:20 crc kubenswrapper[4844]: I0126 13:21:20.813027 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggsb2\" (UniqueName: \"kubernetes.io/projected/1aa738a6-8d60-4c39-aa86-dc27720dc883-kube-api-access-ggsb2\") pod \"nova-cell0-conductor-0\" (UID: \"1aa738a6-8d60-4c39-aa86-dc27720dc883\") " pod="openstack/nova-cell0-conductor-0" Jan 26 13:21:20 crc kubenswrapper[4844]: I0126 13:21:20.936484 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 26 13:21:21 crc kubenswrapper[4844]: I0126 13:21:21.314510 4844 scope.go:117] "RemoveContainer" containerID="003e8a783231d0610c61a79899be5429104525b8053b82bc16d011d8b1eff87d" Jan 26 13:21:21 crc kubenswrapper[4844]: E0126 13:21:21.315029 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 13:21:21 crc kubenswrapper[4844]: W0126 13:21:21.457818 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1aa738a6_8d60_4c39_aa86_dc27720dc883.slice/crio-87308c99e95bd29b67dcb3280717e47fa9371c2a36cd3dd7d9d67e76f7604fe2 WatchSource:0}: Error finding container 87308c99e95bd29b67dcb3280717e47fa9371c2a36cd3dd7d9d67e76f7604fe2: Status 404 returned error can't find the container with id 87308c99e95bd29b67dcb3280717e47fa9371c2a36cd3dd7d9d67e76f7604fe2 Jan 26 13:21:21 crc kubenswrapper[4844]: I0126 13:21:21.461367 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 26 13:21:21 crc kubenswrapper[4844]: I0126 13:21:21.758277 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 13:21:21 crc kubenswrapper[4844]: I0126 13:21:21.814364 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d78e47b5-12d6-478c-b2a2-d91bc69b8f50-config-data\") pod \"d78e47b5-12d6-478c-b2a2-d91bc69b8f50\" (UID: \"d78e47b5-12d6-478c-b2a2-d91bc69b8f50\") " Jan 26 13:21:21 crc kubenswrapper[4844]: I0126 13:21:21.814506 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d78e47b5-12d6-478c-b2a2-d91bc69b8f50-sg-core-conf-yaml\") pod \"d78e47b5-12d6-478c-b2a2-d91bc69b8f50\" (UID: \"d78e47b5-12d6-478c-b2a2-d91bc69b8f50\") " Jan 26 13:21:21 crc kubenswrapper[4844]: I0126 13:21:21.814523 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d78e47b5-12d6-478c-b2a2-d91bc69b8f50-combined-ca-bundle\") pod \"d78e47b5-12d6-478c-b2a2-d91bc69b8f50\" (UID: \"d78e47b5-12d6-478c-b2a2-d91bc69b8f50\") " Jan 26 13:21:21 crc kubenswrapper[4844]: I0126 13:21:21.814549 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d78e47b5-12d6-478c-b2a2-d91bc69b8f50-scripts\") pod \"d78e47b5-12d6-478c-b2a2-d91bc69b8f50\" (UID: \"d78e47b5-12d6-478c-b2a2-d91bc69b8f50\") " Jan 26 13:21:21 crc kubenswrapper[4844]: I0126 13:21:21.814583 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-trtcb\" (UniqueName: \"kubernetes.io/projected/d78e47b5-12d6-478c-b2a2-d91bc69b8f50-kube-api-access-trtcb\") pod \"d78e47b5-12d6-478c-b2a2-d91bc69b8f50\" (UID: \"d78e47b5-12d6-478c-b2a2-d91bc69b8f50\") " Jan 26 13:21:21 crc kubenswrapper[4844]: I0126 13:21:21.814629 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d78e47b5-12d6-478c-b2a2-d91bc69b8f50-log-httpd\") pod \"d78e47b5-12d6-478c-b2a2-d91bc69b8f50\" (UID: \"d78e47b5-12d6-478c-b2a2-d91bc69b8f50\") " Jan 26 13:21:21 crc kubenswrapper[4844]: I0126 13:21:21.814686 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d78e47b5-12d6-478c-b2a2-d91bc69b8f50-run-httpd\") pod \"d78e47b5-12d6-478c-b2a2-d91bc69b8f50\" (UID: \"d78e47b5-12d6-478c-b2a2-d91bc69b8f50\") " Jan 26 13:21:21 crc kubenswrapper[4844]: I0126 13:21:21.815453 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d78e47b5-12d6-478c-b2a2-d91bc69b8f50-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "d78e47b5-12d6-478c-b2a2-d91bc69b8f50" (UID: "d78e47b5-12d6-478c-b2a2-d91bc69b8f50"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:21:21 crc kubenswrapper[4844]: I0126 13:21:21.817116 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d78e47b5-12d6-478c-b2a2-d91bc69b8f50-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "d78e47b5-12d6-478c-b2a2-d91bc69b8f50" (UID: "d78e47b5-12d6-478c-b2a2-d91bc69b8f50"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:21:21 crc kubenswrapper[4844]: I0126 13:21:21.821046 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d78e47b5-12d6-478c-b2a2-d91bc69b8f50-scripts" (OuterVolumeSpecName: "scripts") pod "d78e47b5-12d6-478c-b2a2-d91bc69b8f50" (UID: "d78e47b5-12d6-478c-b2a2-d91bc69b8f50"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:21:21 crc kubenswrapper[4844]: I0126 13:21:21.821790 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d78e47b5-12d6-478c-b2a2-d91bc69b8f50-kube-api-access-trtcb" (OuterVolumeSpecName: "kube-api-access-trtcb") pod "d78e47b5-12d6-478c-b2a2-d91bc69b8f50" (UID: "d78e47b5-12d6-478c-b2a2-d91bc69b8f50"). InnerVolumeSpecName "kube-api-access-trtcb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:21:21 crc kubenswrapper[4844]: I0126 13:21:21.860434 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d78e47b5-12d6-478c-b2a2-d91bc69b8f50-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "d78e47b5-12d6-478c-b2a2-d91bc69b8f50" (UID: "d78e47b5-12d6-478c-b2a2-d91bc69b8f50"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:21:21 crc kubenswrapper[4844]: I0126 13:21:21.894860 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d78e47b5-12d6-478c-b2a2-d91bc69b8f50-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d78e47b5-12d6-478c-b2a2-d91bc69b8f50" (UID: "d78e47b5-12d6-478c-b2a2-d91bc69b8f50"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:21:21 crc kubenswrapper[4844]: I0126 13:21:21.916974 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-trtcb\" (UniqueName: \"kubernetes.io/projected/d78e47b5-12d6-478c-b2a2-d91bc69b8f50-kube-api-access-trtcb\") on node \"crc\" DevicePath \"\"" Jan 26 13:21:21 crc kubenswrapper[4844]: I0126 13:21:21.917004 4844 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d78e47b5-12d6-478c-b2a2-d91bc69b8f50-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 13:21:21 crc kubenswrapper[4844]: I0126 13:21:21.917014 4844 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d78e47b5-12d6-478c-b2a2-d91bc69b8f50-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 13:21:21 crc kubenswrapper[4844]: I0126 13:21:21.917022 4844 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d78e47b5-12d6-478c-b2a2-d91bc69b8f50-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 26 13:21:21 crc kubenswrapper[4844]: I0126 13:21:21.917031 4844 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d78e47b5-12d6-478c-b2a2-d91bc69b8f50-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 13:21:21 crc kubenswrapper[4844]: I0126 13:21:21.917039 4844 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d78e47b5-12d6-478c-b2a2-d91bc69b8f50-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 13:21:21 crc kubenswrapper[4844]: I0126 13:21:21.918043 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/secret/d78e47b5-12d6-478c-b2a2-d91bc69b8f50-config-data" (OuterVolumeSpecName: "config-data") pod "d78e47b5-12d6-478c-b2a2-d91bc69b8f50" (UID: "d78e47b5-12d6-478c-b2a2-d91bc69b8f50"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:21:22 crc kubenswrapper[4844]: I0126 13:21:22.019223 4844 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d78e47b5-12d6-478c-b2a2-d91bc69b8f50-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 13:21:22 crc kubenswrapper[4844]: I0126 13:21:22.060791 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"1aa738a6-8d60-4c39-aa86-dc27720dc883","Type":"ContainerStarted","Data":"5264064203f12a8c4c0a8d95f66a8bdba6fe1d9af468d86e16eb3fa0009e75ec"} Jan 26 13:21:22 crc kubenswrapper[4844]: I0126 13:21:22.062264 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"1aa738a6-8d60-4c39-aa86-dc27720dc883","Type":"ContainerStarted","Data":"87308c99e95bd29b67dcb3280717e47fa9371c2a36cd3dd7d9d67e76f7604fe2"} Jan 26 13:21:22 crc kubenswrapper[4844]: I0126 13:21:22.062550 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 26 13:21:22 crc kubenswrapper[4844]: I0126 13:21:22.065228 4844 generic.go:334] "Generic (PLEG): container finished" podID="d78e47b5-12d6-478c-b2a2-d91bc69b8f50" containerID="3118fa943b250b1231471024a908d75694a53b258faef45c35bc2e96267a10b1" exitCode=0 Jan 26 13:21:22 crc kubenswrapper[4844]: I0126 13:21:22.065270 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d78e47b5-12d6-478c-b2a2-d91bc69b8f50","Type":"ContainerDied","Data":"3118fa943b250b1231471024a908d75694a53b258faef45c35bc2e96267a10b1"} Jan 26 13:21:22 crc kubenswrapper[4844]: I0126 13:21:22.065309 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d78e47b5-12d6-478c-b2a2-d91bc69b8f50","Type":"ContainerDied","Data":"bf55ea96c1fc4902ccc58f56a679e85df64858ee91e57e97656dbfb5859cb081"} Jan 26 13:21:22 crc kubenswrapper[4844]: I0126 13:21:22.065327 4844 scope.go:117] "RemoveContainer" containerID="cd06c45bbd3038f725bb531661bcec859316a27827ba75a2599fcfc0b652daad" Jan 26 13:21:22 crc kubenswrapper[4844]: I0126 13:21:22.065451 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 13:21:22 crc kubenswrapper[4844]: I0126 13:21:22.087799 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.087777509 podStartE2EDuration="2.087777509s" podCreationTimestamp="2026-01-26 13:21:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 13:21:22.079846397 +0000 UTC m=+2259.013214029" watchObservedRunningTime="2026-01-26 13:21:22.087777509 +0000 UTC m=+2259.021145131" Jan 26 13:21:22 crc kubenswrapper[4844]: I0126 13:21:22.094858 4844 scope.go:117] "RemoveContainer" containerID="4887364b55ba69365ed97852a718002e2c9b79940f52e766545a563f07e2f238" Jan 26 13:21:22 crc kubenswrapper[4844]: I0126 13:21:22.133862 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 13:21:22 crc kubenswrapper[4844]: I0126 13:21:22.158765 4844 scope.go:117] "RemoveContainer" containerID="d6df5db3eb2bd9c0aeafe670e4458a3dbee7bfb81820c76afd08e856532649ae" Jan 26 13:21:22 crc kubenswrapper[4844]: I0126 13:21:22.168801 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 26 13:21:22 crc kubenswrapper[4844]: I0126 13:21:22.182985 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 26 13:21:22 crc kubenswrapper[4844]: E0126 13:21:22.183357 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d78e47b5-12d6-478c-b2a2-d91bc69b8f50" containerName="ceilometer-central-agent" Jan 26 13:21:22 crc kubenswrapper[4844]: I0126 13:21:22.183378 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="d78e47b5-12d6-478c-b2a2-d91bc69b8f50" containerName="ceilometer-central-agent" Jan 26 13:21:22 crc kubenswrapper[4844]: E0126 13:21:22.183396 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d78e47b5-12d6-478c-b2a2-d91bc69b8f50" containerName="proxy-httpd" Jan 26 13:21:22 crc kubenswrapper[4844]: I0126 13:21:22.183403 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="d78e47b5-12d6-478c-b2a2-d91bc69b8f50" containerName="proxy-httpd" Jan 26 13:21:22 crc kubenswrapper[4844]: E0126 13:21:22.183416 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d78e47b5-12d6-478c-b2a2-d91bc69b8f50" containerName="sg-core" Jan 26 13:21:22 crc kubenswrapper[4844]: I0126 13:21:22.183422 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="d78e47b5-12d6-478c-b2a2-d91bc69b8f50" containerName="sg-core" Jan 26 13:21:22 crc kubenswrapper[4844]: E0126 13:21:22.183434 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d78e47b5-12d6-478c-b2a2-d91bc69b8f50" containerName="ceilometer-notification-agent" Jan 26 13:21:22 crc kubenswrapper[4844]: I0126 13:21:22.183440 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="d78e47b5-12d6-478c-b2a2-d91bc69b8f50" containerName="ceilometer-notification-agent" Jan 26 13:21:22 crc kubenswrapper[4844]: I0126 13:21:22.183625 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="d78e47b5-12d6-478c-b2a2-d91bc69b8f50" containerName="ceilometer-central-agent" Jan 26 13:21:22 crc kubenswrapper[4844]: I0126 13:21:22.183645 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="d78e47b5-12d6-478c-b2a2-d91bc69b8f50" containerName="sg-core" Jan 26 13:21:22 crc kubenswrapper[4844]: I0126 13:21:22.183655 4844 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="d78e47b5-12d6-478c-b2a2-d91bc69b8f50" containerName="proxy-httpd" Jan 26 13:21:22 crc kubenswrapper[4844]: I0126 13:21:22.183666 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="d78e47b5-12d6-478c-b2a2-d91bc69b8f50" containerName="ceilometer-notification-agent" Jan 26 13:21:22 crc kubenswrapper[4844]: I0126 13:21:22.185280 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 13:21:22 crc kubenswrapper[4844]: I0126 13:21:22.193456 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 13:21:22 crc kubenswrapper[4844]: I0126 13:21:22.196824 4844 scope.go:117] "RemoveContainer" containerID="3118fa943b250b1231471024a908d75694a53b258faef45c35bc2e96267a10b1" Jan 26 13:21:22 crc kubenswrapper[4844]: I0126 13:21:22.197087 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 26 13:21:22 crc kubenswrapper[4844]: I0126 13:21:22.197284 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 26 13:21:22 crc kubenswrapper[4844]: I0126 13:21:22.225314 4844 scope.go:117] "RemoveContainer" containerID="cd06c45bbd3038f725bb531661bcec859316a27827ba75a2599fcfc0b652daad" Jan 26 13:21:22 crc kubenswrapper[4844]: E0126 13:21:22.230713 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd06c45bbd3038f725bb531661bcec859316a27827ba75a2599fcfc0b652daad\": container with ID starting with cd06c45bbd3038f725bb531661bcec859316a27827ba75a2599fcfc0b652daad not found: ID does not exist" containerID="cd06c45bbd3038f725bb531661bcec859316a27827ba75a2599fcfc0b652daad" Jan 26 13:21:22 crc kubenswrapper[4844]: I0126 13:21:22.230753 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd06c45bbd3038f725bb531661bcec859316a27827ba75a2599fcfc0b652daad"} err="failed to get container status \"cd06c45bbd3038f725bb531661bcec859316a27827ba75a2599fcfc0b652daad\": rpc error: code = NotFound desc = could not find container \"cd06c45bbd3038f725bb531661bcec859316a27827ba75a2599fcfc0b652daad\": container with ID starting with cd06c45bbd3038f725bb531661bcec859316a27827ba75a2599fcfc0b652daad not found: ID does not exist" Jan 26 13:21:22 crc kubenswrapper[4844]: I0126 13:21:22.230777 4844 scope.go:117] "RemoveContainer" containerID="4887364b55ba69365ed97852a718002e2c9b79940f52e766545a563f07e2f238" Jan 26 13:21:22 crc kubenswrapper[4844]: I0126 13:21:22.231885 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bntr8\" (UniqueName: \"kubernetes.io/projected/efd11250-36c0-4291-ae37-a0eff8a1e853-kube-api-access-bntr8\") pod \"ceilometer-0\" (UID: \"efd11250-36c0-4291-ae37-a0eff8a1e853\") " pod="openstack/ceilometer-0" Jan 26 13:21:22 crc kubenswrapper[4844]: I0126 13:21:22.231969 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efd11250-36c0-4291-ae37-a0eff8a1e853-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"efd11250-36c0-4291-ae37-a0eff8a1e853\") " pod="openstack/ceilometer-0" Jan 26 13:21:22 crc kubenswrapper[4844]: I0126 13:21:22.232031 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/efd11250-36c0-4291-ae37-a0eff8a1e853-config-data\") pod \"ceilometer-0\" (UID: \"efd11250-36c0-4291-ae37-a0eff8a1e853\") " pod="openstack/ceilometer-0" Jan 26 13:21:22 crc kubenswrapper[4844]: I0126 13:21:22.232056 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/efd11250-36c0-4291-ae37-a0eff8a1e853-scripts\") pod \"ceilometer-0\" (UID: \"efd11250-36c0-4291-ae37-a0eff8a1e853\") " pod="openstack/ceilometer-0" Jan 26 13:21:22 crc kubenswrapper[4844]: I0126 13:21:22.232097 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/efd11250-36c0-4291-ae37-a0eff8a1e853-run-httpd\") pod \"ceilometer-0\" (UID: \"efd11250-36c0-4291-ae37-a0eff8a1e853\") " pod="openstack/ceilometer-0" Jan 26 13:21:22 crc kubenswrapper[4844]: I0126 13:21:22.232127 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/efd11250-36c0-4291-ae37-a0eff8a1e853-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"efd11250-36c0-4291-ae37-a0eff8a1e853\") " pod="openstack/ceilometer-0" Jan 26 13:21:22 crc kubenswrapper[4844]: I0126 13:21:22.232223 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/efd11250-36c0-4291-ae37-a0eff8a1e853-log-httpd\") pod \"ceilometer-0\" (UID: \"efd11250-36c0-4291-ae37-a0eff8a1e853\") " pod="openstack/ceilometer-0" Jan 26 13:21:22 crc kubenswrapper[4844]: E0126 13:21:22.232814 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4887364b55ba69365ed97852a718002e2c9b79940f52e766545a563f07e2f238\": container with ID starting with 4887364b55ba69365ed97852a718002e2c9b79940f52e766545a563f07e2f238 not found: ID does not exist" containerID="4887364b55ba69365ed97852a718002e2c9b79940f52e766545a563f07e2f238" Jan 26 13:21:22 crc kubenswrapper[4844]: I0126 13:21:22.232876 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4887364b55ba69365ed97852a718002e2c9b79940f52e766545a563f07e2f238"} err="failed to get container status \"4887364b55ba69365ed97852a718002e2c9b79940f52e766545a563f07e2f238\": rpc error: code = NotFound desc = could not find container \"4887364b55ba69365ed97852a718002e2c9b79940f52e766545a563f07e2f238\": container with ID starting with 4887364b55ba69365ed97852a718002e2c9b79940f52e766545a563f07e2f238 not found: ID does not exist" Jan 26 13:21:22 crc kubenswrapper[4844]: I0126 13:21:22.232912 4844 scope.go:117] "RemoveContainer" containerID="d6df5db3eb2bd9c0aeafe670e4458a3dbee7bfb81820c76afd08e856532649ae" Jan 26 13:21:22 crc kubenswrapper[4844]: E0126 13:21:22.233411 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d6df5db3eb2bd9c0aeafe670e4458a3dbee7bfb81820c76afd08e856532649ae\": container with ID starting with d6df5db3eb2bd9c0aeafe670e4458a3dbee7bfb81820c76afd08e856532649ae not found: ID does not exist" containerID="d6df5db3eb2bd9c0aeafe670e4458a3dbee7bfb81820c76afd08e856532649ae" Jan 26 13:21:22 crc kubenswrapper[4844]: I0126 13:21:22.233463 4844 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"d6df5db3eb2bd9c0aeafe670e4458a3dbee7bfb81820c76afd08e856532649ae"} err="failed to get container status \"d6df5db3eb2bd9c0aeafe670e4458a3dbee7bfb81820c76afd08e856532649ae\": rpc error: code = NotFound desc = could not find container \"d6df5db3eb2bd9c0aeafe670e4458a3dbee7bfb81820c76afd08e856532649ae\": container with ID starting with d6df5db3eb2bd9c0aeafe670e4458a3dbee7bfb81820c76afd08e856532649ae not found: ID does not exist" Jan 26 13:21:22 crc kubenswrapper[4844]: I0126 13:21:22.233497 4844 scope.go:117] "RemoveContainer" containerID="3118fa943b250b1231471024a908d75694a53b258faef45c35bc2e96267a10b1" Jan 26 13:21:22 crc kubenswrapper[4844]: E0126 13:21:22.233917 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3118fa943b250b1231471024a908d75694a53b258faef45c35bc2e96267a10b1\": container with ID starting with 3118fa943b250b1231471024a908d75694a53b258faef45c35bc2e96267a10b1 not found: ID does not exist" containerID="3118fa943b250b1231471024a908d75694a53b258faef45c35bc2e96267a10b1" Jan 26 13:21:22 crc kubenswrapper[4844]: I0126 13:21:22.233949 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3118fa943b250b1231471024a908d75694a53b258faef45c35bc2e96267a10b1"} err="failed to get container status \"3118fa943b250b1231471024a908d75694a53b258faef45c35bc2e96267a10b1\": rpc error: code = NotFound desc = could not find container \"3118fa943b250b1231471024a908d75694a53b258faef45c35bc2e96267a10b1\": container with ID starting with 3118fa943b250b1231471024a908d75694a53b258faef45c35bc2e96267a10b1 not found: ID does not exist" Jan 26 13:21:22 crc kubenswrapper[4844]: I0126 13:21:22.336765 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efd11250-36c0-4291-ae37-a0eff8a1e853-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"efd11250-36c0-4291-ae37-a0eff8a1e853\") " pod="openstack/ceilometer-0" Jan 26 13:21:22 crc kubenswrapper[4844]: I0126 13:21:22.336864 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efd11250-36c0-4291-ae37-a0eff8a1e853-config-data\") pod \"ceilometer-0\" (UID: \"efd11250-36c0-4291-ae37-a0eff8a1e853\") " pod="openstack/ceilometer-0" Jan 26 13:21:22 crc kubenswrapper[4844]: I0126 13:21:22.336887 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/efd11250-36c0-4291-ae37-a0eff8a1e853-scripts\") pod \"ceilometer-0\" (UID: \"efd11250-36c0-4291-ae37-a0eff8a1e853\") " pod="openstack/ceilometer-0" Jan 26 13:21:22 crc kubenswrapper[4844]: I0126 13:21:22.336926 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/efd11250-36c0-4291-ae37-a0eff8a1e853-run-httpd\") pod \"ceilometer-0\" (UID: \"efd11250-36c0-4291-ae37-a0eff8a1e853\") " pod="openstack/ceilometer-0" Jan 26 13:21:22 crc kubenswrapper[4844]: I0126 13:21:22.336955 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/efd11250-36c0-4291-ae37-a0eff8a1e853-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"efd11250-36c0-4291-ae37-a0eff8a1e853\") " pod="openstack/ceilometer-0" Jan 26 13:21:22 crc kubenswrapper[4844]: I0126 13:21:22.337063 4844 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/efd11250-36c0-4291-ae37-a0eff8a1e853-log-httpd\") pod \"ceilometer-0\" (UID: \"efd11250-36c0-4291-ae37-a0eff8a1e853\") " pod="openstack/ceilometer-0" Jan 26 13:21:22 crc kubenswrapper[4844]: I0126 13:21:22.337129 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bntr8\" (UniqueName: \"kubernetes.io/projected/efd11250-36c0-4291-ae37-a0eff8a1e853-kube-api-access-bntr8\") pod \"ceilometer-0\" (UID: \"efd11250-36c0-4291-ae37-a0eff8a1e853\") " pod="openstack/ceilometer-0" Jan 26 13:21:22 crc kubenswrapper[4844]: I0126 13:21:22.343095 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/efd11250-36c0-4291-ae37-a0eff8a1e853-run-httpd\") pod \"ceilometer-0\" (UID: \"efd11250-36c0-4291-ae37-a0eff8a1e853\") " pod="openstack/ceilometer-0" Jan 26 13:21:22 crc kubenswrapper[4844]: I0126 13:21:22.347331 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efd11250-36c0-4291-ae37-a0eff8a1e853-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"efd11250-36c0-4291-ae37-a0eff8a1e853\") " pod="openstack/ceilometer-0" Jan 26 13:21:22 crc kubenswrapper[4844]: I0126 13:21:22.349996 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/efd11250-36c0-4291-ae37-a0eff8a1e853-log-httpd\") pod \"ceilometer-0\" (UID: \"efd11250-36c0-4291-ae37-a0eff8a1e853\") " pod="openstack/ceilometer-0" Jan 26 13:21:22 crc kubenswrapper[4844]: I0126 13:21:22.359425 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/efd11250-36c0-4291-ae37-a0eff8a1e853-scripts\") pod \"ceilometer-0\" (UID: \"efd11250-36c0-4291-ae37-a0eff8a1e853\") " pod="openstack/ceilometer-0" Jan 26 13:21:22 crc kubenswrapper[4844]: I0126 13:21:22.359769 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efd11250-36c0-4291-ae37-a0eff8a1e853-config-data\") pod \"ceilometer-0\" (UID: \"efd11250-36c0-4291-ae37-a0eff8a1e853\") " pod="openstack/ceilometer-0" Jan 26 13:21:22 crc kubenswrapper[4844]: I0126 13:21:22.364130 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/efd11250-36c0-4291-ae37-a0eff8a1e853-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"efd11250-36c0-4291-ae37-a0eff8a1e853\") " pod="openstack/ceilometer-0" Jan 26 13:21:22 crc kubenswrapper[4844]: I0126 13:21:22.378674 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bntr8\" (UniqueName: \"kubernetes.io/projected/efd11250-36c0-4291-ae37-a0eff8a1e853-kube-api-access-bntr8\") pod \"ceilometer-0\" (UID: \"efd11250-36c0-4291-ae37-a0eff8a1e853\") " pod="openstack/ceilometer-0" Jan 26 13:21:22 crc kubenswrapper[4844]: I0126 13:21:22.513726 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 13:21:22 crc kubenswrapper[4844]: I0126 13:21:22.962759 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 13:21:22 crc kubenswrapper[4844]: W0126 13:21:22.964386 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podefd11250_36c0_4291_ae37_a0eff8a1e853.slice/crio-088a5596c25c6cc4474a0399fb932a96fcf26df2be300c35b8fcb3bf81c10705 WatchSource:0}: Error finding container 088a5596c25c6cc4474a0399fb932a96fcf26df2be300c35b8fcb3bf81c10705: Status 404 returned error can't find the container with id 088a5596c25c6cc4474a0399fb932a96fcf26df2be300c35b8fcb3bf81c10705 Jan 26 13:21:23 crc kubenswrapper[4844]: I0126 13:21:23.074265 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"efd11250-36c0-4291-ae37-a0eff8a1e853","Type":"ContainerStarted","Data":"088a5596c25c6cc4474a0399fb932a96fcf26df2be300c35b8fcb3bf81c10705"} Jan 26 13:21:23 crc kubenswrapper[4844]: I0126 13:21:23.324876 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d78e47b5-12d6-478c-b2a2-d91bc69b8f50" path="/var/lib/kubelet/pods/d78e47b5-12d6-478c-b2a2-d91bc69b8f50/volumes" Jan 26 13:21:24 crc kubenswrapper[4844]: I0126 13:21:24.086064 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"efd11250-36c0-4291-ae37-a0eff8a1e853","Type":"ContainerStarted","Data":"e740d4612080ca7e7c80b58e745697ad80301ea855a5cc20174740ca8697de92"} Jan 26 13:21:24 crc kubenswrapper[4844]: I0126 13:21:24.087047 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"efd11250-36c0-4291-ae37-a0eff8a1e853","Type":"ContainerStarted","Data":"a6117e93c311a91ac0c3f0448577875f8112c1e54362b732040523d2c96c8957"} Jan 26 13:21:25 crc kubenswrapper[4844]: I0126 13:21:25.097701 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"efd11250-36c0-4291-ae37-a0eff8a1e853","Type":"ContainerStarted","Data":"c1ed1eb2958da8b781377498f54742acfbbdca6b168adfbfdebb7008a37f608e"} Jan 26 13:21:27 crc kubenswrapper[4844]: I0126 13:21:27.129800 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"efd11250-36c0-4291-ae37-a0eff8a1e853","Type":"ContainerStarted","Data":"95f2d2c135501c1f665fcb870a8c0fed4f84e5a91728c540bbdbe368f4cfb123"} Jan 26 13:21:27 crc kubenswrapper[4844]: I0126 13:21:27.130477 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 26 13:21:27 crc kubenswrapper[4844]: I0126 13:21:27.160847 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.2454483 podStartE2EDuration="5.160827023s" podCreationTimestamp="2026-01-26 13:21:22 +0000 UTC" firstStartedPulling="2026-01-26 13:21:22.966729837 +0000 UTC m=+2259.900097449" lastFinishedPulling="2026-01-26 13:21:25.88210852 +0000 UTC m=+2262.815476172" observedRunningTime="2026-01-26 13:21:27.15824439 +0000 UTC m=+2264.091612042" watchObservedRunningTime="2026-01-26 13:21:27.160827023 +0000 UTC m=+2264.094194645" Jan 26 13:21:30 crc kubenswrapper[4844]: I0126 13:21:30.964455 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 26 13:21:31 crc kubenswrapper[4844]: I0126 13:21:31.428669 4844 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/nova-cell0-cell-mapping-btlm2"] Jan 26 13:21:31 crc kubenswrapper[4844]: I0126 13:21:31.429922 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-btlm2" Jan 26 13:21:31 crc kubenswrapper[4844]: I0126 13:21:31.433315 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 26 13:21:31 crc kubenswrapper[4844]: I0126 13:21:31.433662 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 26 13:21:31 crc kubenswrapper[4844]: I0126 13:21:31.444320 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-btlm2"] Jan 26 13:21:31 crc kubenswrapper[4844]: I0126 13:21:31.521272 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ac64bcd-c0e5-44c8-9c11-abede4806663-scripts\") pod \"nova-cell0-cell-mapping-btlm2\" (UID: \"1ac64bcd-c0e5-44c8-9c11-abede4806663\") " pod="openstack/nova-cell0-cell-mapping-btlm2" Jan 26 13:21:31 crc kubenswrapper[4844]: I0126 13:21:31.521362 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ac64bcd-c0e5-44c8-9c11-abede4806663-config-data\") pod \"nova-cell0-cell-mapping-btlm2\" (UID: \"1ac64bcd-c0e5-44c8-9c11-abede4806663\") " pod="openstack/nova-cell0-cell-mapping-btlm2" Jan 26 13:21:31 crc kubenswrapper[4844]: I0126 13:21:31.521480 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ac64bcd-c0e5-44c8-9c11-abede4806663-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-btlm2\" (UID: \"1ac64bcd-c0e5-44c8-9c11-abede4806663\") " pod="openstack/nova-cell0-cell-mapping-btlm2" Jan 26 13:21:31 crc kubenswrapper[4844]: I0126 13:21:31.521753 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8sht\" (UniqueName: \"kubernetes.io/projected/1ac64bcd-c0e5-44c8-9c11-abede4806663-kube-api-access-c8sht\") pod \"nova-cell0-cell-mapping-btlm2\" (UID: \"1ac64bcd-c0e5-44c8-9c11-abede4806663\") " pod="openstack/nova-cell0-cell-mapping-btlm2" Jan 26 13:21:31 crc kubenswrapper[4844]: I0126 13:21:31.623661 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8sht\" (UniqueName: \"kubernetes.io/projected/1ac64bcd-c0e5-44c8-9c11-abede4806663-kube-api-access-c8sht\") pod \"nova-cell0-cell-mapping-btlm2\" (UID: \"1ac64bcd-c0e5-44c8-9c11-abede4806663\") " pod="openstack/nova-cell0-cell-mapping-btlm2" Jan 26 13:21:31 crc kubenswrapper[4844]: I0126 13:21:31.624073 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ac64bcd-c0e5-44c8-9c11-abede4806663-scripts\") pod \"nova-cell0-cell-mapping-btlm2\" (UID: \"1ac64bcd-c0e5-44c8-9c11-abede4806663\") " pod="openstack/nova-cell0-cell-mapping-btlm2" Jan 26 13:21:31 crc kubenswrapper[4844]: I0126 13:21:31.624274 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ac64bcd-c0e5-44c8-9c11-abede4806663-config-data\") pod \"nova-cell0-cell-mapping-btlm2\" (UID: \"1ac64bcd-c0e5-44c8-9c11-abede4806663\") " pod="openstack/nova-cell0-cell-mapping-btlm2" Jan 26 13:21:31 
crc kubenswrapper[4844]: I0126 13:21:31.625218 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ac64bcd-c0e5-44c8-9c11-abede4806663-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-btlm2\" (UID: \"1ac64bcd-c0e5-44c8-9c11-abede4806663\") " pod="openstack/nova-cell0-cell-mapping-btlm2" Jan 26 13:21:31 crc kubenswrapper[4844]: I0126 13:21:31.626833 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 26 13:21:31 crc kubenswrapper[4844]: I0126 13:21:31.633523 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ac64bcd-c0e5-44c8-9c11-abede4806663-scripts\") pod \"nova-cell0-cell-mapping-btlm2\" (UID: \"1ac64bcd-c0e5-44c8-9c11-abede4806663\") " pod="openstack/nova-cell0-cell-mapping-btlm2" Jan 26 13:21:31 crc kubenswrapper[4844]: I0126 13:21:31.636723 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ac64bcd-c0e5-44c8-9c11-abede4806663-config-data\") pod \"nova-cell0-cell-mapping-btlm2\" (UID: \"1ac64bcd-c0e5-44c8-9c11-abede4806663\") " pod="openstack/nova-cell0-cell-mapping-btlm2" Jan 26 13:21:31 crc kubenswrapper[4844]: I0126 13:21:31.636722 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ac64bcd-c0e5-44c8-9c11-abede4806663-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-btlm2\" (UID: \"1ac64bcd-c0e5-44c8-9c11-abede4806663\") " pod="openstack/nova-cell0-cell-mapping-btlm2" Jan 26 13:21:31 crc kubenswrapper[4844]: I0126 13:21:31.646895 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8sht\" (UniqueName: \"kubernetes.io/projected/1ac64bcd-c0e5-44c8-9c11-abede4806663-kube-api-access-c8sht\") pod \"nova-cell0-cell-mapping-btlm2\" (UID: \"1ac64bcd-c0e5-44c8-9c11-abede4806663\") " pod="openstack/nova-cell0-cell-mapping-btlm2" Jan 26 13:21:31 crc kubenswrapper[4844]: I0126 13:21:31.648548 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 26 13:21:31 crc kubenswrapper[4844]: I0126 13:21:31.648647 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 26 13:21:31 crc kubenswrapper[4844]: I0126 13:21:31.650880 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 26 13:21:31 crc kubenswrapper[4844]: I0126 13:21:31.678357 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 13:21:31 crc kubenswrapper[4844]: I0126 13:21:31.679712 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 13:21:31 crc kubenswrapper[4844]: I0126 13:21:31.681904 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 26 13:21:31 crc kubenswrapper[4844]: I0126 13:21:31.727122 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0400142-4fe7-4b74-822f-eee67c1bf20b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b0400142-4fe7-4b74-822f-eee67c1bf20b\") " pod="openstack/nova-scheduler-0" Jan 26 13:21:31 crc kubenswrapper[4844]: I0126 13:21:31.727176 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bgcn\" (UniqueName: \"kubernetes.io/projected/b0400142-4fe7-4b74-822f-eee67c1bf20b-kube-api-access-2bgcn\") pod \"nova-scheduler-0\" (UID: \"b0400142-4fe7-4b74-822f-eee67c1bf20b\") " pod="openstack/nova-scheduler-0" Jan 26 13:21:31 crc kubenswrapper[4844]: I0126 13:21:31.727224 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48b824ba-48ac-4a25-85de-436c4dd6c016-config-data\") pod \"nova-api-0\" (UID: \"48b824ba-48ac-4a25-85de-436c4dd6c016\") " pod="openstack/nova-api-0" Jan 26 13:21:31 crc kubenswrapper[4844]: I0126 13:21:31.727329 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48b824ba-48ac-4a25-85de-436c4dd6c016-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"48b824ba-48ac-4a25-85de-436c4dd6c016\") " pod="openstack/nova-api-0" Jan 26 13:21:31 crc kubenswrapper[4844]: I0126 13:21:31.727427 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9k2v\" (UniqueName: \"kubernetes.io/projected/48b824ba-48ac-4a25-85de-436c4dd6c016-kube-api-access-m9k2v\") pod \"nova-api-0\" (UID: \"48b824ba-48ac-4a25-85de-436c4dd6c016\") " pod="openstack/nova-api-0" Jan 26 13:21:31 crc kubenswrapper[4844]: I0126 13:21:31.727455 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0400142-4fe7-4b74-822f-eee67c1bf20b-config-data\") pod \"nova-scheduler-0\" (UID: \"b0400142-4fe7-4b74-822f-eee67c1bf20b\") " pod="openstack/nova-scheduler-0" Jan 26 13:21:31 crc kubenswrapper[4844]: I0126 13:21:31.727564 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/48b824ba-48ac-4a25-85de-436c4dd6c016-logs\") pod \"nova-api-0\" (UID: \"48b824ba-48ac-4a25-85de-436c4dd6c016\") " pod="openstack/nova-api-0" Jan 26 13:21:31 crc kubenswrapper[4844]: I0126 13:21:31.751790 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-btlm2" Jan 26 13:21:31 crc kubenswrapper[4844]: I0126 13:21:31.768773 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 13:21:31 crc kubenswrapper[4844]: I0126 13:21:31.819679 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 26 13:21:31 crc kubenswrapper[4844]: I0126 13:21:31.821097 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 26 13:21:31 crc kubenswrapper[4844]: I0126 13:21:31.823147 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 26 13:21:31 crc kubenswrapper[4844]: I0126 13:21:31.830235 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 26 13:21:31 crc kubenswrapper[4844]: I0126 13:21:31.832317 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m9k2v\" (UniqueName: \"kubernetes.io/projected/48b824ba-48ac-4a25-85de-436c4dd6c016-kube-api-access-m9k2v\") pod \"nova-api-0\" (UID: \"48b824ba-48ac-4a25-85de-436c4dd6c016\") " pod="openstack/nova-api-0" Jan 26 13:21:31 crc kubenswrapper[4844]: I0126 13:21:31.832533 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0400142-4fe7-4b74-822f-eee67c1bf20b-config-data\") pod \"nova-scheduler-0\" (UID: \"b0400142-4fe7-4b74-822f-eee67c1bf20b\") " pod="openstack/nova-scheduler-0" Jan 26 13:21:31 crc kubenswrapper[4844]: I0126 13:21:31.832715 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/48b824ba-48ac-4a25-85de-436c4dd6c016-logs\") pod \"nova-api-0\" (UID: \"48b824ba-48ac-4a25-85de-436c4dd6c016\") " pod="openstack/nova-api-0" Jan 26 13:21:31 crc kubenswrapper[4844]: I0126 13:21:31.833804 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0400142-4fe7-4b74-822f-eee67c1bf20b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b0400142-4fe7-4b74-822f-eee67c1bf20b\") " pod="openstack/nova-scheduler-0" Jan 26 13:21:31 crc kubenswrapper[4844]: I0126 13:21:31.833940 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2bgcn\" (UniqueName: \"kubernetes.io/projected/b0400142-4fe7-4b74-822f-eee67c1bf20b-kube-api-access-2bgcn\") pod \"nova-scheduler-0\" (UID: \"b0400142-4fe7-4b74-822f-eee67c1bf20b\") " pod="openstack/nova-scheduler-0" Jan 26 13:21:31 crc kubenswrapper[4844]: I0126 13:21:31.834084 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48b824ba-48ac-4a25-85de-436c4dd6c016-config-data\") pod \"nova-api-0\" (UID: \"48b824ba-48ac-4a25-85de-436c4dd6c016\") " pod="openstack/nova-api-0" Jan 26 13:21:31 crc kubenswrapper[4844]: I0126 13:21:31.834316 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48b824ba-48ac-4a25-85de-436c4dd6c016-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"48b824ba-48ac-4a25-85de-436c4dd6c016\") " pod="openstack/nova-api-0" Jan 26 13:21:31 crc kubenswrapper[4844]: I0126 13:21:31.838692 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48b824ba-48ac-4a25-85de-436c4dd6c016-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"48b824ba-48ac-4a25-85de-436c4dd6c016\") " pod="openstack/nova-api-0" Jan 26 13:21:31 crc kubenswrapper[4844]: I0126 13:21:31.841001 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/48b824ba-48ac-4a25-85de-436c4dd6c016-logs\") pod \"nova-api-0\" (UID: \"48b824ba-48ac-4a25-85de-436c4dd6c016\") 
" pod="openstack/nova-api-0" Jan 26 13:21:31 crc kubenswrapper[4844]: I0126 13:21:31.846326 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0400142-4fe7-4b74-822f-eee67c1bf20b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b0400142-4fe7-4b74-822f-eee67c1bf20b\") " pod="openstack/nova-scheduler-0" Jan 26 13:21:31 crc kubenswrapper[4844]: I0126 13:21:31.852208 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0400142-4fe7-4b74-822f-eee67c1bf20b-config-data\") pod \"nova-scheduler-0\" (UID: \"b0400142-4fe7-4b74-822f-eee67c1bf20b\") " pod="openstack/nova-scheduler-0" Jan 26 13:21:31 crc kubenswrapper[4844]: I0126 13:21:31.860314 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48b824ba-48ac-4a25-85de-436c4dd6c016-config-data\") pod \"nova-api-0\" (UID: \"48b824ba-48ac-4a25-85de-436c4dd6c016\") " pod="openstack/nova-api-0" Jan 26 13:21:31 crc kubenswrapper[4844]: I0126 13:21:31.882163 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m9k2v\" (UniqueName: \"kubernetes.io/projected/48b824ba-48ac-4a25-85de-436c4dd6c016-kube-api-access-m9k2v\") pod \"nova-api-0\" (UID: \"48b824ba-48ac-4a25-85de-436c4dd6c016\") " pod="openstack/nova-api-0" Jan 26 13:21:31 crc kubenswrapper[4844]: I0126 13:21:31.890175 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2bgcn\" (UniqueName: \"kubernetes.io/projected/b0400142-4fe7-4b74-822f-eee67c1bf20b-kube-api-access-2bgcn\") pod \"nova-scheduler-0\" (UID: \"b0400142-4fe7-4b74-822f-eee67c1bf20b\") " pod="openstack/nova-scheduler-0" Jan 26 13:21:31 crc kubenswrapper[4844]: I0126 13:21:31.939681 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5b81065-1990-4734-a78a-3172d68df686-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"e5b81065-1990-4734-a78a-3172d68df686\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 13:21:31 crc kubenswrapper[4844]: I0126 13:21:31.939720 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtt4l\" (UniqueName: \"kubernetes.io/projected/e5b81065-1990-4734-a78a-3172d68df686-kube-api-access-jtt4l\") pod \"nova-cell1-novncproxy-0\" (UID: \"e5b81065-1990-4734-a78a-3172d68df686\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 13:21:31 crc kubenswrapper[4844]: I0126 13:21:31.939828 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5b81065-1990-4734-a78a-3172d68df686-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"e5b81065-1990-4734-a78a-3172d68df686\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 13:21:31 crc kubenswrapper[4844]: I0126 13:21:31.950945 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 26 13:21:31 crc kubenswrapper[4844]: I0126 13:21:31.953013 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 13:21:31 crc kubenswrapper[4844]: I0126 13:21:31.979005 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 26 13:21:32 crc kubenswrapper[4844]: I0126 13:21:32.069837 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 13:21:32 crc kubenswrapper[4844]: I0126 13:21:32.087125 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c9698c91-6def-408a-b9c8-b698138801d7-logs\") pod \"nova-metadata-0\" (UID: \"c9698c91-6def-408a-b9c8-b698138801d7\") " pod="openstack/nova-metadata-0" Jan 26 13:21:32 crc kubenswrapper[4844]: I0126 13:21:32.087185 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rszbj\" (UniqueName: \"kubernetes.io/projected/c9698c91-6def-408a-b9c8-b698138801d7-kube-api-access-rszbj\") pod \"nova-metadata-0\" (UID: \"c9698c91-6def-408a-b9c8-b698138801d7\") " pod="openstack/nova-metadata-0" Jan 26 13:21:32 crc kubenswrapper[4844]: I0126 13:21:32.087269 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5b81065-1990-4734-a78a-3172d68df686-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"e5b81065-1990-4734-a78a-3172d68df686\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 13:21:32 crc kubenswrapper[4844]: I0126 13:21:32.087382 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9698c91-6def-408a-b9c8-b698138801d7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c9698c91-6def-408a-b9c8-b698138801d7\") " pod="openstack/nova-metadata-0" Jan 26 13:21:32 crc kubenswrapper[4844]: I0126 13:21:32.087407 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9698c91-6def-408a-b9c8-b698138801d7-config-data\") pod \"nova-metadata-0\" (UID: \"c9698c91-6def-408a-b9c8-b698138801d7\") " pod="openstack/nova-metadata-0" Jan 26 13:21:32 crc kubenswrapper[4844]: I0126 13:21:32.087527 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5b81065-1990-4734-a78a-3172d68df686-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"e5b81065-1990-4734-a78a-3172d68df686\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 13:21:32 crc kubenswrapper[4844]: I0126 13:21:32.087557 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jtt4l\" (UniqueName: \"kubernetes.io/projected/e5b81065-1990-4734-a78a-3172d68df686-kube-api-access-jtt4l\") pod \"nova-cell1-novncproxy-0\" (UID: \"e5b81065-1990-4734-a78a-3172d68df686\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 13:21:32 crc kubenswrapper[4844]: I0126 13:21:32.088442 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 26 13:21:32 crc kubenswrapper[4844]: I0126 13:21:32.103528 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 13:21:32 crc kubenswrapper[4844]: I0126 13:21:32.104781 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5b81065-1990-4734-a78a-3172d68df686-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"e5b81065-1990-4734-a78a-3172d68df686\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 13:21:32 crc kubenswrapper[4844]: I0126 13:21:32.105500 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5b81065-1990-4734-a78a-3172d68df686-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"e5b81065-1990-4734-a78a-3172d68df686\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 13:21:32 crc kubenswrapper[4844]: I0126 13:21:32.160058 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtt4l\" (UniqueName: \"kubernetes.io/projected/e5b81065-1990-4734-a78a-3172d68df686-kube-api-access-jtt4l\") pod \"nova-cell1-novncproxy-0\" (UID: \"e5b81065-1990-4734-a78a-3172d68df686\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 13:21:32 crc kubenswrapper[4844]: I0126 13:21:32.175654 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-684f48dcbc-vswkx"] Jan 26 13:21:32 crc kubenswrapper[4844]: I0126 13:21:32.177495 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-684f48dcbc-vswkx" Jan 26 13:21:32 crc kubenswrapper[4844]: I0126 13:21:32.204170 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9698c91-6def-408a-b9c8-b698138801d7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c9698c91-6def-408a-b9c8-b698138801d7\") " pod="openstack/nova-metadata-0" Jan 26 13:21:32 crc kubenswrapper[4844]: I0126 13:21:32.204201 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9698c91-6def-408a-b9c8-b698138801d7-config-data\") pod \"nova-metadata-0\" (UID: \"c9698c91-6def-408a-b9c8-b698138801d7\") " pod="openstack/nova-metadata-0" Jan 26 13:21:32 crc kubenswrapper[4844]: I0126 13:21:32.204323 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c9698c91-6def-408a-b9c8-b698138801d7-logs\") pod \"nova-metadata-0\" (UID: \"c9698c91-6def-408a-b9c8-b698138801d7\") " pod="openstack/nova-metadata-0" Jan 26 13:21:32 crc kubenswrapper[4844]: I0126 13:21:32.204342 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rszbj\" (UniqueName: \"kubernetes.io/projected/c9698c91-6def-408a-b9c8-b698138801d7-kube-api-access-rszbj\") pod \"nova-metadata-0\" (UID: \"c9698c91-6def-408a-b9c8-b698138801d7\") " pod="openstack/nova-metadata-0" Jan 26 13:21:32 crc kubenswrapper[4844]: I0126 13:21:32.206996 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c9698c91-6def-408a-b9c8-b698138801d7-logs\") pod \"nova-metadata-0\" (UID: \"c9698c91-6def-408a-b9c8-b698138801d7\") " pod="openstack/nova-metadata-0" Jan 26 13:21:32 crc kubenswrapper[4844]: I0126 13:21:32.213265 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9698c91-6def-408a-b9c8-b698138801d7-config-data\") pod 
\"nova-metadata-0\" (UID: \"c9698c91-6def-408a-b9c8-b698138801d7\") " pod="openstack/nova-metadata-0" Jan 26 13:21:32 crc kubenswrapper[4844]: I0126 13:21:32.214134 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9698c91-6def-408a-b9c8-b698138801d7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c9698c91-6def-408a-b9c8-b698138801d7\") " pod="openstack/nova-metadata-0" Jan 26 13:21:32 crc kubenswrapper[4844]: I0126 13:21:32.240940 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-684f48dcbc-vswkx"] Jan 26 13:21:32 crc kubenswrapper[4844]: I0126 13:21:32.245944 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rszbj\" (UniqueName: \"kubernetes.io/projected/c9698c91-6def-408a-b9c8-b698138801d7-kube-api-access-rszbj\") pod \"nova-metadata-0\" (UID: \"c9698c91-6def-408a-b9c8-b698138801d7\") " pod="openstack/nova-metadata-0" Jan 26 13:21:32 crc kubenswrapper[4844]: I0126 13:21:32.307813 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/68596a47-7ecd-431f-8b10-00479d94c556-ovsdbserver-nb\") pod \"dnsmasq-dns-684f48dcbc-vswkx\" (UID: \"68596a47-7ecd-431f-8b10-00479d94c556\") " pod="openstack/dnsmasq-dns-684f48dcbc-vswkx" Jan 26 13:21:32 crc kubenswrapper[4844]: I0126 13:21:32.307848 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68596a47-7ecd-431f-8b10-00479d94c556-config\") pod \"dnsmasq-dns-684f48dcbc-vswkx\" (UID: \"68596a47-7ecd-431f-8b10-00479d94c556\") " pod="openstack/dnsmasq-dns-684f48dcbc-vswkx" Jan 26 13:21:32 crc kubenswrapper[4844]: I0126 13:21:32.307875 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgkzm\" (UniqueName: \"kubernetes.io/projected/68596a47-7ecd-431f-8b10-00479d94c556-kube-api-access-jgkzm\") pod \"dnsmasq-dns-684f48dcbc-vswkx\" (UID: \"68596a47-7ecd-431f-8b10-00479d94c556\") " pod="openstack/dnsmasq-dns-684f48dcbc-vswkx" Jan 26 13:21:32 crc kubenswrapper[4844]: I0126 13:21:32.307923 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/68596a47-7ecd-431f-8b10-00479d94c556-dns-swift-storage-0\") pod \"dnsmasq-dns-684f48dcbc-vswkx\" (UID: \"68596a47-7ecd-431f-8b10-00479d94c556\") " pod="openstack/dnsmasq-dns-684f48dcbc-vswkx" Jan 26 13:21:32 crc kubenswrapper[4844]: I0126 13:21:32.307971 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/68596a47-7ecd-431f-8b10-00479d94c556-dns-svc\") pod \"dnsmasq-dns-684f48dcbc-vswkx\" (UID: \"68596a47-7ecd-431f-8b10-00479d94c556\") " pod="openstack/dnsmasq-dns-684f48dcbc-vswkx" Jan 26 13:21:32 crc kubenswrapper[4844]: I0126 13:21:32.308002 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/68596a47-7ecd-431f-8b10-00479d94c556-ovsdbserver-sb\") pod \"dnsmasq-dns-684f48dcbc-vswkx\" (UID: \"68596a47-7ecd-431f-8b10-00479d94c556\") " pod="openstack/dnsmasq-dns-684f48dcbc-vswkx" Jan 26 13:21:32 crc kubenswrapper[4844]: I0126 13:21:32.333105 4844 scope.go:117] "RemoveContainer" 
containerID="003e8a783231d0610c61a79899be5429104525b8053b82bc16d011d8b1eff87d" Jan 26 13:21:32 crc kubenswrapper[4844]: E0126 13:21:32.333393 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 13:21:32 crc kubenswrapper[4844]: I0126 13:21:32.398992 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 26 13:21:32 crc kubenswrapper[4844]: I0126 13:21:32.411524 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/68596a47-7ecd-431f-8b10-00479d94c556-dns-svc\") pod \"dnsmasq-dns-684f48dcbc-vswkx\" (UID: \"68596a47-7ecd-431f-8b10-00479d94c556\") " pod="openstack/dnsmasq-dns-684f48dcbc-vswkx" Jan 26 13:21:32 crc kubenswrapper[4844]: I0126 13:21:32.411584 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/68596a47-7ecd-431f-8b10-00479d94c556-ovsdbserver-sb\") pod \"dnsmasq-dns-684f48dcbc-vswkx\" (UID: \"68596a47-7ecd-431f-8b10-00479d94c556\") " pod="openstack/dnsmasq-dns-684f48dcbc-vswkx" Jan 26 13:21:32 crc kubenswrapper[4844]: I0126 13:21:32.411918 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/68596a47-7ecd-431f-8b10-00479d94c556-ovsdbserver-nb\") pod \"dnsmasq-dns-684f48dcbc-vswkx\" (UID: \"68596a47-7ecd-431f-8b10-00479d94c556\") " pod="openstack/dnsmasq-dns-684f48dcbc-vswkx" Jan 26 13:21:32 crc kubenswrapper[4844]: I0126 13:21:32.411940 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68596a47-7ecd-431f-8b10-00479d94c556-config\") pod \"dnsmasq-dns-684f48dcbc-vswkx\" (UID: \"68596a47-7ecd-431f-8b10-00479d94c556\") " pod="openstack/dnsmasq-dns-684f48dcbc-vswkx" Jan 26 13:21:32 crc kubenswrapper[4844]: I0126 13:21:32.411975 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jgkzm\" (UniqueName: \"kubernetes.io/projected/68596a47-7ecd-431f-8b10-00479d94c556-kube-api-access-jgkzm\") pod \"dnsmasq-dns-684f48dcbc-vswkx\" (UID: \"68596a47-7ecd-431f-8b10-00479d94c556\") " pod="openstack/dnsmasq-dns-684f48dcbc-vswkx" Jan 26 13:21:32 crc kubenswrapper[4844]: I0126 13:21:32.412044 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/68596a47-7ecd-431f-8b10-00479d94c556-dns-swift-storage-0\") pod \"dnsmasq-dns-684f48dcbc-vswkx\" (UID: \"68596a47-7ecd-431f-8b10-00479d94c556\") " pod="openstack/dnsmasq-dns-684f48dcbc-vswkx" Jan 26 13:21:32 crc kubenswrapper[4844]: I0126 13:21:32.415902 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 13:21:32 crc kubenswrapper[4844]: I0126 13:21:32.416165 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/68596a47-7ecd-431f-8b10-00479d94c556-dns-swift-storage-0\") pod \"dnsmasq-dns-684f48dcbc-vswkx\" (UID: \"68596a47-7ecd-431f-8b10-00479d94c556\") " pod="openstack/dnsmasq-dns-684f48dcbc-vswkx" Jan 26 13:21:32 crc kubenswrapper[4844]: I0126 13:21:32.430739 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/68596a47-7ecd-431f-8b10-00479d94c556-ovsdbserver-sb\") pod \"dnsmasq-dns-684f48dcbc-vswkx\" (UID: \"68596a47-7ecd-431f-8b10-00479d94c556\") " pod="openstack/dnsmasq-dns-684f48dcbc-vswkx" Jan 26 13:21:32 crc kubenswrapper[4844]: I0126 13:21:32.430916 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/68596a47-7ecd-431f-8b10-00479d94c556-ovsdbserver-nb\") pod \"dnsmasq-dns-684f48dcbc-vswkx\" (UID: \"68596a47-7ecd-431f-8b10-00479d94c556\") " pod="openstack/dnsmasq-dns-684f48dcbc-vswkx" Jan 26 13:21:32 crc kubenswrapper[4844]: I0126 13:21:32.431499 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68596a47-7ecd-431f-8b10-00479d94c556-config\") pod \"dnsmasq-dns-684f48dcbc-vswkx\" (UID: \"68596a47-7ecd-431f-8b10-00479d94c556\") " pod="openstack/dnsmasq-dns-684f48dcbc-vswkx" Jan 26 13:21:32 crc kubenswrapper[4844]: I0126 13:21:32.432731 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/68596a47-7ecd-431f-8b10-00479d94c556-dns-svc\") pod \"dnsmasq-dns-684f48dcbc-vswkx\" (UID: \"68596a47-7ecd-431f-8b10-00479d94c556\") " pod="openstack/dnsmasq-dns-684f48dcbc-vswkx" Jan 26 13:21:32 crc kubenswrapper[4844]: I0126 13:21:32.439853 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jgkzm\" (UniqueName: \"kubernetes.io/projected/68596a47-7ecd-431f-8b10-00479d94c556-kube-api-access-jgkzm\") pod \"dnsmasq-dns-684f48dcbc-vswkx\" (UID: \"68596a47-7ecd-431f-8b10-00479d94c556\") " pod="openstack/dnsmasq-dns-684f48dcbc-vswkx" Jan 26 13:21:33 crc kubenswrapper[4844]: I0126 13:21:32.736129 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-684f48dcbc-vswkx" Jan 26 13:21:33 crc kubenswrapper[4844]: I0126 13:21:32.783910 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 26 13:21:33 crc kubenswrapper[4844]: I0126 13:21:32.800855 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-btlm2"] Jan 26 13:21:33 crc kubenswrapper[4844]: I0126 13:21:32.814287 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 13:21:33 crc kubenswrapper[4844]: I0126 13:21:33.028817 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-9gsdl"] Jan 26 13:21:33 crc kubenswrapper[4844]: I0126 13:21:33.030285 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-9gsdl" Jan 26 13:21:33 crc kubenswrapper[4844]: I0126 13:21:33.032196 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 26 13:21:33 crc kubenswrapper[4844]: I0126 13:21:33.032203 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 26 13:21:33 crc kubenswrapper[4844]: I0126 13:21:33.053000 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-9gsdl"] Jan 26 13:21:33 crc kubenswrapper[4844]: I0126 13:21:33.129179 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f37882c-17e3-4c70-a309-ee70392fed88-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-9gsdl\" (UID: \"0f37882c-17e3-4c70-a309-ee70392fed88\") " pod="openstack/nova-cell1-conductor-db-sync-9gsdl" Jan 26 13:21:33 crc kubenswrapper[4844]: I0126 13:21:33.129233 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdnt4\" (UniqueName: \"kubernetes.io/projected/0f37882c-17e3-4c70-a309-ee70392fed88-kube-api-access-bdnt4\") pod \"nova-cell1-conductor-db-sync-9gsdl\" (UID: \"0f37882c-17e3-4c70-a309-ee70392fed88\") " pod="openstack/nova-cell1-conductor-db-sync-9gsdl" Jan 26 13:21:33 crc kubenswrapper[4844]: I0126 13:21:33.129469 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f37882c-17e3-4c70-a309-ee70392fed88-scripts\") pod \"nova-cell1-conductor-db-sync-9gsdl\" (UID: \"0f37882c-17e3-4c70-a309-ee70392fed88\") " pod="openstack/nova-cell1-conductor-db-sync-9gsdl" Jan 26 13:21:33 crc kubenswrapper[4844]: I0126 13:21:33.129752 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f37882c-17e3-4c70-a309-ee70392fed88-config-data\") pod \"nova-cell1-conductor-db-sync-9gsdl\" (UID: \"0f37882c-17e3-4c70-a309-ee70392fed88\") " pod="openstack/nova-cell1-conductor-db-sync-9gsdl" Jan 26 13:21:33 crc kubenswrapper[4844]: I0126 13:21:33.216787 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-btlm2" event={"ID":"1ac64bcd-c0e5-44c8-9c11-abede4806663","Type":"ContainerStarted","Data":"1ad177eb0e519c75e9d75bcdb7b4a0fdeceb08a4f5b1a961b3c0c6567ed1d6f5"} Jan 26 13:21:33 crc kubenswrapper[4844]: I0126 13:21:33.216833 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-btlm2" event={"ID":"1ac64bcd-c0e5-44c8-9c11-abede4806663","Type":"ContainerStarted","Data":"284c3c69f7362ea398976b9330cfeee6691e77c9a30bf5f7c2b01f81fe894333"} Jan 26 13:21:33 crc kubenswrapper[4844]: I0126 13:21:33.222048 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"48b824ba-48ac-4a25-85de-436c4dd6c016","Type":"ContainerStarted","Data":"e0bff44e2c85a247483f7bf9a7b55cb2007b6c7f16dd0085b196e0786322e2a6"} Jan 26 13:21:33 crc kubenswrapper[4844]: I0126 13:21:33.223728 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b0400142-4fe7-4b74-822f-eee67c1bf20b","Type":"ContainerStarted","Data":"98d220bfbe1aa32cc412dd383ab85d675f5de3c566ed1251dfd4e4f8a21a1ed1"} Jan 26 13:21:33 crc kubenswrapper[4844]: I0126 
13:21:33.232640 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f37882c-17e3-4c70-a309-ee70392fed88-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-9gsdl\" (UID: \"0f37882c-17e3-4c70-a309-ee70392fed88\") " pod="openstack/nova-cell1-conductor-db-sync-9gsdl" Jan 26 13:21:33 crc kubenswrapper[4844]: I0126 13:21:33.232719 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdnt4\" (UniqueName: \"kubernetes.io/projected/0f37882c-17e3-4c70-a309-ee70392fed88-kube-api-access-bdnt4\") pod \"nova-cell1-conductor-db-sync-9gsdl\" (UID: \"0f37882c-17e3-4c70-a309-ee70392fed88\") " pod="openstack/nova-cell1-conductor-db-sync-9gsdl" Jan 26 13:21:33 crc kubenswrapper[4844]: I0126 13:21:33.232833 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f37882c-17e3-4c70-a309-ee70392fed88-scripts\") pod \"nova-cell1-conductor-db-sync-9gsdl\" (UID: \"0f37882c-17e3-4c70-a309-ee70392fed88\") " pod="openstack/nova-cell1-conductor-db-sync-9gsdl" Jan 26 13:21:33 crc kubenswrapper[4844]: I0126 13:21:33.232991 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f37882c-17e3-4c70-a309-ee70392fed88-config-data\") pod \"nova-cell1-conductor-db-sync-9gsdl\" (UID: \"0f37882c-17e3-4c70-a309-ee70392fed88\") " pod="openstack/nova-cell1-conductor-db-sync-9gsdl" Jan 26 13:21:33 crc kubenswrapper[4844]: I0126 13:21:33.241229 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f37882c-17e3-4c70-a309-ee70392fed88-scripts\") pod \"nova-cell1-conductor-db-sync-9gsdl\" (UID: \"0f37882c-17e3-4c70-a309-ee70392fed88\") " pod="openstack/nova-cell1-conductor-db-sync-9gsdl" Jan 26 13:21:33 crc kubenswrapper[4844]: I0126 13:21:33.241351 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f37882c-17e3-4c70-a309-ee70392fed88-config-data\") pod \"nova-cell1-conductor-db-sync-9gsdl\" (UID: \"0f37882c-17e3-4c70-a309-ee70392fed88\") " pod="openstack/nova-cell1-conductor-db-sync-9gsdl" Jan 26 13:21:33 crc kubenswrapper[4844]: I0126 13:21:33.242519 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-btlm2" podStartSLOduration=2.242493791 podStartE2EDuration="2.242493791s" podCreationTimestamp="2026-01-26 13:21:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 13:21:33.230196024 +0000 UTC m=+2270.163563636" watchObservedRunningTime="2026-01-26 13:21:33.242493791 +0000 UTC m=+2270.175861403" Jan 26 13:21:33 crc kubenswrapper[4844]: I0126 13:21:33.253165 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdnt4\" (UniqueName: \"kubernetes.io/projected/0f37882c-17e3-4c70-a309-ee70392fed88-kube-api-access-bdnt4\") pod \"nova-cell1-conductor-db-sync-9gsdl\" (UID: \"0f37882c-17e3-4c70-a309-ee70392fed88\") " pod="openstack/nova-cell1-conductor-db-sync-9gsdl" Jan 26 13:21:33 crc kubenswrapper[4844]: I0126 13:21:33.253727 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f37882c-17e3-4c70-a309-ee70392fed88-combined-ca-bundle\") pod 
\"nova-cell1-conductor-db-sync-9gsdl\" (UID: \"0f37882c-17e3-4c70-a309-ee70392fed88\") " pod="openstack/nova-cell1-conductor-db-sync-9gsdl" Jan 26 13:21:33 crc kubenswrapper[4844]: I0126 13:21:33.368815 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-9gsdl" Jan 26 13:21:33 crc kubenswrapper[4844]: I0126 13:21:33.702924 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 26 13:21:33 crc kubenswrapper[4844]: I0126 13:21:33.727547 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 13:21:34 crc kubenswrapper[4844]: I0126 13:21:34.066683 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-684f48dcbc-vswkx"] Jan 26 13:21:34 crc kubenswrapper[4844]: I0126 13:21:34.074956 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-9gsdl"] Jan 26 13:21:34 crc kubenswrapper[4844]: W0126 13:21:34.969290 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc9698c91_6def_408a_b9c8_b698138801d7.slice/crio-8c68ef54240c7ac328419089d332f84e5daf7008b5c933680a276a4af86a9ad5 WatchSource:0}: Error finding container 8c68ef54240c7ac328419089d332f84e5daf7008b5c933680a276a4af86a9ad5: Status 404 returned error can't find the container with id 8c68ef54240c7ac328419089d332f84e5daf7008b5c933680a276a4af86a9ad5 Jan 26 13:21:35 crc kubenswrapper[4844]: I0126 13:21:35.257725 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c9698c91-6def-408a-b9c8-b698138801d7","Type":"ContainerStarted","Data":"8c68ef54240c7ac328419089d332f84e5daf7008b5c933680a276a4af86a9ad5"} Jan 26 13:21:35 crc kubenswrapper[4844]: W0126 13:21:35.382532 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode5b81065_1990_4734_a78a_3172d68df686.slice/crio-3b128a8a973c45d34663f668df6ef664cde0eaa00e0d436a1e78da34bc3502d8 WatchSource:0}: Error finding container 3b128a8a973c45d34663f668df6ef664cde0eaa00e0d436a1e78da34bc3502d8: Status 404 returned error can't find the container with id 3b128a8a973c45d34663f668df6ef664cde0eaa00e0d436a1e78da34bc3502d8 Jan 26 13:21:35 crc kubenswrapper[4844]: W0126 13:21:35.384819 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0f37882c_17e3_4c70_a309_ee70392fed88.slice/crio-29f27a68ca66e0345bf83d223bc44520bcc88d1f130147050c0d95c54c8304cf WatchSource:0}: Error finding container 29f27a68ca66e0345bf83d223bc44520bcc88d1f130147050c0d95c54c8304cf: Status 404 returned error can't find the container with id 29f27a68ca66e0345bf83d223bc44520bcc88d1f130147050c0d95c54c8304cf Jan 26 13:21:35 crc kubenswrapper[4844]: I0126 13:21:35.556110 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 26 13:21:35 crc kubenswrapper[4844]: I0126 13:21:35.573883 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 13:21:36 crc kubenswrapper[4844]: I0126 13:21:36.272927 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c9698c91-6def-408a-b9c8-b698138801d7","Type":"ContainerStarted","Data":"fc580ff3c05f2ebb5efbd8aaddab3e9b74983f023df110ff635dfa59c4fe326e"} Jan 26 13:21:36 crc kubenswrapper[4844]: 
I0126 13:21:36.276162 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b0400142-4fe7-4b74-822f-eee67c1bf20b","Type":"ContainerStarted","Data":"6dbedc7c01c8acb0ccd15939896c968f133747087aeeb55a190225bdd020f833"} Jan 26 13:21:36 crc kubenswrapper[4844]: I0126 13:21:36.277635 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-9gsdl" event={"ID":"0f37882c-17e3-4c70-a309-ee70392fed88","Type":"ContainerStarted","Data":"1bb4993fb205800439b0a8823ccb1d8840270fab753df601ee0cb69703f656d8"} Jan 26 13:21:36 crc kubenswrapper[4844]: I0126 13:21:36.277672 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-9gsdl" event={"ID":"0f37882c-17e3-4c70-a309-ee70392fed88","Type":"ContainerStarted","Data":"29f27a68ca66e0345bf83d223bc44520bcc88d1f130147050c0d95c54c8304cf"} Jan 26 13:21:36 crc kubenswrapper[4844]: I0126 13:21:36.279510 4844 generic.go:334] "Generic (PLEG): container finished" podID="68596a47-7ecd-431f-8b10-00479d94c556" containerID="8989d1a8c08c45a13da524c4e8685da0dfe1021baff58f5dad14f6b102d6f6e8" exitCode=0 Jan 26 13:21:36 crc kubenswrapper[4844]: I0126 13:21:36.279564 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-684f48dcbc-vswkx" event={"ID":"68596a47-7ecd-431f-8b10-00479d94c556","Type":"ContainerDied","Data":"8989d1a8c08c45a13da524c4e8685da0dfe1021baff58f5dad14f6b102d6f6e8"} Jan 26 13:21:36 crc kubenswrapper[4844]: I0126 13:21:36.279580 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-684f48dcbc-vswkx" event={"ID":"68596a47-7ecd-431f-8b10-00479d94c556","Type":"ContainerStarted","Data":"a6fd55f9ce401591826a85d47fe23ce3964e4b53cff0c5fc83fe7c4a3ca7bb8f"} Jan 26 13:21:36 crc kubenswrapper[4844]: I0126 13:21:36.282351 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"e5b81065-1990-4734-a78a-3172d68df686","Type":"ContainerStarted","Data":"3b128a8a973c45d34663f668df6ef664cde0eaa00e0d436a1e78da34bc3502d8"} Jan 26 13:21:36 crc kubenswrapper[4844]: I0126 13:21:36.287263 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"48b824ba-48ac-4a25-85de-436c4dd6c016","Type":"ContainerStarted","Data":"07438575207e3019b35bc534e4b3f87278f65af77c874f7c95ce5a8451d36b00"} Jan 26 13:21:36 crc kubenswrapper[4844]: I0126 13:21:36.298750 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.690519602 podStartE2EDuration="5.298732858s" podCreationTimestamp="2026-01-26 13:21:31 +0000 UTC" firstStartedPulling="2026-01-26 13:21:32.862142216 +0000 UTC m=+2269.795509818" lastFinishedPulling="2026-01-26 13:21:35.470355452 +0000 UTC m=+2272.403723074" observedRunningTime="2026-01-26 13:21:36.291978685 +0000 UTC m=+2273.225346317" watchObservedRunningTime="2026-01-26 13:21:36.298732858 +0000 UTC m=+2273.232100470" Jan 26 13:21:36 crc kubenswrapper[4844]: I0126 13:21:36.348995 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.7275812090000002 podStartE2EDuration="5.348978443s" podCreationTimestamp="2026-01-26 13:21:31 +0000 UTC" firstStartedPulling="2026-01-26 13:21:32.847974074 +0000 UTC m=+2269.781341686" lastFinishedPulling="2026-01-26 13:21:35.469371308 +0000 UTC m=+2272.402738920" observedRunningTime="2026-01-26 13:21:36.332042644 +0000 UTC m=+2273.265410256" 
watchObservedRunningTime="2026-01-26 13:21:36.348978443 +0000 UTC m=+2273.282346055" Jan 26 13:21:36 crc kubenswrapper[4844]: I0126 13:21:36.349844 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-9gsdl" podStartSLOduration=4.349837614 podStartE2EDuration="4.349837614s" podCreationTimestamp="2026-01-26 13:21:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 13:21:36.346708548 +0000 UTC m=+2273.280076160" watchObservedRunningTime="2026-01-26 13:21:36.349837614 +0000 UTC m=+2273.283205226" Jan 26 13:21:37 crc kubenswrapper[4844]: I0126 13:21:37.105516 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 26 13:21:37 crc kubenswrapper[4844]: I0126 13:21:37.301849 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"48b824ba-48ac-4a25-85de-436c4dd6c016","Type":"ContainerStarted","Data":"f0dbe3890d03ca2f5388cece06a190c1028be58ed639c24ab71abbc06c3d71c1"} Jan 26 13:21:37 crc kubenswrapper[4844]: I0126 13:21:37.307156 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c9698c91-6def-408a-b9c8-b698138801d7","Type":"ContainerStarted","Data":"c4c6b3277abba796cb3363457a2cad94e31327de9724def46da7235c6b8adb77"} Jan 26 13:21:37 crc kubenswrapper[4844]: I0126 13:21:37.307694 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="c9698c91-6def-408a-b9c8-b698138801d7" containerName="nova-metadata-log" containerID="cri-o://fc580ff3c05f2ebb5efbd8aaddab3e9b74983f023df110ff635dfa59c4fe326e" gracePeriod=30 Jan 26 13:21:37 crc kubenswrapper[4844]: I0126 13:21:37.307869 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="c9698c91-6def-408a-b9c8-b698138801d7" containerName="nova-metadata-metadata" containerID="cri-o://c4c6b3277abba796cb3363457a2cad94e31327de9724def46da7235c6b8adb77" gracePeriod=30 Jan 26 13:21:37 crc kubenswrapper[4844]: I0126 13:21:37.329529 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=5.776993002 podStartE2EDuration="6.329509358s" podCreationTimestamp="2026-01-26 13:21:31 +0000 UTC" firstStartedPulling="2026-01-26 13:21:34.976251017 +0000 UTC m=+2271.909618629" lastFinishedPulling="2026-01-26 13:21:35.528767373 +0000 UTC m=+2272.462134985" observedRunningTime="2026-01-26 13:21:37.326979467 +0000 UTC m=+2274.260347079" watchObservedRunningTime="2026-01-26 13:21:37.329509358 +0000 UTC m=+2274.262876980" Jan 26 13:21:37 crc kubenswrapper[4844]: I0126 13:21:37.417718 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 26 13:21:37 crc kubenswrapper[4844]: I0126 13:21:37.417928 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 26 13:21:37 crc kubenswrapper[4844]: I0126 13:21:37.854834 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 13:21:37 crc kubenswrapper[4844]: I0126 13:21:37.942582 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rszbj\" (UniqueName: \"kubernetes.io/projected/c9698c91-6def-408a-b9c8-b698138801d7-kube-api-access-rszbj\") pod \"c9698c91-6def-408a-b9c8-b698138801d7\" (UID: \"c9698c91-6def-408a-b9c8-b698138801d7\") " Jan 26 13:21:37 crc kubenswrapper[4844]: I0126 13:21:37.942745 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9698c91-6def-408a-b9c8-b698138801d7-combined-ca-bundle\") pod \"c9698c91-6def-408a-b9c8-b698138801d7\" (UID: \"c9698c91-6def-408a-b9c8-b698138801d7\") " Jan 26 13:21:37 crc kubenswrapper[4844]: I0126 13:21:37.942797 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c9698c91-6def-408a-b9c8-b698138801d7-logs\") pod \"c9698c91-6def-408a-b9c8-b698138801d7\" (UID: \"c9698c91-6def-408a-b9c8-b698138801d7\") " Jan 26 13:21:37 crc kubenswrapper[4844]: I0126 13:21:37.942877 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9698c91-6def-408a-b9c8-b698138801d7-config-data\") pod \"c9698c91-6def-408a-b9c8-b698138801d7\" (UID: \"c9698c91-6def-408a-b9c8-b698138801d7\") " Jan 26 13:21:37 crc kubenswrapper[4844]: I0126 13:21:37.943630 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9698c91-6def-408a-b9c8-b698138801d7-logs" (OuterVolumeSpecName: "logs") pod "c9698c91-6def-408a-b9c8-b698138801d7" (UID: "c9698c91-6def-408a-b9c8-b698138801d7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:21:37 crc kubenswrapper[4844]: I0126 13:21:37.951957 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9698c91-6def-408a-b9c8-b698138801d7-kube-api-access-rszbj" (OuterVolumeSpecName: "kube-api-access-rszbj") pod "c9698c91-6def-408a-b9c8-b698138801d7" (UID: "c9698c91-6def-408a-b9c8-b698138801d7"). InnerVolumeSpecName "kube-api-access-rszbj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:21:37 crc kubenswrapper[4844]: I0126 13:21:37.974652 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9698c91-6def-408a-b9c8-b698138801d7-config-data" (OuterVolumeSpecName: "config-data") pod "c9698c91-6def-408a-b9c8-b698138801d7" (UID: "c9698c91-6def-408a-b9c8-b698138801d7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:21:37 crc kubenswrapper[4844]: I0126 13:21:37.978670 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9698c91-6def-408a-b9c8-b698138801d7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c9698c91-6def-408a-b9c8-b698138801d7" (UID: "c9698c91-6def-408a-b9c8-b698138801d7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:21:38 crc kubenswrapper[4844]: I0126 13:21:38.045010 4844 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9698c91-6def-408a-b9c8-b698138801d7-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 13:21:38 crc kubenswrapper[4844]: I0126 13:21:38.045044 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rszbj\" (UniqueName: \"kubernetes.io/projected/c9698c91-6def-408a-b9c8-b698138801d7-kube-api-access-rszbj\") on node \"crc\" DevicePath \"\"" Jan 26 13:21:38 crc kubenswrapper[4844]: I0126 13:21:38.045056 4844 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9698c91-6def-408a-b9c8-b698138801d7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 13:21:38 crc kubenswrapper[4844]: I0126 13:21:38.045064 4844 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c9698c91-6def-408a-b9c8-b698138801d7-logs\") on node \"crc\" DevicePath \"\"" Jan 26 13:21:38 crc kubenswrapper[4844]: I0126 13:21:38.319279 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-684f48dcbc-vswkx" event={"ID":"68596a47-7ecd-431f-8b10-00479d94c556","Type":"ContainerStarted","Data":"e89fe742b487f51f0f90761df3dd503c98581b86c48629f7dc8cfb9d69d5a120"} Jan 26 13:21:38 crc kubenswrapper[4844]: I0126 13:21:38.320109 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-684f48dcbc-vswkx" Jan 26 13:21:38 crc kubenswrapper[4844]: I0126 13:21:38.321934 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"e5b81065-1990-4734-a78a-3172d68df686","Type":"ContainerStarted","Data":"90a9cd29d6650e34a0a0a05b983cee590249abec321098a5864f8b02bde8bc7b"} Jan 26 13:21:38 crc kubenswrapper[4844]: I0126 13:21:38.322039 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="e5b81065-1990-4734-a78a-3172d68df686" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://90a9cd29d6650e34a0a0a05b983cee590249abec321098a5864f8b02bde8bc7b" gracePeriod=30 Jan 26 13:21:38 crc kubenswrapper[4844]: I0126 13:21:38.329742 4844 generic.go:334] "Generic (PLEG): container finished" podID="c9698c91-6def-408a-b9c8-b698138801d7" containerID="c4c6b3277abba796cb3363457a2cad94e31327de9724def46da7235c6b8adb77" exitCode=0 Jan 26 13:21:38 crc kubenswrapper[4844]: I0126 13:21:38.329771 4844 generic.go:334] "Generic (PLEG): container finished" podID="c9698c91-6def-408a-b9c8-b698138801d7" containerID="fc580ff3c05f2ebb5efbd8aaddab3e9b74983f023df110ff635dfa59c4fe326e" exitCode=143 Jan 26 13:21:38 crc kubenswrapper[4844]: I0126 13:21:38.330706 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 13:21:38 crc kubenswrapper[4844]: I0126 13:21:38.333718 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c9698c91-6def-408a-b9c8-b698138801d7","Type":"ContainerDied","Data":"c4c6b3277abba796cb3363457a2cad94e31327de9724def46da7235c6b8adb77"} Jan 26 13:21:38 crc kubenswrapper[4844]: I0126 13:21:38.333796 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c9698c91-6def-408a-b9c8-b698138801d7","Type":"ContainerDied","Data":"fc580ff3c05f2ebb5efbd8aaddab3e9b74983f023df110ff635dfa59c4fe326e"} Jan 26 13:21:38 crc kubenswrapper[4844]: I0126 13:21:38.333816 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c9698c91-6def-408a-b9c8-b698138801d7","Type":"ContainerDied","Data":"8c68ef54240c7ac328419089d332f84e5daf7008b5c933680a276a4af86a9ad5"} Jan 26 13:21:38 crc kubenswrapper[4844]: I0126 13:21:38.333841 4844 scope.go:117] "RemoveContainer" containerID="c4c6b3277abba796cb3363457a2cad94e31327de9724def46da7235c6b8adb77" Jan 26 13:21:38 crc kubenswrapper[4844]: I0126 13:21:38.362800 4844 scope.go:117] "RemoveContainer" containerID="fc580ff3c05f2ebb5efbd8aaddab3e9b74983f023df110ff635dfa59c4fe326e" Jan 26 13:21:38 crc kubenswrapper[4844]: I0126 13:21:38.366551 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-684f48dcbc-vswkx" podStartSLOduration=6.366532779 podStartE2EDuration="6.366532779s" podCreationTimestamp="2026-01-26 13:21:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 13:21:38.355348758 +0000 UTC m=+2275.288716380" watchObservedRunningTime="2026-01-26 13:21:38.366532779 +0000 UTC m=+2275.299900391" Jan 26 13:21:38 crc kubenswrapper[4844]: I0126 13:21:38.391008 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=5.687518889 podStartE2EDuration="7.390992311s" podCreationTimestamp="2026-01-26 13:21:31 +0000 UTC" firstStartedPulling="2026-01-26 13:21:35.384878726 +0000 UTC m=+2272.318246338" lastFinishedPulling="2026-01-26 13:21:37.088352148 +0000 UTC m=+2274.021719760" observedRunningTime="2026-01-26 13:21:38.384137255 +0000 UTC m=+2275.317504857" watchObservedRunningTime="2026-01-26 13:21:38.390992311 +0000 UTC m=+2275.324359923" Jan 26 13:21:38 crc kubenswrapper[4844]: I0126 13:21:38.402218 4844 scope.go:117] "RemoveContainer" containerID="c4c6b3277abba796cb3363457a2cad94e31327de9724def46da7235c6b8adb77" Jan 26 13:21:38 crc kubenswrapper[4844]: E0126 13:21:38.402532 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c4c6b3277abba796cb3363457a2cad94e31327de9724def46da7235c6b8adb77\": container with ID starting with c4c6b3277abba796cb3363457a2cad94e31327de9724def46da7235c6b8adb77 not found: ID does not exist" containerID="c4c6b3277abba796cb3363457a2cad94e31327de9724def46da7235c6b8adb77" Jan 26 13:21:38 crc kubenswrapper[4844]: I0126 13:21:38.402560 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4c6b3277abba796cb3363457a2cad94e31327de9724def46da7235c6b8adb77"} err="failed to get container status \"c4c6b3277abba796cb3363457a2cad94e31327de9724def46da7235c6b8adb77\": rpc error: code = NotFound desc = could not find container 
\"c4c6b3277abba796cb3363457a2cad94e31327de9724def46da7235c6b8adb77\": container with ID starting with c4c6b3277abba796cb3363457a2cad94e31327de9724def46da7235c6b8adb77 not found: ID does not exist" Jan 26 13:21:38 crc kubenswrapper[4844]: I0126 13:21:38.402579 4844 scope.go:117] "RemoveContainer" containerID="fc580ff3c05f2ebb5efbd8aaddab3e9b74983f023df110ff635dfa59c4fe326e" Jan 26 13:21:38 crc kubenswrapper[4844]: E0126 13:21:38.403180 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc580ff3c05f2ebb5efbd8aaddab3e9b74983f023df110ff635dfa59c4fe326e\": container with ID starting with fc580ff3c05f2ebb5efbd8aaddab3e9b74983f023df110ff635dfa59c4fe326e not found: ID does not exist" containerID="fc580ff3c05f2ebb5efbd8aaddab3e9b74983f023df110ff635dfa59c4fe326e" Jan 26 13:21:38 crc kubenswrapper[4844]: I0126 13:21:38.403201 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc580ff3c05f2ebb5efbd8aaddab3e9b74983f023df110ff635dfa59c4fe326e"} err="failed to get container status \"fc580ff3c05f2ebb5efbd8aaddab3e9b74983f023df110ff635dfa59c4fe326e\": rpc error: code = NotFound desc = could not find container \"fc580ff3c05f2ebb5efbd8aaddab3e9b74983f023df110ff635dfa59c4fe326e\": container with ID starting with fc580ff3c05f2ebb5efbd8aaddab3e9b74983f023df110ff635dfa59c4fe326e not found: ID does not exist" Jan 26 13:21:38 crc kubenswrapper[4844]: I0126 13:21:38.403214 4844 scope.go:117] "RemoveContainer" containerID="c4c6b3277abba796cb3363457a2cad94e31327de9724def46da7235c6b8adb77" Jan 26 13:21:38 crc kubenswrapper[4844]: I0126 13:21:38.403458 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4c6b3277abba796cb3363457a2cad94e31327de9724def46da7235c6b8adb77"} err="failed to get container status \"c4c6b3277abba796cb3363457a2cad94e31327de9724def46da7235c6b8adb77\": rpc error: code = NotFound desc = could not find container \"c4c6b3277abba796cb3363457a2cad94e31327de9724def46da7235c6b8adb77\": container with ID starting with c4c6b3277abba796cb3363457a2cad94e31327de9724def46da7235c6b8adb77 not found: ID does not exist" Jan 26 13:21:38 crc kubenswrapper[4844]: I0126 13:21:38.403498 4844 scope.go:117] "RemoveContainer" containerID="fc580ff3c05f2ebb5efbd8aaddab3e9b74983f023df110ff635dfa59c4fe326e" Jan 26 13:21:38 crc kubenswrapper[4844]: I0126 13:21:38.405015 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc580ff3c05f2ebb5efbd8aaddab3e9b74983f023df110ff635dfa59c4fe326e"} err="failed to get container status \"fc580ff3c05f2ebb5efbd8aaddab3e9b74983f023df110ff635dfa59c4fe326e\": rpc error: code = NotFound desc = could not find container \"fc580ff3c05f2ebb5efbd8aaddab3e9b74983f023df110ff635dfa59c4fe326e\": container with ID starting with fc580ff3c05f2ebb5efbd8aaddab3e9b74983f023df110ff635dfa59c4fe326e not found: ID does not exist" Jan 26 13:21:38 crc kubenswrapper[4844]: I0126 13:21:38.415517 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 13:21:38 crc kubenswrapper[4844]: I0126 13:21:38.432790 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 13:21:38 crc kubenswrapper[4844]: I0126 13:21:38.444738 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 26 13:21:38 crc kubenswrapper[4844]: E0126 13:21:38.445231 4844 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="c9698c91-6def-408a-b9c8-b698138801d7" containerName="nova-metadata-metadata" Jan 26 13:21:38 crc kubenswrapper[4844]: I0126 13:21:38.445247 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9698c91-6def-408a-b9c8-b698138801d7" containerName="nova-metadata-metadata" Jan 26 13:21:38 crc kubenswrapper[4844]: E0126 13:21:38.445268 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9698c91-6def-408a-b9c8-b698138801d7" containerName="nova-metadata-log" Jan 26 13:21:38 crc kubenswrapper[4844]: I0126 13:21:38.445274 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9698c91-6def-408a-b9c8-b698138801d7" containerName="nova-metadata-log" Jan 26 13:21:38 crc kubenswrapper[4844]: I0126 13:21:38.445468 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9698c91-6def-408a-b9c8-b698138801d7" containerName="nova-metadata-log" Jan 26 13:21:38 crc kubenswrapper[4844]: I0126 13:21:38.445488 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9698c91-6def-408a-b9c8-b698138801d7" containerName="nova-metadata-metadata" Jan 26 13:21:38 crc kubenswrapper[4844]: I0126 13:21:38.446763 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 13:21:38 crc kubenswrapper[4844]: I0126 13:21:38.448915 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 26 13:21:38 crc kubenswrapper[4844]: I0126 13:21:38.449225 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 26 13:21:38 crc kubenswrapper[4844]: I0126 13:21:38.454326 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 13:21:38 crc kubenswrapper[4844]: I0126 13:21:38.555080 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f43ace26-ee90-431e-b8ad-cf31b93c7fe3-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"f43ace26-ee90-431e-b8ad-cf31b93c7fe3\") " pod="openstack/nova-metadata-0" Jan 26 13:21:38 crc kubenswrapper[4844]: I0126 13:21:38.555133 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4dqk\" (UniqueName: \"kubernetes.io/projected/f43ace26-ee90-431e-b8ad-cf31b93c7fe3-kube-api-access-b4dqk\") pod \"nova-metadata-0\" (UID: \"f43ace26-ee90-431e-b8ad-cf31b93c7fe3\") " pod="openstack/nova-metadata-0" Jan 26 13:21:38 crc kubenswrapper[4844]: I0126 13:21:38.555180 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f43ace26-ee90-431e-b8ad-cf31b93c7fe3-logs\") pod \"nova-metadata-0\" (UID: \"f43ace26-ee90-431e-b8ad-cf31b93c7fe3\") " pod="openstack/nova-metadata-0" Jan 26 13:21:38 crc kubenswrapper[4844]: I0126 13:21:38.555318 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f43ace26-ee90-431e-b8ad-cf31b93c7fe3-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f43ace26-ee90-431e-b8ad-cf31b93c7fe3\") " pod="openstack/nova-metadata-0" Jan 26 13:21:38 crc kubenswrapper[4844]: I0126 13:21:38.555347 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/f43ace26-ee90-431e-b8ad-cf31b93c7fe3-config-data\") pod \"nova-metadata-0\" (UID: \"f43ace26-ee90-431e-b8ad-cf31b93c7fe3\") " pod="openstack/nova-metadata-0" Jan 26 13:21:38 crc kubenswrapper[4844]: I0126 13:21:38.657210 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f43ace26-ee90-431e-b8ad-cf31b93c7fe3-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"f43ace26-ee90-431e-b8ad-cf31b93c7fe3\") " pod="openstack/nova-metadata-0" Jan 26 13:21:38 crc kubenswrapper[4844]: I0126 13:21:38.657281 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4dqk\" (UniqueName: \"kubernetes.io/projected/f43ace26-ee90-431e-b8ad-cf31b93c7fe3-kube-api-access-b4dqk\") pod \"nova-metadata-0\" (UID: \"f43ace26-ee90-431e-b8ad-cf31b93c7fe3\") " pod="openstack/nova-metadata-0" Jan 26 13:21:38 crc kubenswrapper[4844]: I0126 13:21:38.657316 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f43ace26-ee90-431e-b8ad-cf31b93c7fe3-logs\") pod \"nova-metadata-0\" (UID: \"f43ace26-ee90-431e-b8ad-cf31b93c7fe3\") " pod="openstack/nova-metadata-0" Jan 26 13:21:38 crc kubenswrapper[4844]: I0126 13:21:38.657426 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f43ace26-ee90-431e-b8ad-cf31b93c7fe3-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f43ace26-ee90-431e-b8ad-cf31b93c7fe3\") " pod="openstack/nova-metadata-0" Jan 26 13:21:38 crc kubenswrapper[4844]: I0126 13:21:38.657448 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f43ace26-ee90-431e-b8ad-cf31b93c7fe3-config-data\") pod \"nova-metadata-0\" (UID: \"f43ace26-ee90-431e-b8ad-cf31b93c7fe3\") " pod="openstack/nova-metadata-0" Jan 26 13:21:38 crc kubenswrapper[4844]: I0126 13:21:38.658094 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f43ace26-ee90-431e-b8ad-cf31b93c7fe3-logs\") pod \"nova-metadata-0\" (UID: \"f43ace26-ee90-431e-b8ad-cf31b93c7fe3\") " pod="openstack/nova-metadata-0" Jan 26 13:21:38 crc kubenswrapper[4844]: I0126 13:21:38.666570 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f43ace26-ee90-431e-b8ad-cf31b93c7fe3-config-data\") pod \"nova-metadata-0\" (UID: \"f43ace26-ee90-431e-b8ad-cf31b93c7fe3\") " pod="openstack/nova-metadata-0" Jan 26 13:21:38 crc kubenswrapper[4844]: I0126 13:21:38.666556 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f43ace26-ee90-431e-b8ad-cf31b93c7fe3-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"f43ace26-ee90-431e-b8ad-cf31b93c7fe3\") " pod="openstack/nova-metadata-0" Jan 26 13:21:38 crc kubenswrapper[4844]: I0126 13:21:38.666717 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f43ace26-ee90-431e-b8ad-cf31b93c7fe3-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f43ace26-ee90-431e-b8ad-cf31b93c7fe3\") " pod="openstack/nova-metadata-0" Jan 26 13:21:38 crc kubenswrapper[4844]: I0126 13:21:38.690081 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-b4dqk\" (UniqueName: \"kubernetes.io/projected/f43ace26-ee90-431e-b8ad-cf31b93c7fe3-kube-api-access-b4dqk\") pod \"nova-metadata-0\" (UID: \"f43ace26-ee90-431e-b8ad-cf31b93c7fe3\") " pod="openstack/nova-metadata-0" Jan 26 13:21:38 crc kubenswrapper[4844]: I0126 13:21:38.768318 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 13:21:39 crc kubenswrapper[4844]: I0126 13:21:39.242843 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 13:21:39 crc kubenswrapper[4844]: I0126 13:21:39.326825 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9698c91-6def-408a-b9c8-b698138801d7" path="/var/lib/kubelet/pods/c9698c91-6def-408a-b9c8-b698138801d7/volumes" Jan 26 13:21:39 crc kubenswrapper[4844]: I0126 13:21:39.348923 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f43ace26-ee90-431e-b8ad-cf31b93c7fe3","Type":"ContainerStarted","Data":"214f14861e2f540b01e5f28353bec2065d1a0871c6c92e8cd1ecbd6b195f71a3"} Jan 26 13:21:40 crc kubenswrapper[4844]: I0126 13:21:40.380349 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f43ace26-ee90-431e-b8ad-cf31b93c7fe3","Type":"ContainerStarted","Data":"1667f1e72dba47ba014668f25f74f9896acf7e8783455c6e4fed0d57171dc24c"} Jan 26 13:21:40 crc kubenswrapper[4844]: I0126 13:21:40.380913 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f43ace26-ee90-431e-b8ad-cf31b93c7fe3","Type":"ContainerStarted","Data":"8de8bb4e9abf3b7f22ec32d21e2dcdcd059f684885f3008155d689f29a775cc3"} Jan 26 13:21:40 crc kubenswrapper[4844]: I0126 13:21:40.406316 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.406294292 podStartE2EDuration="2.406294292s" podCreationTimestamp="2026-01-26 13:21:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 13:21:40.403221808 +0000 UTC m=+2277.336589430" watchObservedRunningTime="2026-01-26 13:21:40.406294292 +0000 UTC m=+2277.339661904" Jan 26 13:21:42 crc kubenswrapper[4844]: I0126 13:21:42.090161 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 26 13:21:42 crc kubenswrapper[4844]: I0126 13:21:42.105188 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 26 13:21:42 crc kubenswrapper[4844]: I0126 13:21:42.105708 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 26 13:21:42 crc kubenswrapper[4844]: I0126 13:21:42.137288 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 26 13:21:42 crc kubenswrapper[4844]: I0126 13:21:42.397044 4844 generic.go:334] "Generic (PLEG): container finished" podID="1ac64bcd-c0e5-44c8-9c11-abede4806663" containerID="1ad177eb0e519c75e9d75bcdb7b4a0fdeceb08a4f5b1a961b3c0c6567ed1d6f5" exitCode=0 Jan 26 13:21:42 crc kubenswrapper[4844]: I0126 13:21:42.397883 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-btlm2" event={"ID":"1ac64bcd-c0e5-44c8-9c11-abede4806663","Type":"ContainerDied","Data":"1ad177eb0e519c75e9d75bcdb7b4a0fdeceb08a4f5b1a961b3c0c6567ed1d6f5"} Jan 26 13:21:42 crc kubenswrapper[4844]: 
I0126 13:21:42.399330 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 26 13:21:42 crc kubenswrapper[4844]: I0126 13:21:42.429806 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 26 13:21:42 crc kubenswrapper[4844]: I0126 13:21:42.738577 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-684f48dcbc-vswkx" Jan 26 13:21:42 crc kubenswrapper[4844]: I0126 13:21:42.815908 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-584dfd9675-8wzdw"] Jan 26 13:21:42 crc kubenswrapper[4844]: I0126 13:21:42.816146 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-584dfd9675-8wzdw" podUID="955c4df0-924d-439d-8a58-66f49e93cf44" containerName="dnsmasq-dns" containerID="cri-o://dd4ef9896a032c4f099137976f07aecb620fb6a4975a0ab3dfd0a22073c86bdc" gracePeriod=10 Jan 26 13:21:43 crc kubenswrapper[4844]: I0126 13:21:43.174775 4844 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="48b824ba-48ac-4a25-85de-436c4dd6c016" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.207:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 13:21:43 crc kubenswrapper[4844]: I0126 13:21:43.174986 4844 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="48b824ba-48ac-4a25-85de-436c4dd6c016" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.207:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 13:21:43 crc kubenswrapper[4844]: I0126 13:21:43.477942 4844 generic.go:334] "Generic (PLEG): container finished" podID="955c4df0-924d-439d-8a58-66f49e93cf44" containerID="dd4ef9896a032c4f099137976f07aecb620fb6a4975a0ab3dfd0a22073c86bdc" exitCode=0 Jan 26 13:21:43 crc kubenswrapper[4844]: I0126 13:21:43.478020 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-584dfd9675-8wzdw" event={"ID":"955c4df0-924d-439d-8a58-66f49e93cf44","Type":"ContainerDied","Data":"dd4ef9896a032c4f099137976f07aecb620fb6a4975a0ab3dfd0a22073c86bdc"} Jan 26 13:21:43 crc kubenswrapper[4844]: I0126 13:21:43.478088 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-584dfd9675-8wzdw" event={"ID":"955c4df0-924d-439d-8a58-66f49e93cf44","Type":"ContainerDied","Data":"369c0b626029733f327784afd8bffda814bd8530300a88f8529930dc66370c5e"} Jan 26 13:21:43 crc kubenswrapper[4844]: I0126 13:21:43.479226 4844 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="369c0b626029733f327784afd8bffda814bd8530300a88f8529930dc66370c5e" Jan 26 13:21:43 crc kubenswrapper[4844]: I0126 13:21:43.522188 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-584dfd9675-8wzdw" Jan 26 13:21:43 crc kubenswrapper[4844]: I0126 13:21:43.626030 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/955c4df0-924d-439d-8a58-66f49e93cf44-dns-swift-storage-0\") pod \"955c4df0-924d-439d-8a58-66f49e93cf44\" (UID: \"955c4df0-924d-439d-8a58-66f49e93cf44\") " Jan 26 13:21:43 crc kubenswrapper[4844]: I0126 13:21:43.626133 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/955c4df0-924d-439d-8a58-66f49e93cf44-ovsdbserver-nb\") pod \"955c4df0-924d-439d-8a58-66f49e93cf44\" (UID: \"955c4df0-924d-439d-8a58-66f49e93cf44\") " Jan 26 13:21:43 crc kubenswrapper[4844]: I0126 13:21:43.626212 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/955c4df0-924d-439d-8a58-66f49e93cf44-config\") pod \"955c4df0-924d-439d-8a58-66f49e93cf44\" (UID: \"955c4df0-924d-439d-8a58-66f49e93cf44\") " Jan 26 13:21:43 crc kubenswrapper[4844]: I0126 13:21:43.626260 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/955c4df0-924d-439d-8a58-66f49e93cf44-dns-svc\") pod \"955c4df0-924d-439d-8a58-66f49e93cf44\" (UID: \"955c4df0-924d-439d-8a58-66f49e93cf44\") " Jan 26 13:21:43 crc kubenswrapper[4844]: I0126 13:21:43.626322 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/955c4df0-924d-439d-8a58-66f49e93cf44-ovsdbserver-sb\") pod \"955c4df0-924d-439d-8a58-66f49e93cf44\" (UID: \"955c4df0-924d-439d-8a58-66f49e93cf44\") " Jan 26 13:21:43 crc kubenswrapper[4844]: I0126 13:21:43.626990 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5rwh4\" (UniqueName: \"kubernetes.io/projected/955c4df0-924d-439d-8a58-66f49e93cf44-kube-api-access-5rwh4\") pod \"955c4df0-924d-439d-8a58-66f49e93cf44\" (UID: \"955c4df0-924d-439d-8a58-66f49e93cf44\") " Jan 26 13:21:43 crc kubenswrapper[4844]: I0126 13:21:43.633746 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/955c4df0-924d-439d-8a58-66f49e93cf44-kube-api-access-5rwh4" (OuterVolumeSpecName: "kube-api-access-5rwh4") pod "955c4df0-924d-439d-8a58-66f49e93cf44" (UID: "955c4df0-924d-439d-8a58-66f49e93cf44"). InnerVolumeSpecName "kube-api-access-5rwh4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:21:43 crc kubenswrapper[4844]: I0126 13:21:43.704634 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/955c4df0-924d-439d-8a58-66f49e93cf44-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "955c4df0-924d-439d-8a58-66f49e93cf44" (UID: "955c4df0-924d-439d-8a58-66f49e93cf44"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:21:43 crc kubenswrapper[4844]: I0126 13:21:43.712958 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/955c4df0-924d-439d-8a58-66f49e93cf44-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "955c4df0-924d-439d-8a58-66f49e93cf44" (UID: "955c4df0-924d-439d-8a58-66f49e93cf44"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:21:43 crc kubenswrapper[4844]: I0126 13:21:43.723780 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/955c4df0-924d-439d-8a58-66f49e93cf44-config" (OuterVolumeSpecName: "config") pod "955c4df0-924d-439d-8a58-66f49e93cf44" (UID: "955c4df0-924d-439d-8a58-66f49e93cf44"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:21:43 crc kubenswrapper[4844]: I0126 13:21:43.727388 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/955c4df0-924d-439d-8a58-66f49e93cf44-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "955c4df0-924d-439d-8a58-66f49e93cf44" (UID: "955c4df0-924d-439d-8a58-66f49e93cf44"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:21:43 crc kubenswrapper[4844]: I0126 13:21:43.733162 4844 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/955c4df0-924d-439d-8a58-66f49e93cf44-config\") on node \"crc\" DevicePath \"\"" Jan 26 13:21:43 crc kubenswrapper[4844]: I0126 13:21:43.733190 4844 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/955c4df0-924d-439d-8a58-66f49e93cf44-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 13:21:43 crc kubenswrapper[4844]: I0126 13:21:43.733200 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5rwh4\" (UniqueName: \"kubernetes.io/projected/955c4df0-924d-439d-8a58-66f49e93cf44-kube-api-access-5rwh4\") on node \"crc\" DevicePath \"\"" Jan 26 13:21:43 crc kubenswrapper[4844]: I0126 13:21:43.733368 4844 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/955c4df0-924d-439d-8a58-66f49e93cf44-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 26 13:21:43 crc kubenswrapper[4844]: I0126 13:21:43.733380 4844 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/955c4df0-924d-439d-8a58-66f49e93cf44-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 13:21:43 crc kubenswrapper[4844]: I0126 13:21:43.734394 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/955c4df0-924d-439d-8a58-66f49e93cf44-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "955c4df0-924d-439d-8a58-66f49e93cf44" (UID: "955c4df0-924d-439d-8a58-66f49e93cf44"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:21:43 crc kubenswrapper[4844]: I0126 13:21:43.768804 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 26 13:21:43 crc kubenswrapper[4844]: I0126 13:21:43.768976 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 26 13:21:43 crc kubenswrapper[4844]: I0126 13:21:43.840688 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-btlm2" Jan 26 13:21:43 crc kubenswrapper[4844]: I0126 13:21:43.840724 4844 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/955c4df0-924d-439d-8a58-66f49e93cf44-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 13:21:43 crc kubenswrapper[4844]: I0126 13:21:43.941888 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c8sht\" (UniqueName: \"kubernetes.io/projected/1ac64bcd-c0e5-44c8-9c11-abede4806663-kube-api-access-c8sht\") pod \"1ac64bcd-c0e5-44c8-9c11-abede4806663\" (UID: \"1ac64bcd-c0e5-44c8-9c11-abede4806663\") " Jan 26 13:21:43 crc kubenswrapper[4844]: I0126 13:21:43.942109 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ac64bcd-c0e5-44c8-9c11-abede4806663-config-data\") pod \"1ac64bcd-c0e5-44c8-9c11-abede4806663\" (UID: \"1ac64bcd-c0e5-44c8-9c11-abede4806663\") " Jan 26 13:21:43 crc kubenswrapper[4844]: I0126 13:21:43.942188 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ac64bcd-c0e5-44c8-9c11-abede4806663-combined-ca-bundle\") pod \"1ac64bcd-c0e5-44c8-9c11-abede4806663\" (UID: \"1ac64bcd-c0e5-44c8-9c11-abede4806663\") " Jan 26 13:21:43 crc kubenswrapper[4844]: I0126 13:21:43.942253 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ac64bcd-c0e5-44c8-9c11-abede4806663-scripts\") pod \"1ac64bcd-c0e5-44c8-9c11-abede4806663\" (UID: \"1ac64bcd-c0e5-44c8-9c11-abede4806663\") " Jan 26 13:21:43 crc kubenswrapper[4844]: I0126 13:21:43.945636 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ac64bcd-c0e5-44c8-9c11-abede4806663-kube-api-access-c8sht" (OuterVolumeSpecName: "kube-api-access-c8sht") pod "1ac64bcd-c0e5-44c8-9c11-abede4806663" (UID: "1ac64bcd-c0e5-44c8-9c11-abede4806663"). InnerVolumeSpecName "kube-api-access-c8sht". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:21:43 crc kubenswrapper[4844]: I0126 13:21:43.948643 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ac64bcd-c0e5-44c8-9c11-abede4806663-scripts" (OuterVolumeSpecName: "scripts") pod "1ac64bcd-c0e5-44c8-9c11-abede4806663" (UID: "1ac64bcd-c0e5-44c8-9c11-abede4806663"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:21:43 crc kubenswrapper[4844]: I0126 13:21:43.975896 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ac64bcd-c0e5-44c8-9c11-abede4806663-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1ac64bcd-c0e5-44c8-9c11-abede4806663" (UID: "1ac64bcd-c0e5-44c8-9c11-abede4806663"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:21:43 crc kubenswrapper[4844]: I0126 13:21:43.976332 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ac64bcd-c0e5-44c8-9c11-abede4806663-config-data" (OuterVolumeSpecName: "config-data") pod "1ac64bcd-c0e5-44c8-9c11-abede4806663" (UID: "1ac64bcd-c0e5-44c8-9c11-abede4806663"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:21:44 crc kubenswrapper[4844]: I0126 13:21:44.045123 4844 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ac64bcd-c0e5-44c8-9c11-abede4806663-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 13:21:44 crc kubenswrapper[4844]: I0126 13:21:44.045153 4844 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ac64bcd-c0e5-44c8-9c11-abede4806663-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 13:21:44 crc kubenswrapper[4844]: I0126 13:21:44.045167 4844 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ac64bcd-c0e5-44c8-9c11-abede4806663-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 13:21:44 crc kubenswrapper[4844]: I0126 13:21:44.045177 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c8sht\" (UniqueName: \"kubernetes.io/projected/1ac64bcd-c0e5-44c8-9c11-abede4806663-kube-api-access-c8sht\") on node \"crc\" DevicePath \"\"" Jan 26 13:21:44 crc kubenswrapper[4844]: I0126 13:21:44.313008 4844 scope.go:117] "RemoveContainer" containerID="003e8a783231d0610c61a79899be5429104525b8053b82bc16d011d8b1eff87d" Jan 26 13:21:44 crc kubenswrapper[4844]: E0126 13:21:44.313260 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 13:21:44 crc kubenswrapper[4844]: I0126 13:21:44.493007 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-584dfd9675-8wzdw" Jan 26 13:21:44 crc kubenswrapper[4844]: I0126 13:21:44.494709 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-btlm2" event={"ID":"1ac64bcd-c0e5-44c8-9c11-abede4806663","Type":"ContainerDied","Data":"284c3c69f7362ea398976b9330cfeee6691e77c9a30bf5f7c2b01f81fe894333"} Jan 26 13:21:44 crc kubenswrapper[4844]: I0126 13:21:44.494763 4844 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="284c3c69f7362ea398976b9330cfeee6691e77c9a30bf5f7c2b01f81fe894333" Jan 26 13:21:44 crc kubenswrapper[4844]: I0126 13:21:44.494765 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-btlm2" Jan 26 13:21:44 crc kubenswrapper[4844]: I0126 13:21:44.543356 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-584dfd9675-8wzdw"] Jan 26 13:21:44 crc kubenswrapper[4844]: I0126 13:21:44.551719 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-584dfd9675-8wzdw"] Jan 26 13:21:44 crc kubenswrapper[4844]: I0126 13:21:44.597464 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 13:21:44 crc kubenswrapper[4844]: I0126 13:21:44.597667 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="b0400142-4fe7-4b74-822f-eee67c1bf20b" containerName="nova-scheduler-scheduler" containerID="cri-o://6dbedc7c01c8acb0ccd15939896c968f133747087aeeb55a190225bdd020f833" gracePeriod=30 Jan 26 13:21:44 crc kubenswrapper[4844]: I0126 13:21:44.623074 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 26 13:21:44 crc kubenswrapper[4844]: I0126 13:21:44.623346 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="48b824ba-48ac-4a25-85de-436c4dd6c016" containerName="nova-api-log" containerID="cri-o://07438575207e3019b35bc534e4b3f87278f65af77c874f7c95ce5a8451d36b00" gracePeriod=30 Jan 26 13:21:44 crc kubenswrapper[4844]: I0126 13:21:44.623495 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="48b824ba-48ac-4a25-85de-436c4dd6c016" containerName="nova-api-api" containerID="cri-o://f0dbe3890d03ca2f5388cece06a190c1028be58ed639c24ab71abbc06c3d71c1" gracePeriod=30 Jan 26 13:21:44 crc kubenswrapper[4844]: I0126 13:21:44.636567 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 13:21:45 crc kubenswrapper[4844]: I0126 13:21:45.324846 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="955c4df0-924d-439d-8a58-66f49e93cf44" path="/var/lib/kubelet/pods/955c4df0-924d-439d-8a58-66f49e93cf44/volumes" Jan 26 13:21:45 crc kubenswrapper[4844]: I0126 13:21:45.502630 4844 generic.go:334] "Generic (PLEG): container finished" podID="48b824ba-48ac-4a25-85de-436c4dd6c016" containerID="07438575207e3019b35bc534e4b3f87278f65af77c874f7c95ce5a8451d36b00" exitCode=143 Jan 26 13:21:45 crc kubenswrapper[4844]: I0126 13:21:45.502708 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"48b824ba-48ac-4a25-85de-436c4dd6c016","Type":"ContainerDied","Data":"07438575207e3019b35bc534e4b3f87278f65af77c874f7c95ce5a8451d36b00"} Jan 26 13:21:45 crc kubenswrapper[4844]: I0126 13:21:45.502818 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="f43ace26-ee90-431e-b8ad-cf31b93c7fe3" containerName="nova-metadata-log" containerID="cri-o://8de8bb4e9abf3b7f22ec32d21e2dcdcd059f684885f3008155d689f29a775cc3" gracePeriod=30 Jan 26 13:21:45 crc kubenswrapper[4844]: I0126 13:21:45.502908 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="f43ace26-ee90-431e-b8ad-cf31b93c7fe3" containerName="nova-metadata-metadata" containerID="cri-o://1667f1e72dba47ba014668f25f74f9896acf7e8783455c6e4fed0d57171dc24c" gracePeriod=30 Jan 26 13:21:46 crc kubenswrapper[4844]: I0126 13:21:46.127412 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 13:21:46 crc kubenswrapper[4844]: I0126 13:21:46.298847 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f43ace26-ee90-431e-b8ad-cf31b93c7fe3-combined-ca-bundle\") pod \"f43ace26-ee90-431e-b8ad-cf31b93c7fe3\" (UID: \"f43ace26-ee90-431e-b8ad-cf31b93c7fe3\") " Jan 26 13:21:46 crc kubenswrapper[4844]: I0126 13:21:46.298937 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b4dqk\" (UniqueName: \"kubernetes.io/projected/f43ace26-ee90-431e-b8ad-cf31b93c7fe3-kube-api-access-b4dqk\") pod \"f43ace26-ee90-431e-b8ad-cf31b93c7fe3\" (UID: \"f43ace26-ee90-431e-b8ad-cf31b93c7fe3\") " Jan 26 13:21:46 crc kubenswrapper[4844]: I0126 13:21:46.299023 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f43ace26-ee90-431e-b8ad-cf31b93c7fe3-logs\") pod \"f43ace26-ee90-431e-b8ad-cf31b93c7fe3\" (UID: \"f43ace26-ee90-431e-b8ad-cf31b93c7fe3\") " Jan 26 13:21:46 crc kubenswrapper[4844]: I0126 13:21:46.299115 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f43ace26-ee90-431e-b8ad-cf31b93c7fe3-config-data\") pod \"f43ace26-ee90-431e-b8ad-cf31b93c7fe3\" (UID: \"f43ace26-ee90-431e-b8ad-cf31b93c7fe3\") " Jan 26 13:21:46 crc kubenswrapper[4844]: I0126 13:21:46.299134 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f43ace26-ee90-431e-b8ad-cf31b93c7fe3-nova-metadata-tls-certs\") pod \"f43ace26-ee90-431e-b8ad-cf31b93c7fe3\" (UID: \"f43ace26-ee90-431e-b8ad-cf31b93c7fe3\") " Jan 26 13:21:46 crc kubenswrapper[4844]: I0126 13:21:46.301398 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f43ace26-ee90-431e-b8ad-cf31b93c7fe3-logs" (OuterVolumeSpecName: "logs") pod "f43ace26-ee90-431e-b8ad-cf31b93c7fe3" (UID: "f43ace26-ee90-431e-b8ad-cf31b93c7fe3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:21:46 crc kubenswrapper[4844]: I0126 13:21:46.309807 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f43ace26-ee90-431e-b8ad-cf31b93c7fe3-kube-api-access-b4dqk" (OuterVolumeSpecName: "kube-api-access-b4dqk") pod "f43ace26-ee90-431e-b8ad-cf31b93c7fe3" (UID: "f43ace26-ee90-431e-b8ad-cf31b93c7fe3"). InnerVolumeSpecName "kube-api-access-b4dqk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:21:46 crc kubenswrapper[4844]: I0126 13:21:46.364406 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f43ace26-ee90-431e-b8ad-cf31b93c7fe3-config-data" (OuterVolumeSpecName: "config-data") pod "f43ace26-ee90-431e-b8ad-cf31b93c7fe3" (UID: "f43ace26-ee90-431e-b8ad-cf31b93c7fe3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:21:46 crc kubenswrapper[4844]: I0126 13:21:46.364748 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f43ace26-ee90-431e-b8ad-cf31b93c7fe3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f43ace26-ee90-431e-b8ad-cf31b93c7fe3" (UID: "f43ace26-ee90-431e-b8ad-cf31b93c7fe3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:21:46 crc kubenswrapper[4844]: I0126 13:21:46.384048 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f43ace26-ee90-431e-b8ad-cf31b93c7fe3-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "f43ace26-ee90-431e-b8ad-cf31b93c7fe3" (UID: "f43ace26-ee90-431e-b8ad-cf31b93c7fe3"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:21:46 crc kubenswrapper[4844]: I0126 13:21:46.400944 4844 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f43ace26-ee90-431e-b8ad-cf31b93c7fe3-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 13:21:46 crc kubenswrapper[4844]: I0126 13:21:46.400981 4844 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f43ace26-ee90-431e-b8ad-cf31b93c7fe3-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 13:21:46 crc kubenswrapper[4844]: I0126 13:21:46.400991 4844 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f43ace26-ee90-431e-b8ad-cf31b93c7fe3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 13:21:46 crc kubenswrapper[4844]: I0126 13:21:46.401000 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b4dqk\" (UniqueName: \"kubernetes.io/projected/f43ace26-ee90-431e-b8ad-cf31b93c7fe3-kube-api-access-b4dqk\") on node \"crc\" DevicePath \"\"" Jan 26 13:21:46 crc kubenswrapper[4844]: I0126 13:21:46.401008 4844 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f43ace26-ee90-431e-b8ad-cf31b93c7fe3-logs\") on node \"crc\" DevicePath \"\"" Jan 26 13:21:46 crc kubenswrapper[4844]: I0126 13:21:46.513013 4844 generic.go:334] "Generic (PLEG): container finished" podID="0f37882c-17e3-4c70-a309-ee70392fed88" containerID="1bb4993fb205800439b0a8823ccb1d8840270fab753df601ee0cb69703f656d8" exitCode=0 Jan 26 13:21:46 crc kubenswrapper[4844]: I0126 13:21:46.513089 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-9gsdl" event={"ID":"0f37882c-17e3-4c70-a309-ee70392fed88","Type":"ContainerDied","Data":"1bb4993fb205800439b0a8823ccb1d8840270fab753df601ee0cb69703f656d8"} Jan 26 13:21:46 crc kubenswrapper[4844]: I0126 13:21:46.515795 4844 generic.go:334] "Generic (PLEG): container finished" podID="f43ace26-ee90-431e-b8ad-cf31b93c7fe3" containerID="1667f1e72dba47ba014668f25f74f9896acf7e8783455c6e4fed0d57171dc24c" exitCode=0 Jan 26 13:21:46 crc kubenswrapper[4844]: I0126 13:21:46.515832 4844 generic.go:334] "Generic (PLEG): container finished" podID="f43ace26-ee90-431e-b8ad-cf31b93c7fe3" containerID="8de8bb4e9abf3b7f22ec32d21e2dcdcd059f684885f3008155d689f29a775cc3" exitCode=143 Jan 26 13:21:46 crc kubenswrapper[4844]: I0126 13:21:46.515860 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f43ace26-ee90-431e-b8ad-cf31b93c7fe3","Type":"ContainerDied","Data":"1667f1e72dba47ba014668f25f74f9896acf7e8783455c6e4fed0d57171dc24c"} Jan 26 13:21:46 crc kubenswrapper[4844]: I0126 13:21:46.515873 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 13:21:46 crc kubenswrapper[4844]: I0126 13:21:46.515899 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f43ace26-ee90-431e-b8ad-cf31b93c7fe3","Type":"ContainerDied","Data":"8de8bb4e9abf3b7f22ec32d21e2dcdcd059f684885f3008155d689f29a775cc3"} Jan 26 13:21:46 crc kubenswrapper[4844]: I0126 13:21:46.515919 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f43ace26-ee90-431e-b8ad-cf31b93c7fe3","Type":"ContainerDied","Data":"214f14861e2f540b01e5f28353bec2065d1a0871c6c92e8cd1ecbd6b195f71a3"} Jan 26 13:21:46 crc kubenswrapper[4844]: I0126 13:21:46.515936 4844 scope.go:117] "RemoveContainer" containerID="1667f1e72dba47ba014668f25f74f9896acf7e8783455c6e4fed0d57171dc24c" Jan 26 13:21:46 crc kubenswrapper[4844]: I0126 13:21:46.544021 4844 scope.go:117] "RemoveContainer" containerID="8de8bb4e9abf3b7f22ec32d21e2dcdcd059f684885f3008155d689f29a775cc3" Jan 26 13:21:46 crc kubenswrapper[4844]: I0126 13:21:46.583676 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 13:21:46 crc kubenswrapper[4844]: I0126 13:21:46.583786 4844 scope.go:117] "RemoveContainer" containerID="1667f1e72dba47ba014668f25f74f9896acf7e8783455c6e4fed0d57171dc24c" Jan 26 13:21:46 crc kubenswrapper[4844]: E0126 13:21:46.585013 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1667f1e72dba47ba014668f25f74f9896acf7e8783455c6e4fed0d57171dc24c\": container with ID starting with 1667f1e72dba47ba014668f25f74f9896acf7e8783455c6e4fed0d57171dc24c not found: ID does not exist" containerID="1667f1e72dba47ba014668f25f74f9896acf7e8783455c6e4fed0d57171dc24c" Jan 26 13:21:46 crc kubenswrapper[4844]: I0126 13:21:46.585057 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1667f1e72dba47ba014668f25f74f9896acf7e8783455c6e4fed0d57171dc24c"} err="failed to get container status \"1667f1e72dba47ba014668f25f74f9896acf7e8783455c6e4fed0d57171dc24c\": rpc error: code = NotFound desc = could not find container \"1667f1e72dba47ba014668f25f74f9896acf7e8783455c6e4fed0d57171dc24c\": container with ID starting with 1667f1e72dba47ba014668f25f74f9896acf7e8783455c6e4fed0d57171dc24c not found: ID does not exist" Jan 26 13:21:46 crc kubenswrapper[4844]: I0126 13:21:46.585087 4844 scope.go:117] "RemoveContainer" containerID="8de8bb4e9abf3b7f22ec32d21e2dcdcd059f684885f3008155d689f29a775cc3" Jan 26 13:21:46 crc kubenswrapper[4844]: E0126 13:21:46.585426 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8de8bb4e9abf3b7f22ec32d21e2dcdcd059f684885f3008155d689f29a775cc3\": container with ID starting with 8de8bb4e9abf3b7f22ec32d21e2dcdcd059f684885f3008155d689f29a775cc3 not found: ID does not exist" containerID="8de8bb4e9abf3b7f22ec32d21e2dcdcd059f684885f3008155d689f29a775cc3" Jan 26 13:21:46 crc kubenswrapper[4844]: I0126 13:21:46.585453 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8de8bb4e9abf3b7f22ec32d21e2dcdcd059f684885f3008155d689f29a775cc3"} err="failed to get container status \"8de8bb4e9abf3b7f22ec32d21e2dcdcd059f684885f3008155d689f29a775cc3\": rpc error: code = NotFound desc = could not find container \"8de8bb4e9abf3b7f22ec32d21e2dcdcd059f684885f3008155d689f29a775cc3\": container with ID starting with 
8de8bb4e9abf3b7f22ec32d21e2dcdcd059f684885f3008155d689f29a775cc3 not found: ID does not exist" Jan 26 13:21:46 crc kubenswrapper[4844]: I0126 13:21:46.585473 4844 scope.go:117] "RemoveContainer" containerID="1667f1e72dba47ba014668f25f74f9896acf7e8783455c6e4fed0d57171dc24c" Jan 26 13:21:46 crc kubenswrapper[4844]: I0126 13:21:46.586199 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1667f1e72dba47ba014668f25f74f9896acf7e8783455c6e4fed0d57171dc24c"} err="failed to get container status \"1667f1e72dba47ba014668f25f74f9896acf7e8783455c6e4fed0d57171dc24c\": rpc error: code = NotFound desc = could not find container \"1667f1e72dba47ba014668f25f74f9896acf7e8783455c6e4fed0d57171dc24c\": container with ID starting with 1667f1e72dba47ba014668f25f74f9896acf7e8783455c6e4fed0d57171dc24c not found: ID does not exist" Jan 26 13:21:46 crc kubenswrapper[4844]: I0126 13:21:46.586245 4844 scope.go:117] "RemoveContainer" containerID="8de8bb4e9abf3b7f22ec32d21e2dcdcd059f684885f3008155d689f29a775cc3" Jan 26 13:21:46 crc kubenswrapper[4844]: I0126 13:21:46.586571 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8de8bb4e9abf3b7f22ec32d21e2dcdcd059f684885f3008155d689f29a775cc3"} err="failed to get container status \"8de8bb4e9abf3b7f22ec32d21e2dcdcd059f684885f3008155d689f29a775cc3\": rpc error: code = NotFound desc = could not find container \"8de8bb4e9abf3b7f22ec32d21e2dcdcd059f684885f3008155d689f29a775cc3\": container with ID starting with 8de8bb4e9abf3b7f22ec32d21e2dcdcd059f684885f3008155d689f29a775cc3 not found: ID does not exist" Jan 26 13:21:46 crc kubenswrapper[4844]: I0126 13:21:46.617140 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 13:21:46 crc kubenswrapper[4844]: I0126 13:21:46.626483 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 26 13:21:46 crc kubenswrapper[4844]: E0126 13:21:46.627659 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="955c4df0-924d-439d-8a58-66f49e93cf44" containerName="dnsmasq-dns" Jan 26 13:21:46 crc kubenswrapper[4844]: I0126 13:21:46.627704 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="955c4df0-924d-439d-8a58-66f49e93cf44" containerName="dnsmasq-dns" Jan 26 13:21:46 crc kubenswrapper[4844]: E0126 13:21:46.627718 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ac64bcd-c0e5-44c8-9c11-abede4806663" containerName="nova-manage" Jan 26 13:21:46 crc kubenswrapper[4844]: I0126 13:21:46.627726 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ac64bcd-c0e5-44c8-9c11-abede4806663" containerName="nova-manage" Jan 26 13:21:46 crc kubenswrapper[4844]: E0126 13:21:46.627752 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f43ace26-ee90-431e-b8ad-cf31b93c7fe3" containerName="nova-metadata-log" Jan 26 13:21:46 crc kubenswrapper[4844]: I0126 13:21:46.627760 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="f43ace26-ee90-431e-b8ad-cf31b93c7fe3" containerName="nova-metadata-log" Jan 26 13:21:46 crc kubenswrapper[4844]: E0126 13:21:46.627782 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="955c4df0-924d-439d-8a58-66f49e93cf44" containerName="init" Jan 26 13:21:46 crc kubenswrapper[4844]: I0126 13:21:46.627789 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="955c4df0-924d-439d-8a58-66f49e93cf44" containerName="init" Jan 26 13:21:46 crc kubenswrapper[4844]: E0126 13:21:46.627806 
4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f43ace26-ee90-431e-b8ad-cf31b93c7fe3" containerName="nova-metadata-metadata" Jan 26 13:21:46 crc kubenswrapper[4844]: I0126 13:21:46.627813 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="f43ace26-ee90-431e-b8ad-cf31b93c7fe3" containerName="nova-metadata-metadata" Jan 26 13:21:46 crc kubenswrapper[4844]: I0126 13:21:46.628207 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ac64bcd-c0e5-44c8-9c11-abede4806663" containerName="nova-manage" Jan 26 13:21:46 crc kubenswrapper[4844]: I0126 13:21:46.628227 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="955c4df0-924d-439d-8a58-66f49e93cf44" containerName="dnsmasq-dns" Jan 26 13:21:46 crc kubenswrapper[4844]: I0126 13:21:46.628238 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="f43ace26-ee90-431e-b8ad-cf31b93c7fe3" containerName="nova-metadata-log" Jan 26 13:21:46 crc kubenswrapper[4844]: I0126 13:21:46.628255 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="f43ace26-ee90-431e-b8ad-cf31b93c7fe3" containerName="nova-metadata-metadata" Jan 26 13:21:46 crc kubenswrapper[4844]: I0126 13:21:46.631537 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 13:21:46 crc kubenswrapper[4844]: I0126 13:21:46.635639 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 26 13:21:46 crc kubenswrapper[4844]: I0126 13:21:46.640120 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 26 13:21:46 crc kubenswrapper[4844]: I0126 13:21:46.652563 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 13:21:46 crc kubenswrapper[4844]: I0126 13:21:46.823226 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d0355b5-96ed-47bd-9d3e-25f2cbfebb67-config-data\") pod \"nova-metadata-0\" (UID: \"3d0355b5-96ed-47bd-9d3e-25f2cbfebb67\") " pod="openstack/nova-metadata-0" Jan 26 13:21:46 crc kubenswrapper[4844]: I0126 13:21:46.823298 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d0355b5-96ed-47bd-9d3e-25f2cbfebb67-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3d0355b5-96ed-47bd-9d3e-25f2cbfebb67\") " pod="openstack/nova-metadata-0" Jan 26 13:21:46 crc kubenswrapper[4844]: I0126 13:21:46.823390 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d0355b5-96ed-47bd-9d3e-25f2cbfebb67-logs\") pod \"nova-metadata-0\" (UID: \"3d0355b5-96ed-47bd-9d3e-25f2cbfebb67\") " pod="openstack/nova-metadata-0" Jan 26 13:21:46 crc kubenswrapper[4844]: I0126 13:21:46.823414 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d0355b5-96ed-47bd-9d3e-25f2cbfebb67-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3d0355b5-96ed-47bd-9d3e-25f2cbfebb67\") " pod="openstack/nova-metadata-0" Jan 26 13:21:46 crc kubenswrapper[4844]: I0126 13:21:46.823480 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jkjp\" (UniqueName: 
\"kubernetes.io/projected/3d0355b5-96ed-47bd-9d3e-25f2cbfebb67-kube-api-access-6jkjp\") pod \"nova-metadata-0\" (UID: \"3d0355b5-96ed-47bd-9d3e-25f2cbfebb67\") " pod="openstack/nova-metadata-0" Jan 26 13:21:46 crc kubenswrapper[4844]: I0126 13:21:46.926151 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d0355b5-96ed-47bd-9d3e-25f2cbfebb67-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3d0355b5-96ed-47bd-9d3e-25f2cbfebb67\") " pod="openstack/nova-metadata-0" Jan 26 13:21:46 crc kubenswrapper[4844]: I0126 13:21:46.926399 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d0355b5-96ed-47bd-9d3e-25f2cbfebb67-logs\") pod \"nova-metadata-0\" (UID: \"3d0355b5-96ed-47bd-9d3e-25f2cbfebb67\") " pod="openstack/nova-metadata-0" Jan 26 13:21:46 crc kubenswrapper[4844]: I0126 13:21:46.926466 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d0355b5-96ed-47bd-9d3e-25f2cbfebb67-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3d0355b5-96ed-47bd-9d3e-25f2cbfebb67\") " pod="openstack/nova-metadata-0" Jan 26 13:21:46 crc kubenswrapper[4844]: I0126 13:21:46.926665 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jkjp\" (UniqueName: \"kubernetes.io/projected/3d0355b5-96ed-47bd-9d3e-25f2cbfebb67-kube-api-access-6jkjp\") pod \"nova-metadata-0\" (UID: \"3d0355b5-96ed-47bd-9d3e-25f2cbfebb67\") " pod="openstack/nova-metadata-0" Jan 26 13:21:46 crc kubenswrapper[4844]: I0126 13:21:46.926847 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d0355b5-96ed-47bd-9d3e-25f2cbfebb67-config-data\") pod \"nova-metadata-0\" (UID: \"3d0355b5-96ed-47bd-9d3e-25f2cbfebb67\") " pod="openstack/nova-metadata-0" Jan 26 13:21:46 crc kubenswrapper[4844]: I0126 13:21:46.927263 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d0355b5-96ed-47bd-9d3e-25f2cbfebb67-logs\") pod \"nova-metadata-0\" (UID: \"3d0355b5-96ed-47bd-9d3e-25f2cbfebb67\") " pod="openstack/nova-metadata-0" Jan 26 13:21:46 crc kubenswrapper[4844]: I0126 13:21:46.931153 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d0355b5-96ed-47bd-9d3e-25f2cbfebb67-config-data\") pod \"nova-metadata-0\" (UID: \"3d0355b5-96ed-47bd-9d3e-25f2cbfebb67\") " pod="openstack/nova-metadata-0" Jan 26 13:21:46 crc kubenswrapper[4844]: I0126 13:21:46.931372 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d0355b5-96ed-47bd-9d3e-25f2cbfebb67-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3d0355b5-96ed-47bd-9d3e-25f2cbfebb67\") " pod="openstack/nova-metadata-0" Jan 26 13:21:46 crc kubenswrapper[4844]: I0126 13:21:46.935487 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d0355b5-96ed-47bd-9d3e-25f2cbfebb67-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3d0355b5-96ed-47bd-9d3e-25f2cbfebb67\") " pod="openstack/nova-metadata-0" Jan 26 13:21:46 crc kubenswrapper[4844]: I0126 13:21:46.947822 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-6jkjp\" (UniqueName: \"kubernetes.io/projected/3d0355b5-96ed-47bd-9d3e-25f2cbfebb67-kube-api-access-6jkjp\") pod \"nova-metadata-0\" (UID: \"3d0355b5-96ed-47bd-9d3e-25f2cbfebb67\") " pod="openstack/nova-metadata-0" Jan 26 13:21:46 crc kubenswrapper[4844]: I0126 13:21:46.992465 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 13:21:47 crc kubenswrapper[4844]: E0126 13:21:47.107408 4844 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6dbedc7c01c8acb0ccd15939896c968f133747087aeeb55a190225bdd020f833" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 26 13:21:47 crc kubenswrapper[4844]: E0126 13:21:47.109823 4844 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6dbedc7c01c8acb0ccd15939896c968f133747087aeeb55a190225bdd020f833" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 26 13:21:47 crc kubenswrapper[4844]: E0126 13:21:47.111372 4844 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6dbedc7c01c8acb0ccd15939896c968f133747087aeeb55a190225bdd020f833" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 26 13:21:47 crc kubenswrapper[4844]: E0126 13:21:47.111446 4844 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="b0400142-4fe7-4b74-822f-eee67c1bf20b" containerName="nova-scheduler-scheduler" Jan 26 13:21:47 crc kubenswrapper[4844]: I0126 13:21:47.333744 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f43ace26-ee90-431e-b8ad-cf31b93c7fe3" path="/var/lib/kubelet/pods/f43ace26-ee90-431e-b8ad-cf31b93c7fe3/volumes" Jan 26 13:21:47 crc kubenswrapper[4844]: I0126 13:21:47.510935 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 13:21:47 crc kubenswrapper[4844]: I0126 13:21:47.526212 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3d0355b5-96ed-47bd-9d3e-25f2cbfebb67","Type":"ContainerStarted","Data":"812c4a987be88c8f7cf5b337580367aa6704bd0c41d676e57074a89d671ac56c"} Jan 26 13:21:48 crc kubenswrapper[4844]: I0126 13:21:48.022635 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-9gsdl" Jan 26 13:21:48 crc kubenswrapper[4844]: I0126 13:21:48.159624 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f37882c-17e3-4c70-a309-ee70392fed88-scripts\") pod \"0f37882c-17e3-4c70-a309-ee70392fed88\" (UID: \"0f37882c-17e3-4c70-a309-ee70392fed88\") " Jan 26 13:21:48 crc kubenswrapper[4844]: I0126 13:21:48.159813 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f37882c-17e3-4c70-a309-ee70392fed88-combined-ca-bundle\") pod \"0f37882c-17e3-4c70-a309-ee70392fed88\" (UID: \"0f37882c-17e3-4c70-a309-ee70392fed88\") " Jan 26 13:21:48 crc kubenswrapper[4844]: I0126 13:21:48.159966 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bdnt4\" (UniqueName: \"kubernetes.io/projected/0f37882c-17e3-4c70-a309-ee70392fed88-kube-api-access-bdnt4\") pod \"0f37882c-17e3-4c70-a309-ee70392fed88\" (UID: \"0f37882c-17e3-4c70-a309-ee70392fed88\") " Jan 26 13:21:48 crc kubenswrapper[4844]: I0126 13:21:48.160045 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f37882c-17e3-4c70-a309-ee70392fed88-config-data\") pod \"0f37882c-17e3-4c70-a309-ee70392fed88\" (UID: \"0f37882c-17e3-4c70-a309-ee70392fed88\") " Jan 26 13:21:48 crc kubenswrapper[4844]: I0126 13:21:48.164398 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f37882c-17e3-4c70-a309-ee70392fed88-scripts" (OuterVolumeSpecName: "scripts") pod "0f37882c-17e3-4c70-a309-ee70392fed88" (UID: "0f37882c-17e3-4c70-a309-ee70392fed88"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:21:48 crc kubenswrapper[4844]: I0126 13:21:48.166749 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f37882c-17e3-4c70-a309-ee70392fed88-kube-api-access-bdnt4" (OuterVolumeSpecName: "kube-api-access-bdnt4") pod "0f37882c-17e3-4c70-a309-ee70392fed88" (UID: "0f37882c-17e3-4c70-a309-ee70392fed88"). InnerVolumeSpecName "kube-api-access-bdnt4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:21:48 crc kubenswrapper[4844]: I0126 13:21:48.191475 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f37882c-17e3-4c70-a309-ee70392fed88-config-data" (OuterVolumeSpecName: "config-data") pod "0f37882c-17e3-4c70-a309-ee70392fed88" (UID: "0f37882c-17e3-4c70-a309-ee70392fed88"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:21:48 crc kubenswrapper[4844]: I0126 13:21:48.194718 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f37882c-17e3-4c70-a309-ee70392fed88-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0f37882c-17e3-4c70-a309-ee70392fed88" (UID: "0f37882c-17e3-4c70-a309-ee70392fed88"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:21:48 crc kubenswrapper[4844]: I0126 13:21:48.200807 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 26 13:21:48 crc kubenswrapper[4844]: I0126 13:21:48.262053 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48b824ba-48ac-4a25-85de-436c4dd6c016-config-data\") pod \"48b824ba-48ac-4a25-85de-436c4dd6c016\" (UID: \"48b824ba-48ac-4a25-85de-436c4dd6c016\") " Jan 26 13:21:48 crc kubenswrapper[4844]: I0126 13:21:48.262234 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48b824ba-48ac-4a25-85de-436c4dd6c016-combined-ca-bundle\") pod \"48b824ba-48ac-4a25-85de-436c4dd6c016\" (UID: \"48b824ba-48ac-4a25-85de-436c4dd6c016\") " Jan 26 13:21:48 crc kubenswrapper[4844]: I0126 13:21:48.262320 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m9k2v\" (UniqueName: \"kubernetes.io/projected/48b824ba-48ac-4a25-85de-436c4dd6c016-kube-api-access-m9k2v\") pod \"48b824ba-48ac-4a25-85de-436c4dd6c016\" (UID: \"48b824ba-48ac-4a25-85de-436c4dd6c016\") " Jan 26 13:21:48 crc kubenswrapper[4844]: I0126 13:21:48.262375 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/48b824ba-48ac-4a25-85de-436c4dd6c016-logs\") pod \"48b824ba-48ac-4a25-85de-436c4dd6c016\" (UID: \"48b824ba-48ac-4a25-85de-436c4dd6c016\") " Jan 26 13:21:48 crc kubenswrapper[4844]: I0126 13:21:48.262967 4844 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f37882c-17e3-4c70-a309-ee70392fed88-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 13:21:48 crc kubenswrapper[4844]: I0126 13:21:48.262995 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bdnt4\" (UniqueName: \"kubernetes.io/projected/0f37882c-17e3-4c70-a309-ee70392fed88-kube-api-access-bdnt4\") on node \"crc\" DevicePath \"\"" Jan 26 13:21:48 crc kubenswrapper[4844]: I0126 13:21:48.263010 4844 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f37882c-17e3-4c70-a309-ee70392fed88-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 13:21:48 crc kubenswrapper[4844]: I0126 13:21:48.263021 4844 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f37882c-17e3-4c70-a309-ee70392fed88-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 13:21:48 crc kubenswrapper[4844]: I0126 13:21:48.263528 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48b824ba-48ac-4a25-85de-436c4dd6c016-logs" (OuterVolumeSpecName: "logs") pod "48b824ba-48ac-4a25-85de-436c4dd6c016" (UID: "48b824ba-48ac-4a25-85de-436c4dd6c016"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:21:48 crc kubenswrapper[4844]: I0126 13:21:48.265417 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48b824ba-48ac-4a25-85de-436c4dd6c016-kube-api-access-m9k2v" (OuterVolumeSpecName: "kube-api-access-m9k2v") pod "48b824ba-48ac-4a25-85de-436c4dd6c016" (UID: "48b824ba-48ac-4a25-85de-436c4dd6c016"). InnerVolumeSpecName "kube-api-access-m9k2v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:21:48 crc kubenswrapper[4844]: I0126 13:21:48.286793 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48b824ba-48ac-4a25-85de-436c4dd6c016-config-data" (OuterVolumeSpecName: "config-data") pod "48b824ba-48ac-4a25-85de-436c4dd6c016" (UID: "48b824ba-48ac-4a25-85de-436c4dd6c016"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:21:48 crc kubenswrapper[4844]: I0126 13:21:48.296899 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48b824ba-48ac-4a25-85de-436c4dd6c016-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "48b824ba-48ac-4a25-85de-436c4dd6c016" (UID: "48b824ba-48ac-4a25-85de-436c4dd6c016"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:21:48 crc kubenswrapper[4844]: I0126 13:21:48.366530 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m9k2v\" (UniqueName: \"kubernetes.io/projected/48b824ba-48ac-4a25-85de-436c4dd6c016-kube-api-access-m9k2v\") on node \"crc\" DevicePath \"\"" Jan 26 13:21:48 crc kubenswrapper[4844]: I0126 13:21:48.366576 4844 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/48b824ba-48ac-4a25-85de-436c4dd6c016-logs\") on node \"crc\" DevicePath \"\"" Jan 26 13:21:48 crc kubenswrapper[4844]: I0126 13:21:48.366617 4844 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48b824ba-48ac-4a25-85de-436c4dd6c016-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 13:21:48 crc kubenswrapper[4844]: I0126 13:21:48.366639 4844 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48b824ba-48ac-4a25-85de-436c4dd6c016-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 13:21:48 crc kubenswrapper[4844]: I0126 13:21:48.536792 4844 generic.go:334] "Generic (PLEG): container finished" podID="b0400142-4fe7-4b74-822f-eee67c1bf20b" containerID="6dbedc7c01c8acb0ccd15939896c968f133747087aeeb55a190225bdd020f833" exitCode=0 Jan 26 13:21:48 crc kubenswrapper[4844]: I0126 13:21:48.537002 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b0400142-4fe7-4b74-822f-eee67c1bf20b","Type":"ContainerDied","Data":"6dbedc7c01c8acb0ccd15939896c968f133747087aeeb55a190225bdd020f833"} Jan 26 13:21:48 crc kubenswrapper[4844]: I0126 13:21:48.538759 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-9gsdl" Jan 26 13:21:48 crc kubenswrapper[4844]: I0126 13:21:48.538775 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-9gsdl" event={"ID":"0f37882c-17e3-4c70-a309-ee70392fed88","Type":"ContainerDied","Data":"29f27a68ca66e0345bf83d223bc44520bcc88d1f130147050c0d95c54c8304cf"} Jan 26 13:21:48 crc kubenswrapper[4844]: I0126 13:21:48.538805 4844 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="29f27a68ca66e0345bf83d223bc44520bcc88d1f130147050c0d95c54c8304cf" Jan 26 13:21:48 crc kubenswrapper[4844]: I0126 13:21:48.541151 4844 generic.go:334] "Generic (PLEG): container finished" podID="48b824ba-48ac-4a25-85de-436c4dd6c016" containerID="f0dbe3890d03ca2f5388cece06a190c1028be58ed639c24ab71abbc06c3d71c1" exitCode=0 Jan 26 13:21:48 crc kubenswrapper[4844]: I0126 13:21:48.541228 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 26 13:21:48 crc kubenswrapper[4844]: I0126 13:21:48.541231 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"48b824ba-48ac-4a25-85de-436c4dd6c016","Type":"ContainerDied","Data":"f0dbe3890d03ca2f5388cece06a190c1028be58ed639c24ab71abbc06c3d71c1"} Jan 26 13:21:48 crc kubenswrapper[4844]: I0126 13:21:48.541353 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"48b824ba-48ac-4a25-85de-436c4dd6c016","Type":"ContainerDied","Data":"e0bff44e2c85a247483f7bf9a7b55cb2007b6c7f16dd0085b196e0786322e2a6"} Jan 26 13:21:48 crc kubenswrapper[4844]: I0126 13:21:48.541373 4844 scope.go:117] "RemoveContainer" containerID="f0dbe3890d03ca2f5388cece06a190c1028be58ed639c24ab71abbc06c3d71c1" Jan 26 13:21:48 crc kubenswrapper[4844]: I0126 13:21:48.545395 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3d0355b5-96ed-47bd-9d3e-25f2cbfebb67","Type":"ContainerStarted","Data":"0b3e9cfd506d1d008fe20e0b66a3b7fe3232162f525567e8765b778453fc42f5"} Jan 26 13:21:48 crc kubenswrapper[4844]: I0126 13:21:48.545440 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3d0355b5-96ed-47bd-9d3e-25f2cbfebb67","Type":"ContainerStarted","Data":"bb906dacda948788b140e028e27afb181f6ba4bf6c363c83ef3924519bb24ea8"} Jan 26 13:21:48 crc kubenswrapper[4844]: I0126 13:21:48.584508 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.584487536 podStartE2EDuration="2.584487536s" podCreationTimestamp="2026-01-26 13:21:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 13:21:48.572379863 +0000 UTC m=+2285.505747475" watchObservedRunningTime="2026-01-26 13:21:48.584487536 +0000 UTC m=+2285.517855148" Jan 26 13:21:48 crc kubenswrapper[4844]: I0126 13:21:48.607114 4844 scope.go:117] "RemoveContainer" containerID="07438575207e3019b35bc534e4b3f87278f65af77c874f7c95ce5a8451d36b00" Jan 26 13:21:48 crc kubenswrapper[4844]: I0126 13:21:48.620273 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 26 13:21:48 crc kubenswrapper[4844]: I0126 13:21:48.645688 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 26 13:21:48 crc kubenswrapper[4844]: I0126 13:21:48.661676 4844 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/nova-api-0"] Jan 26 13:21:48 crc kubenswrapper[4844]: E0126 13:21:48.662239 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f37882c-17e3-4c70-a309-ee70392fed88" containerName="nova-cell1-conductor-db-sync" Jan 26 13:21:48 crc kubenswrapper[4844]: I0126 13:21:48.662258 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f37882c-17e3-4c70-a309-ee70392fed88" containerName="nova-cell1-conductor-db-sync" Jan 26 13:21:48 crc kubenswrapper[4844]: E0126 13:21:48.662279 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48b824ba-48ac-4a25-85de-436c4dd6c016" containerName="nova-api-api" Jan 26 13:21:48 crc kubenswrapper[4844]: I0126 13:21:48.662287 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="48b824ba-48ac-4a25-85de-436c4dd6c016" containerName="nova-api-api" Jan 26 13:21:48 crc kubenswrapper[4844]: E0126 13:21:48.662332 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48b824ba-48ac-4a25-85de-436c4dd6c016" containerName="nova-api-log" Jan 26 13:21:48 crc kubenswrapper[4844]: I0126 13:21:48.662341 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="48b824ba-48ac-4a25-85de-436c4dd6c016" containerName="nova-api-log" Jan 26 13:21:48 crc kubenswrapper[4844]: I0126 13:21:48.662583 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="48b824ba-48ac-4a25-85de-436c4dd6c016" containerName="nova-api-log" Jan 26 13:21:48 crc kubenswrapper[4844]: I0126 13:21:48.662630 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f37882c-17e3-4c70-a309-ee70392fed88" containerName="nova-cell1-conductor-db-sync" Jan 26 13:21:48 crc kubenswrapper[4844]: I0126 13:21:48.662649 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="48b824ba-48ac-4a25-85de-436c4dd6c016" containerName="nova-api-api" Jan 26 13:21:48 crc kubenswrapper[4844]: I0126 13:21:48.664015 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 26 13:21:48 crc kubenswrapper[4844]: I0126 13:21:48.670504 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 26 13:21:48 crc kubenswrapper[4844]: I0126 13:21:48.673451 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 26 13:21:48 crc kubenswrapper[4844]: I0126 13:21:48.686252 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 26 13:21:48 crc kubenswrapper[4844]: I0126 13:21:48.687865 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 26 13:21:48 crc kubenswrapper[4844]: I0126 13:21:48.691773 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 26 13:21:48 crc kubenswrapper[4844]: I0126 13:21:48.696244 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 26 13:21:48 crc kubenswrapper[4844]: I0126 13:21:48.716960 4844 scope.go:117] "RemoveContainer" containerID="f0dbe3890d03ca2f5388cece06a190c1028be58ed639c24ab71abbc06c3d71c1" Jan 26 13:21:48 crc kubenswrapper[4844]: E0126 13:21:48.718315 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0dbe3890d03ca2f5388cece06a190c1028be58ed639c24ab71abbc06c3d71c1\": container with ID starting with f0dbe3890d03ca2f5388cece06a190c1028be58ed639c24ab71abbc06c3d71c1 not found: ID does not exist" containerID="f0dbe3890d03ca2f5388cece06a190c1028be58ed639c24ab71abbc06c3d71c1" Jan 26 13:21:48 crc kubenswrapper[4844]: I0126 13:21:48.718370 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0dbe3890d03ca2f5388cece06a190c1028be58ed639c24ab71abbc06c3d71c1"} err="failed to get container status \"f0dbe3890d03ca2f5388cece06a190c1028be58ed639c24ab71abbc06c3d71c1\": rpc error: code = NotFound desc = could not find container \"f0dbe3890d03ca2f5388cece06a190c1028be58ed639c24ab71abbc06c3d71c1\": container with ID starting with f0dbe3890d03ca2f5388cece06a190c1028be58ed639c24ab71abbc06c3d71c1 not found: ID does not exist" Jan 26 13:21:48 crc kubenswrapper[4844]: I0126 13:21:48.718403 4844 scope.go:117] "RemoveContainer" containerID="07438575207e3019b35bc534e4b3f87278f65af77c874f7c95ce5a8451d36b00" Jan 26 13:21:48 crc kubenswrapper[4844]: E0126 13:21:48.718786 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07438575207e3019b35bc534e4b3f87278f65af77c874f7c95ce5a8451d36b00\": container with ID starting with 07438575207e3019b35bc534e4b3f87278f65af77c874f7c95ce5a8451d36b00 not found: ID does not exist" containerID="07438575207e3019b35bc534e4b3f87278f65af77c874f7c95ce5a8451d36b00" Jan 26 13:21:48 crc kubenswrapper[4844]: I0126 13:21:48.718832 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07438575207e3019b35bc534e4b3f87278f65af77c874f7c95ce5a8451d36b00"} err="failed to get container status \"07438575207e3019b35bc534e4b3f87278f65af77c874f7c95ce5a8451d36b00\": rpc error: code = NotFound desc = could not find container \"07438575207e3019b35bc534e4b3f87278f65af77c874f7c95ce5a8451d36b00\": container with ID starting with 07438575207e3019b35bc534e4b3f87278f65af77c874f7c95ce5a8451d36b00 not found: ID does not exist" Jan 26 13:21:48 crc kubenswrapper[4844]: I0126 13:21:48.774026 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75ed6ac2-9ee4-44a1-bef1-0aee2eda244a-config-data\") pod \"nova-api-0\" (UID: \"75ed6ac2-9ee4-44a1-bef1-0aee2eda244a\") " pod="openstack/nova-api-0" Jan 26 13:21:48 crc kubenswrapper[4844]: I0126 13:21:48.774080 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75ed6ac2-9ee4-44a1-bef1-0aee2eda244a-combined-ca-bundle\") pod \"nova-api-0\" (UID: 
\"75ed6ac2-9ee4-44a1-bef1-0aee2eda244a\") " pod="openstack/nova-api-0" Jan 26 13:21:48 crc kubenswrapper[4844]: I0126 13:21:48.774151 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ft9b\" (UniqueName: \"kubernetes.io/projected/75ed6ac2-9ee4-44a1-bef1-0aee2eda244a-kube-api-access-9ft9b\") pod \"nova-api-0\" (UID: \"75ed6ac2-9ee4-44a1-bef1-0aee2eda244a\") " pod="openstack/nova-api-0" Jan 26 13:21:48 crc kubenswrapper[4844]: I0126 13:21:48.774200 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75ed6ac2-9ee4-44a1-bef1-0aee2eda244a-logs\") pod \"nova-api-0\" (UID: \"75ed6ac2-9ee4-44a1-bef1-0aee2eda244a\") " pod="openstack/nova-api-0" Jan 26 13:21:48 crc kubenswrapper[4844]: I0126 13:21:48.876581 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9ft9b\" (UniqueName: \"kubernetes.io/projected/75ed6ac2-9ee4-44a1-bef1-0aee2eda244a-kube-api-access-9ft9b\") pod \"nova-api-0\" (UID: \"75ed6ac2-9ee4-44a1-bef1-0aee2eda244a\") " pod="openstack/nova-api-0" Jan 26 13:21:48 crc kubenswrapper[4844]: I0126 13:21:48.876720 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75ed6ac2-9ee4-44a1-bef1-0aee2eda244a-logs\") pod \"nova-api-0\" (UID: \"75ed6ac2-9ee4-44a1-bef1-0aee2eda244a\") " pod="openstack/nova-api-0" Jan 26 13:21:48 crc kubenswrapper[4844]: I0126 13:21:48.876817 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc7e97d6-1a33-4c98-87bb-6c4d451121b6-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"dc7e97d6-1a33-4c98-87bb-6c4d451121b6\") " pod="openstack/nova-cell1-conductor-0" Jan 26 13:21:48 crc kubenswrapper[4844]: I0126 13:21:48.877108 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc7e97d6-1a33-4c98-87bb-6c4d451121b6-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"dc7e97d6-1a33-4c98-87bb-6c4d451121b6\") " pod="openstack/nova-cell1-conductor-0" Jan 26 13:21:48 crc kubenswrapper[4844]: I0126 13:21:48.877201 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ct67j\" (UniqueName: \"kubernetes.io/projected/dc7e97d6-1a33-4c98-87bb-6c4d451121b6-kube-api-access-ct67j\") pod \"nova-cell1-conductor-0\" (UID: \"dc7e97d6-1a33-4c98-87bb-6c4d451121b6\") " pod="openstack/nova-cell1-conductor-0" Jan 26 13:21:48 crc kubenswrapper[4844]: I0126 13:21:48.877230 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75ed6ac2-9ee4-44a1-bef1-0aee2eda244a-config-data\") pod \"nova-api-0\" (UID: \"75ed6ac2-9ee4-44a1-bef1-0aee2eda244a\") " pod="openstack/nova-api-0" Jan 26 13:21:48 crc kubenswrapper[4844]: I0126 13:21:48.877330 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75ed6ac2-9ee4-44a1-bef1-0aee2eda244a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"75ed6ac2-9ee4-44a1-bef1-0aee2eda244a\") " pod="openstack/nova-api-0" Jan 26 13:21:48 crc kubenswrapper[4844]: I0126 13:21:48.878753 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/75ed6ac2-9ee4-44a1-bef1-0aee2eda244a-logs\") pod \"nova-api-0\" (UID: \"75ed6ac2-9ee4-44a1-bef1-0aee2eda244a\") " pod="openstack/nova-api-0" Jan 26 13:21:48 crc kubenswrapper[4844]: I0126 13:21:48.898302 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ft9b\" (UniqueName: \"kubernetes.io/projected/75ed6ac2-9ee4-44a1-bef1-0aee2eda244a-kube-api-access-9ft9b\") pod \"nova-api-0\" (UID: \"75ed6ac2-9ee4-44a1-bef1-0aee2eda244a\") " pod="openstack/nova-api-0" Jan 26 13:21:48 crc kubenswrapper[4844]: I0126 13:21:48.900136 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75ed6ac2-9ee4-44a1-bef1-0aee2eda244a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"75ed6ac2-9ee4-44a1-bef1-0aee2eda244a\") " pod="openstack/nova-api-0" Jan 26 13:21:48 crc kubenswrapper[4844]: I0126 13:21:48.903661 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75ed6ac2-9ee4-44a1-bef1-0aee2eda244a-config-data\") pod \"nova-api-0\" (UID: \"75ed6ac2-9ee4-44a1-bef1-0aee2eda244a\") " pod="openstack/nova-api-0" Jan 26 13:21:48 crc kubenswrapper[4844]: I0126 13:21:48.971407 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 13:21:48 crc kubenswrapper[4844]: I0126 13:21:48.978762 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc7e97d6-1a33-4c98-87bb-6c4d451121b6-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"dc7e97d6-1a33-4c98-87bb-6c4d451121b6\") " pod="openstack/nova-cell1-conductor-0" Jan 26 13:21:48 crc kubenswrapper[4844]: I0126 13:21:48.978805 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ct67j\" (UniqueName: \"kubernetes.io/projected/dc7e97d6-1a33-4c98-87bb-6c4d451121b6-kube-api-access-ct67j\") pod \"nova-cell1-conductor-0\" (UID: \"dc7e97d6-1a33-4c98-87bb-6c4d451121b6\") " pod="openstack/nova-cell1-conductor-0" Jan 26 13:21:48 crc kubenswrapper[4844]: I0126 13:21:48.978897 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc7e97d6-1a33-4c98-87bb-6c4d451121b6-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"dc7e97d6-1a33-4c98-87bb-6c4d451121b6\") " pod="openstack/nova-cell1-conductor-0" Jan 26 13:21:48 crc kubenswrapper[4844]: I0126 13:21:48.982374 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc7e97d6-1a33-4c98-87bb-6c4d451121b6-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"dc7e97d6-1a33-4c98-87bb-6c4d451121b6\") " pod="openstack/nova-cell1-conductor-0" Jan 26 13:21:48 crc kubenswrapper[4844]: I0126 13:21:48.982847 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc7e97d6-1a33-4c98-87bb-6c4d451121b6-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"dc7e97d6-1a33-4c98-87bb-6c4d451121b6\") " pod="openstack/nova-cell1-conductor-0" Jan 26 13:21:49 crc kubenswrapper[4844]: I0126 13:21:49.010544 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ct67j\" (UniqueName: \"kubernetes.io/projected/dc7e97d6-1a33-4c98-87bb-6c4d451121b6-kube-api-access-ct67j\") pod 
\"nova-cell1-conductor-0\" (UID: \"dc7e97d6-1a33-4c98-87bb-6c4d451121b6\") " pod="openstack/nova-cell1-conductor-0" Jan 26 13:21:49 crc kubenswrapper[4844]: I0126 13:21:49.013526 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 26 13:21:49 crc kubenswrapper[4844]: I0126 13:21:49.026788 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 26 13:21:49 crc kubenswrapper[4844]: I0126 13:21:49.080406 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2bgcn\" (UniqueName: \"kubernetes.io/projected/b0400142-4fe7-4b74-822f-eee67c1bf20b-kube-api-access-2bgcn\") pod \"b0400142-4fe7-4b74-822f-eee67c1bf20b\" (UID: \"b0400142-4fe7-4b74-822f-eee67c1bf20b\") " Jan 26 13:21:49 crc kubenswrapper[4844]: I0126 13:21:49.080450 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0400142-4fe7-4b74-822f-eee67c1bf20b-config-data\") pod \"b0400142-4fe7-4b74-822f-eee67c1bf20b\" (UID: \"b0400142-4fe7-4b74-822f-eee67c1bf20b\") " Jan 26 13:21:49 crc kubenswrapper[4844]: I0126 13:21:49.080728 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0400142-4fe7-4b74-822f-eee67c1bf20b-combined-ca-bundle\") pod \"b0400142-4fe7-4b74-822f-eee67c1bf20b\" (UID: \"b0400142-4fe7-4b74-822f-eee67c1bf20b\") " Jan 26 13:21:49 crc kubenswrapper[4844]: I0126 13:21:49.097676 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0400142-4fe7-4b74-822f-eee67c1bf20b-kube-api-access-2bgcn" (OuterVolumeSpecName: "kube-api-access-2bgcn") pod "b0400142-4fe7-4b74-822f-eee67c1bf20b" (UID: "b0400142-4fe7-4b74-822f-eee67c1bf20b"). InnerVolumeSpecName "kube-api-access-2bgcn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:21:49 crc kubenswrapper[4844]: I0126 13:21:49.113821 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0400142-4fe7-4b74-822f-eee67c1bf20b-config-data" (OuterVolumeSpecName: "config-data") pod "b0400142-4fe7-4b74-822f-eee67c1bf20b" (UID: "b0400142-4fe7-4b74-822f-eee67c1bf20b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:21:49 crc kubenswrapper[4844]: I0126 13:21:49.114922 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0400142-4fe7-4b74-822f-eee67c1bf20b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b0400142-4fe7-4b74-822f-eee67c1bf20b" (UID: "b0400142-4fe7-4b74-822f-eee67c1bf20b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:21:49 crc kubenswrapper[4844]: I0126 13:21:49.183209 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2bgcn\" (UniqueName: \"kubernetes.io/projected/b0400142-4fe7-4b74-822f-eee67c1bf20b-kube-api-access-2bgcn\") on node \"crc\" DevicePath \"\"" Jan 26 13:21:49 crc kubenswrapper[4844]: I0126 13:21:49.183454 4844 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0400142-4fe7-4b74-822f-eee67c1bf20b-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 13:21:49 crc kubenswrapper[4844]: I0126 13:21:49.183471 4844 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0400142-4fe7-4b74-822f-eee67c1bf20b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 13:21:49 crc kubenswrapper[4844]: I0126 13:21:49.325622 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48b824ba-48ac-4a25-85de-436c4dd6c016" path="/var/lib/kubelet/pods/48b824ba-48ac-4a25-85de-436c4dd6c016/volumes" Jan 26 13:21:49 crc kubenswrapper[4844]: I0126 13:21:49.538406 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 26 13:21:49 crc kubenswrapper[4844]: I0126 13:21:49.561645 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b0400142-4fe7-4b74-822f-eee67c1bf20b","Type":"ContainerDied","Data":"98d220bfbe1aa32cc412dd383ab85d675f5de3c566ed1251dfd4e4f8a21a1ed1"} Jan 26 13:21:49 crc kubenswrapper[4844]: I0126 13:21:49.561699 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 13:21:49 crc kubenswrapper[4844]: I0126 13:21:49.561726 4844 scope.go:117] "RemoveContainer" containerID="6dbedc7c01c8acb0ccd15939896c968f133747087aeeb55a190225bdd020f833" Jan 26 13:21:49 crc kubenswrapper[4844]: I0126 13:21:49.564070 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"dc7e97d6-1a33-4c98-87bb-6c4d451121b6","Type":"ContainerStarted","Data":"24ed1dec074423488a7a1e093f7e2458567984befac91fe93f6a08a6c66aa0b6"} Jan 26 13:21:49 crc kubenswrapper[4844]: I0126 13:21:49.616646 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 13:21:49 crc kubenswrapper[4844]: I0126 13:21:49.628846 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 13:21:49 crc kubenswrapper[4844]: I0126 13:21:49.649979 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 26 13:21:49 crc kubenswrapper[4844]: I0126 13:21:49.663653 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 13:21:49 crc kubenswrapper[4844]: E0126 13:21:49.664051 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0400142-4fe7-4b74-822f-eee67c1bf20b" containerName="nova-scheduler-scheduler" Jan 26 13:21:49 crc kubenswrapper[4844]: I0126 13:21:49.664069 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0400142-4fe7-4b74-822f-eee67c1bf20b" containerName="nova-scheduler-scheduler" Jan 26 13:21:49 crc kubenswrapper[4844]: I0126 13:21:49.664277 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0400142-4fe7-4b74-822f-eee67c1bf20b" containerName="nova-scheduler-scheduler" Jan 26 13:21:49 crc kubenswrapper[4844]: I0126 13:21:49.665066 4844 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 13:21:49 crc kubenswrapper[4844]: I0126 13:21:49.668731 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 26 13:21:49 crc kubenswrapper[4844]: I0126 13:21:49.696178 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 13:21:49 crc kubenswrapper[4844]: I0126 13:21:49.796104 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7c1f674-6004-46ed-ad61-cbad8e9cb195-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a7c1f674-6004-46ed-ad61-cbad8e9cb195\") " pod="openstack/nova-scheduler-0" Jan 26 13:21:49 crc kubenswrapper[4844]: I0126 13:21:49.796223 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dcdk\" (UniqueName: \"kubernetes.io/projected/a7c1f674-6004-46ed-ad61-cbad8e9cb195-kube-api-access-9dcdk\") pod \"nova-scheduler-0\" (UID: \"a7c1f674-6004-46ed-ad61-cbad8e9cb195\") " pod="openstack/nova-scheduler-0" Jan 26 13:21:49 crc kubenswrapper[4844]: I0126 13:21:49.796304 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7c1f674-6004-46ed-ad61-cbad8e9cb195-config-data\") pod \"nova-scheduler-0\" (UID: \"a7c1f674-6004-46ed-ad61-cbad8e9cb195\") " pod="openstack/nova-scheduler-0" Jan 26 13:21:49 crc kubenswrapper[4844]: I0126 13:21:49.897610 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7c1f674-6004-46ed-ad61-cbad8e9cb195-config-data\") pod \"nova-scheduler-0\" (UID: \"a7c1f674-6004-46ed-ad61-cbad8e9cb195\") " pod="openstack/nova-scheduler-0" Jan 26 13:21:49 crc kubenswrapper[4844]: I0126 13:21:49.897726 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7c1f674-6004-46ed-ad61-cbad8e9cb195-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a7c1f674-6004-46ed-ad61-cbad8e9cb195\") " pod="openstack/nova-scheduler-0" Jan 26 13:21:49 crc kubenswrapper[4844]: I0126 13:21:49.897783 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dcdk\" (UniqueName: \"kubernetes.io/projected/a7c1f674-6004-46ed-ad61-cbad8e9cb195-kube-api-access-9dcdk\") pod \"nova-scheduler-0\" (UID: \"a7c1f674-6004-46ed-ad61-cbad8e9cb195\") " pod="openstack/nova-scheduler-0" Jan 26 13:21:49 crc kubenswrapper[4844]: I0126 13:21:49.903366 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7c1f674-6004-46ed-ad61-cbad8e9cb195-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a7c1f674-6004-46ed-ad61-cbad8e9cb195\") " pod="openstack/nova-scheduler-0" Jan 26 13:21:49 crc kubenswrapper[4844]: I0126 13:21:49.903918 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7c1f674-6004-46ed-ad61-cbad8e9cb195-config-data\") pod \"nova-scheduler-0\" (UID: \"a7c1f674-6004-46ed-ad61-cbad8e9cb195\") " pod="openstack/nova-scheduler-0" Jan 26 13:21:49 crc kubenswrapper[4844]: I0126 13:21:49.912749 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dcdk\" 
(UniqueName: \"kubernetes.io/projected/a7c1f674-6004-46ed-ad61-cbad8e9cb195-kube-api-access-9dcdk\") pod \"nova-scheduler-0\" (UID: \"a7c1f674-6004-46ed-ad61-cbad8e9cb195\") " pod="openstack/nova-scheduler-0" Jan 26 13:21:50 crc kubenswrapper[4844]: I0126 13:21:50.100185 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 13:21:50 crc kubenswrapper[4844]: I0126 13:21:50.576419 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"dc7e97d6-1a33-4c98-87bb-6c4d451121b6","Type":"ContainerStarted","Data":"23ceb6f00f840563fe887ed127e07403e1172cb082cfb2f0efa9c95ab6306b11"} Jan 26 13:21:50 crc kubenswrapper[4844]: I0126 13:21:50.576839 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 26 13:21:50 crc kubenswrapper[4844]: I0126 13:21:50.585881 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"75ed6ac2-9ee4-44a1-bef1-0aee2eda244a","Type":"ContainerStarted","Data":"c7e82dc15a0d63844059c67efc32d078b7e8f4863e63c8006e6e67513233cc19"} Jan 26 13:21:50 crc kubenswrapper[4844]: I0126 13:21:50.585950 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"75ed6ac2-9ee4-44a1-bef1-0aee2eda244a","Type":"ContainerStarted","Data":"5734668b21517868750643618ba5f82624f66a29accbe6460a39ce71e8d82fd3"} Jan 26 13:21:50 crc kubenswrapper[4844]: I0126 13:21:50.585972 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"75ed6ac2-9ee4-44a1-bef1-0aee2eda244a","Type":"ContainerStarted","Data":"fbd806fd859d49e3185a5efb4af4defd2d5a3c5524028df6ad0e4102694dea54"} Jan 26 13:21:50 crc kubenswrapper[4844]: I0126 13:21:50.605188 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 13:21:50 crc kubenswrapper[4844]: I0126 13:21:50.613227 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.613206552 podStartE2EDuration="2.613206552s" podCreationTimestamp="2026-01-26 13:21:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 13:21:50.593086515 +0000 UTC m=+2287.526454147" watchObservedRunningTime="2026-01-26 13:21:50.613206552 +0000 UTC m=+2287.546574184" Jan 26 13:21:50 crc kubenswrapper[4844]: I0126 13:21:50.625115 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.625088268 podStartE2EDuration="2.625088268s" podCreationTimestamp="2026-01-26 13:21:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 13:21:50.610028125 +0000 UTC m=+2287.543395737" watchObservedRunningTime="2026-01-26 13:21:50.625088268 +0000 UTC m=+2287.558455900" Jan 26 13:21:51 crc kubenswrapper[4844]: I0126 13:21:51.339762 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0400142-4fe7-4b74-822f-eee67c1bf20b" path="/var/lib/kubelet/pods/b0400142-4fe7-4b74-822f-eee67c1bf20b/volumes" Jan 26 13:21:51 crc kubenswrapper[4844]: I0126 13:21:51.617396 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"a7c1f674-6004-46ed-ad61-cbad8e9cb195","Type":"ContainerStarted","Data":"3758f7daf2748afde33dc84856c78e70d5af348f404e8862bd340a67fe9034cb"} Jan 26 13:21:51 crc kubenswrapper[4844]: I0126 13:21:51.617459 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a7c1f674-6004-46ed-ad61-cbad8e9cb195","Type":"ContainerStarted","Data":"ed635a56cf5d075e4ef31d3d72dac58cef9ee6ba2e408cbd6f9e9b7b0d40cad0"} Jan 26 13:21:51 crc kubenswrapper[4844]: I0126 13:21:51.673669 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.673642438 podStartE2EDuration="2.673642438s" podCreationTimestamp="2026-01-26 13:21:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 13:21:51.642959826 +0000 UTC m=+2288.576327448" watchObservedRunningTime="2026-01-26 13:21:51.673642438 +0000 UTC m=+2288.607010070" Jan 26 13:21:51 crc kubenswrapper[4844]: I0126 13:21:51.993540 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 26 13:21:51 crc kubenswrapper[4844]: I0126 13:21:51.993615 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 26 13:21:52 crc kubenswrapper[4844]: I0126 13:21:52.521041 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 26 13:21:54 crc kubenswrapper[4844]: I0126 13:21:54.067840 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 26 13:21:55 crc kubenswrapper[4844]: I0126 13:21:55.100884 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 26 13:21:56 crc kubenswrapper[4844]: I0126 13:21:56.348794 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 26 13:21:56 crc kubenswrapper[4844]: I0126 13:21:56.349278 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="88528049-6527-4f6d-b28f-9a7ca4d46cf8" containerName="kube-state-metrics" containerID="cri-o://3526d27446b4d5bda5b69b0697e58a4e33ba2861c8a717975bb8d0d5d52e0b77" gracePeriod=30 Jan 26 13:21:56 crc kubenswrapper[4844]: I0126 13:21:56.667051 4844 generic.go:334] "Generic (PLEG): container finished" podID="88528049-6527-4f6d-b28f-9a7ca4d46cf8" containerID="3526d27446b4d5bda5b69b0697e58a4e33ba2861c8a717975bb8d0d5d52e0b77" exitCode=2 Jan 26 13:21:56 crc kubenswrapper[4844]: I0126 13:21:56.667124 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"88528049-6527-4f6d-b28f-9a7ca4d46cf8","Type":"ContainerDied","Data":"3526d27446b4d5bda5b69b0697e58a4e33ba2861c8a717975bb8d0d5d52e0b77"} Jan 26 13:21:56 crc kubenswrapper[4844]: I0126 13:21:56.901316 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 26 13:21:56 crc kubenswrapper[4844]: I0126 13:21:56.993527 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 26 13:21:56 crc kubenswrapper[4844]: I0126 13:21:56.993570 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 26 13:21:57 crc kubenswrapper[4844]: I0126 13:21:57.048421 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hwhrk\" (UniqueName: \"kubernetes.io/projected/88528049-6527-4f6d-b28f-9a7ca4d46cf8-kube-api-access-hwhrk\") pod \"88528049-6527-4f6d-b28f-9a7ca4d46cf8\" (UID: \"88528049-6527-4f6d-b28f-9a7ca4d46cf8\") " Jan 26 13:21:57 crc kubenswrapper[4844]: I0126 13:21:57.054627 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88528049-6527-4f6d-b28f-9a7ca4d46cf8-kube-api-access-hwhrk" (OuterVolumeSpecName: "kube-api-access-hwhrk") pod "88528049-6527-4f6d-b28f-9a7ca4d46cf8" (UID: "88528049-6527-4f6d-b28f-9a7ca4d46cf8"). InnerVolumeSpecName "kube-api-access-hwhrk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:21:57 crc kubenswrapper[4844]: I0126 13:21:57.151074 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hwhrk\" (UniqueName: \"kubernetes.io/projected/88528049-6527-4f6d-b28f-9a7ca4d46cf8-kube-api-access-hwhrk\") on node \"crc\" DevicePath \"\"" Jan 26 13:21:57 crc kubenswrapper[4844]: I0126 13:21:57.313273 4844 scope.go:117] "RemoveContainer" containerID="003e8a783231d0610c61a79899be5429104525b8053b82bc16d011d8b1eff87d" Jan 26 13:21:57 crc kubenswrapper[4844]: E0126 13:21:57.313663 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 13:21:57 crc kubenswrapper[4844]: I0126 13:21:57.678209 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"88528049-6527-4f6d-b28f-9a7ca4d46cf8","Type":"ContainerDied","Data":"731d3dbac1606921825c604d4df1600e99857b907e8bd41d74a970c3d2ab4fd8"} Jan 26 13:21:57 crc kubenswrapper[4844]: I0126 13:21:57.678261 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 26 13:21:57 crc kubenswrapper[4844]: I0126 13:21:57.678551 4844 scope.go:117] "RemoveContainer" containerID="3526d27446b4d5bda5b69b0697e58a4e33ba2861c8a717975bb8d0d5d52e0b77" Jan 26 13:21:57 crc kubenswrapper[4844]: I0126 13:21:57.703136 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 26 13:21:57 crc kubenswrapper[4844]: I0126 13:21:57.715345 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 26 13:21:57 crc kubenswrapper[4844]: I0126 13:21:57.732551 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 26 13:21:57 crc kubenswrapper[4844]: E0126 13:21:57.733233 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88528049-6527-4f6d-b28f-9a7ca4d46cf8" containerName="kube-state-metrics" Jan 26 13:21:57 crc kubenswrapper[4844]: I0126 13:21:57.733258 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="88528049-6527-4f6d-b28f-9a7ca4d46cf8" containerName="kube-state-metrics" Jan 26 13:21:57 crc kubenswrapper[4844]: I0126 13:21:57.733480 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="88528049-6527-4f6d-b28f-9a7ca4d46cf8" containerName="kube-state-metrics" Jan 26 13:21:57 crc kubenswrapper[4844]: I0126 13:21:57.734208 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 26 13:21:57 crc kubenswrapper[4844]: I0126 13:21:57.737815 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Jan 26 13:21:57 crc kubenswrapper[4844]: I0126 13:21:57.737914 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Jan 26 13:21:57 crc kubenswrapper[4844]: I0126 13:21:57.744681 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 26 13:21:57 crc kubenswrapper[4844]: I0126 13:21:57.866558 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fql5w\" (UniqueName: \"kubernetes.io/projected/0887ff47-06ad-4713-8a39-9cf1d0898a8d-kube-api-access-fql5w\") pod \"kube-state-metrics-0\" (UID: \"0887ff47-06ad-4713-8a39-9cf1d0898a8d\") " pod="openstack/kube-state-metrics-0" Jan 26 13:21:57 crc kubenswrapper[4844]: I0126 13:21:57.866654 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/0887ff47-06ad-4713-8a39-9cf1d0898a8d-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"0887ff47-06ad-4713-8a39-9cf1d0898a8d\") " pod="openstack/kube-state-metrics-0" Jan 26 13:21:57 crc kubenswrapper[4844]: I0126 13:21:57.866745 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0887ff47-06ad-4713-8a39-9cf1d0898a8d-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"0887ff47-06ad-4713-8a39-9cf1d0898a8d\") " pod="openstack/kube-state-metrics-0" Jan 26 13:21:57 crc kubenswrapper[4844]: I0126 13:21:57.866764 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/0887ff47-06ad-4713-8a39-9cf1d0898a8d-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: 
\"0887ff47-06ad-4713-8a39-9cf1d0898a8d\") " pod="openstack/kube-state-metrics-0" Jan 26 13:21:57 crc kubenswrapper[4844]: I0126 13:21:57.968914 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/0887ff47-06ad-4713-8a39-9cf1d0898a8d-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"0887ff47-06ad-4713-8a39-9cf1d0898a8d\") " pod="openstack/kube-state-metrics-0" Jan 26 13:21:57 crc kubenswrapper[4844]: I0126 13:21:57.969068 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0887ff47-06ad-4713-8a39-9cf1d0898a8d-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"0887ff47-06ad-4713-8a39-9cf1d0898a8d\") " pod="openstack/kube-state-metrics-0" Jan 26 13:21:57 crc kubenswrapper[4844]: I0126 13:21:57.969102 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/0887ff47-06ad-4713-8a39-9cf1d0898a8d-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"0887ff47-06ad-4713-8a39-9cf1d0898a8d\") " pod="openstack/kube-state-metrics-0" Jan 26 13:21:57 crc kubenswrapper[4844]: I0126 13:21:57.969188 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fql5w\" (UniqueName: \"kubernetes.io/projected/0887ff47-06ad-4713-8a39-9cf1d0898a8d-kube-api-access-fql5w\") pod \"kube-state-metrics-0\" (UID: \"0887ff47-06ad-4713-8a39-9cf1d0898a8d\") " pod="openstack/kube-state-metrics-0" Jan 26 13:21:57 crc kubenswrapper[4844]: I0126 13:21:57.974614 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0887ff47-06ad-4713-8a39-9cf1d0898a8d-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"0887ff47-06ad-4713-8a39-9cf1d0898a8d\") " pod="openstack/kube-state-metrics-0" Jan 26 13:21:57 crc kubenswrapper[4844]: I0126 13:21:57.979164 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/0887ff47-06ad-4713-8a39-9cf1d0898a8d-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"0887ff47-06ad-4713-8a39-9cf1d0898a8d\") " pod="openstack/kube-state-metrics-0" Jan 26 13:21:57 crc kubenswrapper[4844]: I0126 13:21:57.983539 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/0887ff47-06ad-4713-8a39-9cf1d0898a8d-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"0887ff47-06ad-4713-8a39-9cf1d0898a8d\") " pod="openstack/kube-state-metrics-0" Jan 26 13:21:57 crc kubenswrapper[4844]: I0126 13:21:57.988023 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fql5w\" (UniqueName: \"kubernetes.io/projected/0887ff47-06ad-4713-8a39-9cf1d0898a8d-kube-api-access-fql5w\") pod \"kube-state-metrics-0\" (UID: \"0887ff47-06ad-4713-8a39-9cf1d0898a8d\") " pod="openstack/kube-state-metrics-0" Jan 26 13:21:58 crc kubenswrapper[4844]: I0126 13:21:58.012941 4844 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="3d0355b5-96ed-47bd-9d3e-25f2cbfebb67" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.214:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 26 
13:21:58 crc kubenswrapper[4844]: I0126 13:21:58.013276 4844 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="3d0355b5-96ed-47bd-9d3e-25f2cbfebb67" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.214:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 26 13:21:58 crc kubenswrapper[4844]: I0126 13:21:58.076080 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 26 13:21:58 crc kubenswrapper[4844]: I0126 13:21:58.279576 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 13:21:58 crc kubenswrapper[4844]: I0126 13:21:58.280342 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="efd11250-36c0-4291-ae37-a0eff8a1e853" containerName="ceilometer-central-agent" containerID="cri-o://a6117e93c311a91ac0c3f0448577875f8112c1e54362b732040523d2c96c8957" gracePeriod=30 Jan 26 13:21:58 crc kubenswrapper[4844]: I0126 13:21:58.281021 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="efd11250-36c0-4291-ae37-a0eff8a1e853" containerName="proxy-httpd" containerID="cri-o://95f2d2c135501c1f665fcb870a8c0fed4f84e5a91728c540bbdbe368f4cfb123" gracePeriod=30 Jan 26 13:21:58 crc kubenswrapper[4844]: I0126 13:21:58.281123 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="efd11250-36c0-4291-ae37-a0eff8a1e853" containerName="sg-core" containerID="cri-o://c1ed1eb2958da8b781377498f54742acfbbdca6b168adfbfdebb7008a37f608e" gracePeriod=30 Jan 26 13:21:58 crc kubenswrapper[4844]: I0126 13:21:58.281198 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="efd11250-36c0-4291-ae37-a0eff8a1e853" containerName="ceilometer-notification-agent" containerID="cri-o://e740d4612080ca7e7c80b58e745697ad80301ea855a5cc20174740ca8697de92" gracePeriod=30 Jan 26 13:21:58 crc kubenswrapper[4844]: I0126 13:21:58.609387 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 26 13:21:58 crc kubenswrapper[4844]: I0126 13:21:58.690117 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"0887ff47-06ad-4713-8a39-9cf1d0898a8d","Type":"ContainerStarted","Data":"3e73a7aecae128a2846466200250aef9539f026a52b803222b63569f0d287dff"} Jan 26 13:21:58 crc kubenswrapper[4844]: I0126 13:21:58.692133 4844 generic.go:334] "Generic (PLEG): container finished" podID="efd11250-36c0-4291-ae37-a0eff8a1e853" containerID="95f2d2c135501c1f665fcb870a8c0fed4f84e5a91728c540bbdbe368f4cfb123" exitCode=0 Jan 26 13:21:58 crc kubenswrapper[4844]: I0126 13:21:58.692161 4844 generic.go:334] "Generic (PLEG): container finished" podID="efd11250-36c0-4291-ae37-a0eff8a1e853" containerID="c1ed1eb2958da8b781377498f54742acfbbdca6b168adfbfdebb7008a37f608e" exitCode=2 Jan 26 13:21:58 crc kubenswrapper[4844]: I0126 13:21:58.692175 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"efd11250-36c0-4291-ae37-a0eff8a1e853","Type":"ContainerDied","Data":"95f2d2c135501c1f665fcb870a8c0fed4f84e5a91728c540bbdbe368f4cfb123"} Jan 26 13:21:58 crc kubenswrapper[4844]: I0126 13:21:58.692196 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"efd11250-36c0-4291-ae37-a0eff8a1e853","Type":"ContainerDied","Data":"c1ed1eb2958da8b781377498f54742acfbbdca6b168adfbfdebb7008a37f608e"} Jan 26 13:21:59 crc kubenswrapper[4844]: I0126 13:21:59.015288 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 26 13:21:59 crc kubenswrapper[4844]: I0126 13:21:59.015330 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 26 13:21:59 crc kubenswrapper[4844]: I0126 13:21:59.345573 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88528049-6527-4f6d-b28f-9a7ca4d46cf8" path="/var/lib/kubelet/pods/88528049-6527-4f6d-b28f-9a7ca4d46cf8/volumes" Jan 26 13:21:59 crc kubenswrapper[4844]: I0126 13:21:59.704772 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"0887ff47-06ad-4713-8a39-9cf1d0898a8d","Type":"ContainerStarted","Data":"0c3b87e02cd580d9234a83cb0c4226c590ae9d22a9f38421a329b96efa3b9022"} Jan 26 13:21:59 crc kubenswrapper[4844]: I0126 13:21:59.705245 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 26 13:21:59 crc kubenswrapper[4844]: I0126 13:21:59.708581 4844 generic.go:334] "Generic (PLEG): container finished" podID="efd11250-36c0-4291-ae37-a0eff8a1e853" containerID="a6117e93c311a91ac0c3f0448577875f8112c1e54362b732040523d2c96c8957" exitCode=0 Jan 26 13:21:59 crc kubenswrapper[4844]: I0126 13:21:59.708623 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"efd11250-36c0-4291-ae37-a0eff8a1e853","Type":"ContainerDied","Data":"a6117e93c311a91ac0c3f0448577875f8112c1e54362b732040523d2c96c8957"} Jan 26 13:21:59 crc kubenswrapper[4844]: I0126 13:21:59.729903 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.24120482 podStartE2EDuration="2.729884924s" podCreationTimestamp="2026-01-26 13:21:57 +0000 UTC" firstStartedPulling="2026-01-26 13:21:58.6207376 +0000 UTC m=+2295.554105222" lastFinishedPulling="2026-01-26 13:21:59.109417714 +0000 UTC m=+2296.042785326" observedRunningTime="2026-01-26 13:21:59.724024912 +0000 UTC m=+2296.657392544" watchObservedRunningTime="2026-01-26 13:21:59.729884924 +0000 UTC m=+2296.663252536" Jan 26 13:22:00 crc kubenswrapper[4844]: I0126 13:22:00.098800 4844 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="75ed6ac2-9ee4-44a1-bef1-0aee2eda244a" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.215:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 13:22:00 crc kubenswrapper[4844]: I0126 13:22:00.098812 4844 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="75ed6ac2-9ee4-44a1-bef1-0aee2eda244a" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.215:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 13:22:00 crc kubenswrapper[4844]: I0126 13:22:00.100523 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 26 13:22:00 crc kubenswrapper[4844]: I0126 13:22:00.129793 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 26 13:22:00 crc kubenswrapper[4844]: I0126 13:22:00.757903 4844 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 26 13:22:05 crc kubenswrapper[4844]: I0126 13:22:05.762322 4844 generic.go:334] "Generic (PLEG): container finished" podID="efd11250-36c0-4291-ae37-a0eff8a1e853" containerID="e740d4612080ca7e7c80b58e745697ad80301ea855a5cc20174740ca8697de92" exitCode=0 Jan 26 13:22:05 crc kubenswrapper[4844]: I0126 13:22:05.762417 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"efd11250-36c0-4291-ae37-a0eff8a1e853","Type":"ContainerDied","Data":"e740d4612080ca7e7c80b58e745697ad80301ea855a5cc20174740ca8697de92"} Jan 26 13:22:06 crc kubenswrapper[4844]: I0126 13:22:06.584784 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 13:22:06 crc kubenswrapper[4844]: I0126 13:22:06.732373 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/efd11250-36c0-4291-ae37-a0eff8a1e853-scripts\") pod \"efd11250-36c0-4291-ae37-a0eff8a1e853\" (UID: \"efd11250-36c0-4291-ae37-a0eff8a1e853\") " Jan 26 13:22:06 crc kubenswrapper[4844]: I0126 13:22:06.732421 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/efd11250-36c0-4291-ae37-a0eff8a1e853-run-httpd\") pod \"efd11250-36c0-4291-ae37-a0eff8a1e853\" (UID: \"efd11250-36c0-4291-ae37-a0eff8a1e853\") " Jan 26 13:22:06 crc kubenswrapper[4844]: I0126 13:22:06.732504 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efd11250-36c0-4291-ae37-a0eff8a1e853-config-data\") pod \"efd11250-36c0-4291-ae37-a0eff8a1e853\" (UID: \"efd11250-36c0-4291-ae37-a0eff8a1e853\") " Jan 26 13:22:06 crc kubenswrapper[4844]: I0126 13:22:06.732551 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/efd11250-36c0-4291-ae37-a0eff8a1e853-sg-core-conf-yaml\") pod \"efd11250-36c0-4291-ae37-a0eff8a1e853\" (UID: \"efd11250-36c0-4291-ae37-a0eff8a1e853\") " Jan 26 13:22:06 crc kubenswrapper[4844]: I0126 13:22:06.732780 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/efd11250-36c0-4291-ae37-a0eff8a1e853-log-httpd\") pod \"efd11250-36c0-4291-ae37-a0eff8a1e853\" (UID: \"efd11250-36c0-4291-ae37-a0eff8a1e853\") " Jan 26 13:22:06 crc kubenswrapper[4844]: I0126 13:22:06.732807 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bntr8\" (UniqueName: \"kubernetes.io/projected/efd11250-36c0-4291-ae37-a0eff8a1e853-kube-api-access-bntr8\") pod \"efd11250-36c0-4291-ae37-a0eff8a1e853\" (UID: \"efd11250-36c0-4291-ae37-a0eff8a1e853\") " Jan 26 13:22:06 crc kubenswrapper[4844]: I0126 13:22:06.732860 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efd11250-36c0-4291-ae37-a0eff8a1e853-combined-ca-bundle\") pod \"efd11250-36c0-4291-ae37-a0eff8a1e853\" (UID: \"efd11250-36c0-4291-ae37-a0eff8a1e853\") " Jan 26 13:22:06 crc kubenswrapper[4844]: I0126 13:22:06.733155 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/efd11250-36c0-4291-ae37-a0eff8a1e853-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "efd11250-36c0-4291-ae37-a0eff8a1e853" (UID: 
"efd11250-36c0-4291-ae37-a0eff8a1e853"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:22:06 crc kubenswrapper[4844]: I0126 13:22:06.733224 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/efd11250-36c0-4291-ae37-a0eff8a1e853-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "efd11250-36c0-4291-ae37-a0eff8a1e853" (UID: "efd11250-36c0-4291-ae37-a0eff8a1e853"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:22:06 crc kubenswrapper[4844]: I0126 13:22:06.733659 4844 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/efd11250-36c0-4291-ae37-a0eff8a1e853-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 13:22:06 crc kubenswrapper[4844]: I0126 13:22:06.733674 4844 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/efd11250-36c0-4291-ae37-a0eff8a1e853-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 13:22:06 crc kubenswrapper[4844]: I0126 13:22:06.738011 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efd11250-36c0-4291-ae37-a0eff8a1e853-scripts" (OuterVolumeSpecName: "scripts") pod "efd11250-36c0-4291-ae37-a0eff8a1e853" (UID: "efd11250-36c0-4291-ae37-a0eff8a1e853"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:22:06 crc kubenswrapper[4844]: I0126 13:22:06.738263 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efd11250-36c0-4291-ae37-a0eff8a1e853-kube-api-access-bntr8" (OuterVolumeSpecName: "kube-api-access-bntr8") pod "efd11250-36c0-4291-ae37-a0eff8a1e853" (UID: "efd11250-36c0-4291-ae37-a0eff8a1e853"). InnerVolumeSpecName "kube-api-access-bntr8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:22:06 crc kubenswrapper[4844]: I0126 13:22:06.759979 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efd11250-36c0-4291-ae37-a0eff8a1e853-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "efd11250-36c0-4291-ae37-a0eff8a1e853" (UID: "efd11250-36c0-4291-ae37-a0eff8a1e853"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:22:06 crc kubenswrapper[4844]: I0126 13:22:06.781640 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"efd11250-36c0-4291-ae37-a0eff8a1e853","Type":"ContainerDied","Data":"088a5596c25c6cc4474a0399fb932a96fcf26df2be300c35b8fcb3bf81c10705"} Jan 26 13:22:06 crc kubenswrapper[4844]: I0126 13:22:06.781697 4844 scope.go:117] "RemoveContainer" containerID="95f2d2c135501c1f665fcb870a8c0fed4f84e5a91728c540bbdbe368f4cfb123" Jan 26 13:22:06 crc kubenswrapper[4844]: I0126 13:22:06.781830 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 13:22:06 crc kubenswrapper[4844]: I0126 13:22:06.826243 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efd11250-36c0-4291-ae37-a0eff8a1e853-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "efd11250-36c0-4291-ae37-a0eff8a1e853" (UID: "efd11250-36c0-4291-ae37-a0eff8a1e853"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:22:06 crc kubenswrapper[4844]: I0126 13:22:06.835511 4844 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/efd11250-36c0-4291-ae37-a0eff8a1e853-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 26 13:22:06 crc kubenswrapper[4844]: I0126 13:22:06.835543 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bntr8\" (UniqueName: \"kubernetes.io/projected/efd11250-36c0-4291-ae37-a0eff8a1e853-kube-api-access-bntr8\") on node \"crc\" DevicePath \"\"" Jan 26 13:22:06 crc kubenswrapper[4844]: I0126 13:22:06.835559 4844 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efd11250-36c0-4291-ae37-a0eff8a1e853-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 13:22:06 crc kubenswrapper[4844]: I0126 13:22:06.835571 4844 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/efd11250-36c0-4291-ae37-a0eff8a1e853-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 13:22:06 crc kubenswrapper[4844]: I0126 13:22:06.861321 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efd11250-36c0-4291-ae37-a0eff8a1e853-config-data" (OuterVolumeSpecName: "config-data") pod "efd11250-36c0-4291-ae37-a0eff8a1e853" (UID: "efd11250-36c0-4291-ae37-a0eff8a1e853"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:22:06 crc kubenswrapper[4844]: I0126 13:22:06.937692 4844 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efd11250-36c0-4291-ae37-a0eff8a1e853-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 13:22:06 crc kubenswrapper[4844]: I0126 13:22:06.941122 4844 scope.go:117] "RemoveContainer" containerID="c1ed1eb2958da8b781377498f54742acfbbdca6b168adfbfdebb7008a37f608e" Jan 26 13:22:06 crc kubenswrapper[4844]: I0126 13:22:06.971023 4844 scope.go:117] "RemoveContainer" containerID="e740d4612080ca7e7c80b58e745697ad80301ea855a5cc20174740ca8697de92" Jan 26 13:22:06 crc kubenswrapper[4844]: I0126 13:22:06.997908 4844 scope.go:117] "RemoveContainer" containerID="a6117e93c311a91ac0c3f0448577875f8112c1e54362b732040523d2c96c8957" Jan 26 13:22:07 crc kubenswrapper[4844]: I0126 13:22:06.999811 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 26 13:22:07 crc kubenswrapper[4844]: I0126 13:22:07.001060 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 26 13:22:07 crc kubenswrapper[4844]: I0126 13:22:07.008068 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 26 13:22:07 crc kubenswrapper[4844]: I0126 13:22:07.119820 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 13:22:07 crc kubenswrapper[4844]: I0126 13:22:07.129762 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 26 13:22:07 crc kubenswrapper[4844]: I0126 13:22:07.149901 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 26 13:22:07 crc kubenswrapper[4844]: E0126 13:22:07.150342 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efd11250-36c0-4291-ae37-a0eff8a1e853" containerName="ceilometer-notification-agent" Jan 26 13:22:07 crc 
kubenswrapper[4844]: I0126 13:22:07.150364 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="efd11250-36c0-4291-ae37-a0eff8a1e853" containerName="ceilometer-notification-agent" Jan 26 13:22:07 crc kubenswrapper[4844]: E0126 13:22:07.150379 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efd11250-36c0-4291-ae37-a0eff8a1e853" containerName="ceilometer-central-agent" Jan 26 13:22:07 crc kubenswrapper[4844]: I0126 13:22:07.150387 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="efd11250-36c0-4291-ae37-a0eff8a1e853" containerName="ceilometer-central-agent" Jan 26 13:22:07 crc kubenswrapper[4844]: E0126 13:22:07.150413 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efd11250-36c0-4291-ae37-a0eff8a1e853" containerName="sg-core" Jan 26 13:22:07 crc kubenswrapper[4844]: I0126 13:22:07.150421 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="efd11250-36c0-4291-ae37-a0eff8a1e853" containerName="sg-core" Jan 26 13:22:07 crc kubenswrapper[4844]: E0126 13:22:07.150457 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efd11250-36c0-4291-ae37-a0eff8a1e853" containerName="proxy-httpd" Jan 26 13:22:07 crc kubenswrapper[4844]: I0126 13:22:07.150465 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="efd11250-36c0-4291-ae37-a0eff8a1e853" containerName="proxy-httpd" Jan 26 13:22:07 crc kubenswrapper[4844]: I0126 13:22:07.150692 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="efd11250-36c0-4291-ae37-a0eff8a1e853" containerName="proxy-httpd" Jan 26 13:22:07 crc kubenswrapper[4844]: I0126 13:22:07.150708 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="efd11250-36c0-4291-ae37-a0eff8a1e853" containerName="sg-core" Jan 26 13:22:07 crc kubenswrapper[4844]: I0126 13:22:07.150719 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="efd11250-36c0-4291-ae37-a0eff8a1e853" containerName="ceilometer-notification-agent" Jan 26 13:22:07 crc kubenswrapper[4844]: I0126 13:22:07.150739 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="efd11250-36c0-4291-ae37-a0eff8a1e853" containerName="ceilometer-central-agent" Jan 26 13:22:07 crc kubenswrapper[4844]: I0126 13:22:07.152683 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 13:22:07 crc kubenswrapper[4844]: I0126 13:22:07.154553 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 26 13:22:07 crc kubenswrapper[4844]: I0126 13:22:07.155108 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 26 13:22:07 crc kubenswrapper[4844]: I0126 13:22:07.158890 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 26 13:22:07 crc kubenswrapper[4844]: I0126 13:22:07.169175 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 13:22:07 crc kubenswrapper[4844]: I0126 13:22:07.247109 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/34c59d00-6a2b-4918-816d-fe693291ff5a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"34c59d00-6a2b-4918-816d-fe693291ff5a\") " pod="openstack/ceilometer-0" Jan 26 13:22:07 crc kubenswrapper[4844]: I0126 13:22:07.247201 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34c59d00-6a2b-4918-816d-fe693291ff5a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"34c59d00-6a2b-4918-816d-fe693291ff5a\") " pod="openstack/ceilometer-0" Jan 26 13:22:07 crc kubenswrapper[4844]: I0126 13:22:07.247227 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/34c59d00-6a2b-4918-816d-fe693291ff5a-log-httpd\") pod \"ceilometer-0\" (UID: \"34c59d00-6a2b-4918-816d-fe693291ff5a\") " pod="openstack/ceilometer-0" Jan 26 13:22:07 crc kubenswrapper[4844]: I0126 13:22:07.247245 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/34c59d00-6a2b-4918-816d-fe693291ff5a-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"34c59d00-6a2b-4918-816d-fe693291ff5a\") " pod="openstack/ceilometer-0" Jan 26 13:22:07 crc kubenswrapper[4844]: I0126 13:22:07.247313 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34c59d00-6a2b-4918-816d-fe693291ff5a-scripts\") pod \"ceilometer-0\" (UID: \"34c59d00-6a2b-4918-816d-fe693291ff5a\") " pod="openstack/ceilometer-0" Jan 26 13:22:07 crc kubenswrapper[4844]: I0126 13:22:07.247336 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/34c59d00-6a2b-4918-816d-fe693291ff5a-run-httpd\") pod \"ceilometer-0\" (UID: \"34c59d00-6a2b-4918-816d-fe693291ff5a\") " pod="openstack/ceilometer-0" Jan 26 13:22:07 crc kubenswrapper[4844]: I0126 13:22:07.247368 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34c59d00-6a2b-4918-816d-fe693291ff5a-config-data\") pod \"ceilometer-0\" (UID: \"34c59d00-6a2b-4918-816d-fe693291ff5a\") " pod="openstack/ceilometer-0" Jan 26 13:22:07 crc kubenswrapper[4844]: I0126 13:22:07.247405 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdhrd\" (UniqueName: 
\"kubernetes.io/projected/34c59d00-6a2b-4918-816d-fe693291ff5a-kube-api-access-zdhrd\") pod \"ceilometer-0\" (UID: \"34c59d00-6a2b-4918-816d-fe693291ff5a\") " pod="openstack/ceilometer-0" Jan 26 13:22:07 crc kubenswrapper[4844]: I0126 13:22:07.350270 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/34c59d00-6a2b-4918-816d-fe693291ff5a-log-httpd\") pod \"ceilometer-0\" (UID: \"34c59d00-6a2b-4918-816d-fe693291ff5a\") " pod="openstack/ceilometer-0" Jan 26 13:22:07 crc kubenswrapper[4844]: I0126 13:22:07.350566 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/34c59d00-6a2b-4918-816d-fe693291ff5a-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"34c59d00-6a2b-4918-816d-fe693291ff5a\") " pod="openstack/ceilometer-0" Jan 26 13:22:07 crc kubenswrapper[4844]: I0126 13:22:07.351186 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34c59d00-6a2b-4918-816d-fe693291ff5a-scripts\") pod \"ceilometer-0\" (UID: \"34c59d00-6a2b-4918-816d-fe693291ff5a\") " pod="openstack/ceilometer-0" Jan 26 13:22:07 crc kubenswrapper[4844]: I0126 13:22:07.351231 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/34c59d00-6a2b-4918-816d-fe693291ff5a-run-httpd\") pod \"ceilometer-0\" (UID: \"34c59d00-6a2b-4918-816d-fe693291ff5a\") " pod="openstack/ceilometer-0" Jan 26 13:22:07 crc kubenswrapper[4844]: I0126 13:22:07.351276 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34c59d00-6a2b-4918-816d-fe693291ff5a-config-data\") pod \"ceilometer-0\" (UID: \"34c59d00-6a2b-4918-816d-fe693291ff5a\") " pod="openstack/ceilometer-0" Jan 26 13:22:07 crc kubenswrapper[4844]: I0126 13:22:07.351322 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdhrd\" (UniqueName: \"kubernetes.io/projected/34c59d00-6a2b-4918-816d-fe693291ff5a-kube-api-access-zdhrd\") pod \"ceilometer-0\" (UID: \"34c59d00-6a2b-4918-816d-fe693291ff5a\") " pod="openstack/ceilometer-0" Jan 26 13:22:07 crc kubenswrapper[4844]: I0126 13:22:07.351190 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/34c59d00-6a2b-4918-816d-fe693291ff5a-log-httpd\") pod \"ceilometer-0\" (UID: \"34c59d00-6a2b-4918-816d-fe693291ff5a\") " pod="openstack/ceilometer-0" Jan 26 13:22:07 crc kubenswrapper[4844]: I0126 13:22:07.351434 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efd11250-36c0-4291-ae37-a0eff8a1e853" path="/var/lib/kubelet/pods/efd11250-36c0-4291-ae37-a0eff8a1e853/volumes" Jan 26 13:22:07 crc kubenswrapper[4844]: I0126 13:22:07.351980 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/34c59d00-6a2b-4918-816d-fe693291ff5a-run-httpd\") pod \"ceilometer-0\" (UID: \"34c59d00-6a2b-4918-816d-fe693291ff5a\") " pod="openstack/ceilometer-0" Jan 26 13:22:07 crc kubenswrapper[4844]: I0126 13:22:07.352629 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/34c59d00-6a2b-4918-816d-fe693291ff5a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"34c59d00-6a2b-4918-816d-fe693291ff5a\") " pod="openstack/ceilometer-0" Jan 26 13:22:07 crc kubenswrapper[4844]: I0126 13:22:07.352954 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34c59d00-6a2b-4918-816d-fe693291ff5a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"34c59d00-6a2b-4918-816d-fe693291ff5a\") " pod="openstack/ceilometer-0" Jan 26 13:22:07 crc kubenswrapper[4844]: I0126 13:22:07.359835 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34c59d00-6a2b-4918-816d-fe693291ff5a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"34c59d00-6a2b-4918-816d-fe693291ff5a\") " pod="openstack/ceilometer-0" Jan 26 13:22:07 crc kubenswrapper[4844]: I0126 13:22:07.359843 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34c59d00-6a2b-4918-816d-fe693291ff5a-scripts\") pod \"ceilometer-0\" (UID: \"34c59d00-6a2b-4918-816d-fe693291ff5a\") " pod="openstack/ceilometer-0" Jan 26 13:22:07 crc kubenswrapper[4844]: I0126 13:22:07.360078 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34c59d00-6a2b-4918-816d-fe693291ff5a-config-data\") pod \"ceilometer-0\" (UID: \"34c59d00-6a2b-4918-816d-fe693291ff5a\") " pod="openstack/ceilometer-0" Jan 26 13:22:07 crc kubenswrapper[4844]: I0126 13:22:07.360959 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/34c59d00-6a2b-4918-816d-fe693291ff5a-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"34c59d00-6a2b-4918-816d-fe693291ff5a\") " pod="openstack/ceilometer-0" Jan 26 13:22:07 crc kubenswrapper[4844]: I0126 13:22:07.363943 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/34c59d00-6a2b-4918-816d-fe693291ff5a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"34c59d00-6a2b-4918-816d-fe693291ff5a\") " pod="openstack/ceilometer-0" Jan 26 13:22:07 crc kubenswrapper[4844]: I0126 13:22:07.375935 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdhrd\" (UniqueName: \"kubernetes.io/projected/34c59d00-6a2b-4918-816d-fe693291ff5a-kube-api-access-zdhrd\") pod \"ceilometer-0\" (UID: \"34c59d00-6a2b-4918-816d-fe693291ff5a\") " pod="openstack/ceilometer-0" Jan 26 13:22:07 crc kubenswrapper[4844]: I0126 13:22:07.474864 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 13:22:07 crc kubenswrapper[4844]: I0126 13:22:07.823220 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 26 13:22:08 crc kubenswrapper[4844]: I0126 13:22:08.075619 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 13:22:08 crc kubenswrapper[4844]: I0126 13:22:08.083523 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 26 13:22:08 crc kubenswrapper[4844]: E0126 13:22:08.358755 4844 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/0708a00a2fca04829634476eae7bd1965c192cf9d0e80cf8520030526ead93a8/diff" to get inode usage: stat /var/lib/containers/storage/overlay/0708a00a2fca04829634476eae7bd1965c192cf9d0e80cf8520030526ead93a8/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/openstack_kube-state-metrics-0_88528049-6527-4f6d-b28f-9a7ca4d46cf8/kube-state-metrics/0.log" to get inode usage: stat /var/log/pods/openstack_kube-state-metrics-0_88528049-6527-4f6d-b28f-9a7ca4d46cf8/kube-state-metrics/0.log: no such file or directory Jan 26 13:22:08 crc kubenswrapper[4844]: E0126 13:22:08.632974 4844 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod88528049_6527_4f6d_b28f_9a7ca4d46cf8.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podefd11250_36c0_4291_ae37_a0eff8a1e853.slice/crio-conmon-95f2d2c135501c1f665fcb870a8c0fed4f84e5a91728c540bbdbe368f4cfb123.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podefd11250_36c0_4291_ae37_a0eff8a1e853.slice/crio-c1ed1eb2958da8b781377498f54742acfbbdca6b168adfbfdebb7008a37f608e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podefd11250_36c0_4291_ae37_a0eff8a1e853.slice/crio-conmon-a6117e93c311a91ac0c3f0448577875f8112c1e54362b732040523d2c96c8957.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podefd11250_36c0_4291_ae37_a0eff8a1e853.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode5b81065_1990_4734_a78a_3172d68df686.slice/crio-90a9cd29d6650e34a0a0a05b983cee590249abec321098a5864f8b02bde8bc7b.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod88528049_6527_4f6d_b28f_9a7ca4d46cf8.slice/crio-731d3dbac1606921825c604d4df1600e99857b907e8bd41d74a970c3d2ab4fd8\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod88528049_6527_4f6d_b28f_9a7ca4d46cf8.slice/crio-3526d27446b4d5bda5b69b0697e58a4e33ba2861c8a717975bb8d0d5d52e0b77.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode5b81065_1990_4734_a78a_3172d68df686.slice/crio-conmon-90a9cd29d6650e34a0a0a05b983cee590249abec321098a5864f8b02bde8bc7b.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod88528049_6527_4f6d_b28f_9a7ca4d46cf8.slice/crio-conmon-3526d27446b4d5bda5b69b0697e58a4e33ba2861c8a717975bb8d0d5d52e0b77.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podefd11250_36c0_4291_ae37_a0eff8a1e853.slice/crio-088a5596c25c6cc4474a0399fb932a96fcf26df2be300c35b8fcb3bf81c10705\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb0400142_4fe7_4b74_822f_eee67c1bf20b.slice/crio-98d220bfbe1aa32cc412dd383ab85d675f5de3c566ed1251dfd4e4f8a21a1ed1\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podefd11250_36c0_4291_ae37_a0eff8a1e853.slice/crio-e740d4612080ca7e7c80b58e745697ad80301ea855a5cc20174740ca8697de92.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podefd11250_36c0_4291_ae37_a0eff8a1e853.slice/crio-a6117e93c311a91ac0c3f0448577875f8112c1e54362b732040523d2c96c8957.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podefd11250_36c0_4291_ae37_a0eff8a1e853.slice/crio-95f2d2c135501c1f665fcb870a8c0fed4f84e5a91728c540bbdbe368f4cfb123.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb0400142_4fe7_4b74_822f_eee67c1bf20b.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podefd11250_36c0_4291_ae37_a0eff8a1e853.slice/crio-conmon-c1ed1eb2958da8b781377498f54742acfbbdca6b168adfbfdebb7008a37f608e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podefd11250_36c0_4291_ae37_a0eff8a1e853.slice/crio-conmon-e740d4612080ca7e7c80b58e745697ad80301ea855a5cc20174740ca8697de92.scope\": RecentStats: unable to find data in memory cache]" Jan 26 13:22:08 crc kubenswrapper[4844]: I0126 13:22:08.705289 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 26 13:22:08 crc kubenswrapper[4844]: I0126 13:22:08.785638 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5b81065-1990-4734-a78a-3172d68df686-config-data\") pod \"e5b81065-1990-4734-a78a-3172d68df686\" (UID: \"e5b81065-1990-4734-a78a-3172d68df686\") " Jan 26 13:22:08 crc kubenswrapper[4844]: I0126 13:22:08.785703 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5b81065-1990-4734-a78a-3172d68df686-combined-ca-bundle\") pod \"e5b81065-1990-4734-a78a-3172d68df686\" (UID: \"e5b81065-1990-4734-a78a-3172d68df686\") " Jan 26 13:22:08 crc kubenswrapper[4844]: I0126 13:22:08.785832 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jtt4l\" (UniqueName: \"kubernetes.io/projected/e5b81065-1990-4734-a78a-3172d68df686-kube-api-access-jtt4l\") pod \"e5b81065-1990-4734-a78a-3172d68df686\" (UID: \"e5b81065-1990-4734-a78a-3172d68df686\") " Jan 26 13:22:08 crc kubenswrapper[4844]: I0126 13:22:08.790117 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5b81065-1990-4734-a78a-3172d68df686-kube-api-access-jtt4l" (OuterVolumeSpecName: "kube-api-access-jtt4l") pod "e5b81065-1990-4734-a78a-3172d68df686" (UID: "e5b81065-1990-4734-a78a-3172d68df686"). InnerVolumeSpecName "kube-api-access-jtt4l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:22:08 crc kubenswrapper[4844]: I0126 13:22:08.820950 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"34c59d00-6a2b-4918-816d-fe693291ff5a","Type":"ContainerStarted","Data":"484666d4203a791548b2a17fd63c1fcebe9ea7373262652db327922aa36ec67d"} Jan 26 13:22:08 crc kubenswrapper[4844]: I0126 13:22:08.821003 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"34c59d00-6a2b-4918-816d-fe693291ff5a","Type":"ContainerStarted","Data":"a70f499be835bd8e0188d80019c9acfd8b493fc9a7f5253dfa0e06f82366ba82"} Jan 26 13:22:08 crc kubenswrapper[4844]: I0126 13:22:08.821014 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"34c59d00-6a2b-4918-816d-fe693291ff5a","Type":"ContainerStarted","Data":"ec0912d68dbaae1f53e9c0c33e4606cd77d14b0197c05cabee36fff0458873fa"} Jan 26 13:22:08 crc kubenswrapper[4844]: I0126 13:22:08.823157 4844 generic.go:334] "Generic (PLEG): container finished" podID="e5b81065-1990-4734-a78a-3172d68df686" containerID="90a9cd29d6650e34a0a0a05b983cee590249abec321098a5864f8b02bde8bc7b" exitCode=137 Jan 26 13:22:08 crc kubenswrapper[4844]: I0126 13:22:08.823227 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 26 13:22:08 crc kubenswrapper[4844]: I0126 13:22:08.823335 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"e5b81065-1990-4734-a78a-3172d68df686","Type":"ContainerDied","Data":"90a9cd29d6650e34a0a0a05b983cee590249abec321098a5864f8b02bde8bc7b"} Jan 26 13:22:08 crc kubenswrapper[4844]: I0126 13:22:08.823414 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"e5b81065-1990-4734-a78a-3172d68df686","Type":"ContainerDied","Data":"3b128a8a973c45d34663f668df6ef664cde0eaa00e0d436a1e78da34bc3502d8"} Jan 26 13:22:08 crc kubenswrapper[4844]: I0126 13:22:08.823452 4844 scope.go:117] "RemoveContainer" containerID="90a9cd29d6650e34a0a0a05b983cee590249abec321098a5864f8b02bde8bc7b" Jan 26 13:22:08 crc kubenswrapper[4844]: I0126 13:22:08.828159 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5b81065-1990-4734-a78a-3172d68df686-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e5b81065-1990-4734-a78a-3172d68df686" (UID: "e5b81065-1990-4734-a78a-3172d68df686"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:22:08 crc kubenswrapper[4844]: I0126 13:22:08.828510 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5b81065-1990-4734-a78a-3172d68df686-config-data" (OuterVolumeSpecName: "config-data") pod "e5b81065-1990-4734-a78a-3172d68df686" (UID: "e5b81065-1990-4734-a78a-3172d68df686"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:22:08 crc kubenswrapper[4844]: I0126 13:22:08.888272 4844 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5b81065-1990-4734-a78a-3172d68df686-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 13:22:08 crc kubenswrapper[4844]: I0126 13:22:08.888300 4844 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5b81065-1990-4734-a78a-3172d68df686-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 13:22:08 crc kubenswrapper[4844]: I0126 13:22:08.888311 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jtt4l\" (UniqueName: \"kubernetes.io/projected/e5b81065-1990-4734-a78a-3172d68df686-kube-api-access-jtt4l\") on node \"crc\" DevicePath \"\"" Jan 26 13:22:08 crc kubenswrapper[4844]: I0126 13:22:08.914563 4844 scope.go:117] "RemoveContainer" containerID="90a9cd29d6650e34a0a0a05b983cee590249abec321098a5864f8b02bde8bc7b" Jan 26 13:22:08 crc kubenswrapper[4844]: E0126 13:22:08.916186 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90a9cd29d6650e34a0a0a05b983cee590249abec321098a5864f8b02bde8bc7b\": container with ID starting with 90a9cd29d6650e34a0a0a05b983cee590249abec321098a5864f8b02bde8bc7b not found: ID does not exist" containerID="90a9cd29d6650e34a0a0a05b983cee590249abec321098a5864f8b02bde8bc7b" Jan 26 13:22:08 crc kubenswrapper[4844]: I0126 13:22:08.916227 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90a9cd29d6650e34a0a0a05b983cee590249abec321098a5864f8b02bde8bc7b"} err="failed to get container status \"90a9cd29d6650e34a0a0a05b983cee590249abec321098a5864f8b02bde8bc7b\": rpc error: code = NotFound 
desc = could not find container \"90a9cd29d6650e34a0a0a05b983cee590249abec321098a5864f8b02bde8bc7b\": container with ID starting with 90a9cd29d6650e34a0a0a05b983cee590249abec321098a5864f8b02bde8bc7b not found: ID does not exist" Jan 26 13:22:09 crc kubenswrapper[4844]: I0126 13:22:09.023127 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 26 13:22:09 crc kubenswrapper[4844]: I0126 13:22:09.024634 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 26 13:22:09 crc kubenswrapper[4844]: I0126 13:22:09.029978 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 26 13:22:09 crc kubenswrapper[4844]: I0126 13:22:09.037870 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 26 13:22:09 crc kubenswrapper[4844]: I0126 13:22:09.157134 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 26 13:22:09 crc kubenswrapper[4844]: I0126 13:22:09.167417 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 26 13:22:09 crc kubenswrapper[4844]: I0126 13:22:09.189478 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 26 13:22:09 crc kubenswrapper[4844]: E0126 13:22:09.191028 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5b81065-1990-4734-a78a-3172d68df686" containerName="nova-cell1-novncproxy-novncproxy" Jan 26 13:22:09 crc kubenswrapper[4844]: I0126 13:22:09.191050 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5b81065-1990-4734-a78a-3172d68df686" containerName="nova-cell1-novncproxy-novncproxy" Jan 26 13:22:09 crc kubenswrapper[4844]: I0126 13:22:09.191349 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5b81065-1990-4734-a78a-3172d68df686" containerName="nova-cell1-novncproxy-novncproxy" Jan 26 13:22:09 crc kubenswrapper[4844]: I0126 13:22:09.192076 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 26 13:22:09 crc kubenswrapper[4844]: I0126 13:22:09.202076 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 26 13:22:09 crc kubenswrapper[4844]: I0126 13:22:09.202497 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Jan 26 13:22:09 crc kubenswrapper[4844]: I0126 13:22:09.202542 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Jan 26 13:22:09 crc kubenswrapper[4844]: I0126 13:22:09.204134 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 26 13:22:09 crc kubenswrapper[4844]: I0126 13:22:09.296912 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjvww\" (UniqueName: \"kubernetes.io/projected/7bcce5df-9655-46fe-8f82-5f226375500f-kube-api-access-hjvww\") pod \"nova-cell1-novncproxy-0\" (UID: \"7bcce5df-9655-46fe-8f82-5f226375500f\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 13:22:09 crc kubenswrapper[4844]: I0126 13:22:09.297488 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/7bcce5df-9655-46fe-8f82-5f226375500f-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"7bcce5df-9655-46fe-8f82-5f226375500f\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 13:22:09 crc kubenswrapper[4844]: I0126 13:22:09.297565 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bcce5df-9655-46fe-8f82-5f226375500f-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"7bcce5df-9655-46fe-8f82-5f226375500f\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 13:22:09 crc kubenswrapper[4844]: I0126 13:22:09.297708 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bcce5df-9655-46fe-8f82-5f226375500f-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"7bcce5df-9655-46fe-8f82-5f226375500f\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 13:22:09 crc kubenswrapper[4844]: I0126 13:22:09.297746 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/7bcce5df-9655-46fe-8f82-5f226375500f-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"7bcce5df-9655-46fe-8f82-5f226375500f\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 13:22:09 crc kubenswrapper[4844]: I0126 13:22:09.336844 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5b81065-1990-4734-a78a-3172d68df686" path="/var/lib/kubelet/pods/e5b81065-1990-4734-a78a-3172d68df686/volumes" Jan 26 13:22:09 crc kubenswrapper[4844]: I0126 13:22:09.401915 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjvww\" (UniqueName: \"kubernetes.io/projected/7bcce5df-9655-46fe-8f82-5f226375500f-kube-api-access-hjvww\") pod \"nova-cell1-novncproxy-0\" (UID: \"7bcce5df-9655-46fe-8f82-5f226375500f\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 13:22:09 crc kubenswrapper[4844]: I0126 13:22:09.402088 4844 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/7bcce5df-9655-46fe-8f82-5f226375500f-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"7bcce5df-9655-46fe-8f82-5f226375500f\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 13:22:09 crc kubenswrapper[4844]: I0126 13:22:09.402138 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bcce5df-9655-46fe-8f82-5f226375500f-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"7bcce5df-9655-46fe-8f82-5f226375500f\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 13:22:09 crc kubenswrapper[4844]: I0126 13:22:09.402182 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bcce5df-9655-46fe-8f82-5f226375500f-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"7bcce5df-9655-46fe-8f82-5f226375500f\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 13:22:09 crc kubenswrapper[4844]: I0126 13:22:09.402205 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/7bcce5df-9655-46fe-8f82-5f226375500f-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"7bcce5df-9655-46fe-8f82-5f226375500f\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 13:22:09 crc kubenswrapper[4844]: I0126 13:22:09.406692 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bcce5df-9655-46fe-8f82-5f226375500f-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"7bcce5df-9655-46fe-8f82-5f226375500f\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 13:22:09 crc kubenswrapper[4844]: I0126 13:22:09.407243 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/7bcce5df-9655-46fe-8f82-5f226375500f-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"7bcce5df-9655-46fe-8f82-5f226375500f\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 13:22:09 crc kubenswrapper[4844]: I0126 13:22:09.407541 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/7bcce5df-9655-46fe-8f82-5f226375500f-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"7bcce5df-9655-46fe-8f82-5f226375500f\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 13:22:09 crc kubenswrapper[4844]: I0126 13:22:09.412203 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bcce5df-9655-46fe-8f82-5f226375500f-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"7bcce5df-9655-46fe-8f82-5f226375500f\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 13:22:09 crc kubenswrapper[4844]: I0126 13:22:09.425207 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjvww\" (UniqueName: \"kubernetes.io/projected/7bcce5df-9655-46fe-8f82-5f226375500f-kube-api-access-hjvww\") pod \"nova-cell1-novncproxy-0\" (UID: \"7bcce5df-9655-46fe-8f82-5f226375500f\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 13:22:09 crc kubenswrapper[4844]: I0126 13:22:09.558727 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 26 13:22:09 crc kubenswrapper[4844]: I0126 13:22:09.836060 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"34c59d00-6a2b-4918-816d-fe693291ff5a","Type":"ContainerStarted","Data":"b7a181b8769de25e3826ae55d4b59bce4890d5c355c9e00c58a39107063f1488"} Jan 26 13:22:09 crc kubenswrapper[4844]: I0126 13:22:09.836405 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 26 13:22:09 crc kubenswrapper[4844]: I0126 13:22:09.842973 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 26 13:22:10 crc kubenswrapper[4844]: I0126 13:22:10.003905 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-79cf597b77-57qsp"] Jan 26 13:22:10 crc kubenswrapper[4844]: I0126 13:22:10.006070 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79cf597b77-57qsp" Jan 26 13:22:10 crc kubenswrapper[4844]: I0126 13:22:10.029801 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 26 13:22:10 crc kubenswrapper[4844]: I0126 13:22:10.061938 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79cf597b77-57qsp"] Jan 26 13:22:10 crc kubenswrapper[4844]: I0126 13:22:10.121196 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/19f78d57-6253-4a29-8813-9dd30c3a3f86-ovsdbserver-nb\") pod \"dnsmasq-dns-79cf597b77-57qsp\" (UID: \"19f78d57-6253-4a29-8813-9dd30c3a3f86\") " pod="openstack/dnsmasq-dns-79cf597b77-57qsp" Jan 26 13:22:10 crc kubenswrapper[4844]: I0126 13:22:10.121255 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/19f78d57-6253-4a29-8813-9dd30c3a3f86-dns-swift-storage-0\") pod \"dnsmasq-dns-79cf597b77-57qsp\" (UID: \"19f78d57-6253-4a29-8813-9dd30c3a3f86\") " pod="openstack/dnsmasq-dns-79cf597b77-57qsp" Jan 26 13:22:10 crc kubenswrapper[4844]: I0126 13:22:10.121367 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19f78d57-6253-4a29-8813-9dd30c3a3f86-config\") pod \"dnsmasq-dns-79cf597b77-57qsp\" (UID: \"19f78d57-6253-4a29-8813-9dd30c3a3f86\") " pod="openstack/dnsmasq-dns-79cf597b77-57qsp" Jan 26 13:22:10 crc kubenswrapper[4844]: I0126 13:22:10.121549 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/19f78d57-6253-4a29-8813-9dd30c3a3f86-ovsdbserver-sb\") pod \"dnsmasq-dns-79cf597b77-57qsp\" (UID: \"19f78d57-6253-4a29-8813-9dd30c3a3f86\") " pod="openstack/dnsmasq-dns-79cf597b77-57qsp" Jan 26 13:22:10 crc kubenswrapper[4844]: I0126 13:22:10.121573 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2btq\" (UniqueName: \"kubernetes.io/projected/19f78d57-6253-4a29-8813-9dd30c3a3f86-kube-api-access-h2btq\") pod \"dnsmasq-dns-79cf597b77-57qsp\" (UID: \"19f78d57-6253-4a29-8813-9dd30c3a3f86\") " pod="openstack/dnsmasq-dns-79cf597b77-57qsp" Jan 26 13:22:10 crc kubenswrapper[4844]: I0126 13:22:10.121755 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/19f78d57-6253-4a29-8813-9dd30c3a3f86-dns-svc\") pod \"dnsmasq-dns-79cf597b77-57qsp\" (UID: \"19f78d57-6253-4a29-8813-9dd30c3a3f86\") " pod="openstack/dnsmasq-dns-79cf597b77-57qsp" Jan 26 13:22:10 crc kubenswrapper[4844]: I0126 13:22:10.224781 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19f78d57-6253-4a29-8813-9dd30c3a3f86-config\") pod \"dnsmasq-dns-79cf597b77-57qsp\" (UID: \"19f78d57-6253-4a29-8813-9dd30c3a3f86\") " pod="openstack/dnsmasq-dns-79cf597b77-57qsp" Jan 26 13:22:10 crc kubenswrapper[4844]: I0126 13:22:10.225117 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/19f78d57-6253-4a29-8813-9dd30c3a3f86-ovsdbserver-sb\") pod \"dnsmasq-dns-79cf597b77-57qsp\" (UID: \"19f78d57-6253-4a29-8813-9dd30c3a3f86\") " pod="openstack/dnsmasq-dns-79cf597b77-57qsp" Jan 26 13:22:10 crc kubenswrapper[4844]: I0126 13:22:10.225223 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2btq\" (UniqueName: \"kubernetes.io/projected/19f78d57-6253-4a29-8813-9dd30c3a3f86-kube-api-access-h2btq\") pod \"dnsmasq-dns-79cf597b77-57qsp\" (UID: \"19f78d57-6253-4a29-8813-9dd30c3a3f86\") " pod="openstack/dnsmasq-dns-79cf597b77-57qsp" Jan 26 13:22:10 crc kubenswrapper[4844]: I0126 13:22:10.225343 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/19f78d57-6253-4a29-8813-9dd30c3a3f86-dns-svc\") pod \"dnsmasq-dns-79cf597b77-57qsp\" (UID: \"19f78d57-6253-4a29-8813-9dd30c3a3f86\") " pod="openstack/dnsmasq-dns-79cf597b77-57qsp" Jan 26 13:22:10 crc kubenswrapper[4844]: I0126 13:22:10.225509 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/19f78d57-6253-4a29-8813-9dd30c3a3f86-ovsdbserver-nb\") pod \"dnsmasq-dns-79cf597b77-57qsp\" (UID: \"19f78d57-6253-4a29-8813-9dd30c3a3f86\") " pod="openstack/dnsmasq-dns-79cf597b77-57qsp" Jan 26 13:22:10 crc kubenswrapper[4844]: I0126 13:22:10.225648 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/19f78d57-6253-4a29-8813-9dd30c3a3f86-dns-swift-storage-0\") pod \"dnsmasq-dns-79cf597b77-57qsp\" (UID: \"19f78d57-6253-4a29-8813-9dd30c3a3f86\") " pod="openstack/dnsmasq-dns-79cf597b77-57qsp" Jan 26 13:22:10 crc kubenswrapper[4844]: I0126 13:22:10.226021 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19f78d57-6253-4a29-8813-9dd30c3a3f86-config\") pod \"dnsmasq-dns-79cf597b77-57qsp\" (UID: \"19f78d57-6253-4a29-8813-9dd30c3a3f86\") " pod="openstack/dnsmasq-dns-79cf597b77-57qsp" Jan 26 13:22:10 crc kubenswrapper[4844]: I0126 13:22:10.226495 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/19f78d57-6253-4a29-8813-9dd30c3a3f86-dns-swift-storage-0\") pod \"dnsmasq-dns-79cf597b77-57qsp\" (UID: \"19f78d57-6253-4a29-8813-9dd30c3a3f86\") " pod="openstack/dnsmasq-dns-79cf597b77-57qsp" Jan 26 13:22:10 crc kubenswrapper[4844]: I0126 13:22:10.227062 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/19f78d57-6253-4a29-8813-9dd30c3a3f86-ovsdbserver-sb\") pod \"dnsmasq-dns-79cf597b77-57qsp\" (UID: \"19f78d57-6253-4a29-8813-9dd30c3a3f86\") " pod="openstack/dnsmasq-dns-79cf597b77-57qsp" Jan 26 13:22:10 crc kubenswrapper[4844]: I0126 13:22:10.227186 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/19f78d57-6253-4a29-8813-9dd30c3a3f86-dns-svc\") pod \"dnsmasq-dns-79cf597b77-57qsp\" (UID: \"19f78d57-6253-4a29-8813-9dd30c3a3f86\") " pod="openstack/dnsmasq-dns-79cf597b77-57qsp" Jan 26 13:22:10 crc kubenswrapper[4844]: I0126 13:22:10.227758 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/19f78d57-6253-4a29-8813-9dd30c3a3f86-ovsdbserver-nb\") pod \"dnsmasq-dns-79cf597b77-57qsp\" (UID: \"19f78d57-6253-4a29-8813-9dd30c3a3f86\") " pod="openstack/dnsmasq-dns-79cf597b77-57qsp" Jan 26 13:22:10 crc kubenswrapper[4844]: I0126 13:22:10.245075 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2btq\" (UniqueName: \"kubernetes.io/projected/19f78d57-6253-4a29-8813-9dd30c3a3f86-kube-api-access-h2btq\") pod \"dnsmasq-dns-79cf597b77-57qsp\" (UID: \"19f78d57-6253-4a29-8813-9dd30c3a3f86\") " pod="openstack/dnsmasq-dns-79cf597b77-57qsp" Jan 26 13:22:10 crc kubenswrapper[4844]: I0126 13:22:10.516500 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79cf597b77-57qsp" Jan 26 13:22:10 crc kubenswrapper[4844]: I0126 13:22:10.863293 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"7bcce5df-9655-46fe-8f82-5f226375500f","Type":"ContainerStarted","Data":"20ddbd6ae41a7add9d2c536796eaab5f0c1cf9530f65286b5c19469f9401cc9a"} Jan 26 13:22:10 crc kubenswrapper[4844]: I0126 13:22:10.863590 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"7bcce5df-9655-46fe-8f82-5f226375500f","Type":"ContainerStarted","Data":"92eefe98cfb0c72848bbb4c21307240cb2804c11fb2bb3db85026b6f3b6bd6aa"} Jan 26 13:22:10 crc kubenswrapper[4844]: I0126 13:22:10.915836 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=1.9158127409999999 podStartE2EDuration="1.915812741s" podCreationTimestamp="2026-01-26 13:22:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 13:22:10.882136807 +0000 UTC m=+2307.815504429" watchObservedRunningTime="2026-01-26 13:22:10.915812741 +0000 UTC m=+2307.849180353" Jan 26 13:22:11 crc kubenswrapper[4844]: I0126 13:22:11.129852 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79cf597b77-57qsp"] Jan 26 13:22:11 crc kubenswrapper[4844]: I0126 13:22:11.874308 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"34c59d00-6a2b-4918-816d-fe693291ff5a","Type":"ContainerStarted","Data":"582af64c78b2334dc533a1eb53b000832dffdeb4290b11dd3fd311a4edba5aaf"} Jan 26 13:22:11 crc kubenswrapper[4844]: I0126 13:22:11.875767 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 26 13:22:11 crc kubenswrapper[4844]: I0126 13:22:11.877642 4844 generic.go:334] "Generic (PLEG): container finished" podID="19f78d57-6253-4a29-8813-9dd30c3a3f86" 
containerID="b1da75ac10c2c9b81a86b96b80aab62885a94c2dbea2251c84f0907b8747f21b" exitCode=0 Jan 26 13:22:11 crc kubenswrapper[4844]: I0126 13:22:11.877730 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79cf597b77-57qsp" event={"ID":"19f78d57-6253-4a29-8813-9dd30c3a3f86","Type":"ContainerDied","Data":"b1da75ac10c2c9b81a86b96b80aab62885a94c2dbea2251c84f0907b8747f21b"} Jan 26 13:22:11 crc kubenswrapper[4844]: I0126 13:22:11.877754 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79cf597b77-57qsp" event={"ID":"19f78d57-6253-4a29-8813-9dd30c3a3f86","Type":"ContainerStarted","Data":"c7fa3a55a69b80862ab463a9bc2367f217b0bc5c52cf0df6423f6f2144b04365"} Jan 26 13:22:11 crc kubenswrapper[4844]: I0126 13:22:11.914849 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.192236213 podStartE2EDuration="4.914826223s" podCreationTimestamp="2026-01-26 13:22:07 +0000 UTC" firstStartedPulling="2026-01-26 13:22:08.081708506 +0000 UTC m=+2305.015076118" lastFinishedPulling="2026-01-26 13:22:10.804298516 +0000 UTC m=+2307.737666128" observedRunningTime="2026-01-26 13:22:11.902158037 +0000 UTC m=+2308.835525659" watchObservedRunningTime="2026-01-26 13:22:11.914826223 +0000 UTC m=+2308.848193835" Jan 26 13:22:12 crc kubenswrapper[4844]: I0126 13:22:12.314395 4844 scope.go:117] "RemoveContainer" containerID="003e8a783231d0610c61a79899be5429104525b8053b82bc16d011d8b1eff87d" Jan 26 13:22:12 crc kubenswrapper[4844]: E0126 13:22:12.315039 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 13:22:12 crc kubenswrapper[4844]: I0126 13:22:12.912862 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79cf597b77-57qsp" event={"ID":"19f78d57-6253-4a29-8813-9dd30c3a3f86","Type":"ContainerStarted","Data":"afd59bf86e1f85a288693413031160fed09539a3d118ca1e9e2aea9af5a44c3e"} Jan 26 13:22:12 crc kubenswrapper[4844]: I0126 13:22:12.913234 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-79cf597b77-57qsp" Jan 26 13:22:12 crc kubenswrapper[4844]: I0126 13:22:12.915200 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 26 13:22:12 crc kubenswrapper[4844]: I0126 13:22:12.915416 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="75ed6ac2-9ee4-44a1-bef1-0aee2eda244a" containerName="nova-api-log" containerID="cri-o://5734668b21517868750643618ba5f82624f66a29accbe6460a39ce71e8d82fd3" gracePeriod=30 Jan 26 13:22:12 crc kubenswrapper[4844]: I0126 13:22:12.915559 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="75ed6ac2-9ee4-44a1-bef1-0aee2eda244a" containerName="nova-api-api" containerID="cri-o://c7e82dc15a0d63844059c67efc32d078b7e8f4863e63c8006e6e67513233cc19" gracePeriod=30 Jan 26 13:22:12 crc kubenswrapper[4844]: I0126 13:22:12.957958 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-79cf597b77-57qsp" podStartSLOduration=3.957937841 
podStartE2EDuration="3.957937841s" podCreationTimestamp="2026-01-26 13:22:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 13:22:12.944471325 +0000 UTC m=+2309.877838947" watchObservedRunningTime="2026-01-26 13:22:12.957937841 +0000 UTC m=+2309.891305453" Jan 26 13:22:13 crc kubenswrapper[4844]: I0126 13:22:13.576171 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 13:22:13 crc kubenswrapper[4844]: I0126 13:22:13.923866 4844 generic.go:334] "Generic (PLEG): container finished" podID="75ed6ac2-9ee4-44a1-bef1-0aee2eda244a" containerID="c7e82dc15a0d63844059c67efc32d078b7e8f4863e63c8006e6e67513233cc19" exitCode=0 Jan 26 13:22:13 crc kubenswrapper[4844]: I0126 13:22:13.923904 4844 generic.go:334] "Generic (PLEG): container finished" podID="75ed6ac2-9ee4-44a1-bef1-0aee2eda244a" containerID="5734668b21517868750643618ba5f82624f66a29accbe6460a39ce71e8d82fd3" exitCode=143 Jan 26 13:22:13 crc kubenswrapper[4844]: I0126 13:22:13.923947 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"75ed6ac2-9ee4-44a1-bef1-0aee2eda244a","Type":"ContainerDied","Data":"c7e82dc15a0d63844059c67efc32d078b7e8f4863e63c8006e6e67513233cc19"} Jan 26 13:22:13 crc kubenswrapper[4844]: I0126 13:22:13.923995 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"75ed6ac2-9ee4-44a1-bef1-0aee2eda244a","Type":"ContainerDied","Data":"5734668b21517868750643618ba5f82624f66a29accbe6460a39ce71e8d82fd3"} Jan 26 13:22:14 crc kubenswrapper[4844]: I0126 13:22:14.559450 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 26 13:22:14 crc kubenswrapper[4844]: I0126 13:22:14.904251 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 26 13:22:14 crc kubenswrapper[4844]: I0126 13:22:14.934516 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"75ed6ac2-9ee4-44a1-bef1-0aee2eda244a","Type":"ContainerDied","Data":"fbd806fd859d49e3185a5efb4af4defd2d5a3c5524028df6ad0e4102694dea54"} Jan 26 13:22:14 crc kubenswrapper[4844]: I0126 13:22:14.934572 4844 scope.go:117] "RemoveContainer" containerID="c7e82dc15a0d63844059c67efc32d078b7e8f4863e63c8006e6e67513233cc19" Jan 26 13:22:14 crc kubenswrapper[4844]: I0126 13:22:14.934627 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 26 13:22:14 crc kubenswrapper[4844]: I0126 13:22:14.934735 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="34c59d00-6a2b-4918-816d-fe693291ff5a" containerName="ceilometer-central-agent" containerID="cri-o://a70f499be835bd8e0188d80019c9acfd8b493fc9a7f5253dfa0e06f82366ba82" gracePeriod=30 Jan 26 13:22:14 crc kubenswrapper[4844]: I0126 13:22:14.934776 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="34c59d00-6a2b-4918-816d-fe693291ff5a" containerName="sg-core" containerID="cri-o://b7a181b8769de25e3826ae55d4b59bce4890d5c355c9e00c58a39107063f1488" gracePeriod=30 Jan 26 13:22:14 crc kubenswrapper[4844]: I0126 13:22:14.934819 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="34c59d00-6a2b-4918-816d-fe693291ff5a" containerName="ceilometer-notification-agent" containerID="cri-o://484666d4203a791548b2a17fd63c1fcebe9ea7373262652db327922aa36ec67d" gracePeriod=30 Jan 26 13:22:14 crc kubenswrapper[4844]: I0126 13:22:14.934806 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="34c59d00-6a2b-4918-816d-fe693291ff5a" containerName="proxy-httpd" containerID="cri-o://582af64c78b2334dc533a1eb53b000832dffdeb4290b11dd3fd311a4edba5aaf" gracePeriod=30 Jan 26 13:22:14 crc kubenswrapper[4844]: I0126 13:22:14.981935 4844 scope.go:117] "RemoveContainer" containerID="5734668b21517868750643618ba5f82624f66a29accbe6460a39ce71e8d82fd3" Jan 26 13:22:15 crc kubenswrapper[4844]: I0126 13:22:15.037267 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75ed6ac2-9ee4-44a1-bef1-0aee2eda244a-combined-ca-bundle\") pod \"75ed6ac2-9ee4-44a1-bef1-0aee2eda244a\" (UID: \"75ed6ac2-9ee4-44a1-bef1-0aee2eda244a\") " Jan 26 13:22:15 crc kubenswrapper[4844]: I0126 13:22:15.038350 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75ed6ac2-9ee4-44a1-bef1-0aee2eda244a-config-data\") pod \"75ed6ac2-9ee4-44a1-bef1-0aee2eda244a\" (UID: \"75ed6ac2-9ee4-44a1-bef1-0aee2eda244a\") " Jan 26 13:22:15 crc kubenswrapper[4844]: I0126 13:22:15.038440 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9ft9b\" (UniqueName: \"kubernetes.io/projected/75ed6ac2-9ee4-44a1-bef1-0aee2eda244a-kube-api-access-9ft9b\") pod \"75ed6ac2-9ee4-44a1-bef1-0aee2eda244a\" (UID: \"75ed6ac2-9ee4-44a1-bef1-0aee2eda244a\") " Jan 26 13:22:15 crc kubenswrapper[4844]: I0126 13:22:15.038530 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75ed6ac2-9ee4-44a1-bef1-0aee2eda244a-logs\") pod \"75ed6ac2-9ee4-44a1-bef1-0aee2eda244a\" (UID: \"75ed6ac2-9ee4-44a1-bef1-0aee2eda244a\") " Jan 26 13:22:15 crc kubenswrapper[4844]: I0126 13:22:15.039890 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75ed6ac2-9ee4-44a1-bef1-0aee2eda244a-logs" (OuterVolumeSpecName: "logs") pod "75ed6ac2-9ee4-44a1-bef1-0aee2eda244a" (UID: "75ed6ac2-9ee4-44a1-bef1-0aee2eda244a"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:22:15 crc kubenswrapper[4844]: I0126 13:22:15.044114 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75ed6ac2-9ee4-44a1-bef1-0aee2eda244a-kube-api-access-9ft9b" (OuterVolumeSpecName: "kube-api-access-9ft9b") pod "75ed6ac2-9ee4-44a1-bef1-0aee2eda244a" (UID: "75ed6ac2-9ee4-44a1-bef1-0aee2eda244a"). InnerVolumeSpecName "kube-api-access-9ft9b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:22:15 crc kubenswrapper[4844]: I0126 13:22:15.083934 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75ed6ac2-9ee4-44a1-bef1-0aee2eda244a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "75ed6ac2-9ee4-44a1-bef1-0aee2eda244a" (UID: "75ed6ac2-9ee4-44a1-bef1-0aee2eda244a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:22:15 crc kubenswrapper[4844]: I0126 13:22:15.104035 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75ed6ac2-9ee4-44a1-bef1-0aee2eda244a-config-data" (OuterVolumeSpecName: "config-data") pod "75ed6ac2-9ee4-44a1-bef1-0aee2eda244a" (UID: "75ed6ac2-9ee4-44a1-bef1-0aee2eda244a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:22:15 crc kubenswrapper[4844]: I0126 13:22:15.140757 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9ft9b\" (UniqueName: \"kubernetes.io/projected/75ed6ac2-9ee4-44a1-bef1-0aee2eda244a-kube-api-access-9ft9b\") on node \"crc\" DevicePath \"\"" Jan 26 13:22:15 crc kubenswrapper[4844]: I0126 13:22:15.140797 4844 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75ed6ac2-9ee4-44a1-bef1-0aee2eda244a-logs\") on node \"crc\" DevicePath \"\"" Jan 26 13:22:15 crc kubenswrapper[4844]: I0126 13:22:15.140810 4844 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75ed6ac2-9ee4-44a1-bef1-0aee2eda244a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 13:22:15 crc kubenswrapper[4844]: I0126 13:22:15.140822 4844 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75ed6ac2-9ee4-44a1-bef1-0aee2eda244a-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 13:22:15 crc kubenswrapper[4844]: I0126 13:22:15.275872 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 26 13:22:15 crc kubenswrapper[4844]: I0126 13:22:15.290523 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 26 13:22:15 crc kubenswrapper[4844]: I0126 13:22:15.301632 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 26 13:22:15 crc kubenswrapper[4844]: E0126 13:22:15.302098 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75ed6ac2-9ee4-44a1-bef1-0aee2eda244a" containerName="nova-api-log" Jan 26 13:22:15 crc kubenswrapper[4844]: I0126 13:22:15.302122 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="75ed6ac2-9ee4-44a1-bef1-0aee2eda244a" containerName="nova-api-log" Jan 26 13:22:15 crc kubenswrapper[4844]: E0126 13:22:15.302167 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75ed6ac2-9ee4-44a1-bef1-0aee2eda244a" containerName="nova-api-api" Jan 26 13:22:15 crc kubenswrapper[4844]: I0126 13:22:15.302176 4844 
state_mem.go:107] "Deleted CPUSet assignment" podUID="75ed6ac2-9ee4-44a1-bef1-0aee2eda244a" containerName="nova-api-api" Jan 26 13:22:15 crc kubenswrapper[4844]: I0126 13:22:15.302393 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="75ed6ac2-9ee4-44a1-bef1-0aee2eda244a" containerName="nova-api-api" Jan 26 13:22:15 crc kubenswrapper[4844]: I0126 13:22:15.302429 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="75ed6ac2-9ee4-44a1-bef1-0aee2eda244a" containerName="nova-api-log" Jan 26 13:22:15 crc kubenswrapper[4844]: I0126 13:22:15.303901 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 26 13:22:15 crc kubenswrapper[4844]: I0126 13:22:15.306970 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 26 13:22:15 crc kubenswrapper[4844]: I0126 13:22:15.307245 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 26 13:22:15 crc kubenswrapper[4844]: I0126 13:22:15.307351 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 26 13:22:15 crc kubenswrapper[4844]: I0126 13:22:15.334719 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75ed6ac2-9ee4-44a1-bef1-0aee2eda244a" path="/var/lib/kubelet/pods/75ed6ac2-9ee4-44a1-bef1-0aee2eda244a/volumes" Jan 26 13:22:15 crc kubenswrapper[4844]: I0126 13:22:15.335495 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 26 13:22:15 crc kubenswrapper[4844]: I0126 13:22:15.344832 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee748334-3a17-43d6-92e0-335a6dcfe622-public-tls-certs\") pod \"nova-api-0\" (UID: \"ee748334-3a17-43d6-92e0-335a6dcfe622\") " pod="openstack/nova-api-0" Jan 26 13:22:15 crc kubenswrapper[4844]: I0126 13:22:15.344866 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee748334-3a17-43d6-92e0-335a6dcfe622-internal-tls-certs\") pod \"nova-api-0\" (UID: \"ee748334-3a17-43d6-92e0-335a6dcfe622\") " pod="openstack/nova-api-0" Jan 26 13:22:15 crc kubenswrapper[4844]: I0126 13:22:15.344921 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee748334-3a17-43d6-92e0-335a6dcfe622-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ee748334-3a17-43d6-92e0-335a6dcfe622\") " pod="openstack/nova-api-0" Jan 26 13:22:15 crc kubenswrapper[4844]: I0126 13:22:15.344957 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee748334-3a17-43d6-92e0-335a6dcfe622-config-data\") pod \"nova-api-0\" (UID: \"ee748334-3a17-43d6-92e0-335a6dcfe622\") " pod="openstack/nova-api-0" Jan 26 13:22:15 crc kubenswrapper[4844]: I0126 13:22:15.344999 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gg7zr\" (UniqueName: \"kubernetes.io/projected/ee748334-3a17-43d6-92e0-335a6dcfe622-kube-api-access-gg7zr\") pod \"nova-api-0\" (UID: \"ee748334-3a17-43d6-92e0-335a6dcfe622\") " pod="openstack/nova-api-0" Jan 26 13:22:15 crc kubenswrapper[4844]: I0126 13:22:15.345022 4844 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ee748334-3a17-43d6-92e0-335a6dcfe622-logs\") pod \"nova-api-0\" (UID: \"ee748334-3a17-43d6-92e0-335a6dcfe622\") " pod="openstack/nova-api-0" Jan 26 13:22:15 crc kubenswrapper[4844]: I0126 13:22:15.449848 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee748334-3a17-43d6-92e0-335a6dcfe622-config-data\") pod \"nova-api-0\" (UID: \"ee748334-3a17-43d6-92e0-335a6dcfe622\") " pod="openstack/nova-api-0" Jan 26 13:22:15 crc kubenswrapper[4844]: I0126 13:22:15.449937 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gg7zr\" (UniqueName: \"kubernetes.io/projected/ee748334-3a17-43d6-92e0-335a6dcfe622-kube-api-access-gg7zr\") pod \"nova-api-0\" (UID: \"ee748334-3a17-43d6-92e0-335a6dcfe622\") " pod="openstack/nova-api-0" Jan 26 13:22:15 crc kubenswrapper[4844]: I0126 13:22:15.449968 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ee748334-3a17-43d6-92e0-335a6dcfe622-logs\") pod \"nova-api-0\" (UID: \"ee748334-3a17-43d6-92e0-335a6dcfe622\") " pod="openstack/nova-api-0" Jan 26 13:22:15 crc kubenswrapper[4844]: I0126 13:22:15.450068 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee748334-3a17-43d6-92e0-335a6dcfe622-public-tls-certs\") pod \"nova-api-0\" (UID: \"ee748334-3a17-43d6-92e0-335a6dcfe622\") " pod="openstack/nova-api-0" Jan 26 13:22:15 crc kubenswrapper[4844]: I0126 13:22:15.450091 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee748334-3a17-43d6-92e0-335a6dcfe622-internal-tls-certs\") pod \"nova-api-0\" (UID: \"ee748334-3a17-43d6-92e0-335a6dcfe622\") " pod="openstack/nova-api-0" Jan 26 13:22:15 crc kubenswrapper[4844]: I0126 13:22:15.450139 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee748334-3a17-43d6-92e0-335a6dcfe622-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ee748334-3a17-43d6-92e0-335a6dcfe622\") " pod="openstack/nova-api-0" Jan 26 13:22:15 crc kubenswrapper[4844]: I0126 13:22:15.451863 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ee748334-3a17-43d6-92e0-335a6dcfe622-logs\") pod \"nova-api-0\" (UID: \"ee748334-3a17-43d6-92e0-335a6dcfe622\") " pod="openstack/nova-api-0" Jan 26 13:22:15 crc kubenswrapper[4844]: I0126 13:22:15.453557 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee748334-3a17-43d6-92e0-335a6dcfe622-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ee748334-3a17-43d6-92e0-335a6dcfe622\") " pod="openstack/nova-api-0" Jan 26 13:22:15 crc kubenswrapper[4844]: I0126 13:22:15.455614 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee748334-3a17-43d6-92e0-335a6dcfe622-config-data\") pod \"nova-api-0\" (UID: \"ee748334-3a17-43d6-92e0-335a6dcfe622\") " pod="openstack/nova-api-0" Jan 26 13:22:15 crc kubenswrapper[4844]: I0126 13:22:15.456834 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/ee748334-3a17-43d6-92e0-335a6dcfe622-public-tls-certs\") pod \"nova-api-0\" (UID: \"ee748334-3a17-43d6-92e0-335a6dcfe622\") " pod="openstack/nova-api-0" Jan 26 13:22:15 crc kubenswrapper[4844]: I0126 13:22:15.457920 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee748334-3a17-43d6-92e0-335a6dcfe622-internal-tls-certs\") pod \"nova-api-0\" (UID: \"ee748334-3a17-43d6-92e0-335a6dcfe622\") " pod="openstack/nova-api-0" Jan 26 13:22:15 crc kubenswrapper[4844]: I0126 13:22:15.470380 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gg7zr\" (UniqueName: \"kubernetes.io/projected/ee748334-3a17-43d6-92e0-335a6dcfe622-kube-api-access-gg7zr\") pod \"nova-api-0\" (UID: \"ee748334-3a17-43d6-92e0-335a6dcfe622\") " pod="openstack/nova-api-0" Jan 26 13:22:15 crc kubenswrapper[4844]: I0126 13:22:15.635650 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 26 13:22:15 crc kubenswrapper[4844]: I0126 13:22:15.976608 4844 generic.go:334] "Generic (PLEG): container finished" podID="34c59d00-6a2b-4918-816d-fe693291ff5a" containerID="582af64c78b2334dc533a1eb53b000832dffdeb4290b11dd3fd311a4edba5aaf" exitCode=0 Jan 26 13:22:15 crc kubenswrapper[4844]: I0126 13:22:15.976849 4844 generic.go:334] "Generic (PLEG): container finished" podID="34c59d00-6a2b-4918-816d-fe693291ff5a" containerID="b7a181b8769de25e3826ae55d4b59bce4890d5c355c9e00c58a39107063f1488" exitCode=2 Jan 26 13:22:15 crc kubenswrapper[4844]: I0126 13:22:15.976859 4844 generic.go:334] "Generic (PLEG): container finished" podID="34c59d00-6a2b-4918-816d-fe693291ff5a" containerID="484666d4203a791548b2a17fd63c1fcebe9ea7373262652db327922aa36ec67d" exitCode=0 Jan 26 13:22:15 crc kubenswrapper[4844]: I0126 13:22:15.976795 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"34c59d00-6a2b-4918-816d-fe693291ff5a","Type":"ContainerDied","Data":"582af64c78b2334dc533a1eb53b000832dffdeb4290b11dd3fd311a4edba5aaf"} Jan 26 13:22:15 crc kubenswrapper[4844]: I0126 13:22:15.976897 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"34c59d00-6a2b-4918-816d-fe693291ff5a","Type":"ContainerDied","Data":"b7a181b8769de25e3826ae55d4b59bce4890d5c355c9e00c58a39107063f1488"} Jan 26 13:22:15 crc kubenswrapper[4844]: I0126 13:22:15.976910 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"34c59d00-6a2b-4918-816d-fe693291ff5a","Type":"ContainerDied","Data":"484666d4203a791548b2a17fd63c1fcebe9ea7373262652db327922aa36ec67d"} Jan 26 13:22:16 crc kubenswrapper[4844]: W0126 13:22:16.105923 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podee748334_3a17_43d6_92e0_335a6dcfe622.slice/crio-9379d3bdc358153aa220668e4d425b9778ce73a1584a184b5420357a7eb78d72 WatchSource:0}: Error finding container 9379d3bdc358153aa220668e4d425b9778ce73a1584a184b5420357a7eb78d72: Status 404 returned error can't find the container with id 9379d3bdc358153aa220668e4d425b9778ce73a1584a184b5420357a7eb78d72 Jan 26 13:22:16 crc kubenswrapper[4844]: I0126 13:22:16.130507 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 26 13:22:16 crc kubenswrapper[4844]: I0126 13:22:16.988650 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"ee748334-3a17-43d6-92e0-335a6dcfe622","Type":"ContainerStarted","Data":"6a4906bb9ce46374379601c27310e666259c97c1de9b206368882a3a7d8f8fd7"} Jan 26 13:22:16 crc kubenswrapper[4844]: I0126 13:22:16.988995 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ee748334-3a17-43d6-92e0-335a6dcfe622","Type":"ContainerStarted","Data":"9379d3bdc358153aa220668e4d425b9778ce73a1584a184b5420357a7eb78d72"} Jan 26 13:22:18 crc kubenswrapper[4844]: I0126 13:22:18.002294 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ee748334-3a17-43d6-92e0-335a6dcfe622","Type":"ContainerStarted","Data":"6c2e5a06dda62cfede5da7d482fb08aa84e625990dd741e10a54919ef5000e78"} Jan 26 13:22:18 crc kubenswrapper[4844]: I0126 13:22:18.029314 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.029297904 podStartE2EDuration="3.029297904s" podCreationTimestamp="2026-01-26 13:22:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 13:22:18.024411585 +0000 UTC m=+2314.957779197" watchObservedRunningTime="2026-01-26 13:22:18.029297904 +0000 UTC m=+2314.962665516" Jan 26 13:22:18 crc kubenswrapper[4844]: I0126 13:22:18.551745 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 13:22:18 crc kubenswrapper[4844]: I0126 13:22:18.605495 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/34c59d00-6a2b-4918-816d-fe693291ff5a-sg-core-conf-yaml\") pod \"34c59d00-6a2b-4918-816d-fe693291ff5a\" (UID: \"34c59d00-6a2b-4918-816d-fe693291ff5a\") " Jan 26 13:22:18 crc kubenswrapper[4844]: I0126 13:22:18.605554 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/34c59d00-6a2b-4918-816d-fe693291ff5a-ceilometer-tls-certs\") pod \"34c59d00-6a2b-4918-816d-fe693291ff5a\" (UID: \"34c59d00-6a2b-4918-816d-fe693291ff5a\") " Jan 26 13:22:18 crc kubenswrapper[4844]: I0126 13:22:18.605690 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34c59d00-6a2b-4918-816d-fe693291ff5a-config-data\") pod \"34c59d00-6a2b-4918-816d-fe693291ff5a\" (UID: \"34c59d00-6a2b-4918-816d-fe693291ff5a\") " Jan 26 13:22:18 crc kubenswrapper[4844]: I0126 13:22:18.605800 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34c59d00-6a2b-4918-816d-fe693291ff5a-scripts\") pod \"34c59d00-6a2b-4918-816d-fe693291ff5a\" (UID: \"34c59d00-6a2b-4918-816d-fe693291ff5a\") " Jan 26 13:22:18 crc kubenswrapper[4844]: I0126 13:22:18.605867 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/34c59d00-6a2b-4918-816d-fe693291ff5a-run-httpd\") pod \"34c59d00-6a2b-4918-816d-fe693291ff5a\" (UID: \"34c59d00-6a2b-4918-816d-fe693291ff5a\") " Jan 26 13:22:18 crc kubenswrapper[4844]: I0126 13:22:18.605947 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34c59d00-6a2b-4918-816d-fe693291ff5a-combined-ca-bundle\") pod \"34c59d00-6a2b-4918-816d-fe693291ff5a\" (UID: 
\"34c59d00-6a2b-4918-816d-fe693291ff5a\") " Jan 26 13:22:18 crc kubenswrapper[4844]: I0126 13:22:18.606002 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zdhrd\" (UniqueName: \"kubernetes.io/projected/34c59d00-6a2b-4918-816d-fe693291ff5a-kube-api-access-zdhrd\") pod \"34c59d00-6a2b-4918-816d-fe693291ff5a\" (UID: \"34c59d00-6a2b-4918-816d-fe693291ff5a\") " Jan 26 13:22:18 crc kubenswrapper[4844]: I0126 13:22:18.606032 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/34c59d00-6a2b-4918-816d-fe693291ff5a-log-httpd\") pod \"34c59d00-6a2b-4918-816d-fe693291ff5a\" (UID: \"34c59d00-6a2b-4918-816d-fe693291ff5a\") " Jan 26 13:22:18 crc kubenswrapper[4844]: I0126 13:22:18.607058 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34c59d00-6a2b-4918-816d-fe693291ff5a-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "34c59d00-6a2b-4918-816d-fe693291ff5a" (UID: "34c59d00-6a2b-4918-816d-fe693291ff5a"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:22:18 crc kubenswrapper[4844]: I0126 13:22:18.617412 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34c59d00-6a2b-4918-816d-fe693291ff5a-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "34c59d00-6a2b-4918-816d-fe693291ff5a" (UID: "34c59d00-6a2b-4918-816d-fe693291ff5a"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:22:18 crc kubenswrapper[4844]: I0126 13:22:18.635188 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34c59d00-6a2b-4918-816d-fe693291ff5a-kube-api-access-zdhrd" (OuterVolumeSpecName: "kube-api-access-zdhrd") pod "34c59d00-6a2b-4918-816d-fe693291ff5a" (UID: "34c59d00-6a2b-4918-816d-fe693291ff5a"). InnerVolumeSpecName "kube-api-access-zdhrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:22:18 crc kubenswrapper[4844]: I0126 13:22:18.654344 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34c59d00-6a2b-4918-816d-fe693291ff5a-scripts" (OuterVolumeSpecName: "scripts") pod "34c59d00-6a2b-4918-816d-fe693291ff5a" (UID: "34c59d00-6a2b-4918-816d-fe693291ff5a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:22:18 crc kubenswrapper[4844]: I0126 13:22:18.668070 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34c59d00-6a2b-4918-816d-fe693291ff5a-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "34c59d00-6a2b-4918-816d-fe693291ff5a" (UID: "34c59d00-6a2b-4918-816d-fe693291ff5a"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:22:18 crc kubenswrapper[4844]: I0126 13:22:18.707898 4844 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/34c59d00-6a2b-4918-816d-fe693291ff5a-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 26 13:22:18 crc kubenswrapper[4844]: I0126 13:22:18.707927 4844 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34c59d00-6a2b-4918-816d-fe693291ff5a-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 13:22:18 crc kubenswrapper[4844]: I0126 13:22:18.707936 4844 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/34c59d00-6a2b-4918-816d-fe693291ff5a-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 13:22:18 crc kubenswrapper[4844]: I0126 13:22:18.707945 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zdhrd\" (UniqueName: \"kubernetes.io/projected/34c59d00-6a2b-4918-816d-fe693291ff5a-kube-api-access-zdhrd\") on node \"crc\" DevicePath \"\"" Jan 26 13:22:18 crc kubenswrapper[4844]: I0126 13:22:18.707955 4844 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/34c59d00-6a2b-4918-816d-fe693291ff5a-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 13:22:18 crc kubenswrapper[4844]: I0126 13:22:18.708341 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34c59d00-6a2b-4918-816d-fe693291ff5a-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "34c59d00-6a2b-4918-816d-fe693291ff5a" (UID: "34c59d00-6a2b-4918-816d-fe693291ff5a"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:22:18 crc kubenswrapper[4844]: I0126 13:22:18.734042 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34c59d00-6a2b-4918-816d-fe693291ff5a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "34c59d00-6a2b-4918-816d-fe693291ff5a" (UID: "34c59d00-6a2b-4918-816d-fe693291ff5a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:22:18 crc kubenswrapper[4844]: I0126 13:22:18.747305 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34c59d00-6a2b-4918-816d-fe693291ff5a-config-data" (OuterVolumeSpecName: "config-data") pod "34c59d00-6a2b-4918-816d-fe693291ff5a" (UID: "34c59d00-6a2b-4918-816d-fe693291ff5a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:22:18 crc kubenswrapper[4844]: I0126 13:22:18.810128 4844 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34c59d00-6a2b-4918-816d-fe693291ff5a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 13:22:18 crc kubenswrapper[4844]: I0126 13:22:18.810169 4844 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/34c59d00-6a2b-4918-816d-fe693291ff5a-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 13:22:18 crc kubenswrapper[4844]: I0126 13:22:18.810182 4844 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34c59d00-6a2b-4918-816d-fe693291ff5a-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 13:22:19 crc kubenswrapper[4844]: I0126 13:22:19.017198 4844 generic.go:334] "Generic (PLEG): container finished" podID="34c59d00-6a2b-4918-816d-fe693291ff5a" containerID="a70f499be835bd8e0188d80019c9acfd8b493fc9a7f5253dfa0e06f82366ba82" exitCode=0 Jan 26 13:22:19 crc kubenswrapper[4844]: I0126 13:22:19.017641 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 13:22:19 crc kubenswrapper[4844]: I0126 13:22:19.018304 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"34c59d00-6a2b-4918-816d-fe693291ff5a","Type":"ContainerDied","Data":"a70f499be835bd8e0188d80019c9acfd8b493fc9a7f5253dfa0e06f82366ba82"} Jan 26 13:22:19 crc kubenswrapper[4844]: I0126 13:22:19.018338 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"34c59d00-6a2b-4918-816d-fe693291ff5a","Type":"ContainerDied","Data":"ec0912d68dbaae1f53e9c0c33e4606cd77d14b0197c05cabee36fff0458873fa"} Jan 26 13:22:19 crc kubenswrapper[4844]: I0126 13:22:19.018358 4844 scope.go:117] "RemoveContainer" containerID="582af64c78b2334dc533a1eb53b000832dffdeb4290b11dd3fd311a4edba5aaf" Jan 26 13:22:19 crc kubenswrapper[4844]: I0126 13:22:19.049112 4844 scope.go:117] "RemoveContainer" containerID="b7a181b8769de25e3826ae55d4b59bce4890d5c355c9e00c58a39107063f1488" Jan 26 13:22:19 crc kubenswrapper[4844]: I0126 13:22:19.077242 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 13:22:19 crc kubenswrapper[4844]: I0126 13:22:19.087099 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 26 13:22:19 crc kubenswrapper[4844]: I0126 13:22:19.108652 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 26 13:22:19 crc kubenswrapper[4844]: E0126 13:22:19.109542 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34c59d00-6a2b-4918-816d-fe693291ff5a" containerName="proxy-httpd" Jan 26 13:22:19 crc kubenswrapper[4844]: I0126 13:22:19.109561 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="34c59d00-6a2b-4918-816d-fe693291ff5a" containerName="proxy-httpd" Jan 26 13:22:19 crc kubenswrapper[4844]: E0126 13:22:19.109576 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34c59d00-6a2b-4918-816d-fe693291ff5a" containerName="ceilometer-notification-agent" Jan 26 13:22:19 crc kubenswrapper[4844]: I0126 13:22:19.109582 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="34c59d00-6a2b-4918-816d-fe693291ff5a" containerName="ceilometer-notification-agent" Jan 26 13:22:19 crc kubenswrapper[4844]: E0126 
13:22:19.109620 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34c59d00-6a2b-4918-816d-fe693291ff5a" containerName="sg-core" Jan 26 13:22:19 crc kubenswrapper[4844]: I0126 13:22:19.109629 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="34c59d00-6a2b-4918-816d-fe693291ff5a" containerName="sg-core" Jan 26 13:22:19 crc kubenswrapper[4844]: E0126 13:22:19.109650 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34c59d00-6a2b-4918-816d-fe693291ff5a" containerName="ceilometer-central-agent" Jan 26 13:22:19 crc kubenswrapper[4844]: I0126 13:22:19.109658 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="34c59d00-6a2b-4918-816d-fe693291ff5a" containerName="ceilometer-central-agent" Jan 26 13:22:19 crc kubenswrapper[4844]: I0126 13:22:19.110299 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="34c59d00-6a2b-4918-816d-fe693291ff5a" containerName="ceilometer-central-agent" Jan 26 13:22:19 crc kubenswrapper[4844]: I0126 13:22:19.110324 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="34c59d00-6a2b-4918-816d-fe693291ff5a" containerName="ceilometer-notification-agent" Jan 26 13:22:19 crc kubenswrapper[4844]: I0126 13:22:19.110469 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="34c59d00-6a2b-4918-816d-fe693291ff5a" containerName="proxy-httpd" Jan 26 13:22:19 crc kubenswrapper[4844]: I0126 13:22:19.110478 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="34c59d00-6a2b-4918-816d-fe693291ff5a" containerName="sg-core" Jan 26 13:22:19 crc kubenswrapper[4844]: I0126 13:22:19.110518 4844 scope.go:117] "RemoveContainer" containerID="484666d4203a791548b2a17fd63c1fcebe9ea7373262652db327922aa36ec67d" Jan 26 13:22:19 crc kubenswrapper[4844]: I0126 13:22:19.112094 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 13:22:19 crc kubenswrapper[4844]: I0126 13:22:19.114578 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 26 13:22:19 crc kubenswrapper[4844]: I0126 13:22:19.114864 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 26 13:22:19 crc kubenswrapper[4844]: I0126 13:22:19.114983 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 26 13:22:19 crc kubenswrapper[4844]: I0126 13:22:19.150093 4844 scope.go:117] "RemoveContainer" containerID="a70f499be835bd8e0188d80019c9acfd8b493fc9a7f5253dfa0e06f82366ba82" Jan 26 13:22:19 crc kubenswrapper[4844]: I0126 13:22:19.172508 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 13:22:19 crc kubenswrapper[4844]: I0126 13:22:19.207064 4844 scope.go:117] "RemoveContainer" containerID="582af64c78b2334dc533a1eb53b000832dffdeb4290b11dd3fd311a4edba5aaf" Jan 26 13:22:19 crc kubenswrapper[4844]: E0126 13:22:19.207489 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"582af64c78b2334dc533a1eb53b000832dffdeb4290b11dd3fd311a4edba5aaf\": container with ID starting with 582af64c78b2334dc533a1eb53b000832dffdeb4290b11dd3fd311a4edba5aaf not found: ID does not exist" containerID="582af64c78b2334dc533a1eb53b000832dffdeb4290b11dd3fd311a4edba5aaf" Jan 26 13:22:19 crc kubenswrapper[4844]: I0126 13:22:19.207534 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"582af64c78b2334dc533a1eb53b000832dffdeb4290b11dd3fd311a4edba5aaf"} err="failed to get container status \"582af64c78b2334dc533a1eb53b000832dffdeb4290b11dd3fd311a4edba5aaf\": rpc error: code = NotFound desc = could not find container \"582af64c78b2334dc533a1eb53b000832dffdeb4290b11dd3fd311a4edba5aaf\": container with ID starting with 582af64c78b2334dc533a1eb53b000832dffdeb4290b11dd3fd311a4edba5aaf not found: ID does not exist" Jan 26 13:22:19 crc kubenswrapper[4844]: I0126 13:22:19.207564 4844 scope.go:117] "RemoveContainer" containerID="b7a181b8769de25e3826ae55d4b59bce4890d5c355c9e00c58a39107063f1488" Jan 26 13:22:19 crc kubenswrapper[4844]: E0126 13:22:19.208443 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b7a181b8769de25e3826ae55d4b59bce4890d5c355c9e00c58a39107063f1488\": container with ID starting with b7a181b8769de25e3826ae55d4b59bce4890d5c355c9e00c58a39107063f1488 not found: ID does not exist" containerID="b7a181b8769de25e3826ae55d4b59bce4890d5c355c9e00c58a39107063f1488" Jan 26 13:22:19 crc kubenswrapper[4844]: I0126 13:22:19.208481 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7a181b8769de25e3826ae55d4b59bce4890d5c355c9e00c58a39107063f1488"} err="failed to get container status \"b7a181b8769de25e3826ae55d4b59bce4890d5c355c9e00c58a39107063f1488\": rpc error: code = NotFound desc = could not find container \"b7a181b8769de25e3826ae55d4b59bce4890d5c355c9e00c58a39107063f1488\": container with ID starting with b7a181b8769de25e3826ae55d4b59bce4890d5c355c9e00c58a39107063f1488 not found: ID does not exist" Jan 26 13:22:19 crc kubenswrapper[4844]: I0126 13:22:19.208504 4844 scope.go:117] "RemoveContainer" containerID="484666d4203a791548b2a17fd63c1fcebe9ea7373262652db327922aa36ec67d" Jan 26 13:22:19 
crc kubenswrapper[4844]: E0126 13:22:19.209213 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"484666d4203a791548b2a17fd63c1fcebe9ea7373262652db327922aa36ec67d\": container with ID starting with 484666d4203a791548b2a17fd63c1fcebe9ea7373262652db327922aa36ec67d not found: ID does not exist" containerID="484666d4203a791548b2a17fd63c1fcebe9ea7373262652db327922aa36ec67d" Jan 26 13:22:19 crc kubenswrapper[4844]: I0126 13:22:19.209262 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"484666d4203a791548b2a17fd63c1fcebe9ea7373262652db327922aa36ec67d"} err="failed to get container status \"484666d4203a791548b2a17fd63c1fcebe9ea7373262652db327922aa36ec67d\": rpc error: code = NotFound desc = could not find container \"484666d4203a791548b2a17fd63c1fcebe9ea7373262652db327922aa36ec67d\": container with ID starting with 484666d4203a791548b2a17fd63c1fcebe9ea7373262652db327922aa36ec67d not found: ID does not exist" Jan 26 13:22:19 crc kubenswrapper[4844]: I0126 13:22:19.209289 4844 scope.go:117] "RemoveContainer" containerID="a70f499be835bd8e0188d80019c9acfd8b493fc9a7f5253dfa0e06f82366ba82" Jan 26 13:22:19 crc kubenswrapper[4844]: E0126 13:22:19.209642 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a70f499be835bd8e0188d80019c9acfd8b493fc9a7f5253dfa0e06f82366ba82\": container with ID starting with a70f499be835bd8e0188d80019c9acfd8b493fc9a7f5253dfa0e06f82366ba82 not found: ID does not exist" containerID="a70f499be835bd8e0188d80019c9acfd8b493fc9a7f5253dfa0e06f82366ba82" Jan 26 13:22:19 crc kubenswrapper[4844]: I0126 13:22:19.209662 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a70f499be835bd8e0188d80019c9acfd8b493fc9a7f5253dfa0e06f82366ba82"} err="failed to get container status \"a70f499be835bd8e0188d80019c9acfd8b493fc9a7f5253dfa0e06f82366ba82\": rpc error: code = NotFound desc = could not find container \"a70f499be835bd8e0188d80019c9acfd8b493fc9a7f5253dfa0e06f82366ba82\": container with ID starting with a70f499be835bd8e0188d80019c9acfd8b493fc9a7f5253dfa0e06f82366ba82 not found: ID does not exist" Jan 26 13:22:19 crc kubenswrapper[4844]: I0126 13:22:19.218165 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9j69q\" (UniqueName: \"kubernetes.io/projected/fb03b4d3-5582-4758-a585-5f8e82a306da-kube-api-access-9j69q\") pod \"ceilometer-0\" (UID: \"fb03b4d3-5582-4758-a585-5f8e82a306da\") " pod="openstack/ceilometer-0" Jan 26 13:22:19 crc kubenswrapper[4844]: I0126 13:22:19.218205 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb03b4d3-5582-4758-a585-5f8e82a306da-scripts\") pod \"ceilometer-0\" (UID: \"fb03b4d3-5582-4758-a585-5f8e82a306da\") " pod="openstack/ceilometer-0" Jan 26 13:22:19 crc kubenswrapper[4844]: I0126 13:22:19.218257 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fb03b4d3-5582-4758-a585-5f8e82a306da-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fb03b4d3-5582-4758-a585-5f8e82a306da\") " pod="openstack/ceilometer-0" Jan 26 13:22:19 crc kubenswrapper[4844]: I0126 13:22:19.218382 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb03b4d3-5582-4758-a585-5f8e82a306da-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"fb03b4d3-5582-4758-a585-5f8e82a306da\") " pod="openstack/ceilometer-0" Jan 26 13:22:19 crc kubenswrapper[4844]: I0126 13:22:19.218578 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb03b4d3-5582-4758-a585-5f8e82a306da-config-data\") pod \"ceilometer-0\" (UID: \"fb03b4d3-5582-4758-a585-5f8e82a306da\") " pod="openstack/ceilometer-0" Jan 26 13:22:19 crc kubenswrapper[4844]: I0126 13:22:19.218637 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fb03b4d3-5582-4758-a585-5f8e82a306da-run-httpd\") pod \"ceilometer-0\" (UID: \"fb03b4d3-5582-4758-a585-5f8e82a306da\") " pod="openstack/ceilometer-0" Jan 26 13:22:19 crc kubenswrapper[4844]: I0126 13:22:19.218743 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb03b4d3-5582-4758-a585-5f8e82a306da-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fb03b4d3-5582-4758-a585-5f8e82a306da\") " pod="openstack/ceilometer-0" Jan 26 13:22:19 crc kubenswrapper[4844]: I0126 13:22:19.218900 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fb03b4d3-5582-4758-a585-5f8e82a306da-log-httpd\") pod \"ceilometer-0\" (UID: \"fb03b4d3-5582-4758-a585-5f8e82a306da\") " pod="openstack/ceilometer-0" Jan 26 13:22:19 crc kubenswrapper[4844]: I0126 13:22:19.320188 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fb03b4d3-5582-4758-a585-5f8e82a306da-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fb03b4d3-5582-4758-a585-5f8e82a306da\") " pod="openstack/ceilometer-0" Jan 26 13:22:19 crc kubenswrapper[4844]: I0126 13:22:19.320969 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb03b4d3-5582-4758-a585-5f8e82a306da-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"fb03b4d3-5582-4758-a585-5f8e82a306da\") " pod="openstack/ceilometer-0" Jan 26 13:22:19 crc kubenswrapper[4844]: I0126 13:22:19.321041 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb03b4d3-5582-4758-a585-5f8e82a306da-config-data\") pod \"ceilometer-0\" (UID: \"fb03b4d3-5582-4758-a585-5f8e82a306da\") " pod="openstack/ceilometer-0" Jan 26 13:22:19 crc kubenswrapper[4844]: I0126 13:22:19.321073 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fb03b4d3-5582-4758-a585-5f8e82a306da-run-httpd\") pod \"ceilometer-0\" (UID: \"fb03b4d3-5582-4758-a585-5f8e82a306da\") " pod="openstack/ceilometer-0" Jan 26 13:22:19 crc kubenswrapper[4844]: I0126 13:22:19.321127 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb03b4d3-5582-4758-a585-5f8e82a306da-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fb03b4d3-5582-4758-a585-5f8e82a306da\") " pod="openstack/ceilometer-0" Jan 26 13:22:19 crc kubenswrapper[4844]: I0126 
13:22:19.321199 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fb03b4d3-5582-4758-a585-5f8e82a306da-log-httpd\") pod \"ceilometer-0\" (UID: \"fb03b4d3-5582-4758-a585-5f8e82a306da\") " pod="openstack/ceilometer-0" Jan 26 13:22:19 crc kubenswrapper[4844]: I0126 13:22:19.321250 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9j69q\" (UniqueName: \"kubernetes.io/projected/fb03b4d3-5582-4758-a585-5f8e82a306da-kube-api-access-9j69q\") pod \"ceilometer-0\" (UID: \"fb03b4d3-5582-4758-a585-5f8e82a306da\") " pod="openstack/ceilometer-0" Jan 26 13:22:19 crc kubenswrapper[4844]: I0126 13:22:19.321280 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb03b4d3-5582-4758-a585-5f8e82a306da-scripts\") pod \"ceilometer-0\" (UID: \"fb03b4d3-5582-4758-a585-5f8e82a306da\") " pod="openstack/ceilometer-0" Jan 26 13:22:19 crc kubenswrapper[4844]: I0126 13:22:19.321478 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fb03b4d3-5582-4758-a585-5f8e82a306da-run-httpd\") pod \"ceilometer-0\" (UID: \"fb03b4d3-5582-4758-a585-5f8e82a306da\") " pod="openstack/ceilometer-0" Jan 26 13:22:19 crc kubenswrapper[4844]: I0126 13:22:19.321769 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fb03b4d3-5582-4758-a585-5f8e82a306da-log-httpd\") pod \"ceilometer-0\" (UID: \"fb03b4d3-5582-4758-a585-5f8e82a306da\") " pod="openstack/ceilometer-0" Jan 26 13:22:19 crc kubenswrapper[4844]: I0126 13:22:19.326231 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb03b4d3-5582-4758-a585-5f8e82a306da-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"fb03b4d3-5582-4758-a585-5f8e82a306da\") " pod="openstack/ceilometer-0" Jan 26 13:22:19 crc kubenswrapper[4844]: I0126 13:22:19.326340 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fb03b4d3-5582-4758-a585-5f8e82a306da-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fb03b4d3-5582-4758-a585-5f8e82a306da\") " pod="openstack/ceilometer-0" Jan 26 13:22:19 crc kubenswrapper[4844]: I0126 13:22:19.326893 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb03b4d3-5582-4758-a585-5f8e82a306da-config-data\") pod \"ceilometer-0\" (UID: \"fb03b4d3-5582-4758-a585-5f8e82a306da\") " pod="openstack/ceilometer-0" Jan 26 13:22:19 crc kubenswrapper[4844]: I0126 13:22:19.327579 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb03b4d3-5582-4758-a585-5f8e82a306da-scripts\") pod \"ceilometer-0\" (UID: \"fb03b4d3-5582-4758-a585-5f8e82a306da\") " pod="openstack/ceilometer-0" Jan 26 13:22:19 crc kubenswrapper[4844]: I0126 13:22:19.327960 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb03b4d3-5582-4758-a585-5f8e82a306da-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fb03b4d3-5582-4758-a585-5f8e82a306da\") " pod="openstack/ceilometer-0" Jan 26 13:22:19 crc kubenswrapper[4844]: I0126 13:22:19.328548 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="34c59d00-6a2b-4918-816d-fe693291ff5a" path="/var/lib/kubelet/pods/34c59d00-6a2b-4918-816d-fe693291ff5a/volumes" Jan 26 13:22:19 crc kubenswrapper[4844]: I0126 13:22:19.337678 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9j69q\" (UniqueName: \"kubernetes.io/projected/fb03b4d3-5582-4758-a585-5f8e82a306da-kube-api-access-9j69q\") pod \"ceilometer-0\" (UID: \"fb03b4d3-5582-4758-a585-5f8e82a306da\") " pod="openstack/ceilometer-0" Jan 26 13:22:19 crc kubenswrapper[4844]: I0126 13:22:19.437607 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 13:22:19 crc kubenswrapper[4844]: I0126 13:22:19.562353 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 26 13:22:19 crc kubenswrapper[4844]: I0126 13:22:19.587334 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 26 13:22:19 crc kubenswrapper[4844]: I0126 13:22:19.898146 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 13:22:19 crc kubenswrapper[4844]: W0126 13:22:19.901858 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb03b4d3_5582_4758_a585_5f8e82a306da.slice/crio-d8e8dfac8b1cfffed4268c5480c43e2bf6215df606a8eb7ee1a1ec22708a0a76 WatchSource:0}: Error finding container d8e8dfac8b1cfffed4268c5480c43e2bf6215df606a8eb7ee1a1ec22708a0a76: Status 404 returned error can't find the container with id d8e8dfac8b1cfffed4268c5480c43e2bf6215df606a8eb7ee1a1ec22708a0a76 Jan 26 13:22:20 crc kubenswrapper[4844]: I0126 13:22:20.033851 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fb03b4d3-5582-4758-a585-5f8e82a306da","Type":"ContainerStarted","Data":"d8e8dfac8b1cfffed4268c5480c43e2bf6215df606a8eb7ee1a1ec22708a0a76"} Jan 26 13:22:20 crc kubenswrapper[4844]: I0126 13:22:20.069400 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 26 13:22:20 crc kubenswrapper[4844]: I0126 13:22:20.246848 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-qqmng"] Jan 26 13:22:20 crc kubenswrapper[4844]: I0126 13:22:20.248448 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-qqmng" Jan 26 13:22:20 crc kubenswrapper[4844]: I0126 13:22:20.251489 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 26 13:22:20 crc kubenswrapper[4844]: I0126 13:22:20.259257 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 26 13:22:20 crc kubenswrapper[4844]: I0126 13:22:20.276308 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-qqmng"] Jan 26 13:22:20 crc kubenswrapper[4844]: I0126 13:22:20.345617 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fthwb\" (UniqueName: \"kubernetes.io/projected/9fddb4ee-fddd-45f3-bc91-21073647af94-kube-api-access-fthwb\") pod \"nova-cell1-cell-mapping-qqmng\" (UID: \"9fddb4ee-fddd-45f3-bc91-21073647af94\") " pod="openstack/nova-cell1-cell-mapping-qqmng" Jan 26 13:22:20 crc kubenswrapper[4844]: I0126 13:22:20.345710 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9fddb4ee-fddd-45f3-bc91-21073647af94-scripts\") pod \"nova-cell1-cell-mapping-qqmng\" (UID: \"9fddb4ee-fddd-45f3-bc91-21073647af94\") " pod="openstack/nova-cell1-cell-mapping-qqmng" Jan 26 13:22:20 crc kubenswrapper[4844]: I0126 13:22:20.345757 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fddb4ee-fddd-45f3-bc91-21073647af94-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-qqmng\" (UID: \"9fddb4ee-fddd-45f3-bc91-21073647af94\") " pod="openstack/nova-cell1-cell-mapping-qqmng" Jan 26 13:22:20 crc kubenswrapper[4844]: I0126 13:22:20.345834 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fddb4ee-fddd-45f3-bc91-21073647af94-config-data\") pod \"nova-cell1-cell-mapping-qqmng\" (UID: \"9fddb4ee-fddd-45f3-bc91-21073647af94\") " pod="openstack/nova-cell1-cell-mapping-qqmng" Jan 26 13:22:20 crc kubenswrapper[4844]: I0126 13:22:20.447672 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fddb4ee-fddd-45f3-bc91-21073647af94-config-data\") pod \"nova-cell1-cell-mapping-qqmng\" (UID: \"9fddb4ee-fddd-45f3-bc91-21073647af94\") " pod="openstack/nova-cell1-cell-mapping-qqmng" Jan 26 13:22:20 crc kubenswrapper[4844]: I0126 13:22:20.448068 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fthwb\" (UniqueName: \"kubernetes.io/projected/9fddb4ee-fddd-45f3-bc91-21073647af94-kube-api-access-fthwb\") pod \"nova-cell1-cell-mapping-qqmng\" (UID: \"9fddb4ee-fddd-45f3-bc91-21073647af94\") " pod="openstack/nova-cell1-cell-mapping-qqmng" Jan 26 13:22:20 crc kubenswrapper[4844]: I0126 13:22:20.448201 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9fddb4ee-fddd-45f3-bc91-21073647af94-scripts\") pod \"nova-cell1-cell-mapping-qqmng\" (UID: \"9fddb4ee-fddd-45f3-bc91-21073647af94\") " pod="openstack/nova-cell1-cell-mapping-qqmng" Jan 26 13:22:20 crc kubenswrapper[4844]: I0126 13:22:20.448319 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/9fddb4ee-fddd-45f3-bc91-21073647af94-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-qqmng\" (UID: \"9fddb4ee-fddd-45f3-bc91-21073647af94\") " pod="openstack/nova-cell1-cell-mapping-qqmng" Jan 26 13:22:20 crc kubenswrapper[4844]: I0126 13:22:20.455050 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fddb4ee-fddd-45f3-bc91-21073647af94-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-qqmng\" (UID: \"9fddb4ee-fddd-45f3-bc91-21073647af94\") " pod="openstack/nova-cell1-cell-mapping-qqmng" Jan 26 13:22:20 crc kubenswrapper[4844]: I0126 13:22:20.455244 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fddb4ee-fddd-45f3-bc91-21073647af94-config-data\") pod \"nova-cell1-cell-mapping-qqmng\" (UID: \"9fddb4ee-fddd-45f3-bc91-21073647af94\") " pod="openstack/nova-cell1-cell-mapping-qqmng" Jan 26 13:22:20 crc kubenswrapper[4844]: I0126 13:22:20.460939 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9fddb4ee-fddd-45f3-bc91-21073647af94-scripts\") pod \"nova-cell1-cell-mapping-qqmng\" (UID: \"9fddb4ee-fddd-45f3-bc91-21073647af94\") " pod="openstack/nova-cell1-cell-mapping-qqmng" Jan 26 13:22:20 crc kubenswrapper[4844]: I0126 13:22:20.468274 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fthwb\" (UniqueName: \"kubernetes.io/projected/9fddb4ee-fddd-45f3-bc91-21073647af94-kube-api-access-fthwb\") pod \"nova-cell1-cell-mapping-qqmng\" (UID: \"9fddb4ee-fddd-45f3-bc91-21073647af94\") " pod="openstack/nova-cell1-cell-mapping-qqmng" Jan 26 13:22:20 crc kubenswrapper[4844]: I0126 13:22:20.518111 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-79cf597b77-57qsp" Jan 26 13:22:20 crc kubenswrapper[4844]: I0126 13:22:20.579527 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-qqmng" Jan 26 13:22:20 crc kubenswrapper[4844]: I0126 13:22:20.597776 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-684f48dcbc-vswkx"] Jan 26 13:22:20 crc kubenswrapper[4844]: I0126 13:22:20.597994 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-684f48dcbc-vswkx" podUID="68596a47-7ecd-431f-8b10-00479d94c556" containerName="dnsmasq-dns" containerID="cri-o://e89fe742b487f51f0f90761df3dd503c98581b86c48629f7dc8cfb9d69d5a120" gracePeriod=10 Jan 26 13:22:21 crc kubenswrapper[4844]: I0126 13:22:21.067791 4844 generic.go:334] "Generic (PLEG): container finished" podID="68596a47-7ecd-431f-8b10-00479d94c556" containerID="e89fe742b487f51f0f90761df3dd503c98581b86c48629f7dc8cfb9d69d5a120" exitCode=0 Jan 26 13:22:21 crc kubenswrapper[4844]: I0126 13:22:21.069810 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-684f48dcbc-vswkx" event={"ID":"68596a47-7ecd-431f-8b10-00479d94c556","Type":"ContainerDied","Data":"e89fe742b487f51f0f90761df3dd503c98581b86c48629f7dc8cfb9d69d5a120"} Jan 26 13:22:21 crc kubenswrapper[4844]: I0126 13:22:21.097435 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-684f48dcbc-vswkx" Jan 26 13:22:21 crc kubenswrapper[4844]: I0126 13:22:21.162354 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/68596a47-7ecd-431f-8b10-00479d94c556-dns-swift-storage-0\") pod \"68596a47-7ecd-431f-8b10-00479d94c556\" (UID: \"68596a47-7ecd-431f-8b10-00479d94c556\") " Jan 26 13:22:21 crc kubenswrapper[4844]: I0126 13:22:21.162810 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/68596a47-7ecd-431f-8b10-00479d94c556-ovsdbserver-nb\") pod \"68596a47-7ecd-431f-8b10-00479d94c556\" (UID: \"68596a47-7ecd-431f-8b10-00479d94c556\") " Jan 26 13:22:21 crc kubenswrapper[4844]: I0126 13:22:21.163347 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jgkzm\" (UniqueName: \"kubernetes.io/projected/68596a47-7ecd-431f-8b10-00479d94c556-kube-api-access-jgkzm\") pod \"68596a47-7ecd-431f-8b10-00479d94c556\" (UID: \"68596a47-7ecd-431f-8b10-00479d94c556\") " Jan 26 13:22:21 crc kubenswrapper[4844]: I0126 13:22:21.163523 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68596a47-7ecd-431f-8b10-00479d94c556-config\") pod \"68596a47-7ecd-431f-8b10-00479d94c556\" (UID: \"68596a47-7ecd-431f-8b10-00479d94c556\") " Jan 26 13:22:21 crc kubenswrapper[4844]: I0126 13:22:21.163624 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/68596a47-7ecd-431f-8b10-00479d94c556-dns-svc\") pod \"68596a47-7ecd-431f-8b10-00479d94c556\" (UID: \"68596a47-7ecd-431f-8b10-00479d94c556\") " Jan 26 13:22:21 crc kubenswrapper[4844]: I0126 13:22:21.163734 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/68596a47-7ecd-431f-8b10-00479d94c556-ovsdbserver-sb\") pod \"68596a47-7ecd-431f-8b10-00479d94c556\" (UID: \"68596a47-7ecd-431f-8b10-00479d94c556\") " Jan 26 13:22:21 crc kubenswrapper[4844]: I0126 13:22:21.173468 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68596a47-7ecd-431f-8b10-00479d94c556-kube-api-access-jgkzm" (OuterVolumeSpecName: "kube-api-access-jgkzm") pod "68596a47-7ecd-431f-8b10-00479d94c556" (UID: "68596a47-7ecd-431f-8b10-00479d94c556"). InnerVolumeSpecName "kube-api-access-jgkzm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:22:21 crc kubenswrapper[4844]: I0126 13:22:21.232671 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68596a47-7ecd-431f-8b10-00479d94c556-config" (OuterVolumeSpecName: "config") pod "68596a47-7ecd-431f-8b10-00479d94c556" (UID: "68596a47-7ecd-431f-8b10-00479d94c556"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:22:21 crc kubenswrapper[4844]: I0126 13:22:21.233797 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68596a47-7ecd-431f-8b10-00479d94c556-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "68596a47-7ecd-431f-8b10-00479d94c556" (UID: "68596a47-7ecd-431f-8b10-00479d94c556"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:22:21 crc kubenswrapper[4844]: I0126 13:22:21.267007 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68596a47-7ecd-431f-8b10-00479d94c556-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "68596a47-7ecd-431f-8b10-00479d94c556" (UID: "68596a47-7ecd-431f-8b10-00479d94c556"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:22:21 crc kubenswrapper[4844]: I0126 13:22:21.291153 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68596a47-7ecd-431f-8b10-00479d94c556-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "68596a47-7ecd-431f-8b10-00479d94c556" (UID: "68596a47-7ecd-431f-8b10-00479d94c556"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:22:21 crc kubenswrapper[4844]: I0126 13:22:21.294049 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68596a47-7ecd-431f-8b10-00479d94c556-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "68596a47-7ecd-431f-8b10-00479d94c556" (UID: "68596a47-7ecd-431f-8b10-00479d94c556"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:22:21 crc kubenswrapper[4844]: I0126 13:22:21.345368 4844 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/68596a47-7ecd-431f-8b10-00479d94c556-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 26 13:22:21 crc kubenswrapper[4844]: I0126 13:22:21.345398 4844 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/68596a47-7ecd-431f-8b10-00479d94c556-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 13:22:21 crc kubenswrapper[4844]: I0126 13:22:21.345410 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jgkzm\" (UniqueName: \"kubernetes.io/projected/68596a47-7ecd-431f-8b10-00479d94c556-kube-api-access-jgkzm\") on node \"crc\" DevicePath \"\"" Jan 26 13:22:21 crc kubenswrapper[4844]: I0126 13:22:21.345419 4844 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68596a47-7ecd-431f-8b10-00479d94c556-config\") on node \"crc\" DevicePath \"\"" Jan 26 13:22:21 crc kubenswrapper[4844]: I0126 13:22:21.345429 4844 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/68596a47-7ecd-431f-8b10-00479d94c556-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 13:22:21 crc kubenswrapper[4844]: I0126 13:22:21.345437 4844 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/68596a47-7ecd-431f-8b10-00479d94c556-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 13:22:21 crc kubenswrapper[4844]: W0126 13:22:21.393150 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9fddb4ee_fddd_45f3_bc91_21073647af94.slice/crio-ec4e6fd6919d77db463941d6e3894dbb50de5fa1b6fc3201c4c4ccac798435bd WatchSource:0}: Error finding container ec4e6fd6919d77db463941d6e3894dbb50de5fa1b6fc3201c4c4ccac798435bd: Status 404 returned error can't find the container with id ec4e6fd6919d77db463941d6e3894dbb50de5fa1b6fc3201c4c4ccac798435bd Jan 26 13:22:21 crc 
kubenswrapper[4844]: I0126 13:22:21.395630 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-qqmng"] Jan 26 13:22:22 crc kubenswrapper[4844]: I0126 13:22:22.095328 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-684f48dcbc-vswkx" event={"ID":"68596a47-7ecd-431f-8b10-00479d94c556","Type":"ContainerDied","Data":"a6fd55f9ce401591826a85d47fe23ce3964e4b53cff0c5fc83fe7c4a3ca7bb8f"} Jan 26 13:22:22 crc kubenswrapper[4844]: I0126 13:22:22.095973 4844 scope.go:117] "RemoveContainer" containerID="e89fe742b487f51f0f90761df3dd503c98581b86c48629f7dc8cfb9d69d5a120" Jan 26 13:22:22 crc kubenswrapper[4844]: I0126 13:22:22.095810 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-684f48dcbc-vswkx" Jan 26 13:22:22 crc kubenswrapper[4844]: I0126 13:22:22.109433 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-qqmng" event={"ID":"9fddb4ee-fddd-45f3-bc91-21073647af94","Type":"ContainerStarted","Data":"ad802f8ed2a654a2cd9bad0b9806289567cc77e1509066e980825a5b53f5aa16"} Jan 26 13:22:22 crc kubenswrapper[4844]: I0126 13:22:22.109471 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-qqmng" event={"ID":"9fddb4ee-fddd-45f3-bc91-21073647af94","Type":"ContainerStarted","Data":"ec4e6fd6919d77db463941d6e3894dbb50de5fa1b6fc3201c4c4ccac798435bd"} Jan 26 13:22:22 crc kubenswrapper[4844]: I0126 13:22:22.123727 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fb03b4d3-5582-4758-a585-5f8e82a306da","Type":"ContainerStarted","Data":"4f460974b238a7c8f0ffc8cb5c4e84b1f26b4b0e08caa68064b9dbebee8b681d"} Jan 26 13:22:22 crc kubenswrapper[4844]: I0126 13:22:22.123772 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fb03b4d3-5582-4758-a585-5f8e82a306da","Type":"ContainerStarted","Data":"2cbb77ea195f47ccd26430dc21a359a0c690db6e137271c0eeff196206df6c3e"} Jan 26 13:22:22 crc kubenswrapper[4844]: I0126 13:22:22.148200 4844 scope.go:117] "RemoveContainer" containerID="8989d1a8c08c45a13da524c4e8685da0dfe1021baff58f5dad14f6b102d6f6e8" Jan 26 13:22:22 crc kubenswrapper[4844]: I0126 13:22:22.159094 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-qqmng" podStartSLOduration=2.159073254 podStartE2EDuration="2.159073254s" podCreationTimestamp="2026-01-26 13:22:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 13:22:22.14526595 +0000 UTC m=+2319.078633552" watchObservedRunningTime="2026-01-26 13:22:22.159073254 +0000 UTC m=+2319.092440866" Jan 26 13:22:22 crc kubenswrapper[4844]: I0126 13:22:22.178650 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-684f48dcbc-vswkx"] Jan 26 13:22:22 crc kubenswrapper[4844]: I0126 13:22:22.182093 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-684f48dcbc-vswkx"] Jan 26 13:22:23 crc kubenswrapper[4844]: I0126 13:22:23.135042 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fb03b4d3-5582-4758-a585-5f8e82a306da","Type":"ContainerStarted","Data":"a43fb5ab56a3ed75c8dc9374ce1128afeb2cdcfe58d1ce0fa435af9c99096567"} Jan 26 13:22:23 crc kubenswrapper[4844]: I0126 13:22:23.326878 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="68596a47-7ecd-431f-8b10-00479d94c556" path="/var/lib/kubelet/pods/68596a47-7ecd-431f-8b10-00479d94c556/volumes" Jan 26 13:22:25 crc kubenswrapper[4844]: I0126 13:22:25.161449 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fb03b4d3-5582-4758-a585-5f8e82a306da","Type":"ContainerStarted","Data":"92ffcdf9e1e6a136f57b2d6e8006714d341631b142ac62d12b2c01bc55b0fab3"} Jan 26 13:22:25 crc kubenswrapper[4844]: I0126 13:22:25.163260 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 26 13:22:25 crc kubenswrapper[4844]: I0126 13:22:25.194211 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.476584609 podStartE2EDuration="6.194191341s" podCreationTimestamp="2026-01-26 13:22:19 +0000 UTC" firstStartedPulling="2026-01-26 13:22:19.904868417 +0000 UTC m=+2316.838236029" lastFinishedPulling="2026-01-26 13:22:24.622475149 +0000 UTC m=+2321.555842761" observedRunningTime="2026-01-26 13:22:25.185674305 +0000 UTC m=+2322.119041927" watchObservedRunningTime="2026-01-26 13:22:25.194191341 +0000 UTC m=+2322.127558953" Jan 26 13:22:25 crc kubenswrapper[4844]: I0126 13:22:25.313151 4844 scope.go:117] "RemoveContainer" containerID="003e8a783231d0610c61a79899be5429104525b8053b82bc16d011d8b1eff87d" Jan 26 13:22:25 crc kubenswrapper[4844]: E0126 13:22:25.313412 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 13:22:25 crc kubenswrapper[4844]: I0126 13:22:25.636136 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 26 13:22:25 crc kubenswrapper[4844]: I0126 13:22:25.636437 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 26 13:22:26 crc kubenswrapper[4844]: I0126 13:22:26.650807 4844 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ee748334-3a17-43d6-92e0-335a6dcfe622" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.222:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 26 13:22:26 crc kubenswrapper[4844]: I0126 13:22:26.650843 4844 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ee748334-3a17-43d6-92e0-335a6dcfe622" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.222:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 26 13:22:28 crc kubenswrapper[4844]: I0126 13:22:28.188814 4844 generic.go:334] "Generic (PLEG): container finished" podID="9fddb4ee-fddd-45f3-bc91-21073647af94" containerID="ad802f8ed2a654a2cd9bad0b9806289567cc77e1509066e980825a5b53f5aa16" exitCode=0 Jan 26 13:22:28 crc kubenswrapper[4844]: I0126 13:22:28.188901 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-qqmng" event={"ID":"9fddb4ee-fddd-45f3-bc91-21073647af94","Type":"ContainerDied","Data":"ad802f8ed2a654a2cd9bad0b9806289567cc77e1509066e980825a5b53f5aa16"} Jan 26 13:22:29 crc kubenswrapper[4844]: I0126 
13:22:29.603047 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-qqmng" Jan 26 13:22:29 crc kubenswrapper[4844]: I0126 13:22:29.717197 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9fddb4ee-fddd-45f3-bc91-21073647af94-scripts\") pod \"9fddb4ee-fddd-45f3-bc91-21073647af94\" (UID: \"9fddb4ee-fddd-45f3-bc91-21073647af94\") " Jan 26 13:22:29 crc kubenswrapper[4844]: I0126 13:22:29.717278 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fddb4ee-fddd-45f3-bc91-21073647af94-combined-ca-bundle\") pod \"9fddb4ee-fddd-45f3-bc91-21073647af94\" (UID: \"9fddb4ee-fddd-45f3-bc91-21073647af94\") " Jan 26 13:22:29 crc kubenswrapper[4844]: I0126 13:22:29.717524 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fthwb\" (UniqueName: \"kubernetes.io/projected/9fddb4ee-fddd-45f3-bc91-21073647af94-kube-api-access-fthwb\") pod \"9fddb4ee-fddd-45f3-bc91-21073647af94\" (UID: \"9fddb4ee-fddd-45f3-bc91-21073647af94\") " Jan 26 13:22:29 crc kubenswrapper[4844]: I0126 13:22:29.717653 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fddb4ee-fddd-45f3-bc91-21073647af94-config-data\") pod \"9fddb4ee-fddd-45f3-bc91-21073647af94\" (UID: \"9fddb4ee-fddd-45f3-bc91-21073647af94\") " Jan 26 13:22:29 crc kubenswrapper[4844]: I0126 13:22:29.723393 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9fddb4ee-fddd-45f3-bc91-21073647af94-kube-api-access-fthwb" (OuterVolumeSpecName: "kube-api-access-fthwb") pod "9fddb4ee-fddd-45f3-bc91-21073647af94" (UID: "9fddb4ee-fddd-45f3-bc91-21073647af94"). InnerVolumeSpecName "kube-api-access-fthwb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:22:29 crc kubenswrapper[4844]: I0126 13:22:29.723893 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fddb4ee-fddd-45f3-bc91-21073647af94-scripts" (OuterVolumeSpecName: "scripts") pod "9fddb4ee-fddd-45f3-bc91-21073647af94" (UID: "9fddb4ee-fddd-45f3-bc91-21073647af94"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:22:29 crc kubenswrapper[4844]: I0126 13:22:29.749377 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fddb4ee-fddd-45f3-bc91-21073647af94-config-data" (OuterVolumeSpecName: "config-data") pod "9fddb4ee-fddd-45f3-bc91-21073647af94" (UID: "9fddb4ee-fddd-45f3-bc91-21073647af94"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:22:29 crc kubenswrapper[4844]: I0126 13:22:29.749715 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fddb4ee-fddd-45f3-bc91-21073647af94-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9fddb4ee-fddd-45f3-bc91-21073647af94" (UID: "9fddb4ee-fddd-45f3-bc91-21073647af94"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:22:29 crc kubenswrapper[4844]: I0126 13:22:29.820464 4844 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fddb4ee-fddd-45f3-bc91-21073647af94-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 13:22:29 crc kubenswrapper[4844]: I0126 13:22:29.820525 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fthwb\" (UniqueName: \"kubernetes.io/projected/9fddb4ee-fddd-45f3-bc91-21073647af94-kube-api-access-fthwb\") on node \"crc\" DevicePath \"\"" Jan 26 13:22:29 crc kubenswrapper[4844]: I0126 13:22:29.820550 4844 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fddb4ee-fddd-45f3-bc91-21073647af94-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 13:22:29 crc kubenswrapper[4844]: I0126 13:22:29.820568 4844 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9fddb4ee-fddd-45f3-bc91-21073647af94-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 13:22:30 crc kubenswrapper[4844]: I0126 13:22:30.214122 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-qqmng" event={"ID":"9fddb4ee-fddd-45f3-bc91-21073647af94","Type":"ContainerDied","Data":"ec4e6fd6919d77db463941d6e3894dbb50de5fa1b6fc3201c4c4ccac798435bd"} Jan 26 13:22:30 crc kubenswrapper[4844]: I0126 13:22:30.214417 4844 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ec4e6fd6919d77db463941d6e3894dbb50de5fa1b6fc3201c4c4ccac798435bd" Jan 26 13:22:30 crc kubenswrapper[4844]: I0126 13:22:30.214747 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-qqmng" Jan 26 13:22:30 crc kubenswrapper[4844]: I0126 13:22:30.431096 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 26 13:22:30 crc kubenswrapper[4844]: I0126 13:22:30.431319 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="ee748334-3a17-43d6-92e0-335a6dcfe622" containerName="nova-api-log" containerID="cri-o://6a4906bb9ce46374379601c27310e666259c97c1de9b206368882a3a7d8f8fd7" gracePeriod=30 Jan 26 13:22:30 crc kubenswrapper[4844]: I0126 13:22:30.431404 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="ee748334-3a17-43d6-92e0-335a6dcfe622" containerName="nova-api-api" containerID="cri-o://6c2e5a06dda62cfede5da7d482fb08aa84e625990dd741e10a54919ef5000e78" gracePeriod=30 Jan 26 13:22:30 crc kubenswrapper[4844]: I0126 13:22:30.459083 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 13:22:30 crc kubenswrapper[4844]: I0126 13:22:30.459653 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="a7c1f674-6004-46ed-ad61-cbad8e9cb195" containerName="nova-scheduler-scheduler" containerID="cri-o://3758f7daf2748afde33dc84856c78e70d5af348f404e8862bd340a67fe9034cb" gracePeriod=30 Jan 26 13:22:30 crc kubenswrapper[4844]: I0126 13:22:30.514217 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 13:22:30 crc kubenswrapper[4844]: I0126 13:22:30.522852 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="3d0355b5-96ed-47bd-9d3e-25f2cbfebb67" 
containerName="nova-metadata-log" containerID="cri-o://bb906dacda948788b140e028e27afb181f6ba4bf6c363c83ef3924519bb24ea8" gracePeriod=30 Jan 26 13:22:30 crc kubenswrapper[4844]: I0126 13:22:30.522915 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="3d0355b5-96ed-47bd-9d3e-25f2cbfebb67" containerName="nova-metadata-metadata" containerID="cri-o://0b3e9cfd506d1d008fe20e0b66a3b7fe3232162f525567e8765b778453fc42f5" gracePeriod=30 Jan 26 13:22:31 crc kubenswrapper[4844]: I0126 13:22:31.225046 4844 generic.go:334] "Generic (PLEG): container finished" podID="3d0355b5-96ed-47bd-9d3e-25f2cbfebb67" containerID="bb906dacda948788b140e028e27afb181f6ba4bf6c363c83ef3924519bb24ea8" exitCode=143 Jan 26 13:22:31 crc kubenswrapper[4844]: I0126 13:22:31.225096 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3d0355b5-96ed-47bd-9d3e-25f2cbfebb67","Type":"ContainerDied","Data":"bb906dacda948788b140e028e27afb181f6ba4bf6c363c83ef3924519bb24ea8"} Jan 26 13:22:31 crc kubenswrapper[4844]: I0126 13:22:31.993220 4844 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="3d0355b5-96ed-47bd-9d3e-25f2cbfebb67" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.214:8775/\": dial tcp 10.217.0.214:8775: connect: connection refused" Jan 26 13:22:31 crc kubenswrapper[4844]: I0126 13:22:31.993345 4844 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="3d0355b5-96ed-47bd-9d3e-25f2cbfebb67" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.214:8775/\": dial tcp 10.217.0.214:8775: connect: connection refused" Jan 26 13:22:32 crc kubenswrapper[4844]: I0126 13:22:32.238003 4844 generic.go:334] "Generic (PLEG): container finished" podID="ee748334-3a17-43d6-92e0-335a6dcfe622" containerID="6c2e5a06dda62cfede5da7d482fb08aa84e625990dd741e10a54919ef5000e78" exitCode=0 Jan 26 13:22:32 crc kubenswrapper[4844]: I0126 13:22:32.238034 4844 generic.go:334] "Generic (PLEG): container finished" podID="ee748334-3a17-43d6-92e0-335a6dcfe622" containerID="6a4906bb9ce46374379601c27310e666259c97c1de9b206368882a3a7d8f8fd7" exitCode=143 Jan 26 13:22:32 crc kubenswrapper[4844]: I0126 13:22:32.238098 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ee748334-3a17-43d6-92e0-335a6dcfe622","Type":"ContainerDied","Data":"6c2e5a06dda62cfede5da7d482fb08aa84e625990dd741e10a54919ef5000e78"} Jan 26 13:22:32 crc kubenswrapper[4844]: I0126 13:22:32.238129 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ee748334-3a17-43d6-92e0-335a6dcfe622","Type":"ContainerDied","Data":"6a4906bb9ce46374379601c27310e666259c97c1de9b206368882a3a7d8f8fd7"} Jan 26 13:22:32 crc kubenswrapper[4844]: I0126 13:22:32.240868 4844 generic.go:334] "Generic (PLEG): container finished" podID="3d0355b5-96ed-47bd-9d3e-25f2cbfebb67" containerID="0b3e9cfd506d1d008fe20e0b66a3b7fe3232162f525567e8765b778453fc42f5" exitCode=0 Jan 26 13:22:32 crc kubenswrapper[4844]: I0126 13:22:32.240919 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3d0355b5-96ed-47bd-9d3e-25f2cbfebb67","Type":"ContainerDied","Data":"0b3e9cfd506d1d008fe20e0b66a3b7fe3232162f525567e8765b778453fc42f5"} Jan 26 13:22:32 crc kubenswrapper[4844]: I0126 13:22:32.458185 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 13:22:32 crc kubenswrapper[4844]: I0126 13:22:32.471591 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d0355b5-96ed-47bd-9d3e-25f2cbfebb67-combined-ca-bundle\") pod \"3d0355b5-96ed-47bd-9d3e-25f2cbfebb67\" (UID: \"3d0355b5-96ed-47bd-9d3e-25f2cbfebb67\") " Jan 26 13:22:32 crc kubenswrapper[4844]: I0126 13:22:32.471806 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d0355b5-96ed-47bd-9d3e-25f2cbfebb67-logs\") pod \"3d0355b5-96ed-47bd-9d3e-25f2cbfebb67\" (UID: \"3d0355b5-96ed-47bd-9d3e-25f2cbfebb67\") " Jan 26 13:22:32 crc kubenswrapper[4844]: I0126 13:22:32.471854 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d0355b5-96ed-47bd-9d3e-25f2cbfebb67-nova-metadata-tls-certs\") pod \"3d0355b5-96ed-47bd-9d3e-25f2cbfebb67\" (UID: \"3d0355b5-96ed-47bd-9d3e-25f2cbfebb67\") " Jan 26 13:22:32 crc kubenswrapper[4844]: I0126 13:22:32.471946 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6jkjp\" (UniqueName: \"kubernetes.io/projected/3d0355b5-96ed-47bd-9d3e-25f2cbfebb67-kube-api-access-6jkjp\") pod \"3d0355b5-96ed-47bd-9d3e-25f2cbfebb67\" (UID: \"3d0355b5-96ed-47bd-9d3e-25f2cbfebb67\") " Jan 26 13:22:32 crc kubenswrapper[4844]: I0126 13:22:32.471971 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d0355b5-96ed-47bd-9d3e-25f2cbfebb67-config-data\") pod \"3d0355b5-96ed-47bd-9d3e-25f2cbfebb67\" (UID: \"3d0355b5-96ed-47bd-9d3e-25f2cbfebb67\") " Jan 26 13:22:32 crc kubenswrapper[4844]: I0126 13:22:32.472378 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d0355b5-96ed-47bd-9d3e-25f2cbfebb67-logs" (OuterVolumeSpecName: "logs") pod "3d0355b5-96ed-47bd-9d3e-25f2cbfebb67" (UID: "3d0355b5-96ed-47bd-9d3e-25f2cbfebb67"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:22:32 crc kubenswrapper[4844]: I0126 13:22:32.472798 4844 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d0355b5-96ed-47bd-9d3e-25f2cbfebb67-logs\") on node \"crc\" DevicePath \"\"" Jan 26 13:22:32 crc kubenswrapper[4844]: I0126 13:22:32.496792 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d0355b5-96ed-47bd-9d3e-25f2cbfebb67-kube-api-access-6jkjp" (OuterVolumeSpecName: "kube-api-access-6jkjp") pod "3d0355b5-96ed-47bd-9d3e-25f2cbfebb67" (UID: "3d0355b5-96ed-47bd-9d3e-25f2cbfebb67"). InnerVolumeSpecName "kube-api-access-6jkjp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:22:32 crc kubenswrapper[4844]: I0126 13:22:32.529838 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d0355b5-96ed-47bd-9d3e-25f2cbfebb67-config-data" (OuterVolumeSpecName: "config-data") pod "3d0355b5-96ed-47bd-9d3e-25f2cbfebb67" (UID: "3d0355b5-96ed-47bd-9d3e-25f2cbfebb67"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:22:32 crc kubenswrapper[4844]: I0126 13:22:32.536622 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d0355b5-96ed-47bd-9d3e-25f2cbfebb67-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3d0355b5-96ed-47bd-9d3e-25f2cbfebb67" (UID: "3d0355b5-96ed-47bd-9d3e-25f2cbfebb67"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:22:32 crc kubenswrapper[4844]: I0126 13:22:32.571971 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d0355b5-96ed-47bd-9d3e-25f2cbfebb67-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "3d0355b5-96ed-47bd-9d3e-25f2cbfebb67" (UID: "3d0355b5-96ed-47bd-9d3e-25f2cbfebb67"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:22:32 crc kubenswrapper[4844]: I0126 13:22:32.574590 4844 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d0355b5-96ed-47bd-9d3e-25f2cbfebb67-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 13:22:32 crc kubenswrapper[4844]: I0126 13:22:32.574633 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6jkjp\" (UniqueName: \"kubernetes.io/projected/3d0355b5-96ed-47bd-9d3e-25f2cbfebb67-kube-api-access-6jkjp\") on node \"crc\" DevicePath \"\"" Jan 26 13:22:32 crc kubenswrapper[4844]: I0126 13:22:32.574644 4844 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d0355b5-96ed-47bd-9d3e-25f2cbfebb67-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 13:22:32 crc kubenswrapper[4844]: I0126 13:22:32.574653 4844 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d0355b5-96ed-47bd-9d3e-25f2cbfebb67-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 13:22:32 crc kubenswrapper[4844]: I0126 13:22:32.631398 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 26 13:22:32 crc kubenswrapper[4844]: I0126 13:22:32.675561 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee748334-3a17-43d6-92e0-335a6dcfe622-combined-ca-bundle\") pod \"ee748334-3a17-43d6-92e0-335a6dcfe622\" (UID: \"ee748334-3a17-43d6-92e0-335a6dcfe622\") " Jan 26 13:22:32 crc kubenswrapper[4844]: I0126 13:22:32.675766 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gg7zr\" (UniqueName: \"kubernetes.io/projected/ee748334-3a17-43d6-92e0-335a6dcfe622-kube-api-access-gg7zr\") pod \"ee748334-3a17-43d6-92e0-335a6dcfe622\" (UID: \"ee748334-3a17-43d6-92e0-335a6dcfe622\") " Jan 26 13:22:32 crc kubenswrapper[4844]: I0126 13:22:32.675846 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee748334-3a17-43d6-92e0-335a6dcfe622-public-tls-certs\") pod \"ee748334-3a17-43d6-92e0-335a6dcfe622\" (UID: \"ee748334-3a17-43d6-92e0-335a6dcfe622\") " Jan 26 13:22:32 crc kubenswrapper[4844]: I0126 13:22:32.675901 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee748334-3a17-43d6-92e0-335a6dcfe622-internal-tls-certs\") pod \"ee748334-3a17-43d6-92e0-335a6dcfe622\" (UID: \"ee748334-3a17-43d6-92e0-335a6dcfe622\") " Jan 26 13:22:32 crc kubenswrapper[4844]: I0126 13:22:32.675942 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee748334-3a17-43d6-92e0-335a6dcfe622-config-data\") pod \"ee748334-3a17-43d6-92e0-335a6dcfe622\" (UID: \"ee748334-3a17-43d6-92e0-335a6dcfe622\") " Jan 26 13:22:32 crc kubenswrapper[4844]: I0126 13:22:32.675961 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ee748334-3a17-43d6-92e0-335a6dcfe622-logs\") pod \"ee748334-3a17-43d6-92e0-335a6dcfe622\" (UID: \"ee748334-3a17-43d6-92e0-335a6dcfe622\") " Jan 26 13:22:32 crc kubenswrapper[4844]: I0126 13:22:32.676745 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee748334-3a17-43d6-92e0-335a6dcfe622-logs" (OuterVolumeSpecName: "logs") pod "ee748334-3a17-43d6-92e0-335a6dcfe622" (UID: "ee748334-3a17-43d6-92e0-335a6dcfe622"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:22:32 crc kubenswrapper[4844]: I0126 13:22:32.679087 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee748334-3a17-43d6-92e0-335a6dcfe622-kube-api-access-gg7zr" (OuterVolumeSpecName: "kube-api-access-gg7zr") pod "ee748334-3a17-43d6-92e0-335a6dcfe622" (UID: "ee748334-3a17-43d6-92e0-335a6dcfe622"). InnerVolumeSpecName "kube-api-access-gg7zr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:22:32 crc kubenswrapper[4844]: I0126 13:22:32.707499 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee748334-3a17-43d6-92e0-335a6dcfe622-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ee748334-3a17-43d6-92e0-335a6dcfe622" (UID: "ee748334-3a17-43d6-92e0-335a6dcfe622"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:22:32 crc kubenswrapper[4844]: I0126 13:22:32.710532 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee748334-3a17-43d6-92e0-335a6dcfe622-config-data" (OuterVolumeSpecName: "config-data") pod "ee748334-3a17-43d6-92e0-335a6dcfe622" (UID: "ee748334-3a17-43d6-92e0-335a6dcfe622"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:22:32 crc kubenswrapper[4844]: I0126 13:22:32.731753 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee748334-3a17-43d6-92e0-335a6dcfe622-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "ee748334-3a17-43d6-92e0-335a6dcfe622" (UID: "ee748334-3a17-43d6-92e0-335a6dcfe622"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:22:32 crc kubenswrapper[4844]: I0126 13:22:32.737530 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee748334-3a17-43d6-92e0-335a6dcfe622-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "ee748334-3a17-43d6-92e0-335a6dcfe622" (UID: "ee748334-3a17-43d6-92e0-335a6dcfe622"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:22:32 crc kubenswrapper[4844]: I0126 13:22:32.778233 4844 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee748334-3a17-43d6-92e0-335a6dcfe622-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 13:22:32 crc kubenswrapper[4844]: I0126 13:22:32.778443 4844 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee748334-3a17-43d6-92e0-335a6dcfe622-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 13:22:32 crc kubenswrapper[4844]: I0126 13:22:32.778517 4844 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee748334-3a17-43d6-92e0-335a6dcfe622-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 13:22:32 crc kubenswrapper[4844]: I0126 13:22:32.778795 4844 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ee748334-3a17-43d6-92e0-335a6dcfe622-logs\") on node \"crc\" DevicePath \"\"" Jan 26 13:22:32 crc kubenswrapper[4844]: I0126 13:22:32.778910 4844 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee748334-3a17-43d6-92e0-335a6dcfe622-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 13:22:32 crc kubenswrapper[4844]: I0126 13:22:32.779038 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gg7zr\" (UniqueName: \"kubernetes.io/projected/ee748334-3a17-43d6-92e0-335a6dcfe622-kube-api-access-gg7zr\") on node \"crc\" DevicePath \"\"" Jan 26 13:22:33 crc kubenswrapper[4844]: I0126 13:22:33.253302 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3d0355b5-96ed-47bd-9d3e-25f2cbfebb67","Type":"ContainerDied","Data":"812c4a987be88c8f7cf5b337580367aa6704bd0c41d676e57074a89d671ac56c"} Jan 26 13:22:33 crc kubenswrapper[4844]: I0126 13:22:33.253316 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 13:22:33 crc kubenswrapper[4844]: I0126 13:22:33.253366 4844 scope.go:117] "RemoveContainer" containerID="0b3e9cfd506d1d008fe20e0b66a3b7fe3232162f525567e8765b778453fc42f5" Jan 26 13:22:33 crc kubenswrapper[4844]: I0126 13:22:33.257953 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ee748334-3a17-43d6-92e0-335a6dcfe622","Type":"ContainerDied","Data":"9379d3bdc358153aa220668e4d425b9778ce73a1584a184b5420357a7eb78d72"} Jan 26 13:22:33 crc kubenswrapper[4844]: I0126 13:22:33.258056 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 26 13:22:33 crc kubenswrapper[4844]: I0126 13:22:33.278005 4844 scope.go:117] "RemoveContainer" containerID="bb906dacda948788b140e028e27afb181f6ba4bf6c363c83ef3924519bb24ea8" Jan 26 13:22:33 crc kubenswrapper[4844]: I0126 13:22:33.298661 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 26 13:22:33 crc kubenswrapper[4844]: I0126 13:22:33.305484 4844 scope.go:117] "RemoveContainer" containerID="6c2e5a06dda62cfede5da7d482fb08aa84e625990dd741e10a54919ef5000e78" Jan 26 13:22:33 crc kubenswrapper[4844]: I0126 13:22:33.306714 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 26 13:22:33 crc kubenswrapper[4844]: I0126 13:22:33.397315 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee748334-3a17-43d6-92e0-335a6dcfe622" path="/var/lib/kubelet/pods/ee748334-3a17-43d6-92e0-335a6dcfe622/volumes" Jan 26 13:22:33 crc kubenswrapper[4844]: I0126 13:22:33.398791 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 13:22:33 crc kubenswrapper[4844]: I0126 13:22:33.399565 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 13:22:33 crc kubenswrapper[4844]: I0126 13:22:33.399926 4844 scope.go:117] "RemoveContainer" containerID="6a4906bb9ce46374379601c27310e666259c97c1de9b206368882a3a7d8f8fd7" Jan 26 13:22:33 crc kubenswrapper[4844]: I0126 13:22:33.414137 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 26 13:22:33 crc kubenswrapper[4844]: E0126 13:22:33.414753 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d0355b5-96ed-47bd-9d3e-25f2cbfebb67" containerName="nova-metadata-metadata" Jan 26 13:22:33 crc kubenswrapper[4844]: I0126 13:22:33.414770 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d0355b5-96ed-47bd-9d3e-25f2cbfebb67" containerName="nova-metadata-metadata" Jan 26 13:22:33 crc kubenswrapper[4844]: E0126 13:22:33.414784 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee748334-3a17-43d6-92e0-335a6dcfe622" containerName="nova-api-api" Jan 26 13:22:33 crc kubenswrapper[4844]: I0126 13:22:33.414790 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee748334-3a17-43d6-92e0-335a6dcfe622" containerName="nova-api-api" Jan 26 13:22:33 crc kubenswrapper[4844]: E0126 13:22:33.414801 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68596a47-7ecd-431f-8b10-00479d94c556" containerName="init" Jan 26 13:22:33 crc kubenswrapper[4844]: I0126 13:22:33.414807 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="68596a47-7ecd-431f-8b10-00479d94c556" containerName="init" Jan 26 13:22:33 crc kubenswrapper[4844]: E0126 13:22:33.414816 4844 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="9fddb4ee-fddd-45f3-bc91-21073647af94" containerName="nova-manage" Jan 26 13:22:33 crc kubenswrapper[4844]: I0126 13:22:33.414822 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fddb4ee-fddd-45f3-bc91-21073647af94" containerName="nova-manage" Jan 26 13:22:33 crc kubenswrapper[4844]: E0126 13:22:33.414838 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68596a47-7ecd-431f-8b10-00479d94c556" containerName="dnsmasq-dns" Jan 26 13:22:33 crc kubenswrapper[4844]: I0126 13:22:33.414844 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="68596a47-7ecd-431f-8b10-00479d94c556" containerName="dnsmasq-dns" Jan 26 13:22:33 crc kubenswrapper[4844]: E0126 13:22:33.414851 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee748334-3a17-43d6-92e0-335a6dcfe622" containerName="nova-api-log" Jan 26 13:22:33 crc kubenswrapper[4844]: I0126 13:22:33.414856 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee748334-3a17-43d6-92e0-335a6dcfe622" containerName="nova-api-log" Jan 26 13:22:33 crc kubenswrapper[4844]: E0126 13:22:33.414875 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d0355b5-96ed-47bd-9d3e-25f2cbfebb67" containerName="nova-metadata-log" Jan 26 13:22:33 crc kubenswrapper[4844]: I0126 13:22:33.414881 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d0355b5-96ed-47bd-9d3e-25f2cbfebb67" containerName="nova-metadata-log" Jan 26 13:22:33 crc kubenswrapper[4844]: I0126 13:22:33.415052 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee748334-3a17-43d6-92e0-335a6dcfe622" containerName="nova-api-api" Jan 26 13:22:33 crc kubenswrapper[4844]: I0126 13:22:33.415062 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee748334-3a17-43d6-92e0-335a6dcfe622" containerName="nova-api-log" Jan 26 13:22:33 crc kubenswrapper[4844]: I0126 13:22:33.415078 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="68596a47-7ecd-431f-8b10-00479d94c556" containerName="dnsmasq-dns" Jan 26 13:22:33 crc kubenswrapper[4844]: I0126 13:22:33.415089 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d0355b5-96ed-47bd-9d3e-25f2cbfebb67" containerName="nova-metadata-metadata" Jan 26 13:22:33 crc kubenswrapper[4844]: I0126 13:22:33.415100 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d0355b5-96ed-47bd-9d3e-25f2cbfebb67" containerName="nova-metadata-log" Jan 26 13:22:33 crc kubenswrapper[4844]: I0126 13:22:33.415115 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="9fddb4ee-fddd-45f3-bc91-21073647af94" containerName="nova-manage" Jan 26 13:22:33 crc kubenswrapper[4844]: I0126 13:22:33.416454 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 26 13:22:33 crc kubenswrapper[4844]: I0126 13:22:33.419838 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 26 13:22:33 crc kubenswrapper[4844]: I0126 13:22:33.420189 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 26 13:22:33 crc kubenswrapper[4844]: I0126 13:22:33.420851 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 26 13:22:33 crc kubenswrapper[4844]: I0126 13:22:33.442996 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 26 13:22:33 crc kubenswrapper[4844]: I0126 13:22:33.453696 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 26 13:22:33 crc kubenswrapper[4844]: I0126 13:22:33.455407 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 13:22:33 crc kubenswrapper[4844]: I0126 13:22:33.457146 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 26 13:22:33 crc kubenswrapper[4844]: I0126 13:22:33.458357 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 26 13:22:33 crc kubenswrapper[4844]: I0126 13:22:33.466511 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 13:22:33 crc kubenswrapper[4844]: I0126 13:22:33.503791 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86421d71-6636-4491-9b3e-7b4e3bf39ee9-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"86421d71-6636-4491-9b3e-7b4e3bf39ee9\") " pod="openstack/nova-metadata-0" Jan 26 13:22:33 crc kubenswrapper[4844]: I0126 13:22:33.503853 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86421d71-6636-4491-9b3e-7b4e3bf39ee9-config-data\") pod \"nova-metadata-0\" (UID: \"86421d71-6636-4491-9b3e-7b4e3bf39ee9\") " pod="openstack/nova-metadata-0" Jan 26 13:22:33 crc kubenswrapper[4844]: I0126 13:22:33.504006 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/86421d71-6636-4491-9b3e-7b4e3bf39ee9-logs\") pod \"nova-metadata-0\" (UID: \"86421d71-6636-4491-9b3e-7b4e3bf39ee9\") " pod="openstack/nova-metadata-0" Jan 26 13:22:33 crc kubenswrapper[4844]: I0126 13:22:33.504025 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z86j7\" (UniqueName: \"kubernetes.io/projected/86421d71-6636-4491-9b3e-7b4e3bf39ee9-kube-api-access-z86j7\") pod \"nova-metadata-0\" (UID: \"86421d71-6636-4491-9b3e-7b4e3bf39ee9\") " pod="openstack/nova-metadata-0" Jan 26 13:22:33 crc kubenswrapper[4844]: I0126 13:22:33.504100 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/81ea8f8d-3955-4fc3-8e6b-412d0bec4995-internal-tls-certs\") pod \"nova-api-0\" (UID: \"81ea8f8d-3955-4fc3-8e6b-412d0bec4995\") " pod="openstack/nova-api-0" Jan 26 13:22:33 crc kubenswrapper[4844]: I0126 13:22:33.504197 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81ea8f8d-3955-4fc3-8e6b-412d0bec4995-config-data\") pod \"nova-api-0\" (UID: \"81ea8f8d-3955-4fc3-8e6b-412d0bec4995\") " pod="openstack/nova-api-0" Jan 26 13:22:33 crc kubenswrapper[4844]: I0126 13:22:33.504259 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/86421d71-6636-4491-9b3e-7b4e3bf39ee9-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"86421d71-6636-4491-9b3e-7b4e3bf39ee9\") " pod="openstack/nova-metadata-0" Jan 26 13:22:33 crc kubenswrapper[4844]: I0126 13:22:33.504283 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/81ea8f8d-3955-4fc3-8e6b-412d0bec4995-logs\") pod \"nova-api-0\" (UID: \"81ea8f8d-3955-4fc3-8e6b-412d0bec4995\") " pod="openstack/nova-api-0" Jan 26 13:22:33 crc kubenswrapper[4844]: I0126 13:22:33.504306 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81ea8f8d-3955-4fc3-8e6b-412d0bec4995-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"81ea8f8d-3955-4fc3-8e6b-412d0bec4995\") " pod="openstack/nova-api-0" Jan 26 13:22:33 crc kubenswrapper[4844]: I0126 13:22:33.504325 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/81ea8f8d-3955-4fc3-8e6b-412d0bec4995-public-tls-certs\") pod \"nova-api-0\" (UID: \"81ea8f8d-3955-4fc3-8e6b-412d0bec4995\") " pod="openstack/nova-api-0" Jan 26 13:22:33 crc kubenswrapper[4844]: I0126 13:22:33.504395 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fp769\" (UniqueName: \"kubernetes.io/projected/81ea8f8d-3955-4fc3-8e6b-412d0bec4995-kube-api-access-fp769\") pod \"nova-api-0\" (UID: \"81ea8f8d-3955-4fc3-8e6b-412d0bec4995\") " pod="openstack/nova-api-0" Jan 26 13:22:33 crc kubenswrapper[4844]: I0126 13:22:33.605464 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/86421d71-6636-4491-9b3e-7b4e3bf39ee9-logs\") pod \"nova-metadata-0\" (UID: \"86421d71-6636-4491-9b3e-7b4e3bf39ee9\") " pod="openstack/nova-metadata-0" Jan 26 13:22:33 crc kubenswrapper[4844]: I0126 13:22:33.605514 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z86j7\" (UniqueName: \"kubernetes.io/projected/86421d71-6636-4491-9b3e-7b4e3bf39ee9-kube-api-access-z86j7\") pod \"nova-metadata-0\" (UID: \"86421d71-6636-4491-9b3e-7b4e3bf39ee9\") " pod="openstack/nova-metadata-0" Jan 26 13:22:33 crc kubenswrapper[4844]: I0126 13:22:33.605547 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/81ea8f8d-3955-4fc3-8e6b-412d0bec4995-internal-tls-certs\") pod \"nova-api-0\" (UID: \"81ea8f8d-3955-4fc3-8e6b-412d0bec4995\") " pod="openstack/nova-api-0" Jan 26 13:22:33 crc kubenswrapper[4844]: I0126 13:22:33.605592 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81ea8f8d-3955-4fc3-8e6b-412d0bec4995-config-data\") pod \"nova-api-0\" (UID: \"81ea8f8d-3955-4fc3-8e6b-412d0bec4995\") " pod="openstack/nova-api-0" Jan 26 13:22:33 crc 
kubenswrapper[4844]: I0126 13:22:33.605669 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/86421d71-6636-4491-9b3e-7b4e3bf39ee9-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"86421d71-6636-4491-9b3e-7b4e3bf39ee9\") " pod="openstack/nova-metadata-0" Jan 26 13:22:33 crc kubenswrapper[4844]: I0126 13:22:33.606402 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/81ea8f8d-3955-4fc3-8e6b-412d0bec4995-logs\") pod \"nova-api-0\" (UID: \"81ea8f8d-3955-4fc3-8e6b-412d0bec4995\") " pod="openstack/nova-api-0" Jan 26 13:22:33 crc kubenswrapper[4844]: I0126 13:22:33.606475 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/86421d71-6636-4491-9b3e-7b4e3bf39ee9-logs\") pod \"nova-metadata-0\" (UID: \"86421d71-6636-4491-9b3e-7b4e3bf39ee9\") " pod="openstack/nova-metadata-0" Jan 26 13:22:33 crc kubenswrapper[4844]: I0126 13:22:33.608532 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/81ea8f8d-3955-4fc3-8e6b-412d0bec4995-logs\") pod \"nova-api-0\" (UID: \"81ea8f8d-3955-4fc3-8e6b-412d0bec4995\") " pod="openstack/nova-api-0" Jan 26 13:22:33 crc kubenswrapper[4844]: I0126 13:22:33.608646 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81ea8f8d-3955-4fc3-8e6b-412d0bec4995-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"81ea8f8d-3955-4fc3-8e6b-412d0bec4995\") " pod="openstack/nova-api-0" Jan 26 13:22:33 crc kubenswrapper[4844]: I0126 13:22:33.608669 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/81ea8f8d-3955-4fc3-8e6b-412d0bec4995-public-tls-certs\") pod \"nova-api-0\" (UID: \"81ea8f8d-3955-4fc3-8e6b-412d0bec4995\") " pod="openstack/nova-api-0" Jan 26 13:22:33 crc kubenswrapper[4844]: I0126 13:22:33.608787 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fp769\" (UniqueName: \"kubernetes.io/projected/81ea8f8d-3955-4fc3-8e6b-412d0bec4995-kube-api-access-fp769\") pod \"nova-api-0\" (UID: \"81ea8f8d-3955-4fc3-8e6b-412d0bec4995\") " pod="openstack/nova-api-0" Jan 26 13:22:33 crc kubenswrapper[4844]: I0126 13:22:33.609475 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/81ea8f8d-3955-4fc3-8e6b-412d0bec4995-internal-tls-certs\") pod \"nova-api-0\" (UID: \"81ea8f8d-3955-4fc3-8e6b-412d0bec4995\") " pod="openstack/nova-api-0" Jan 26 13:22:33 crc kubenswrapper[4844]: I0126 13:22:33.609844 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86421d71-6636-4491-9b3e-7b4e3bf39ee9-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"86421d71-6636-4491-9b3e-7b4e3bf39ee9\") " pod="openstack/nova-metadata-0" Jan 26 13:22:33 crc kubenswrapper[4844]: I0126 13:22:33.609892 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86421d71-6636-4491-9b3e-7b4e3bf39ee9-config-data\") pod \"nova-metadata-0\" (UID: \"86421d71-6636-4491-9b3e-7b4e3bf39ee9\") " pod="openstack/nova-metadata-0" Jan 26 13:22:33 crc kubenswrapper[4844]: I0126 
13:22:33.612079 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81ea8f8d-3955-4fc3-8e6b-412d0bec4995-config-data\") pod \"nova-api-0\" (UID: \"81ea8f8d-3955-4fc3-8e6b-412d0bec4995\") " pod="openstack/nova-api-0" Jan 26 13:22:33 crc kubenswrapper[4844]: I0126 13:22:33.612504 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86421d71-6636-4491-9b3e-7b4e3bf39ee9-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"86421d71-6636-4491-9b3e-7b4e3bf39ee9\") " pod="openstack/nova-metadata-0" Jan 26 13:22:33 crc kubenswrapper[4844]: I0126 13:22:33.612862 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/81ea8f8d-3955-4fc3-8e6b-412d0bec4995-public-tls-certs\") pod \"nova-api-0\" (UID: \"81ea8f8d-3955-4fc3-8e6b-412d0bec4995\") " pod="openstack/nova-api-0" Jan 26 13:22:33 crc kubenswrapper[4844]: I0126 13:22:33.613420 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81ea8f8d-3955-4fc3-8e6b-412d0bec4995-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"81ea8f8d-3955-4fc3-8e6b-412d0bec4995\") " pod="openstack/nova-api-0" Jan 26 13:22:33 crc kubenswrapper[4844]: I0126 13:22:33.613514 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86421d71-6636-4491-9b3e-7b4e3bf39ee9-config-data\") pod \"nova-metadata-0\" (UID: \"86421d71-6636-4491-9b3e-7b4e3bf39ee9\") " pod="openstack/nova-metadata-0" Jan 26 13:22:33 crc kubenswrapper[4844]: I0126 13:22:33.616642 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/86421d71-6636-4491-9b3e-7b4e3bf39ee9-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"86421d71-6636-4491-9b3e-7b4e3bf39ee9\") " pod="openstack/nova-metadata-0" Jan 26 13:22:33 crc kubenswrapper[4844]: I0126 13:22:33.624923 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z86j7\" (UniqueName: \"kubernetes.io/projected/86421d71-6636-4491-9b3e-7b4e3bf39ee9-kube-api-access-z86j7\") pod \"nova-metadata-0\" (UID: \"86421d71-6636-4491-9b3e-7b4e3bf39ee9\") " pod="openstack/nova-metadata-0" Jan 26 13:22:33 crc kubenswrapper[4844]: I0126 13:22:33.627493 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fp769\" (UniqueName: \"kubernetes.io/projected/81ea8f8d-3955-4fc3-8e6b-412d0bec4995-kube-api-access-fp769\") pod \"nova-api-0\" (UID: \"81ea8f8d-3955-4fc3-8e6b-412d0bec4995\") " pod="openstack/nova-api-0" Jan 26 13:22:33 crc kubenswrapper[4844]: I0126 13:22:33.739568 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 26 13:22:33 crc kubenswrapper[4844]: I0126 13:22:33.776006 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 13:22:34 crc kubenswrapper[4844]: I0126 13:22:34.291874 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 13:22:34 crc kubenswrapper[4844]: I0126 13:22:34.326061 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 26 13:22:35 crc kubenswrapper[4844]: I0126 13:22:35.034820 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 13:22:35 crc kubenswrapper[4844]: I0126 13:22:35.139787 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7c1f674-6004-46ed-ad61-cbad8e9cb195-config-data\") pod \"a7c1f674-6004-46ed-ad61-cbad8e9cb195\" (UID: \"a7c1f674-6004-46ed-ad61-cbad8e9cb195\") " Jan 26 13:22:35 crc kubenswrapper[4844]: I0126 13:22:35.139920 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9dcdk\" (UniqueName: \"kubernetes.io/projected/a7c1f674-6004-46ed-ad61-cbad8e9cb195-kube-api-access-9dcdk\") pod \"a7c1f674-6004-46ed-ad61-cbad8e9cb195\" (UID: \"a7c1f674-6004-46ed-ad61-cbad8e9cb195\") " Jan 26 13:22:35 crc kubenswrapper[4844]: I0126 13:22:35.140005 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7c1f674-6004-46ed-ad61-cbad8e9cb195-combined-ca-bundle\") pod \"a7c1f674-6004-46ed-ad61-cbad8e9cb195\" (UID: \"a7c1f674-6004-46ed-ad61-cbad8e9cb195\") " Jan 26 13:22:35 crc kubenswrapper[4844]: I0126 13:22:35.144920 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7c1f674-6004-46ed-ad61-cbad8e9cb195-kube-api-access-9dcdk" (OuterVolumeSpecName: "kube-api-access-9dcdk") pod "a7c1f674-6004-46ed-ad61-cbad8e9cb195" (UID: "a7c1f674-6004-46ed-ad61-cbad8e9cb195"). InnerVolumeSpecName "kube-api-access-9dcdk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:22:35 crc kubenswrapper[4844]: I0126 13:22:35.178276 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7c1f674-6004-46ed-ad61-cbad8e9cb195-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a7c1f674-6004-46ed-ad61-cbad8e9cb195" (UID: "a7c1f674-6004-46ed-ad61-cbad8e9cb195"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:22:35 crc kubenswrapper[4844]: I0126 13:22:35.186044 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7c1f674-6004-46ed-ad61-cbad8e9cb195-config-data" (OuterVolumeSpecName: "config-data") pod "a7c1f674-6004-46ed-ad61-cbad8e9cb195" (UID: "a7c1f674-6004-46ed-ad61-cbad8e9cb195"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:22:35 crc kubenswrapper[4844]: I0126 13:22:35.242633 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9dcdk\" (UniqueName: \"kubernetes.io/projected/a7c1f674-6004-46ed-ad61-cbad8e9cb195-kube-api-access-9dcdk\") on node \"crc\" DevicePath \"\"" Jan 26 13:22:35 crc kubenswrapper[4844]: I0126 13:22:35.242666 4844 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7c1f674-6004-46ed-ad61-cbad8e9cb195-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 13:22:35 crc kubenswrapper[4844]: I0126 13:22:35.242675 4844 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7c1f674-6004-46ed-ad61-cbad8e9cb195-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 13:22:35 crc kubenswrapper[4844]: I0126 13:22:35.280872 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"81ea8f8d-3955-4fc3-8e6b-412d0bec4995","Type":"ContainerStarted","Data":"c7d0c52320744cdbb2d6d0beec261e921b8d431592599ef25d758518c77c055a"} Jan 26 13:22:35 crc kubenswrapper[4844]: I0126 13:22:35.280916 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"81ea8f8d-3955-4fc3-8e6b-412d0bec4995","Type":"ContainerStarted","Data":"b090c52dad0151f2eb31edf373e20ee4a307b06387ae1f6467654b8421680975"} Jan 26 13:22:35 crc kubenswrapper[4844]: I0126 13:22:35.280926 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"81ea8f8d-3955-4fc3-8e6b-412d0bec4995","Type":"ContainerStarted","Data":"298c59d48fb65ffdaf202f749870ac9ba5606b37b6840feb285eb78e0f6877ea"} Jan 26 13:22:35 crc kubenswrapper[4844]: I0126 13:22:35.282698 4844 generic.go:334] "Generic (PLEG): container finished" podID="a7c1f674-6004-46ed-ad61-cbad8e9cb195" containerID="3758f7daf2748afde33dc84856c78e70d5af348f404e8862bd340a67fe9034cb" exitCode=0 Jan 26 13:22:35 crc kubenswrapper[4844]: I0126 13:22:35.282754 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 13:22:35 crc kubenswrapper[4844]: I0126 13:22:35.282770 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a7c1f674-6004-46ed-ad61-cbad8e9cb195","Type":"ContainerDied","Data":"3758f7daf2748afde33dc84856c78e70d5af348f404e8862bd340a67fe9034cb"} Jan 26 13:22:35 crc kubenswrapper[4844]: I0126 13:22:35.282802 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a7c1f674-6004-46ed-ad61-cbad8e9cb195","Type":"ContainerDied","Data":"ed635a56cf5d075e4ef31d3d72dac58cef9ee6ba2e408cbd6f9e9b7b0d40cad0"} Jan 26 13:22:35 crc kubenswrapper[4844]: I0126 13:22:35.282837 4844 scope.go:117] "RemoveContainer" containerID="3758f7daf2748afde33dc84856c78e70d5af348f404e8862bd340a67fe9034cb" Jan 26 13:22:35 crc kubenswrapper[4844]: I0126 13:22:35.284912 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"86421d71-6636-4491-9b3e-7b4e3bf39ee9","Type":"ContainerStarted","Data":"0ce40ff0de31223d2858a9fed29db17150db3aa660a09bcd3c99754af21b7282"} Jan 26 13:22:35 crc kubenswrapper[4844]: I0126 13:22:35.284946 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"86421d71-6636-4491-9b3e-7b4e3bf39ee9","Type":"ContainerStarted","Data":"c436040f55736d42b370dd69637d330e1c7a69b3f53f136652b39cacf7313475"} Jan 26 13:22:35 crc kubenswrapper[4844]: I0126 13:22:35.284955 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"86421d71-6636-4491-9b3e-7b4e3bf39ee9","Type":"ContainerStarted","Data":"55ef057936f36cf345caa84f4e6f37a0c630184bc697890461ebd5ae709f6234"} Jan 26 13:22:35 crc kubenswrapper[4844]: I0126 13:22:35.301213 4844 scope.go:117] "RemoveContainer" containerID="3758f7daf2748afde33dc84856c78e70d5af348f404e8862bd340a67fe9034cb" Jan 26 13:22:35 crc kubenswrapper[4844]: E0126 13:22:35.302004 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3758f7daf2748afde33dc84856c78e70d5af348f404e8862bd340a67fe9034cb\": container with ID starting with 3758f7daf2748afde33dc84856c78e70d5af348f404e8862bd340a67fe9034cb not found: ID does not exist" containerID="3758f7daf2748afde33dc84856c78e70d5af348f404e8862bd340a67fe9034cb" Jan 26 13:22:35 crc kubenswrapper[4844]: I0126 13:22:35.302088 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3758f7daf2748afde33dc84856c78e70d5af348f404e8862bd340a67fe9034cb"} err="failed to get container status \"3758f7daf2748afde33dc84856c78e70d5af348f404e8862bd340a67fe9034cb\": rpc error: code = NotFound desc = could not find container \"3758f7daf2748afde33dc84856c78e70d5af348f404e8862bd340a67fe9034cb\": container with ID starting with 3758f7daf2748afde33dc84856c78e70d5af348f404e8862bd340a67fe9034cb not found: ID does not exist" Jan 26 13:22:35 crc kubenswrapper[4844]: I0126 13:22:35.316348 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.31633272 podStartE2EDuration="2.31633272s" podCreationTimestamp="2026-01-26 13:22:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 13:22:35.307715772 +0000 UTC m=+2332.241083404" watchObservedRunningTime="2026-01-26 13:22:35.31633272 +0000 UTC m=+2332.249700332" Jan 26 13:22:35 crc 
kubenswrapper[4844]: I0126 13:22:35.325578 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d0355b5-96ed-47bd-9d3e-25f2cbfebb67" path="/var/lib/kubelet/pods/3d0355b5-96ed-47bd-9d3e-25f2cbfebb67/volumes" Jan 26 13:22:35 crc kubenswrapper[4844]: I0126 13:22:35.334240 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.334215153 podStartE2EDuration="2.334215153s" podCreationTimestamp="2026-01-26 13:22:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 13:22:35.32828606 +0000 UTC m=+2332.261653682" watchObservedRunningTime="2026-01-26 13:22:35.334215153 +0000 UTC m=+2332.267582765" Jan 26 13:22:35 crc kubenswrapper[4844]: I0126 13:22:35.353699 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 13:22:35 crc kubenswrapper[4844]: I0126 13:22:35.364738 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 13:22:35 crc kubenswrapper[4844]: I0126 13:22:35.377969 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 13:22:35 crc kubenswrapper[4844]: E0126 13:22:35.378473 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7c1f674-6004-46ed-ad61-cbad8e9cb195" containerName="nova-scheduler-scheduler" Jan 26 13:22:35 crc kubenswrapper[4844]: I0126 13:22:35.378499 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7c1f674-6004-46ed-ad61-cbad8e9cb195" containerName="nova-scheduler-scheduler" Jan 26 13:22:35 crc kubenswrapper[4844]: I0126 13:22:35.378751 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7c1f674-6004-46ed-ad61-cbad8e9cb195" containerName="nova-scheduler-scheduler" Jan 26 13:22:35 crc kubenswrapper[4844]: I0126 13:22:35.379552 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 13:22:35 crc kubenswrapper[4844]: I0126 13:22:35.382030 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 26 13:22:35 crc kubenswrapper[4844]: I0126 13:22:35.389751 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 13:22:35 crc kubenswrapper[4844]: I0126 13:22:35.449056 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42cc1780-3fb5-4158-95f2-5a1bd4e1161f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"42cc1780-3fb5-4158-95f2-5a1bd4e1161f\") " pod="openstack/nova-scheduler-0" Jan 26 13:22:35 crc kubenswrapper[4844]: I0126 13:22:35.449159 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42cc1780-3fb5-4158-95f2-5a1bd4e1161f-config-data\") pod \"nova-scheduler-0\" (UID: \"42cc1780-3fb5-4158-95f2-5a1bd4e1161f\") " pod="openstack/nova-scheduler-0" Jan 26 13:22:35 crc kubenswrapper[4844]: I0126 13:22:35.449421 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49zpr\" (UniqueName: \"kubernetes.io/projected/42cc1780-3fb5-4158-95f2-5a1bd4e1161f-kube-api-access-49zpr\") pod \"nova-scheduler-0\" (UID: \"42cc1780-3fb5-4158-95f2-5a1bd4e1161f\") " pod="openstack/nova-scheduler-0" Jan 26 13:22:35 crc kubenswrapper[4844]: I0126 13:22:35.551544 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42cc1780-3fb5-4158-95f2-5a1bd4e1161f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"42cc1780-3fb5-4158-95f2-5a1bd4e1161f\") " pod="openstack/nova-scheduler-0" Jan 26 13:22:35 crc kubenswrapper[4844]: I0126 13:22:35.551631 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42cc1780-3fb5-4158-95f2-5a1bd4e1161f-config-data\") pod \"nova-scheduler-0\" (UID: \"42cc1780-3fb5-4158-95f2-5a1bd4e1161f\") " pod="openstack/nova-scheduler-0" Jan 26 13:22:35 crc kubenswrapper[4844]: I0126 13:22:35.551675 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49zpr\" (UniqueName: \"kubernetes.io/projected/42cc1780-3fb5-4158-95f2-5a1bd4e1161f-kube-api-access-49zpr\") pod \"nova-scheduler-0\" (UID: \"42cc1780-3fb5-4158-95f2-5a1bd4e1161f\") " pod="openstack/nova-scheduler-0" Jan 26 13:22:35 crc kubenswrapper[4844]: I0126 13:22:35.557130 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42cc1780-3fb5-4158-95f2-5a1bd4e1161f-config-data\") pod \"nova-scheduler-0\" (UID: \"42cc1780-3fb5-4158-95f2-5a1bd4e1161f\") " pod="openstack/nova-scheduler-0" Jan 26 13:22:35 crc kubenswrapper[4844]: I0126 13:22:35.560379 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42cc1780-3fb5-4158-95f2-5a1bd4e1161f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"42cc1780-3fb5-4158-95f2-5a1bd4e1161f\") " pod="openstack/nova-scheduler-0" Jan 26 13:22:35 crc kubenswrapper[4844]: I0126 13:22:35.570772 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49zpr\" (UniqueName: 
\"kubernetes.io/projected/42cc1780-3fb5-4158-95f2-5a1bd4e1161f-kube-api-access-49zpr\") pod \"nova-scheduler-0\" (UID: \"42cc1780-3fb5-4158-95f2-5a1bd4e1161f\") " pod="openstack/nova-scheduler-0" Jan 26 13:22:35 crc kubenswrapper[4844]: I0126 13:22:35.749081 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 13:22:36 crc kubenswrapper[4844]: I0126 13:22:36.212546 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 13:22:36 crc kubenswrapper[4844]: W0126 13:22:36.218702 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod42cc1780_3fb5_4158_95f2_5a1bd4e1161f.slice/crio-97e7be70b0201d08a5110f0c165855f0bb2c6f5059d5ca6260f073f5df3c9cf4 WatchSource:0}: Error finding container 97e7be70b0201d08a5110f0c165855f0bb2c6f5059d5ca6260f073f5df3c9cf4: Status 404 returned error can't find the container with id 97e7be70b0201d08a5110f0c165855f0bb2c6f5059d5ca6260f073f5df3c9cf4 Jan 26 13:22:36 crc kubenswrapper[4844]: I0126 13:22:36.319805 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"42cc1780-3fb5-4158-95f2-5a1bd4e1161f","Type":"ContainerStarted","Data":"97e7be70b0201d08a5110f0c165855f0bb2c6f5059d5ca6260f073f5df3c9cf4"} Jan 26 13:22:37 crc kubenswrapper[4844]: I0126 13:22:37.324883 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7c1f674-6004-46ed-ad61-cbad8e9cb195" path="/var/lib/kubelet/pods/a7c1f674-6004-46ed-ad61-cbad8e9cb195/volumes" Jan 26 13:22:37 crc kubenswrapper[4844]: I0126 13:22:37.329836 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"42cc1780-3fb5-4158-95f2-5a1bd4e1161f","Type":"ContainerStarted","Data":"ce93b40ee21b8a0c22d32b17d9d010c2bfaba705a6df0ca500a3baf01862c1ef"} Jan 26 13:22:37 crc kubenswrapper[4844]: I0126 13:22:37.364191 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.364168106 podStartE2EDuration="2.364168106s" podCreationTimestamp="2026-01-26 13:22:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 13:22:37.353011716 +0000 UTC m=+2334.286379348" watchObservedRunningTime="2026-01-26 13:22:37.364168106 +0000 UTC m=+2334.297535718" Jan 26 13:22:38 crc kubenswrapper[4844]: I0126 13:22:38.777072 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 26 13:22:38 crc kubenswrapper[4844]: I0126 13:22:38.778394 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 26 13:22:39 crc kubenswrapper[4844]: I0126 13:22:39.319921 4844 scope.go:117] "RemoveContainer" containerID="003e8a783231d0610c61a79899be5429104525b8053b82bc16d011d8b1eff87d" Jan 26 13:22:39 crc kubenswrapper[4844]: E0126 13:22:39.320183 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 13:22:40 crc kubenswrapper[4844]: I0126 13:22:40.750356 4844 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 26 13:22:43 crc kubenswrapper[4844]: I0126 13:22:43.740269 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 26 13:22:43 crc kubenswrapper[4844]: I0126 13:22:43.741399 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 26 13:22:43 crc kubenswrapper[4844]: I0126 13:22:43.777135 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 26 13:22:43 crc kubenswrapper[4844]: I0126 13:22:43.777188 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 26 13:22:44 crc kubenswrapper[4844]: I0126 13:22:44.752720 4844 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="81ea8f8d-3955-4fc3-8e6b-412d0bec4995" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.225:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 26 13:22:44 crc kubenswrapper[4844]: I0126 13:22:44.752724 4844 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="81ea8f8d-3955-4fc3-8e6b-412d0bec4995" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.225:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 26 13:22:44 crc kubenswrapper[4844]: I0126 13:22:44.789752 4844 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="86421d71-6636-4491-9b3e-7b4e3bf39ee9" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.226:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 13:22:44 crc kubenswrapper[4844]: I0126 13:22:44.789742 4844 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="86421d71-6636-4491-9b3e-7b4e3bf39ee9" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.226:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 26 13:22:45 crc kubenswrapper[4844]: I0126 13:22:45.750216 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 26 13:22:45 crc kubenswrapper[4844]: I0126 13:22:45.796194 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 26 13:22:46 crc kubenswrapper[4844]: I0126 13:22:46.888022 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 26 13:22:49 crc kubenswrapper[4844]: I0126 13:22:49.456847 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 26 13:22:53 crc kubenswrapper[4844]: I0126 13:22:53.756872 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 26 13:22:53 crc kubenswrapper[4844]: I0126 13:22:53.758082 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 26 13:22:53 crc kubenswrapper[4844]: I0126 13:22:53.761343 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 26 13:22:53 crc kubenswrapper[4844]: I0126 13:22:53.774545 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/nova-api-0" Jan 26 13:22:53 crc kubenswrapper[4844]: I0126 13:22:53.806965 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 26 13:22:53 crc kubenswrapper[4844]: I0126 13:22:53.807407 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 26 13:22:53 crc kubenswrapper[4844]: I0126 13:22:53.815886 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 26 13:22:53 crc kubenswrapper[4844]: I0126 13:22:53.934576 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 26 13:22:53 crc kubenswrapper[4844]: I0126 13:22:53.942217 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 26 13:22:53 crc kubenswrapper[4844]: I0126 13:22:53.947276 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 26 13:22:54 crc kubenswrapper[4844]: I0126 13:22:54.313457 4844 scope.go:117] "RemoveContainer" containerID="003e8a783231d0610c61a79899be5429104525b8053b82bc16d011d8b1eff87d" Jan 26 13:22:54 crc kubenswrapper[4844]: E0126 13:22:54.313976 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 13:23:02 crc kubenswrapper[4844]: I0126 13:23:02.987856 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 26 13:23:04 crc kubenswrapper[4844]: I0126 13:23:04.801406 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 26 13:23:06 crc kubenswrapper[4844]: I0126 13:23:06.767490 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="e48f1161-14d0-42c1-b6ac-bdb8bce26985" containerName="rabbitmq" containerID="cri-o://49224d76c481ef910732446c51b497a3bc7254c88cb8cd2720780911497c6963" gracePeriod=604797 Jan 26 13:23:07 crc kubenswrapper[4844]: I0126 13:23:07.313293 4844 scope.go:117] "RemoveContainer" containerID="003e8a783231d0610c61a79899be5429104525b8053b82bc16d011d8b1eff87d" Jan 26 13:23:07 crc kubenswrapper[4844]: E0126 13:23:07.313567 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 13:23:08 crc kubenswrapper[4844]: I0126 13:23:08.099684 4844 generic.go:334] "Generic (PLEG): container finished" podID="e48f1161-14d0-42c1-b6ac-bdb8bce26985" containerID="49224d76c481ef910732446c51b497a3bc7254c88cb8cd2720780911497c6963" exitCode=0 Jan 26 13:23:08 crc kubenswrapper[4844]: I0126 13:23:08.099779 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" 
event={"ID":"e48f1161-14d0-42c1-b6ac-bdb8bce26985","Type":"ContainerDied","Data":"49224d76c481ef910732446c51b497a3bc7254c88cb8cd2720780911497c6963"} Jan 26 13:23:08 crc kubenswrapper[4844]: I0126 13:23:08.382290 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="e8e36a62-9367-4c94-9aff-de8e6166af27" containerName="rabbitmq" containerID="cri-o://2758d64ef9dfa428b02a999acaca19c0ab43f356ea26d72de994d5e96fc426e1" gracePeriod=604797 Jan 26 13:23:08 crc kubenswrapper[4844]: I0126 13:23:08.555253 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 26 13:23:08 crc kubenswrapper[4844]: I0126 13:23:08.656928 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e48f1161-14d0-42c1-b6ac-bdb8bce26985-server-conf\") pod \"e48f1161-14d0-42c1-b6ac-bdb8bce26985\" (UID: \"e48f1161-14d0-42c1-b6ac-bdb8bce26985\") " Jan 26 13:23:08 crc kubenswrapper[4844]: I0126 13:23:08.657349 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e48f1161-14d0-42c1-b6ac-bdb8bce26985-rabbitmq-plugins\") pod \"e48f1161-14d0-42c1-b6ac-bdb8bce26985\" (UID: \"e48f1161-14d0-42c1-b6ac-bdb8bce26985\") " Jan 26 13:23:08 crc kubenswrapper[4844]: I0126 13:23:08.657434 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e48f1161-14d0-42c1-b6ac-bdb8bce26985-rabbitmq-erlang-cookie\") pod \"e48f1161-14d0-42c1-b6ac-bdb8bce26985\" (UID: \"e48f1161-14d0-42c1-b6ac-bdb8bce26985\") " Jan 26 13:23:08 crc kubenswrapper[4844]: I0126 13:23:08.657512 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e48f1161-14d0-42c1-b6ac-bdb8bce26985-pod-info\") pod \"e48f1161-14d0-42c1-b6ac-bdb8bce26985\" (UID: \"e48f1161-14d0-42c1-b6ac-bdb8bce26985\") " Jan 26 13:23:08 crc kubenswrapper[4844]: I0126 13:23:08.657688 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l4726\" (UniqueName: \"kubernetes.io/projected/e48f1161-14d0-42c1-b6ac-bdb8bce26985-kube-api-access-l4726\") pod \"e48f1161-14d0-42c1-b6ac-bdb8bce26985\" (UID: \"e48f1161-14d0-42c1-b6ac-bdb8bce26985\") " Jan 26 13:23:08 crc kubenswrapper[4844]: I0126 13:23:08.657739 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"e48f1161-14d0-42c1-b6ac-bdb8bce26985\" (UID: \"e48f1161-14d0-42c1-b6ac-bdb8bce26985\") " Jan 26 13:23:08 crc kubenswrapper[4844]: I0126 13:23:08.657778 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e48f1161-14d0-42c1-b6ac-bdb8bce26985-rabbitmq-confd\") pod \"e48f1161-14d0-42c1-b6ac-bdb8bce26985\" (UID: \"e48f1161-14d0-42c1-b6ac-bdb8bce26985\") " Jan 26 13:23:08 crc kubenswrapper[4844]: I0126 13:23:08.657889 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e48f1161-14d0-42c1-b6ac-bdb8bce26985-rabbitmq-tls\") pod \"e48f1161-14d0-42c1-b6ac-bdb8bce26985\" (UID: \"e48f1161-14d0-42c1-b6ac-bdb8bce26985\") " Jan 26 13:23:08 crc kubenswrapper[4844]: I0126 
13:23:08.658083 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e48f1161-14d0-42c1-b6ac-bdb8bce26985-config-data\") pod \"e48f1161-14d0-42c1-b6ac-bdb8bce26985\" (UID: \"e48f1161-14d0-42c1-b6ac-bdb8bce26985\") " Jan 26 13:23:08 crc kubenswrapper[4844]: I0126 13:23:08.658129 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e48f1161-14d0-42c1-b6ac-bdb8bce26985-erlang-cookie-secret\") pod \"e48f1161-14d0-42c1-b6ac-bdb8bce26985\" (UID: \"e48f1161-14d0-42c1-b6ac-bdb8bce26985\") " Jan 26 13:23:08 crc kubenswrapper[4844]: I0126 13:23:08.658170 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e48f1161-14d0-42c1-b6ac-bdb8bce26985-plugins-conf\") pod \"e48f1161-14d0-42c1-b6ac-bdb8bce26985\" (UID: \"e48f1161-14d0-42c1-b6ac-bdb8bce26985\") " Jan 26 13:23:08 crc kubenswrapper[4844]: I0126 13:23:08.660063 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e48f1161-14d0-42c1-b6ac-bdb8bce26985-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "e48f1161-14d0-42c1-b6ac-bdb8bce26985" (UID: "e48f1161-14d0-42c1-b6ac-bdb8bce26985"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:23:08 crc kubenswrapper[4844]: I0126 13:23:08.660541 4844 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e48f1161-14d0-42c1-b6ac-bdb8bce26985-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 26 13:23:08 crc kubenswrapper[4844]: I0126 13:23:08.660902 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e48f1161-14d0-42c1-b6ac-bdb8bce26985-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "e48f1161-14d0-42c1-b6ac-bdb8bce26985" (UID: "e48f1161-14d0-42c1-b6ac-bdb8bce26985"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:23:08 crc kubenswrapper[4844]: I0126 13:23:08.661353 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e48f1161-14d0-42c1-b6ac-bdb8bce26985-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "e48f1161-14d0-42c1-b6ac-bdb8bce26985" (UID: "e48f1161-14d0-42c1-b6ac-bdb8bce26985"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:23:08 crc kubenswrapper[4844]: I0126 13:23:08.668471 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e48f1161-14d0-42c1-b6ac-bdb8bce26985-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "e48f1161-14d0-42c1-b6ac-bdb8bce26985" (UID: "e48f1161-14d0-42c1-b6ac-bdb8bce26985"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:23:08 crc kubenswrapper[4844]: I0126 13:23:08.669402 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e48f1161-14d0-42c1-b6ac-bdb8bce26985-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "e48f1161-14d0-42c1-b6ac-bdb8bce26985" (UID: "e48f1161-14d0-42c1-b6ac-bdb8bce26985"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:23:08 crc kubenswrapper[4844]: I0126 13:23:08.679883 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/e48f1161-14d0-42c1-b6ac-bdb8bce26985-pod-info" (OuterVolumeSpecName: "pod-info") pod "e48f1161-14d0-42c1-b6ac-bdb8bce26985" (UID: "e48f1161-14d0-42c1-b6ac-bdb8bce26985"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 26 13:23:08 crc kubenswrapper[4844]: I0126 13:23:08.680403 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e48f1161-14d0-42c1-b6ac-bdb8bce26985-kube-api-access-l4726" (OuterVolumeSpecName: "kube-api-access-l4726") pod "e48f1161-14d0-42c1-b6ac-bdb8bce26985" (UID: "e48f1161-14d0-42c1-b6ac-bdb8bce26985"). InnerVolumeSpecName "kube-api-access-l4726". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:23:08 crc kubenswrapper[4844]: I0126 13:23:08.689301 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "persistence") pod "e48f1161-14d0-42c1-b6ac-bdb8bce26985" (UID: "e48f1161-14d0-42c1-b6ac-bdb8bce26985"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 26 13:23:08 crc kubenswrapper[4844]: I0126 13:23:08.752830 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e48f1161-14d0-42c1-b6ac-bdb8bce26985-config-data" (OuterVolumeSpecName: "config-data") pod "e48f1161-14d0-42c1-b6ac-bdb8bce26985" (UID: "e48f1161-14d0-42c1-b6ac-bdb8bce26985"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:23:08 crc kubenswrapper[4844]: I0126 13:23:08.762322 4844 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e48f1161-14d0-42c1-b6ac-bdb8bce26985-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 13:23:08 crc kubenswrapper[4844]: I0126 13:23:08.762361 4844 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e48f1161-14d0-42c1-b6ac-bdb8bce26985-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 26 13:23:08 crc kubenswrapper[4844]: I0126 13:23:08.762374 4844 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e48f1161-14d0-42c1-b6ac-bdb8bce26985-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 26 13:23:08 crc kubenswrapper[4844]: I0126 13:23:08.762384 4844 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e48f1161-14d0-42c1-b6ac-bdb8bce26985-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 26 13:23:08 crc kubenswrapper[4844]: I0126 13:23:08.762392 4844 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e48f1161-14d0-42c1-b6ac-bdb8bce26985-pod-info\") on node \"crc\" DevicePath \"\"" Jan 26 13:23:08 crc kubenswrapper[4844]: I0126 13:23:08.762403 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l4726\" (UniqueName: \"kubernetes.io/projected/e48f1161-14d0-42c1-b6ac-bdb8bce26985-kube-api-access-l4726\") on node \"crc\" DevicePath \"\"" Jan 26 13:23:08 crc kubenswrapper[4844]: I0126 13:23:08.762428 4844 reconciler_common.go:286] 
"operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Jan 26 13:23:08 crc kubenswrapper[4844]: I0126 13:23:08.762439 4844 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e48f1161-14d0-42c1-b6ac-bdb8bce26985-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 26 13:23:08 crc kubenswrapper[4844]: I0126 13:23:08.792552 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e48f1161-14d0-42c1-b6ac-bdb8bce26985-server-conf" (OuterVolumeSpecName: "server-conf") pod "e48f1161-14d0-42c1-b6ac-bdb8bce26985" (UID: "e48f1161-14d0-42c1-b6ac-bdb8bce26985"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:23:08 crc kubenswrapper[4844]: I0126 13:23:08.795216 4844 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Jan 26 13:23:08 crc kubenswrapper[4844]: I0126 13:23:08.838157 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e48f1161-14d0-42c1-b6ac-bdb8bce26985-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "e48f1161-14d0-42c1-b6ac-bdb8bce26985" (UID: "e48f1161-14d0-42c1-b6ac-bdb8bce26985"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:23:08 crc kubenswrapper[4844]: I0126 13:23:08.864234 4844 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Jan 26 13:23:08 crc kubenswrapper[4844]: I0126 13:23:08.864281 4844 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e48f1161-14d0-42c1-b6ac-bdb8bce26985-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 26 13:23:08 crc kubenswrapper[4844]: I0126 13:23:08.864291 4844 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e48f1161-14d0-42c1-b6ac-bdb8bce26985-server-conf\") on node \"crc\" DevicePath \"\"" Jan 26 13:23:09 crc kubenswrapper[4844]: I0126 13:23:09.111966 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e48f1161-14d0-42c1-b6ac-bdb8bce26985","Type":"ContainerDied","Data":"f038c4bfb9b42aa2adb867b5ff99cb4b7376dfdced5df30a83c1787eabed4214"} Jan 26 13:23:09 crc kubenswrapper[4844]: I0126 13:23:09.112043 4844 scope.go:117] "RemoveContainer" containerID="49224d76c481ef910732446c51b497a3bc7254c88cb8cd2720780911497c6963" Jan 26 13:23:09 crc kubenswrapper[4844]: I0126 13:23:09.112252 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 26 13:23:09 crc kubenswrapper[4844]: I0126 13:23:09.144630 4844 scope.go:117] "RemoveContainer" containerID="438ed061427135c543fb34c1f5a9679a2e6315a4f3935f61296d309523cd31e0" Jan 26 13:23:09 crc kubenswrapper[4844]: I0126 13:23:09.151642 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 26 13:23:09 crc kubenswrapper[4844]: I0126 13:23:09.175630 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 26 13:23:09 crc kubenswrapper[4844]: I0126 13:23:09.185385 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 26 13:23:09 crc kubenswrapper[4844]: E0126 13:23:09.188022 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e48f1161-14d0-42c1-b6ac-bdb8bce26985" containerName="rabbitmq" Jan 26 13:23:09 crc kubenswrapper[4844]: I0126 13:23:09.188258 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="e48f1161-14d0-42c1-b6ac-bdb8bce26985" containerName="rabbitmq" Jan 26 13:23:09 crc kubenswrapper[4844]: E0126 13:23:09.188378 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e48f1161-14d0-42c1-b6ac-bdb8bce26985" containerName="setup-container" Jan 26 13:23:09 crc kubenswrapper[4844]: I0126 13:23:09.188448 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="e48f1161-14d0-42c1-b6ac-bdb8bce26985" containerName="setup-container" Jan 26 13:23:09 crc kubenswrapper[4844]: I0126 13:23:09.188869 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="e48f1161-14d0-42c1-b6ac-bdb8bce26985" containerName="rabbitmq" Jan 26 13:23:09 crc kubenswrapper[4844]: I0126 13:23:09.190368 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 26 13:23:09 crc kubenswrapper[4844]: I0126 13:23:09.192805 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-4hbj2" Jan 26 13:23:09 crc kubenswrapper[4844]: I0126 13:23:09.196184 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 26 13:23:09 crc kubenswrapper[4844]: I0126 13:23:09.196302 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 26 13:23:09 crc kubenswrapper[4844]: I0126 13:23:09.196398 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 26 13:23:09 crc kubenswrapper[4844]: I0126 13:23:09.196509 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 26 13:23:09 crc kubenswrapper[4844]: I0126 13:23:09.196612 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 26 13:23:09 crc kubenswrapper[4844]: I0126 13:23:09.196754 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 26 13:23:09 crc kubenswrapper[4844]: I0126 13:23:09.199007 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 26 13:23:09 crc kubenswrapper[4844]: I0126 13:23:09.335679 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e48f1161-14d0-42c1-b6ac-bdb8bce26985" path="/var/lib/kubelet/pods/e48f1161-14d0-42c1-b6ac-bdb8bce26985/volumes" Jan 26 13:23:09 crc kubenswrapper[4844]: I0126 13:23:09.375462 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/38e1fc4a-33a4-443e-95bb-3e653d3f1a59-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"38e1fc4a-33a4-443e-95bb-3e653d3f1a59\") " pod="openstack/rabbitmq-server-0" Jan 26 13:23:09 crc kubenswrapper[4844]: I0126 13:23:09.375524 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/38e1fc4a-33a4-443e-95bb-3e653d3f1a59-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"38e1fc4a-33a4-443e-95bb-3e653d3f1a59\") " pod="openstack/rabbitmq-server-0" Jan 26 13:23:09 crc kubenswrapper[4844]: I0126 13:23:09.375544 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/38e1fc4a-33a4-443e-95bb-3e653d3f1a59-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"38e1fc4a-33a4-443e-95bb-3e653d3f1a59\") " pod="openstack/rabbitmq-server-0" Jan 26 13:23:09 crc kubenswrapper[4844]: I0126 13:23:09.375676 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmxxw\" (UniqueName: \"kubernetes.io/projected/38e1fc4a-33a4-443e-95bb-3e653d3f1a59-kube-api-access-cmxxw\") pod \"rabbitmq-server-0\" (UID: \"38e1fc4a-33a4-443e-95bb-3e653d3f1a59\") " pod="openstack/rabbitmq-server-0" Jan 26 13:23:09 crc kubenswrapper[4844]: I0126 13:23:09.375761 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/38e1fc4a-33a4-443e-95bb-3e653d3f1a59-config-data\") pod \"rabbitmq-server-0\" (UID: \"38e1fc4a-33a4-443e-95bb-3e653d3f1a59\") " 
pod="openstack/rabbitmq-server-0" Jan 26 13:23:09 crc kubenswrapper[4844]: I0126 13:23:09.375777 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/38e1fc4a-33a4-443e-95bb-3e653d3f1a59-server-conf\") pod \"rabbitmq-server-0\" (UID: \"38e1fc4a-33a4-443e-95bb-3e653d3f1a59\") " pod="openstack/rabbitmq-server-0" Jan 26 13:23:09 crc kubenswrapper[4844]: I0126 13:23:09.375806 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/38e1fc4a-33a4-443e-95bb-3e653d3f1a59-pod-info\") pod \"rabbitmq-server-0\" (UID: \"38e1fc4a-33a4-443e-95bb-3e653d3f1a59\") " pod="openstack/rabbitmq-server-0" Jan 26 13:23:09 crc kubenswrapper[4844]: I0126 13:23:09.375833 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/38e1fc4a-33a4-443e-95bb-3e653d3f1a59-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"38e1fc4a-33a4-443e-95bb-3e653d3f1a59\") " pod="openstack/rabbitmq-server-0" Jan 26 13:23:09 crc kubenswrapper[4844]: I0126 13:23:09.375965 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/38e1fc4a-33a4-443e-95bb-3e653d3f1a59-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"38e1fc4a-33a4-443e-95bb-3e653d3f1a59\") " pod="openstack/rabbitmq-server-0" Jan 26 13:23:09 crc kubenswrapper[4844]: I0126 13:23:09.376049 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/38e1fc4a-33a4-443e-95bb-3e653d3f1a59-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"38e1fc4a-33a4-443e-95bb-3e653d3f1a59\") " pod="openstack/rabbitmq-server-0" Jan 26 13:23:09 crc kubenswrapper[4844]: I0126 13:23:09.376154 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"38e1fc4a-33a4-443e-95bb-3e653d3f1a59\") " pod="openstack/rabbitmq-server-0" Jan 26 13:23:09 crc kubenswrapper[4844]: I0126 13:23:09.477989 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"38e1fc4a-33a4-443e-95bb-3e653d3f1a59\") " pod="openstack/rabbitmq-server-0" Jan 26 13:23:09 crc kubenswrapper[4844]: I0126 13:23:09.478378 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/38e1fc4a-33a4-443e-95bb-3e653d3f1a59-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"38e1fc4a-33a4-443e-95bb-3e653d3f1a59\") " pod="openstack/rabbitmq-server-0" Jan 26 13:23:09 crc kubenswrapper[4844]: I0126 13:23:09.478420 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/38e1fc4a-33a4-443e-95bb-3e653d3f1a59-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"38e1fc4a-33a4-443e-95bb-3e653d3f1a59\") " pod="openstack/rabbitmq-server-0" Jan 26 13:23:09 crc kubenswrapper[4844]: I0126 13:23:09.478435 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/38e1fc4a-33a4-443e-95bb-3e653d3f1a59-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"38e1fc4a-33a4-443e-95bb-3e653d3f1a59\") " pod="openstack/rabbitmq-server-0" Jan 26 13:23:09 crc kubenswrapper[4844]: I0126 13:23:09.478453 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmxxw\" (UniqueName: \"kubernetes.io/projected/38e1fc4a-33a4-443e-95bb-3e653d3f1a59-kube-api-access-cmxxw\") pod \"rabbitmq-server-0\" (UID: \"38e1fc4a-33a4-443e-95bb-3e653d3f1a59\") " pod="openstack/rabbitmq-server-0" Jan 26 13:23:09 crc kubenswrapper[4844]: I0126 13:23:09.478451 4844 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"38e1fc4a-33a4-443e-95bb-3e653d3f1a59\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/rabbitmq-server-0" Jan 26 13:23:09 crc kubenswrapper[4844]: I0126 13:23:09.480173 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/38e1fc4a-33a4-443e-95bb-3e653d3f1a59-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"38e1fc4a-33a4-443e-95bb-3e653d3f1a59\") " pod="openstack/rabbitmq-server-0" Jan 26 13:23:09 crc kubenswrapper[4844]: I0126 13:23:09.481451 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/38e1fc4a-33a4-443e-95bb-3e653d3f1a59-config-data\") pod \"rabbitmq-server-0\" (UID: \"38e1fc4a-33a4-443e-95bb-3e653d3f1a59\") " pod="openstack/rabbitmq-server-0" Jan 26 13:23:09 crc kubenswrapper[4844]: I0126 13:23:09.481504 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/38e1fc4a-33a4-443e-95bb-3e653d3f1a59-server-conf\") pod \"rabbitmq-server-0\" (UID: \"38e1fc4a-33a4-443e-95bb-3e653d3f1a59\") " pod="openstack/rabbitmq-server-0" Jan 26 13:23:09 crc kubenswrapper[4844]: I0126 13:23:09.481619 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/38e1fc4a-33a4-443e-95bb-3e653d3f1a59-pod-info\") pod \"rabbitmq-server-0\" (UID: \"38e1fc4a-33a4-443e-95bb-3e653d3f1a59\") " pod="openstack/rabbitmq-server-0" Jan 26 13:23:09 crc kubenswrapper[4844]: I0126 13:23:09.481703 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/38e1fc4a-33a4-443e-95bb-3e653d3f1a59-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"38e1fc4a-33a4-443e-95bb-3e653d3f1a59\") " pod="openstack/rabbitmq-server-0" Jan 26 13:23:09 crc kubenswrapper[4844]: I0126 13:23:09.481829 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/38e1fc4a-33a4-443e-95bb-3e653d3f1a59-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"38e1fc4a-33a4-443e-95bb-3e653d3f1a59\") " pod="openstack/rabbitmq-server-0" Jan 26 13:23:09 crc kubenswrapper[4844]: I0126 13:23:09.481912 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/38e1fc4a-33a4-443e-95bb-3e653d3f1a59-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"38e1fc4a-33a4-443e-95bb-3e653d3f1a59\") " 
pod="openstack/rabbitmq-server-0" Jan 26 13:23:09 crc kubenswrapper[4844]: I0126 13:23:09.483364 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/38e1fc4a-33a4-443e-95bb-3e653d3f1a59-server-conf\") pod \"rabbitmq-server-0\" (UID: \"38e1fc4a-33a4-443e-95bb-3e653d3f1a59\") " pod="openstack/rabbitmq-server-0" Jan 26 13:23:09 crc kubenswrapper[4844]: I0126 13:23:09.483748 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/38e1fc4a-33a4-443e-95bb-3e653d3f1a59-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"38e1fc4a-33a4-443e-95bb-3e653d3f1a59\") " pod="openstack/rabbitmq-server-0" Jan 26 13:23:09 crc kubenswrapper[4844]: I0126 13:23:09.483887 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/38e1fc4a-33a4-443e-95bb-3e653d3f1a59-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"38e1fc4a-33a4-443e-95bb-3e653d3f1a59\") " pod="openstack/rabbitmq-server-0" Jan 26 13:23:09 crc kubenswrapper[4844]: I0126 13:23:09.484078 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/38e1fc4a-33a4-443e-95bb-3e653d3f1a59-config-data\") pod \"rabbitmq-server-0\" (UID: \"38e1fc4a-33a4-443e-95bb-3e653d3f1a59\") " pod="openstack/rabbitmq-server-0" Jan 26 13:23:09 crc kubenswrapper[4844]: I0126 13:23:09.489452 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/38e1fc4a-33a4-443e-95bb-3e653d3f1a59-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"38e1fc4a-33a4-443e-95bb-3e653d3f1a59\") " pod="openstack/rabbitmq-server-0" Jan 26 13:23:09 crc kubenswrapper[4844]: I0126 13:23:09.489849 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/38e1fc4a-33a4-443e-95bb-3e653d3f1a59-pod-info\") pod \"rabbitmq-server-0\" (UID: \"38e1fc4a-33a4-443e-95bb-3e653d3f1a59\") " pod="openstack/rabbitmq-server-0" Jan 26 13:23:09 crc kubenswrapper[4844]: I0126 13:23:09.496087 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/38e1fc4a-33a4-443e-95bb-3e653d3f1a59-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"38e1fc4a-33a4-443e-95bb-3e653d3f1a59\") " pod="openstack/rabbitmq-server-0" Jan 26 13:23:09 crc kubenswrapper[4844]: I0126 13:23:09.499559 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/38e1fc4a-33a4-443e-95bb-3e653d3f1a59-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"38e1fc4a-33a4-443e-95bb-3e653d3f1a59\") " pod="openstack/rabbitmq-server-0" Jan 26 13:23:09 crc kubenswrapper[4844]: I0126 13:23:09.517476 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmxxw\" (UniqueName: \"kubernetes.io/projected/38e1fc4a-33a4-443e-95bb-3e653d3f1a59-kube-api-access-cmxxw\") pod \"rabbitmq-server-0\" (UID: \"38e1fc4a-33a4-443e-95bb-3e653d3f1a59\") " pod="openstack/rabbitmq-server-0" Jan 26 13:23:09 crc kubenswrapper[4844]: I0126 13:23:09.537722 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: 
\"38e1fc4a-33a4-443e-95bb-3e653d3f1a59\") " pod="openstack/rabbitmq-server-0" Jan 26 13:23:09 crc kubenswrapper[4844]: I0126 13:23:09.675440 4844 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="e8e36a62-9367-4c94-9aff-de8e6166af27" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.106:5671: connect: connection refused" Jan 26 13:23:09 crc kubenswrapper[4844]: I0126 13:23:09.814538 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 26 13:23:10 crc kubenswrapper[4844]: I0126 13:23:10.124097 4844 generic.go:334] "Generic (PLEG): container finished" podID="e8e36a62-9367-4c94-9aff-de8e6166af27" containerID="2758d64ef9dfa428b02a999acaca19c0ab43f356ea26d72de994d5e96fc426e1" exitCode=0 Jan 26 13:23:10 crc kubenswrapper[4844]: I0126 13:23:10.124447 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e8e36a62-9367-4c94-9aff-de8e6166af27","Type":"ContainerDied","Data":"2758d64ef9dfa428b02a999acaca19c0ab43f356ea26d72de994d5e96fc426e1"} Jan 26 13:23:10 crc kubenswrapper[4844]: I0126 13:23:10.312387 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 26 13:23:10 crc kubenswrapper[4844]: I0126 13:23:10.477096 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 26 13:23:10 crc kubenswrapper[4844]: I0126 13:23:10.509313 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e8e36a62-9367-4c94-9aff-de8e6166af27-erlang-cookie-secret\") pod \"e8e36a62-9367-4c94-9aff-de8e6166af27\" (UID: \"e8e36a62-9367-4c94-9aff-de8e6166af27\") " Jan 26 13:23:10 crc kubenswrapper[4844]: I0126 13:23:10.509386 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e8e36a62-9367-4c94-9aff-de8e6166af27-rabbitmq-plugins\") pod \"e8e36a62-9367-4c94-9aff-de8e6166af27\" (UID: \"e8e36a62-9367-4c94-9aff-de8e6166af27\") " Jan 26 13:23:10 crc kubenswrapper[4844]: I0126 13:23:10.509501 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e8e36a62-9367-4c94-9aff-de8e6166af27-pod-info\") pod \"e8e36a62-9367-4c94-9aff-de8e6166af27\" (UID: \"e8e36a62-9367-4c94-9aff-de8e6166af27\") " Jan 26 13:23:10 crc kubenswrapper[4844]: I0126 13:23:10.509612 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"e8e36a62-9367-4c94-9aff-de8e6166af27\" (UID: \"e8e36a62-9367-4c94-9aff-de8e6166af27\") " Jan 26 13:23:10 crc kubenswrapper[4844]: I0126 13:23:10.509753 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xffks\" (UniqueName: \"kubernetes.io/projected/e8e36a62-9367-4c94-9aff-de8e6166af27-kube-api-access-xffks\") pod \"e8e36a62-9367-4c94-9aff-de8e6166af27\" (UID: \"e8e36a62-9367-4c94-9aff-de8e6166af27\") " Jan 26 13:23:10 crc kubenswrapper[4844]: I0126 13:23:10.509819 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e8e36a62-9367-4c94-9aff-de8e6166af27-config-data\") pod \"e8e36a62-9367-4c94-9aff-de8e6166af27\" (UID: 
\"e8e36a62-9367-4c94-9aff-de8e6166af27\") " Jan 26 13:23:10 crc kubenswrapper[4844]: I0126 13:23:10.509862 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e8e36a62-9367-4c94-9aff-de8e6166af27-rabbitmq-erlang-cookie\") pod \"e8e36a62-9367-4c94-9aff-de8e6166af27\" (UID: \"e8e36a62-9367-4c94-9aff-de8e6166af27\") " Jan 26 13:23:10 crc kubenswrapper[4844]: I0126 13:23:10.509933 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e8e36a62-9367-4c94-9aff-de8e6166af27-plugins-conf\") pod \"e8e36a62-9367-4c94-9aff-de8e6166af27\" (UID: \"e8e36a62-9367-4c94-9aff-de8e6166af27\") " Jan 26 13:23:10 crc kubenswrapper[4844]: I0126 13:23:10.509989 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e8e36a62-9367-4c94-9aff-de8e6166af27-server-conf\") pod \"e8e36a62-9367-4c94-9aff-de8e6166af27\" (UID: \"e8e36a62-9367-4c94-9aff-de8e6166af27\") " Jan 26 13:23:10 crc kubenswrapper[4844]: I0126 13:23:10.510030 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e8e36a62-9367-4c94-9aff-de8e6166af27-rabbitmq-confd\") pod \"e8e36a62-9367-4c94-9aff-de8e6166af27\" (UID: \"e8e36a62-9367-4c94-9aff-de8e6166af27\") " Jan 26 13:23:10 crc kubenswrapper[4844]: I0126 13:23:10.510054 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e8e36a62-9367-4c94-9aff-de8e6166af27-rabbitmq-tls\") pod \"e8e36a62-9367-4c94-9aff-de8e6166af27\" (UID: \"e8e36a62-9367-4c94-9aff-de8e6166af27\") " Jan 26 13:23:10 crc kubenswrapper[4844]: I0126 13:23:10.509855 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e8e36a62-9367-4c94-9aff-de8e6166af27-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "e8e36a62-9367-4c94-9aff-de8e6166af27" (UID: "e8e36a62-9367-4c94-9aff-de8e6166af27"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:23:10 crc kubenswrapper[4844]: I0126 13:23:10.510196 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e8e36a62-9367-4c94-9aff-de8e6166af27-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "e8e36a62-9367-4c94-9aff-de8e6166af27" (UID: "e8e36a62-9367-4c94-9aff-de8e6166af27"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:23:10 crc kubenswrapper[4844]: I0126 13:23:10.510403 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8e36a62-9367-4c94-9aff-de8e6166af27-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "e8e36a62-9367-4c94-9aff-de8e6166af27" (UID: "e8e36a62-9367-4c94-9aff-de8e6166af27"). InnerVolumeSpecName "plugins-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:23:10 crc kubenswrapper[4844]: I0126 13:23:10.510952 4844 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e8e36a62-9367-4c94-9aff-de8e6166af27-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 26 13:23:10 crc kubenswrapper[4844]: I0126 13:23:10.510975 4844 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e8e36a62-9367-4c94-9aff-de8e6166af27-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 26 13:23:10 crc kubenswrapper[4844]: I0126 13:23:10.510988 4844 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e8e36a62-9367-4c94-9aff-de8e6166af27-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 26 13:23:10 crc kubenswrapper[4844]: I0126 13:23:10.514254 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8e36a62-9367-4c94-9aff-de8e6166af27-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "e8e36a62-9367-4c94-9aff-de8e6166af27" (UID: "e8e36a62-9367-4c94-9aff-de8e6166af27"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:23:10 crc kubenswrapper[4844]: I0126 13:23:10.515780 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8e36a62-9367-4c94-9aff-de8e6166af27-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "e8e36a62-9367-4c94-9aff-de8e6166af27" (UID: "e8e36a62-9367-4c94-9aff-de8e6166af27"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:23:10 crc kubenswrapper[4844]: I0126 13:23:10.521103 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8e36a62-9367-4c94-9aff-de8e6166af27-kube-api-access-xffks" (OuterVolumeSpecName: "kube-api-access-xffks") pod "e8e36a62-9367-4c94-9aff-de8e6166af27" (UID: "e8e36a62-9367-4c94-9aff-de8e6166af27"). InnerVolumeSpecName "kube-api-access-xffks". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:23:10 crc kubenswrapper[4844]: I0126 13:23:10.521209 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/e8e36a62-9367-4c94-9aff-de8e6166af27-pod-info" (OuterVolumeSpecName: "pod-info") pod "e8e36a62-9367-4c94-9aff-de8e6166af27" (UID: "e8e36a62-9367-4c94-9aff-de8e6166af27"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 26 13:23:10 crc kubenswrapper[4844]: I0126 13:23:10.522191 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "persistence") pod "e8e36a62-9367-4c94-9aff-de8e6166af27" (UID: "e8e36a62-9367-4c94-9aff-de8e6166af27"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 26 13:23:10 crc kubenswrapper[4844]: I0126 13:23:10.564281 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8e36a62-9367-4c94-9aff-de8e6166af27-config-data" (OuterVolumeSpecName: "config-data") pod "e8e36a62-9367-4c94-9aff-de8e6166af27" (UID: "e8e36a62-9367-4c94-9aff-de8e6166af27"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:23:10 crc kubenswrapper[4844]: I0126 13:23:10.600559 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8e36a62-9367-4c94-9aff-de8e6166af27-server-conf" (OuterVolumeSpecName: "server-conf") pod "e8e36a62-9367-4c94-9aff-de8e6166af27" (UID: "e8e36a62-9367-4c94-9aff-de8e6166af27"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:23:10 crc kubenswrapper[4844]: I0126 13:23:10.613274 4844 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e8e36a62-9367-4c94-9aff-de8e6166af27-server-conf\") on node \"crc\" DevicePath \"\"" Jan 26 13:23:10 crc kubenswrapper[4844]: I0126 13:23:10.613305 4844 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e8e36a62-9367-4c94-9aff-de8e6166af27-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 26 13:23:10 crc kubenswrapper[4844]: I0126 13:23:10.613317 4844 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e8e36a62-9367-4c94-9aff-de8e6166af27-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 26 13:23:10 crc kubenswrapper[4844]: I0126 13:23:10.613327 4844 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e8e36a62-9367-4c94-9aff-de8e6166af27-pod-info\") on node \"crc\" DevicePath \"\"" Jan 26 13:23:10 crc kubenswrapper[4844]: I0126 13:23:10.613348 4844 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Jan 26 13:23:10 crc kubenswrapper[4844]: I0126 13:23:10.613358 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xffks\" (UniqueName: \"kubernetes.io/projected/e8e36a62-9367-4c94-9aff-de8e6166af27-kube-api-access-xffks\") on node \"crc\" DevicePath \"\"" Jan 26 13:23:10 crc kubenswrapper[4844]: I0126 13:23:10.613367 4844 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e8e36a62-9367-4c94-9aff-de8e6166af27-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 13:23:10 crc kubenswrapper[4844]: I0126 13:23:10.639024 4844 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Jan 26 13:23:10 crc kubenswrapper[4844]: I0126 13:23:10.653739 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8e36a62-9367-4c94-9aff-de8e6166af27-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "e8e36a62-9367-4c94-9aff-de8e6166af27" (UID: "e8e36a62-9367-4c94-9aff-de8e6166af27"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:23:10 crc kubenswrapper[4844]: I0126 13:23:10.714808 4844 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e8e36a62-9367-4c94-9aff-de8e6166af27-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 26 13:23:10 crc kubenswrapper[4844]: I0126 13:23:10.714839 4844 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Jan 26 13:23:11 crc kubenswrapper[4844]: I0126 13:23:11.136392 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e8e36a62-9367-4c94-9aff-de8e6166af27","Type":"ContainerDied","Data":"c6174e2ee6e8cf26deebd5aa8da5645beddd300ab6400a0ea5227615a329e3a1"} Jan 26 13:23:11 crc kubenswrapper[4844]: I0126 13:23:11.136411 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 26 13:23:11 crc kubenswrapper[4844]: I0126 13:23:11.136818 4844 scope.go:117] "RemoveContainer" containerID="2758d64ef9dfa428b02a999acaca19c0ab43f356ea26d72de994d5e96fc426e1" Jan 26 13:23:11 crc kubenswrapper[4844]: I0126 13:23:11.137883 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"38e1fc4a-33a4-443e-95bb-3e653d3f1a59","Type":"ContainerStarted","Data":"5ef58d43cfff6dbba11e7d140c156b87176a1fa171cf0c3c376537e83a2a0d63"} Jan 26 13:23:11 crc kubenswrapper[4844]: I0126 13:23:11.157704 4844 scope.go:117] "RemoveContainer" containerID="8037333977f59346e11bb0d4d8078b561374ca9115b317429eb3ea0e2a3fc400" Jan 26 13:23:11 crc kubenswrapper[4844]: I0126 13:23:11.178124 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 26 13:23:11 crc kubenswrapper[4844]: I0126 13:23:11.188859 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 26 13:23:11 crc kubenswrapper[4844]: I0126 13:23:11.213575 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 26 13:23:11 crc kubenswrapper[4844]: E0126 13:23:11.214116 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8e36a62-9367-4c94-9aff-de8e6166af27" containerName="setup-container" Jan 26 13:23:11 crc kubenswrapper[4844]: I0126 13:23:11.214139 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8e36a62-9367-4c94-9aff-de8e6166af27" containerName="setup-container" Jan 26 13:23:11 crc kubenswrapper[4844]: E0126 13:23:11.214160 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8e36a62-9367-4c94-9aff-de8e6166af27" containerName="rabbitmq" Jan 26 13:23:11 crc kubenswrapper[4844]: I0126 13:23:11.214168 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8e36a62-9367-4c94-9aff-de8e6166af27" containerName="rabbitmq" Jan 26 13:23:11 crc kubenswrapper[4844]: I0126 13:23:11.214444 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8e36a62-9367-4c94-9aff-de8e6166af27" containerName="rabbitmq" Jan 26 13:23:11 crc kubenswrapper[4844]: I0126 13:23:11.215764 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 26 13:23:11 crc kubenswrapper[4844]: I0126 13:23:11.221152 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 26 13:23:11 crc kubenswrapper[4844]: I0126 13:23:11.221199 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 26 13:23:11 crc kubenswrapper[4844]: I0126 13:23:11.221240 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 26 13:23:11 crc kubenswrapper[4844]: I0126 13:23:11.221501 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 26 13:23:11 crc kubenswrapper[4844]: I0126 13:23:11.221733 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 26 13:23:11 crc kubenswrapper[4844]: I0126 13:23:11.221909 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 26 13:23:11 crc kubenswrapper[4844]: I0126 13:23:11.223402 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-qdtbn" Jan 26 13:23:11 crc kubenswrapper[4844]: I0126 13:23:11.227818 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 26 13:23:11 crc kubenswrapper[4844]: I0126 13:23:11.324922 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/463d25b4-7819-4947-925d-74c429093694-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"463d25b4-7819-4947-925d-74c429093694\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 13:23:11 crc kubenswrapper[4844]: I0126 13:23:11.324985 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"463d25b4-7819-4947-925d-74c429093694\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 13:23:11 crc kubenswrapper[4844]: I0126 13:23:11.325190 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/463d25b4-7819-4947-925d-74c429093694-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"463d25b4-7819-4947-925d-74c429093694\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 13:23:11 crc kubenswrapper[4844]: I0126 13:23:11.325272 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/463d25b4-7819-4947-925d-74c429093694-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"463d25b4-7819-4947-925d-74c429093694\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 13:23:11 crc kubenswrapper[4844]: I0126 13:23:11.325320 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/463d25b4-7819-4947-925d-74c429093694-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"463d25b4-7819-4947-925d-74c429093694\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 13:23:11 crc kubenswrapper[4844]: I0126 13:23:11.325394 4844 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/463d25b4-7819-4947-925d-74c429093694-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"463d25b4-7819-4947-925d-74c429093694\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 13:23:11 crc kubenswrapper[4844]: I0126 13:23:11.325510 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/463d25b4-7819-4947-925d-74c429093694-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"463d25b4-7819-4947-925d-74c429093694\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 13:23:11 crc kubenswrapper[4844]: I0126 13:23:11.325561 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/463d25b4-7819-4947-925d-74c429093694-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"463d25b4-7819-4947-925d-74c429093694\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 13:23:11 crc kubenswrapper[4844]: I0126 13:23:11.325623 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z59xw\" (UniqueName: \"kubernetes.io/projected/463d25b4-7819-4947-925d-74c429093694-kube-api-access-z59xw\") pod \"rabbitmq-cell1-server-0\" (UID: \"463d25b4-7819-4947-925d-74c429093694\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 13:23:11 crc kubenswrapper[4844]: I0126 13:23:11.325717 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/463d25b4-7819-4947-925d-74c429093694-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"463d25b4-7819-4947-925d-74c429093694\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 13:23:11 crc kubenswrapper[4844]: I0126 13:23:11.325809 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/463d25b4-7819-4947-925d-74c429093694-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"463d25b4-7819-4947-925d-74c429093694\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 13:23:11 crc kubenswrapper[4844]: I0126 13:23:11.334774 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8e36a62-9367-4c94-9aff-de8e6166af27" path="/var/lib/kubelet/pods/e8e36a62-9367-4c94-9aff-de8e6166af27/volumes" Jan 26 13:23:11 crc kubenswrapper[4844]: I0126 13:23:11.427443 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/463d25b4-7819-4947-925d-74c429093694-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"463d25b4-7819-4947-925d-74c429093694\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 13:23:11 crc kubenswrapper[4844]: I0126 13:23:11.427547 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/463d25b4-7819-4947-925d-74c429093694-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"463d25b4-7819-4947-925d-74c429093694\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 13:23:11 crc kubenswrapper[4844]: I0126 13:23:11.427612 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/463d25b4-7819-4947-925d-74c429093694-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"463d25b4-7819-4947-925d-74c429093694\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 13:23:11 crc kubenswrapper[4844]: I0126 13:23:11.427661 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"463d25b4-7819-4947-925d-74c429093694\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 13:23:11 crc kubenswrapper[4844]: I0126 13:23:11.427731 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/463d25b4-7819-4947-925d-74c429093694-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"463d25b4-7819-4947-925d-74c429093694\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 13:23:11 crc kubenswrapper[4844]: I0126 13:23:11.427773 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/463d25b4-7819-4947-925d-74c429093694-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"463d25b4-7819-4947-925d-74c429093694\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 13:23:11 crc kubenswrapper[4844]: I0126 13:23:11.427801 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/463d25b4-7819-4947-925d-74c429093694-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"463d25b4-7819-4947-925d-74c429093694\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 13:23:11 crc kubenswrapper[4844]: I0126 13:23:11.427862 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/463d25b4-7819-4947-925d-74c429093694-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"463d25b4-7819-4947-925d-74c429093694\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 13:23:11 crc kubenswrapper[4844]: I0126 13:23:11.427926 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/463d25b4-7819-4947-925d-74c429093694-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"463d25b4-7819-4947-925d-74c429093694\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 13:23:11 crc kubenswrapper[4844]: I0126 13:23:11.427954 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/463d25b4-7819-4947-925d-74c429093694-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"463d25b4-7819-4947-925d-74c429093694\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 13:23:11 crc kubenswrapper[4844]: I0126 13:23:11.427991 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z59xw\" (UniqueName: \"kubernetes.io/projected/463d25b4-7819-4947-925d-74c429093694-kube-api-access-z59xw\") pod \"rabbitmq-cell1-server-0\" (UID: \"463d25b4-7819-4947-925d-74c429093694\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 13:23:11 crc kubenswrapper[4844]: I0126 13:23:11.428036 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/463d25b4-7819-4947-925d-74c429093694-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"463d25b4-7819-4947-925d-74c429093694\") " 
pod="openstack/rabbitmq-cell1-server-0" Jan 26 13:23:11 crc kubenswrapper[4844]: I0126 13:23:11.429707 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/463d25b4-7819-4947-925d-74c429093694-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"463d25b4-7819-4947-925d-74c429093694\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 13:23:11 crc kubenswrapper[4844]: I0126 13:23:11.432813 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/463d25b4-7819-4947-925d-74c429093694-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"463d25b4-7819-4947-925d-74c429093694\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 13:23:11 crc kubenswrapper[4844]: I0126 13:23:11.433018 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/463d25b4-7819-4947-925d-74c429093694-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"463d25b4-7819-4947-925d-74c429093694\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 13:23:11 crc kubenswrapper[4844]: I0126 13:23:11.433333 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/463d25b4-7819-4947-925d-74c429093694-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"463d25b4-7819-4947-925d-74c429093694\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 13:23:11 crc kubenswrapper[4844]: I0126 13:23:11.433440 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/463d25b4-7819-4947-925d-74c429093694-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"463d25b4-7819-4947-925d-74c429093694\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 13:23:11 crc kubenswrapper[4844]: I0126 13:23:11.433450 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/463d25b4-7819-4947-925d-74c429093694-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"463d25b4-7819-4947-925d-74c429093694\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 13:23:11 crc kubenswrapper[4844]: I0126 13:23:11.433501 4844 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"463d25b4-7819-4947-925d-74c429093694\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/rabbitmq-cell1-server-0" Jan 26 13:23:11 crc kubenswrapper[4844]: I0126 13:23:11.434669 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/463d25b4-7819-4947-925d-74c429093694-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"463d25b4-7819-4947-925d-74c429093694\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 13:23:11 crc kubenswrapper[4844]: I0126 13:23:11.436192 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/463d25b4-7819-4947-925d-74c429093694-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"463d25b4-7819-4947-925d-74c429093694\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 13:23:11 crc kubenswrapper[4844]: I0126 13:23:11.454928 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-z59xw\" (UniqueName: \"kubernetes.io/projected/463d25b4-7819-4947-925d-74c429093694-kube-api-access-z59xw\") pod \"rabbitmq-cell1-server-0\" (UID: \"463d25b4-7819-4947-925d-74c429093694\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 13:23:11 crc kubenswrapper[4844]: I0126 13:23:11.564522 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"463d25b4-7819-4947-925d-74c429093694\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 13:23:11 crc kubenswrapper[4844]: I0126 13:23:11.854357 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 26 13:23:12 crc kubenswrapper[4844]: I0126 13:23:12.148901 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"38e1fc4a-33a4-443e-95bb-3e653d3f1a59","Type":"ContainerStarted","Data":"1c483aa09ff4fb3d4d53b9e21bc413d163edabf034a864d2735d267aa46bcd18"} Jan 26 13:23:12 crc kubenswrapper[4844]: I0126 13:23:12.303425 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 26 13:23:12 crc kubenswrapper[4844]: W0126 13:23:12.305919 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod463d25b4_7819_4947_925d_74c429093694.slice/crio-98cb0d675c95db4d1f84726d4f558d777775c03c2a93b230f824d570d8fc36b1 WatchSource:0}: Error finding container 98cb0d675c95db4d1f84726d4f558d777775c03c2a93b230f824d570d8fc36b1: Status 404 returned error can't find the container with id 98cb0d675c95db4d1f84726d4f558d777775c03c2a93b230f824d570d8fc36b1 Jan 26 13:23:13 crc kubenswrapper[4844]: I0126 13:23:13.160874 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"463d25b4-7819-4947-925d-74c429093694","Type":"ContainerStarted","Data":"98cb0d675c95db4d1f84726d4f558d777775c03c2a93b230f824d570d8fc36b1"} Jan 26 13:23:15 crc kubenswrapper[4844]: I0126 13:23:15.187716 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"463d25b4-7819-4947-925d-74c429093694","Type":"ContainerStarted","Data":"3d69018fceeea9252026ea2283c5f07a452562d595a6cc98a4e4a63d0097beb1"} Jan 26 13:23:20 crc kubenswrapper[4844]: I0126 13:23:20.004745 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-f6d9f48c5-fm2dq"] Jan 26 13:23:20 crc kubenswrapper[4844]: I0126 13:23:20.006908 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-f6d9f48c5-fm2dq" Jan 26 13:23:20 crc kubenswrapper[4844]: I0126 13:23:20.008788 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Jan 26 13:23:20 crc kubenswrapper[4844]: I0126 13:23:20.032608 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f6d9f48c5-fm2dq"] Jan 26 13:23:20 crc kubenswrapper[4844]: I0126 13:23:20.102279 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/544b4fcb-9e33-4e9d-a75d-4a48703084a8-config\") pod \"dnsmasq-dns-f6d9f48c5-fm2dq\" (UID: \"544b4fcb-9e33-4e9d-a75d-4a48703084a8\") " pod="openstack/dnsmasq-dns-f6d9f48c5-fm2dq" Jan 26 13:23:20 crc kubenswrapper[4844]: I0126 13:23:20.102358 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/544b4fcb-9e33-4e9d-a75d-4a48703084a8-dns-svc\") pod \"dnsmasq-dns-f6d9f48c5-fm2dq\" (UID: \"544b4fcb-9e33-4e9d-a75d-4a48703084a8\") " pod="openstack/dnsmasq-dns-f6d9f48c5-fm2dq" Jan 26 13:23:20 crc kubenswrapper[4844]: I0126 13:23:20.102532 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/544b4fcb-9e33-4e9d-a75d-4a48703084a8-dns-swift-storage-0\") pod \"dnsmasq-dns-f6d9f48c5-fm2dq\" (UID: \"544b4fcb-9e33-4e9d-a75d-4a48703084a8\") " pod="openstack/dnsmasq-dns-f6d9f48c5-fm2dq" Jan 26 13:23:20 crc kubenswrapper[4844]: I0126 13:23:20.102613 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/544b4fcb-9e33-4e9d-a75d-4a48703084a8-openstack-edpm-ipam\") pod \"dnsmasq-dns-f6d9f48c5-fm2dq\" (UID: \"544b4fcb-9e33-4e9d-a75d-4a48703084a8\") " pod="openstack/dnsmasq-dns-f6d9f48c5-fm2dq" Jan 26 13:23:20 crc kubenswrapper[4844]: I0126 13:23:20.102686 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/544b4fcb-9e33-4e9d-a75d-4a48703084a8-ovsdbserver-nb\") pod \"dnsmasq-dns-f6d9f48c5-fm2dq\" (UID: \"544b4fcb-9e33-4e9d-a75d-4a48703084a8\") " pod="openstack/dnsmasq-dns-f6d9f48c5-fm2dq" Jan 26 13:23:20 crc kubenswrapper[4844]: I0126 13:23:20.102891 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/544b4fcb-9e33-4e9d-a75d-4a48703084a8-ovsdbserver-sb\") pod \"dnsmasq-dns-f6d9f48c5-fm2dq\" (UID: \"544b4fcb-9e33-4e9d-a75d-4a48703084a8\") " pod="openstack/dnsmasq-dns-f6d9f48c5-fm2dq" Jan 26 13:23:20 crc kubenswrapper[4844]: I0126 13:23:20.102969 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxcch\" (UniqueName: \"kubernetes.io/projected/544b4fcb-9e33-4e9d-a75d-4a48703084a8-kube-api-access-kxcch\") pod \"dnsmasq-dns-f6d9f48c5-fm2dq\" (UID: \"544b4fcb-9e33-4e9d-a75d-4a48703084a8\") " pod="openstack/dnsmasq-dns-f6d9f48c5-fm2dq" Jan 26 13:23:20 crc kubenswrapper[4844]: I0126 13:23:20.204703 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/544b4fcb-9e33-4e9d-a75d-4a48703084a8-ovsdbserver-sb\") pod \"dnsmasq-dns-f6d9f48c5-fm2dq\" (UID: 
\"544b4fcb-9e33-4e9d-a75d-4a48703084a8\") " pod="openstack/dnsmasq-dns-f6d9f48c5-fm2dq" Jan 26 13:23:20 crc kubenswrapper[4844]: I0126 13:23:20.204807 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxcch\" (UniqueName: \"kubernetes.io/projected/544b4fcb-9e33-4e9d-a75d-4a48703084a8-kube-api-access-kxcch\") pod \"dnsmasq-dns-f6d9f48c5-fm2dq\" (UID: \"544b4fcb-9e33-4e9d-a75d-4a48703084a8\") " pod="openstack/dnsmasq-dns-f6d9f48c5-fm2dq" Jan 26 13:23:20 crc kubenswrapper[4844]: I0126 13:23:20.205287 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/544b4fcb-9e33-4e9d-a75d-4a48703084a8-config\") pod \"dnsmasq-dns-f6d9f48c5-fm2dq\" (UID: \"544b4fcb-9e33-4e9d-a75d-4a48703084a8\") " pod="openstack/dnsmasq-dns-f6d9f48c5-fm2dq" Jan 26 13:23:20 crc kubenswrapper[4844]: I0126 13:23:20.205334 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/544b4fcb-9e33-4e9d-a75d-4a48703084a8-dns-svc\") pod \"dnsmasq-dns-f6d9f48c5-fm2dq\" (UID: \"544b4fcb-9e33-4e9d-a75d-4a48703084a8\") " pod="openstack/dnsmasq-dns-f6d9f48c5-fm2dq" Jan 26 13:23:20 crc kubenswrapper[4844]: I0126 13:23:20.205378 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/544b4fcb-9e33-4e9d-a75d-4a48703084a8-dns-swift-storage-0\") pod \"dnsmasq-dns-f6d9f48c5-fm2dq\" (UID: \"544b4fcb-9e33-4e9d-a75d-4a48703084a8\") " pod="openstack/dnsmasq-dns-f6d9f48c5-fm2dq" Jan 26 13:23:20 crc kubenswrapper[4844]: I0126 13:23:20.205411 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/544b4fcb-9e33-4e9d-a75d-4a48703084a8-openstack-edpm-ipam\") pod \"dnsmasq-dns-f6d9f48c5-fm2dq\" (UID: \"544b4fcb-9e33-4e9d-a75d-4a48703084a8\") " pod="openstack/dnsmasq-dns-f6d9f48c5-fm2dq" Jan 26 13:23:20 crc kubenswrapper[4844]: I0126 13:23:20.205441 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/544b4fcb-9e33-4e9d-a75d-4a48703084a8-ovsdbserver-nb\") pod \"dnsmasq-dns-f6d9f48c5-fm2dq\" (UID: \"544b4fcb-9e33-4e9d-a75d-4a48703084a8\") " pod="openstack/dnsmasq-dns-f6d9f48c5-fm2dq" Jan 26 13:23:20 crc kubenswrapper[4844]: I0126 13:23:20.205614 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/544b4fcb-9e33-4e9d-a75d-4a48703084a8-ovsdbserver-sb\") pod \"dnsmasq-dns-f6d9f48c5-fm2dq\" (UID: \"544b4fcb-9e33-4e9d-a75d-4a48703084a8\") " pod="openstack/dnsmasq-dns-f6d9f48c5-fm2dq" Jan 26 13:23:20 crc kubenswrapper[4844]: I0126 13:23:20.206318 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/544b4fcb-9e33-4e9d-a75d-4a48703084a8-openstack-edpm-ipam\") pod \"dnsmasq-dns-f6d9f48c5-fm2dq\" (UID: \"544b4fcb-9e33-4e9d-a75d-4a48703084a8\") " pod="openstack/dnsmasq-dns-f6d9f48c5-fm2dq" Jan 26 13:23:20 crc kubenswrapper[4844]: I0126 13:23:20.206381 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/544b4fcb-9e33-4e9d-a75d-4a48703084a8-dns-swift-storage-0\") pod \"dnsmasq-dns-f6d9f48c5-fm2dq\" (UID: \"544b4fcb-9e33-4e9d-a75d-4a48703084a8\") " 
pod="openstack/dnsmasq-dns-f6d9f48c5-fm2dq" Jan 26 13:23:20 crc kubenswrapper[4844]: I0126 13:23:20.206588 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/544b4fcb-9e33-4e9d-a75d-4a48703084a8-dns-svc\") pod \"dnsmasq-dns-f6d9f48c5-fm2dq\" (UID: \"544b4fcb-9e33-4e9d-a75d-4a48703084a8\") " pod="openstack/dnsmasq-dns-f6d9f48c5-fm2dq" Jan 26 13:23:20 crc kubenswrapper[4844]: I0126 13:23:20.207057 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/544b4fcb-9e33-4e9d-a75d-4a48703084a8-ovsdbserver-nb\") pod \"dnsmasq-dns-f6d9f48c5-fm2dq\" (UID: \"544b4fcb-9e33-4e9d-a75d-4a48703084a8\") " pod="openstack/dnsmasq-dns-f6d9f48c5-fm2dq" Jan 26 13:23:20 crc kubenswrapper[4844]: I0126 13:23:20.207458 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/544b4fcb-9e33-4e9d-a75d-4a48703084a8-config\") pod \"dnsmasq-dns-f6d9f48c5-fm2dq\" (UID: \"544b4fcb-9e33-4e9d-a75d-4a48703084a8\") " pod="openstack/dnsmasq-dns-f6d9f48c5-fm2dq" Jan 26 13:23:20 crc kubenswrapper[4844]: I0126 13:23:20.230683 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxcch\" (UniqueName: \"kubernetes.io/projected/544b4fcb-9e33-4e9d-a75d-4a48703084a8-kube-api-access-kxcch\") pod \"dnsmasq-dns-f6d9f48c5-fm2dq\" (UID: \"544b4fcb-9e33-4e9d-a75d-4a48703084a8\") " pod="openstack/dnsmasq-dns-f6d9f48c5-fm2dq" Jan 26 13:23:20 crc kubenswrapper[4844]: I0126 13:23:20.312941 4844 scope.go:117] "RemoveContainer" containerID="003e8a783231d0610c61a79899be5429104525b8053b82bc16d011d8b1eff87d" Jan 26 13:23:20 crc kubenswrapper[4844]: E0126 13:23:20.313530 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 13:23:20 crc kubenswrapper[4844]: I0126 13:23:20.326982 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-f6d9f48c5-fm2dq" Jan 26 13:23:20 crc kubenswrapper[4844]: I0126 13:23:20.784440 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f6d9f48c5-fm2dq"] Jan 26 13:23:21 crc kubenswrapper[4844]: I0126 13:23:21.253852 4844 generic.go:334] "Generic (PLEG): container finished" podID="544b4fcb-9e33-4e9d-a75d-4a48703084a8" containerID="fdabb2e88956090cbd85f11661adfe1cbcfa07e35d6820a13b09795455864443" exitCode=0 Jan 26 13:23:21 crc kubenswrapper[4844]: I0126 13:23:21.253893 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f6d9f48c5-fm2dq" event={"ID":"544b4fcb-9e33-4e9d-a75d-4a48703084a8","Type":"ContainerDied","Data":"fdabb2e88956090cbd85f11661adfe1cbcfa07e35d6820a13b09795455864443"} Jan 26 13:23:21 crc kubenswrapper[4844]: I0126 13:23:21.253917 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f6d9f48c5-fm2dq" event={"ID":"544b4fcb-9e33-4e9d-a75d-4a48703084a8","Type":"ContainerStarted","Data":"cc0611779be7216f1ba108243f43a1c6101fe753757d5f80da4c1fdbc87a74a5"} Jan 26 13:23:22 crc kubenswrapper[4844]: I0126 13:23:22.270164 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f6d9f48c5-fm2dq" event={"ID":"544b4fcb-9e33-4e9d-a75d-4a48703084a8","Type":"ContainerStarted","Data":"3cc359ea290c4a1ba3f4026d4e28a17fb1253b3a06d9147e6b61b211af705ac2"} Jan 26 13:23:22 crc kubenswrapper[4844]: I0126 13:23:22.270500 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-f6d9f48c5-fm2dq" Jan 26 13:23:22 crc kubenswrapper[4844]: I0126 13:23:22.313949 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-f6d9f48c5-fm2dq" podStartSLOduration=3.31392287 podStartE2EDuration="3.31392287s" podCreationTimestamp="2026-01-26 13:23:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 13:23:22.29783733 +0000 UTC m=+2379.231204942" watchObservedRunningTime="2026-01-26 13:23:22.31392287 +0000 UTC m=+2379.247290502" Jan 26 13:23:30 crc kubenswrapper[4844]: I0126 13:23:30.328760 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-f6d9f48c5-fm2dq" Jan 26 13:23:30 crc kubenswrapper[4844]: I0126 13:23:30.398236 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79cf597b77-57qsp"] Jan 26 13:23:30 crc kubenswrapper[4844]: I0126 13:23:30.398501 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-79cf597b77-57qsp" podUID="19f78d57-6253-4a29-8813-9dd30c3a3f86" containerName="dnsmasq-dns" containerID="cri-o://afd59bf86e1f85a288693413031160fed09539a3d118ca1e9e2aea9af5a44c3e" gracePeriod=10 Jan 26 13:23:30 crc kubenswrapper[4844]: I0126 13:23:30.517750 4844 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-79cf597b77-57qsp" podUID="19f78d57-6253-4a29-8813-9dd30c3a3f86" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.221:5353: connect: connection refused" Jan 26 13:23:30 crc kubenswrapper[4844]: I0126 13:23:30.576659 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86587fb56f-wskms"] Jan 26 13:23:30 crc kubenswrapper[4844]: I0126 13:23:30.591654 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86587fb56f-wskms" Jan 26 13:23:30 crc kubenswrapper[4844]: I0126 13:23:30.602870 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86587fb56f-wskms"] Jan 26 13:23:30 crc kubenswrapper[4844]: I0126 13:23:30.750237 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxcws\" (UniqueName: \"kubernetes.io/projected/3ae83571-dfc8-4d58-bb40-b527756013e7-kube-api-access-fxcws\") pod \"dnsmasq-dns-86587fb56f-wskms\" (UID: \"3ae83571-dfc8-4d58-bb40-b527756013e7\") " pod="openstack/dnsmasq-dns-86587fb56f-wskms" Jan 26 13:23:30 crc kubenswrapper[4844]: I0126 13:23:30.750343 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3ae83571-dfc8-4d58-bb40-b527756013e7-dns-swift-storage-0\") pod \"dnsmasq-dns-86587fb56f-wskms\" (UID: \"3ae83571-dfc8-4d58-bb40-b527756013e7\") " pod="openstack/dnsmasq-dns-86587fb56f-wskms" Jan 26 13:23:30 crc kubenswrapper[4844]: I0126 13:23:30.750414 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ae83571-dfc8-4d58-bb40-b527756013e7-config\") pod \"dnsmasq-dns-86587fb56f-wskms\" (UID: \"3ae83571-dfc8-4d58-bb40-b527756013e7\") " pod="openstack/dnsmasq-dns-86587fb56f-wskms" Jan 26 13:23:30 crc kubenswrapper[4844]: I0126 13:23:30.750434 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/3ae83571-dfc8-4d58-bb40-b527756013e7-openstack-edpm-ipam\") pod \"dnsmasq-dns-86587fb56f-wskms\" (UID: \"3ae83571-dfc8-4d58-bb40-b527756013e7\") " pod="openstack/dnsmasq-dns-86587fb56f-wskms" Jan 26 13:23:30 crc kubenswrapper[4844]: I0126 13:23:30.750515 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3ae83571-dfc8-4d58-bb40-b527756013e7-dns-svc\") pod \"dnsmasq-dns-86587fb56f-wskms\" (UID: \"3ae83571-dfc8-4d58-bb40-b527756013e7\") " pod="openstack/dnsmasq-dns-86587fb56f-wskms" Jan 26 13:23:30 crc kubenswrapper[4844]: I0126 13:23:30.750583 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3ae83571-dfc8-4d58-bb40-b527756013e7-ovsdbserver-nb\") pod \"dnsmasq-dns-86587fb56f-wskms\" (UID: \"3ae83571-dfc8-4d58-bb40-b527756013e7\") " pod="openstack/dnsmasq-dns-86587fb56f-wskms" Jan 26 13:23:30 crc kubenswrapper[4844]: I0126 13:23:30.750667 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3ae83571-dfc8-4d58-bb40-b527756013e7-ovsdbserver-sb\") pod \"dnsmasq-dns-86587fb56f-wskms\" (UID: \"3ae83571-dfc8-4d58-bb40-b527756013e7\") " pod="openstack/dnsmasq-dns-86587fb56f-wskms" Jan 26 13:23:30 crc kubenswrapper[4844]: I0126 13:23:30.853677 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3ae83571-dfc8-4d58-bb40-b527756013e7-ovsdbserver-nb\") pod \"dnsmasq-dns-86587fb56f-wskms\" (UID: \"3ae83571-dfc8-4d58-bb40-b527756013e7\") " pod="openstack/dnsmasq-dns-86587fb56f-wskms" Jan 26 13:23:30 crc kubenswrapper[4844]: I0126 13:23:30.853775 4844 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3ae83571-dfc8-4d58-bb40-b527756013e7-ovsdbserver-sb\") pod \"dnsmasq-dns-86587fb56f-wskms\" (UID: \"3ae83571-dfc8-4d58-bb40-b527756013e7\") " pod="openstack/dnsmasq-dns-86587fb56f-wskms" Jan 26 13:23:30 crc kubenswrapper[4844]: I0126 13:23:30.853858 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxcws\" (UniqueName: \"kubernetes.io/projected/3ae83571-dfc8-4d58-bb40-b527756013e7-kube-api-access-fxcws\") pod \"dnsmasq-dns-86587fb56f-wskms\" (UID: \"3ae83571-dfc8-4d58-bb40-b527756013e7\") " pod="openstack/dnsmasq-dns-86587fb56f-wskms" Jan 26 13:23:30 crc kubenswrapper[4844]: I0126 13:23:30.853942 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3ae83571-dfc8-4d58-bb40-b527756013e7-dns-swift-storage-0\") pod \"dnsmasq-dns-86587fb56f-wskms\" (UID: \"3ae83571-dfc8-4d58-bb40-b527756013e7\") " pod="openstack/dnsmasq-dns-86587fb56f-wskms" Jan 26 13:23:30 crc kubenswrapper[4844]: I0126 13:23:30.854058 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ae83571-dfc8-4d58-bb40-b527756013e7-config\") pod \"dnsmasq-dns-86587fb56f-wskms\" (UID: \"3ae83571-dfc8-4d58-bb40-b527756013e7\") " pod="openstack/dnsmasq-dns-86587fb56f-wskms" Jan 26 13:23:30 crc kubenswrapper[4844]: I0126 13:23:30.854098 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/3ae83571-dfc8-4d58-bb40-b527756013e7-openstack-edpm-ipam\") pod \"dnsmasq-dns-86587fb56f-wskms\" (UID: \"3ae83571-dfc8-4d58-bb40-b527756013e7\") " pod="openstack/dnsmasq-dns-86587fb56f-wskms" Jan 26 13:23:30 crc kubenswrapper[4844]: I0126 13:23:30.854129 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3ae83571-dfc8-4d58-bb40-b527756013e7-dns-svc\") pod \"dnsmasq-dns-86587fb56f-wskms\" (UID: \"3ae83571-dfc8-4d58-bb40-b527756013e7\") " pod="openstack/dnsmasq-dns-86587fb56f-wskms" Jan 26 13:23:30 crc kubenswrapper[4844]: I0126 13:23:30.855561 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3ae83571-dfc8-4d58-bb40-b527756013e7-dns-svc\") pod \"dnsmasq-dns-86587fb56f-wskms\" (UID: \"3ae83571-dfc8-4d58-bb40-b527756013e7\") " pod="openstack/dnsmasq-dns-86587fb56f-wskms" Jan 26 13:23:30 crc kubenswrapper[4844]: I0126 13:23:30.855695 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3ae83571-dfc8-4d58-bb40-b527756013e7-ovsdbserver-nb\") pod \"dnsmasq-dns-86587fb56f-wskms\" (UID: \"3ae83571-dfc8-4d58-bb40-b527756013e7\") " pod="openstack/dnsmasq-dns-86587fb56f-wskms" Jan 26 13:23:30 crc kubenswrapper[4844]: I0126 13:23:30.856528 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3ae83571-dfc8-4d58-bb40-b527756013e7-dns-swift-storage-0\") pod \"dnsmasq-dns-86587fb56f-wskms\" (UID: \"3ae83571-dfc8-4d58-bb40-b527756013e7\") " pod="openstack/dnsmasq-dns-86587fb56f-wskms" Jan 26 13:23:30 crc kubenswrapper[4844]: I0126 13:23:30.857237 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3ae83571-dfc8-4d58-bb40-b527756013e7-ovsdbserver-sb\") pod \"dnsmasq-dns-86587fb56f-wskms\" (UID: \"3ae83571-dfc8-4d58-bb40-b527756013e7\") " pod="openstack/dnsmasq-dns-86587fb56f-wskms" Jan 26 13:23:30 crc kubenswrapper[4844]: I0126 13:23:30.857420 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ae83571-dfc8-4d58-bb40-b527756013e7-config\") pod \"dnsmasq-dns-86587fb56f-wskms\" (UID: \"3ae83571-dfc8-4d58-bb40-b527756013e7\") " pod="openstack/dnsmasq-dns-86587fb56f-wskms" Jan 26 13:23:30 crc kubenswrapper[4844]: I0126 13:23:30.857955 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/3ae83571-dfc8-4d58-bb40-b527756013e7-openstack-edpm-ipam\") pod \"dnsmasq-dns-86587fb56f-wskms\" (UID: \"3ae83571-dfc8-4d58-bb40-b527756013e7\") " pod="openstack/dnsmasq-dns-86587fb56f-wskms" Jan 26 13:23:30 crc kubenswrapper[4844]: I0126 13:23:30.883261 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxcws\" (UniqueName: \"kubernetes.io/projected/3ae83571-dfc8-4d58-bb40-b527756013e7-kube-api-access-fxcws\") pod \"dnsmasq-dns-86587fb56f-wskms\" (UID: \"3ae83571-dfc8-4d58-bb40-b527756013e7\") " pod="openstack/dnsmasq-dns-86587fb56f-wskms" Jan 26 13:23:30 crc kubenswrapper[4844]: I0126 13:23:30.921420 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86587fb56f-wskms" Jan 26 13:23:31 crc kubenswrapper[4844]: I0126 13:23:31.022210 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79cf597b77-57qsp" Jan 26 13:23:31 crc kubenswrapper[4844]: I0126 13:23:31.162718 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19f78d57-6253-4a29-8813-9dd30c3a3f86-config\") pod \"19f78d57-6253-4a29-8813-9dd30c3a3f86\" (UID: \"19f78d57-6253-4a29-8813-9dd30c3a3f86\") " Jan 26 13:23:31 crc kubenswrapper[4844]: I0126 13:23:31.162759 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/19f78d57-6253-4a29-8813-9dd30c3a3f86-ovsdbserver-sb\") pod \"19f78d57-6253-4a29-8813-9dd30c3a3f86\" (UID: \"19f78d57-6253-4a29-8813-9dd30c3a3f86\") " Jan 26 13:23:31 crc kubenswrapper[4844]: I0126 13:23:31.163001 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/19f78d57-6253-4a29-8813-9dd30c3a3f86-ovsdbserver-nb\") pod \"19f78d57-6253-4a29-8813-9dd30c3a3f86\" (UID: \"19f78d57-6253-4a29-8813-9dd30c3a3f86\") " Jan 26 13:23:31 crc kubenswrapper[4844]: I0126 13:23:31.163029 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h2btq\" (UniqueName: \"kubernetes.io/projected/19f78d57-6253-4a29-8813-9dd30c3a3f86-kube-api-access-h2btq\") pod \"19f78d57-6253-4a29-8813-9dd30c3a3f86\" (UID: \"19f78d57-6253-4a29-8813-9dd30c3a3f86\") " Jan 26 13:23:31 crc kubenswrapper[4844]: I0126 13:23:31.163080 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/19f78d57-6253-4a29-8813-9dd30c3a3f86-dns-svc\") pod \"19f78d57-6253-4a29-8813-9dd30c3a3f86\" (UID: \"19f78d57-6253-4a29-8813-9dd30c3a3f86\") " Jan 26 13:23:31 crc 
kubenswrapper[4844]: I0126 13:23:31.163135 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/19f78d57-6253-4a29-8813-9dd30c3a3f86-dns-swift-storage-0\") pod \"19f78d57-6253-4a29-8813-9dd30c3a3f86\" (UID: \"19f78d57-6253-4a29-8813-9dd30c3a3f86\") " Jan 26 13:23:31 crc kubenswrapper[4844]: I0126 13:23:31.167523 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19f78d57-6253-4a29-8813-9dd30c3a3f86-kube-api-access-h2btq" (OuterVolumeSpecName: "kube-api-access-h2btq") pod "19f78d57-6253-4a29-8813-9dd30c3a3f86" (UID: "19f78d57-6253-4a29-8813-9dd30c3a3f86"). InnerVolumeSpecName "kube-api-access-h2btq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:23:31 crc kubenswrapper[4844]: I0126 13:23:31.250872 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19f78d57-6253-4a29-8813-9dd30c3a3f86-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "19f78d57-6253-4a29-8813-9dd30c3a3f86" (UID: "19f78d57-6253-4a29-8813-9dd30c3a3f86"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:23:31 crc kubenswrapper[4844]: I0126 13:23:31.254155 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19f78d57-6253-4a29-8813-9dd30c3a3f86-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "19f78d57-6253-4a29-8813-9dd30c3a3f86" (UID: "19f78d57-6253-4a29-8813-9dd30c3a3f86"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:23:31 crc kubenswrapper[4844]: I0126 13:23:31.263969 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19f78d57-6253-4a29-8813-9dd30c3a3f86-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "19f78d57-6253-4a29-8813-9dd30c3a3f86" (UID: "19f78d57-6253-4a29-8813-9dd30c3a3f86"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:23:31 crc kubenswrapper[4844]: I0126 13:23:31.265347 4844 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/19f78d57-6253-4a29-8813-9dd30c3a3f86-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 13:23:31 crc kubenswrapper[4844]: I0126 13:23:31.265372 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h2btq\" (UniqueName: \"kubernetes.io/projected/19f78d57-6253-4a29-8813-9dd30c3a3f86-kube-api-access-h2btq\") on node \"crc\" DevicePath \"\"" Jan 26 13:23:31 crc kubenswrapper[4844]: I0126 13:23:31.265384 4844 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/19f78d57-6253-4a29-8813-9dd30c3a3f86-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 26 13:23:31 crc kubenswrapper[4844]: I0126 13:23:31.265392 4844 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/19f78d57-6253-4a29-8813-9dd30c3a3f86-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 13:23:31 crc kubenswrapper[4844]: I0126 13:23:31.269276 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19f78d57-6253-4a29-8813-9dd30c3a3f86-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "19f78d57-6253-4a29-8813-9dd30c3a3f86" (UID: "19f78d57-6253-4a29-8813-9dd30c3a3f86"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:23:31 crc kubenswrapper[4844]: I0126 13:23:31.287212 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19f78d57-6253-4a29-8813-9dd30c3a3f86-config" (OuterVolumeSpecName: "config") pod "19f78d57-6253-4a29-8813-9dd30c3a3f86" (UID: "19f78d57-6253-4a29-8813-9dd30c3a3f86"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:23:31 crc kubenswrapper[4844]: I0126 13:23:31.366944 4844 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/19f78d57-6253-4a29-8813-9dd30c3a3f86-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 13:23:31 crc kubenswrapper[4844]: I0126 13:23:31.366974 4844 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19f78d57-6253-4a29-8813-9dd30c3a3f86-config\") on node \"crc\" DevicePath \"\"" Jan 26 13:23:31 crc kubenswrapper[4844]: I0126 13:23:31.370095 4844 generic.go:334] "Generic (PLEG): container finished" podID="19f78d57-6253-4a29-8813-9dd30c3a3f86" containerID="afd59bf86e1f85a288693413031160fed09539a3d118ca1e9e2aea9af5a44c3e" exitCode=0 Jan 26 13:23:31 crc kubenswrapper[4844]: I0126 13:23:31.370144 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79cf597b77-57qsp" event={"ID":"19f78d57-6253-4a29-8813-9dd30c3a3f86","Type":"ContainerDied","Data":"afd59bf86e1f85a288693413031160fed09539a3d118ca1e9e2aea9af5a44c3e"} Jan 26 13:23:31 crc kubenswrapper[4844]: I0126 13:23:31.370192 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79cf597b77-57qsp" event={"ID":"19f78d57-6253-4a29-8813-9dd30c3a3f86","Type":"ContainerDied","Data":"c7fa3a55a69b80862ab463a9bc2367f217b0bc5c52cf0df6423f6f2144b04365"} Jan 26 13:23:31 crc kubenswrapper[4844]: I0126 13:23:31.370195 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79cf597b77-57qsp" Jan 26 13:23:31 crc kubenswrapper[4844]: I0126 13:23:31.370213 4844 scope.go:117] "RemoveContainer" containerID="afd59bf86e1f85a288693413031160fed09539a3d118ca1e9e2aea9af5a44c3e" Jan 26 13:23:31 crc kubenswrapper[4844]: I0126 13:23:31.398184 4844 scope.go:117] "RemoveContainer" containerID="b1da75ac10c2c9b81a86b96b80aab62885a94c2dbea2251c84f0907b8747f21b" Jan 26 13:23:31 crc kubenswrapper[4844]: I0126 13:23:31.411408 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79cf597b77-57qsp"] Jan 26 13:23:31 crc kubenswrapper[4844]: I0126 13:23:31.420455 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-79cf597b77-57qsp"] Jan 26 13:23:31 crc kubenswrapper[4844]: I0126 13:23:31.429540 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86587fb56f-wskms"] Jan 26 13:23:31 crc kubenswrapper[4844]: I0126 13:23:31.438629 4844 scope.go:117] "RemoveContainer" containerID="afd59bf86e1f85a288693413031160fed09539a3d118ca1e9e2aea9af5a44c3e" Jan 26 13:23:31 crc kubenswrapper[4844]: E0126 13:23:31.439341 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"afd59bf86e1f85a288693413031160fed09539a3d118ca1e9e2aea9af5a44c3e\": container with ID starting with afd59bf86e1f85a288693413031160fed09539a3d118ca1e9e2aea9af5a44c3e not found: ID does not exist" containerID="afd59bf86e1f85a288693413031160fed09539a3d118ca1e9e2aea9af5a44c3e" Jan 26 13:23:31 crc kubenswrapper[4844]: I0126 13:23:31.439409 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"afd59bf86e1f85a288693413031160fed09539a3d118ca1e9e2aea9af5a44c3e"} err="failed to get container status \"afd59bf86e1f85a288693413031160fed09539a3d118ca1e9e2aea9af5a44c3e\": rpc error: code = NotFound desc = could not find container \"afd59bf86e1f85a288693413031160fed09539a3d118ca1e9e2aea9af5a44c3e\": container with ID starting with afd59bf86e1f85a288693413031160fed09539a3d118ca1e9e2aea9af5a44c3e not found: ID does not exist" Jan 26 13:23:31 crc kubenswrapper[4844]: I0126 13:23:31.439471 4844 scope.go:117] "RemoveContainer" containerID="b1da75ac10c2c9b81a86b96b80aab62885a94c2dbea2251c84f0907b8747f21b" Jan 26 13:23:31 crc kubenswrapper[4844]: E0126 13:23:31.440117 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1da75ac10c2c9b81a86b96b80aab62885a94c2dbea2251c84f0907b8747f21b\": container with ID starting with b1da75ac10c2c9b81a86b96b80aab62885a94c2dbea2251c84f0907b8747f21b not found: ID does not exist" containerID="b1da75ac10c2c9b81a86b96b80aab62885a94c2dbea2251c84f0907b8747f21b" Jan 26 13:23:31 crc kubenswrapper[4844]: I0126 13:23:31.440289 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1da75ac10c2c9b81a86b96b80aab62885a94c2dbea2251c84f0907b8747f21b"} err="failed to get container status \"b1da75ac10c2c9b81a86b96b80aab62885a94c2dbea2251c84f0907b8747f21b\": rpc error: code = NotFound desc = could not find container \"b1da75ac10c2c9b81a86b96b80aab62885a94c2dbea2251c84f0907b8747f21b\": container with ID starting with b1da75ac10c2c9b81a86b96b80aab62885a94c2dbea2251c84f0907b8747f21b not found: ID does not exist" Jan 26 13:23:31 crc kubenswrapper[4844]: I0126 13:23:31.618832 4844 scope.go:117] "RemoveContainer" 
containerID="f7e3cc9c08e0881f89f24682031a154c4b9f31edf9d85e7b83810a3951f774d4" Jan 26 13:23:31 crc kubenswrapper[4844]: I0126 13:23:31.641182 4844 scope.go:117] "RemoveContainer" containerID="ccc61abf034a4abb38fa7032c712fd040deb21601353a65ea423ddc22c6b9661" Jan 26 13:23:32 crc kubenswrapper[4844]: I0126 13:23:32.380991 4844 generic.go:334] "Generic (PLEG): container finished" podID="3ae83571-dfc8-4d58-bb40-b527756013e7" containerID="0059e9a277892a23b4e559016966fd2c1f5eee300b9173396c4438f1ca4592fc" exitCode=0 Jan 26 13:23:32 crc kubenswrapper[4844]: I0126 13:23:32.381043 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86587fb56f-wskms" event={"ID":"3ae83571-dfc8-4d58-bb40-b527756013e7","Type":"ContainerDied","Data":"0059e9a277892a23b4e559016966fd2c1f5eee300b9173396c4438f1ca4592fc"} Jan 26 13:23:32 crc kubenswrapper[4844]: I0126 13:23:32.381343 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86587fb56f-wskms" event={"ID":"3ae83571-dfc8-4d58-bb40-b527756013e7","Type":"ContainerStarted","Data":"f3bf94b01fe866501233235fc15334e33250c1dfe5578060fd65ad2e6b041e44"} Jan 26 13:23:33 crc kubenswrapper[4844]: I0126 13:23:33.323661 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19f78d57-6253-4a29-8813-9dd30c3a3f86" path="/var/lib/kubelet/pods/19f78d57-6253-4a29-8813-9dd30c3a3f86/volumes" Jan 26 13:23:33 crc kubenswrapper[4844]: I0126 13:23:33.390748 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86587fb56f-wskms" event={"ID":"3ae83571-dfc8-4d58-bb40-b527756013e7","Type":"ContainerStarted","Data":"c00cc4c3a163db84844b480a160cf50eee61f94d7e5608265df31b3a5343b1f9"} Jan 26 13:23:33 crc kubenswrapper[4844]: I0126 13:23:33.391466 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86587fb56f-wskms" Jan 26 13:23:33 crc kubenswrapper[4844]: I0126 13:23:33.427268 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86587fb56f-wskms" podStartSLOduration=3.427248528 podStartE2EDuration="3.427248528s" podCreationTimestamp="2026-01-26 13:23:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 13:23:33.415528804 +0000 UTC m=+2390.348896426" watchObservedRunningTime="2026-01-26 13:23:33.427248528 +0000 UTC m=+2390.360616150" Jan 26 13:23:35 crc kubenswrapper[4844]: I0126 13:23:35.313876 4844 scope.go:117] "RemoveContainer" containerID="003e8a783231d0610c61a79899be5429104525b8053b82bc16d011d8b1eff87d" Jan 26 13:23:35 crc kubenswrapper[4844]: E0126 13:23:35.314442 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 13:23:40 crc kubenswrapper[4844]: I0126 13:23:40.924042 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-86587fb56f-wskms" Jan 26 13:23:41 crc kubenswrapper[4844]: I0126 13:23:41.007288 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f6d9f48c5-fm2dq"] Jan 26 13:23:41 crc kubenswrapper[4844]: I0126 13:23:41.007619 4844 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-f6d9f48c5-fm2dq" podUID="544b4fcb-9e33-4e9d-a75d-4a48703084a8" containerName="dnsmasq-dns" containerID="cri-o://3cc359ea290c4a1ba3f4026d4e28a17fb1253b3a06d9147e6b61b211af705ac2" gracePeriod=10 Jan 26 13:23:41 crc kubenswrapper[4844]: E0126 13:23:41.274557 4844 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod544b4fcb_9e33_4e9d_a75d_4a48703084a8.slice/crio-3cc359ea290c4a1ba3f4026d4e28a17fb1253b3a06d9147e6b61b211af705ac2.scope\": RecentStats: unable to find data in memory cache]" Jan 26 13:23:41 crc kubenswrapper[4844]: I0126 13:23:41.547830 4844 generic.go:334] "Generic (PLEG): container finished" podID="544b4fcb-9e33-4e9d-a75d-4a48703084a8" containerID="3cc359ea290c4a1ba3f4026d4e28a17fb1253b3a06d9147e6b61b211af705ac2" exitCode=0 Jan 26 13:23:41 crc kubenswrapper[4844]: I0126 13:23:41.547881 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f6d9f48c5-fm2dq" event={"ID":"544b4fcb-9e33-4e9d-a75d-4a48703084a8","Type":"ContainerDied","Data":"3cc359ea290c4a1ba3f4026d4e28a17fb1253b3a06d9147e6b61b211af705ac2"} Jan 26 13:23:41 crc kubenswrapper[4844]: I0126 13:23:41.547906 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f6d9f48c5-fm2dq" event={"ID":"544b4fcb-9e33-4e9d-a75d-4a48703084a8","Type":"ContainerDied","Data":"cc0611779be7216f1ba108243f43a1c6101fe753757d5f80da4c1fdbc87a74a5"} Jan 26 13:23:41 crc kubenswrapper[4844]: I0126 13:23:41.547917 4844 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc0611779be7216f1ba108243f43a1c6101fe753757d5f80da4c1fdbc87a74a5" Jan 26 13:23:41 crc kubenswrapper[4844]: I0126 13:23:41.572345 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-f6d9f48c5-fm2dq" Jan 26 13:23:41 crc kubenswrapper[4844]: I0126 13:23:41.710003 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/544b4fcb-9e33-4e9d-a75d-4a48703084a8-ovsdbserver-nb\") pod \"544b4fcb-9e33-4e9d-a75d-4a48703084a8\" (UID: \"544b4fcb-9e33-4e9d-a75d-4a48703084a8\") " Jan 26 13:23:41 crc kubenswrapper[4844]: I0126 13:23:41.710279 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/544b4fcb-9e33-4e9d-a75d-4a48703084a8-config\") pod \"544b4fcb-9e33-4e9d-a75d-4a48703084a8\" (UID: \"544b4fcb-9e33-4e9d-a75d-4a48703084a8\") " Jan 26 13:23:41 crc kubenswrapper[4844]: I0126 13:23:41.710323 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/544b4fcb-9e33-4e9d-a75d-4a48703084a8-openstack-edpm-ipam\") pod \"544b4fcb-9e33-4e9d-a75d-4a48703084a8\" (UID: \"544b4fcb-9e33-4e9d-a75d-4a48703084a8\") " Jan 26 13:23:41 crc kubenswrapper[4844]: I0126 13:23:41.710348 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kxcch\" (UniqueName: \"kubernetes.io/projected/544b4fcb-9e33-4e9d-a75d-4a48703084a8-kube-api-access-kxcch\") pod \"544b4fcb-9e33-4e9d-a75d-4a48703084a8\" (UID: \"544b4fcb-9e33-4e9d-a75d-4a48703084a8\") " Jan 26 13:23:41 crc kubenswrapper[4844]: I0126 13:23:41.710373 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/544b4fcb-9e33-4e9d-a75d-4a48703084a8-ovsdbserver-sb\") pod \"544b4fcb-9e33-4e9d-a75d-4a48703084a8\" (UID: \"544b4fcb-9e33-4e9d-a75d-4a48703084a8\") " Jan 26 13:23:41 crc kubenswrapper[4844]: I0126 13:23:41.710409 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/544b4fcb-9e33-4e9d-a75d-4a48703084a8-dns-svc\") pod \"544b4fcb-9e33-4e9d-a75d-4a48703084a8\" (UID: \"544b4fcb-9e33-4e9d-a75d-4a48703084a8\") " Jan 26 13:23:41 crc kubenswrapper[4844]: I0126 13:23:41.710472 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/544b4fcb-9e33-4e9d-a75d-4a48703084a8-dns-swift-storage-0\") pod \"544b4fcb-9e33-4e9d-a75d-4a48703084a8\" (UID: \"544b4fcb-9e33-4e9d-a75d-4a48703084a8\") " Jan 26 13:23:41 crc kubenswrapper[4844]: I0126 13:23:41.716323 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/544b4fcb-9e33-4e9d-a75d-4a48703084a8-kube-api-access-kxcch" (OuterVolumeSpecName: "kube-api-access-kxcch") pod "544b4fcb-9e33-4e9d-a75d-4a48703084a8" (UID: "544b4fcb-9e33-4e9d-a75d-4a48703084a8"). InnerVolumeSpecName "kube-api-access-kxcch". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:23:41 crc kubenswrapper[4844]: I0126 13:23:41.763737 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/544b4fcb-9e33-4e9d-a75d-4a48703084a8-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "544b4fcb-9e33-4e9d-a75d-4a48703084a8" (UID: "544b4fcb-9e33-4e9d-a75d-4a48703084a8"). InnerVolumeSpecName "openstack-edpm-ipam". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:23:41 crc kubenswrapper[4844]: I0126 13:23:41.765490 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/544b4fcb-9e33-4e9d-a75d-4a48703084a8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "544b4fcb-9e33-4e9d-a75d-4a48703084a8" (UID: "544b4fcb-9e33-4e9d-a75d-4a48703084a8"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:23:41 crc kubenswrapper[4844]: I0126 13:23:41.774683 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/544b4fcb-9e33-4e9d-a75d-4a48703084a8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "544b4fcb-9e33-4e9d-a75d-4a48703084a8" (UID: "544b4fcb-9e33-4e9d-a75d-4a48703084a8"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:23:41 crc kubenswrapper[4844]: I0126 13:23:41.782130 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/544b4fcb-9e33-4e9d-a75d-4a48703084a8-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "544b4fcb-9e33-4e9d-a75d-4a48703084a8" (UID: "544b4fcb-9e33-4e9d-a75d-4a48703084a8"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:23:41 crc kubenswrapper[4844]: I0126 13:23:41.782626 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/544b4fcb-9e33-4e9d-a75d-4a48703084a8-config" (OuterVolumeSpecName: "config") pod "544b4fcb-9e33-4e9d-a75d-4a48703084a8" (UID: "544b4fcb-9e33-4e9d-a75d-4a48703084a8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:23:41 crc kubenswrapper[4844]: I0126 13:23:41.791875 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/544b4fcb-9e33-4e9d-a75d-4a48703084a8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "544b4fcb-9e33-4e9d-a75d-4a48703084a8" (UID: "544b4fcb-9e33-4e9d-a75d-4a48703084a8"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:23:41 crc kubenswrapper[4844]: I0126 13:23:41.812563 4844 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/544b4fcb-9e33-4e9d-a75d-4a48703084a8-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 13:23:41 crc kubenswrapper[4844]: I0126 13:23:41.812607 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kxcch\" (UniqueName: \"kubernetes.io/projected/544b4fcb-9e33-4e9d-a75d-4a48703084a8-kube-api-access-kxcch\") on node \"crc\" DevicePath \"\"" Jan 26 13:23:41 crc kubenswrapper[4844]: I0126 13:23:41.812618 4844 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/544b4fcb-9e33-4e9d-a75d-4a48703084a8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 13:23:41 crc kubenswrapper[4844]: I0126 13:23:41.812627 4844 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/544b4fcb-9e33-4e9d-a75d-4a48703084a8-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 13:23:41 crc kubenswrapper[4844]: I0126 13:23:41.812636 4844 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/544b4fcb-9e33-4e9d-a75d-4a48703084a8-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 26 13:23:41 crc kubenswrapper[4844]: I0126 13:23:41.812646 4844 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/544b4fcb-9e33-4e9d-a75d-4a48703084a8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 13:23:41 crc kubenswrapper[4844]: I0126 13:23:41.812654 4844 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/544b4fcb-9e33-4e9d-a75d-4a48703084a8-config\") on node \"crc\" DevicePath \"\"" Jan 26 13:23:42 crc kubenswrapper[4844]: I0126 13:23:42.557066 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-f6d9f48c5-fm2dq" Jan 26 13:23:42 crc kubenswrapper[4844]: I0126 13:23:42.594855 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f6d9f48c5-fm2dq"] Jan 26 13:23:42 crc kubenswrapper[4844]: I0126 13:23:42.612850 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-f6d9f48c5-fm2dq"] Jan 26 13:23:43 crc kubenswrapper[4844]: I0126 13:23:43.326858 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="544b4fcb-9e33-4e9d-a75d-4a48703084a8" path="/var/lib/kubelet/pods/544b4fcb-9e33-4e9d-a75d-4a48703084a8/volumes" Jan 26 13:23:45 crc kubenswrapper[4844]: I0126 13:23:45.594803 4844 generic.go:334] "Generic (PLEG): container finished" podID="38e1fc4a-33a4-443e-95bb-3e653d3f1a59" containerID="1c483aa09ff4fb3d4d53b9e21bc413d163edabf034a864d2735d267aa46bcd18" exitCode=0 Jan 26 13:23:45 crc kubenswrapper[4844]: I0126 13:23:45.594858 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"38e1fc4a-33a4-443e-95bb-3e653d3f1a59","Type":"ContainerDied","Data":"1c483aa09ff4fb3d4d53b9e21bc413d163edabf034a864d2735d267aa46bcd18"} Jan 26 13:23:47 crc kubenswrapper[4844]: I0126 13:23:47.636273 4844 generic.go:334] "Generic (PLEG): container finished" podID="463d25b4-7819-4947-925d-74c429093694" containerID="3d69018fceeea9252026ea2283c5f07a452562d595a6cc98a4e4a63d0097beb1" exitCode=0 Jan 26 13:23:47 crc kubenswrapper[4844]: I0126 13:23:47.636322 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"463d25b4-7819-4947-925d-74c429093694","Type":"ContainerDied","Data":"3d69018fceeea9252026ea2283c5f07a452562d595a6cc98a4e4a63d0097beb1"} Jan 26 13:23:47 crc kubenswrapper[4844]: I0126 13:23:47.642761 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"38e1fc4a-33a4-443e-95bb-3e653d3f1a59","Type":"ContainerStarted","Data":"138bc677ede1b03e8ea2de823864f650ef24bb197c00cdf75d619e8cd8b44a0e"} Jan 26 13:23:49 crc kubenswrapper[4844]: I0126 13:23:49.313482 4844 scope.go:117] "RemoveContainer" containerID="003e8a783231d0610c61a79899be5429104525b8053b82bc16d011d8b1eff87d" Jan 26 13:23:49 crc kubenswrapper[4844]: E0126 13:23:49.314284 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 13:23:49 crc kubenswrapper[4844]: I0126 13:23:49.668263 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"463d25b4-7819-4947-925d-74c429093694","Type":"ContainerStarted","Data":"7c687377b533f5bfa9d790c3036b90c9935548da2f8fe70d3e0322a29d9ceb15"} Jan 26 13:23:49 crc kubenswrapper[4844]: I0126 13:23:49.668366 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 26 13:23:49 crc kubenswrapper[4844]: I0126 13:23:49.668622 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 26 13:23:49 crc kubenswrapper[4844]: I0126 13:23:49.706023 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/rabbitmq-server-0" podStartSLOduration=40.706004289 podStartE2EDuration="40.706004289s" podCreationTimestamp="2026-01-26 13:23:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 13:23:49.698396355 +0000 UTC m=+2406.631763987" watchObservedRunningTime="2026-01-26 13:23:49.706004289 +0000 UTC m=+2406.639371921" Jan 26 13:23:49 crc kubenswrapper[4844]: I0126 13:23:49.747586 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=38.747564754 podStartE2EDuration="38.747564754s" podCreationTimestamp="2026-01-26 13:23:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 13:23:49.729925128 +0000 UTC m=+2406.663292740" watchObservedRunningTime="2026-01-26 13:23:49.747564754 +0000 UTC m=+2406.680932366" Jan 26 13:23:59 crc kubenswrapper[4844]: I0126 13:23:59.007787 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tqkgn"] Jan 26 13:23:59 crc kubenswrapper[4844]: E0126 13:23:59.008769 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19f78d57-6253-4a29-8813-9dd30c3a3f86" containerName="init" Jan 26 13:23:59 crc kubenswrapper[4844]: I0126 13:23:59.008784 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="19f78d57-6253-4a29-8813-9dd30c3a3f86" containerName="init" Jan 26 13:23:59 crc kubenswrapper[4844]: E0126 13:23:59.008803 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="544b4fcb-9e33-4e9d-a75d-4a48703084a8" containerName="dnsmasq-dns" Jan 26 13:23:59 crc kubenswrapper[4844]: I0126 13:23:59.008809 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="544b4fcb-9e33-4e9d-a75d-4a48703084a8" containerName="dnsmasq-dns" Jan 26 13:23:59 crc kubenswrapper[4844]: E0126 13:23:59.008822 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19f78d57-6253-4a29-8813-9dd30c3a3f86" containerName="dnsmasq-dns" Jan 26 13:23:59 crc kubenswrapper[4844]: I0126 13:23:59.008828 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="19f78d57-6253-4a29-8813-9dd30c3a3f86" containerName="dnsmasq-dns" Jan 26 13:23:59 crc kubenswrapper[4844]: E0126 13:23:59.008838 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="544b4fcb-9e33-4e9d-a75d-4a48703084a8" containerName="init" Jan 26 13:23:59 crc kubenswrapper[4844]: I0126 13:23:59.008844 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="544b4fcb-9e33-4e9d-a75d-4a48703084a8" containerName="init" Jan 26 13:23:59 crc kubenswrapper[4844]: I0126 13:23:59.009019 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="19f78d57-6253-4a29-8813-9dd30c3a3f86" containerName="dnsmasq-dns" Jan 26 13:23:59 crc kubenswrapper[4844]: I0126 13:23:59.009057 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="544b4fcb-9e33-4e9d-a75d-4a48703084a8" containerName="dnsmasq-dns" Jan 26 13:23:59 crc kubenswrapper[4844]: I0126 13:23:59.009787 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tqkgn" Jan 26 13:23:59 crc kubenswrapper[4844]: I0126 13:23:59.012884 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 13:23:59 crc kubenswrapper[4844]: I0126 13:23:59.012884 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 13:23:59 crc kubenswrapper[4844]: I0126 13:23:59.017246 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 13:23:59 crc kubenswrapper[4844]: I0126 13:23:59.018930 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r4j2z" Jan 26 13:23:59 crc kubenswrapper[4844]: I0126 13:23:59.028835 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tqkgn"] Jan 26 13:23:59 crc kubenswrapper[4844]: I0126 13:23:59.082981 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d135fda9-894e-41c5-94a3-57aca842c386-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-tqkgn\" (UID: \"d135fda9-894e-41c5-94a3-57aca842c386\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tqkgn" Jan 26 13:23:59 crc kubenswrapper[4844]: I0126 13:23:59.083044 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d135fda9-894e-41c5-94a3-57aca842c386-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-tqkgn\" (UID: \"d135fda9-894e-41c5-94a3-57aca842c386\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tqkgn" Jan 26 13:23:59 crc kubenswrapper[4844]: I0126 13:23:59.083095 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t74gl\" (UniqueName: \"kubernetes.io/projected/d135fda9-894e-41c5-94a3-57aca842c386-kube-api-access-t74gl\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-tqkgn\" (UID: \"d135fda9-894e-41c5-94a3-57aca842c386\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tqkgn" Jan 26 13:23:59 crc kubenswrapper[4844]: I0126 13:23:59.083160 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d135fda9-894e-41c5-94a3-57aca842c386-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-tqkgn\" (UID: \"d135fda9-894e-41c5-94a3-57aca842c386\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tqkgn" Jan 26 13:23:59 crc kubenswrapper[4844]: I0126 13:23:59.185152 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d135fda9-894e-41c5-94a3-57aca842c386-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-tqkgn\" (UID: \"d135fda9-894e-41c5-94a3-57aca842c386\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tqkgn" Jan 26 13:23:59 crc kubenswrapper[4844]: I0126 13:23:59.185242 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/d135fda9-894e-41c5-94a3-57aca842c386-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-tqkgn\" (UID: \"d135fda9-894e-41c5-94a3-57aca842c386\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tqkgn" Jan 26 13:23:59 crc kubenswrapper[4844]: I0126 13:23:59.185268 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t74gl\" (UniqueName: \"kubernetes.io/projected/d135fda9-894e-41c5-94a3-57aca842c386-kube-api-access-t74gl\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-tqkgn\" (UID: \"d135fda9-894e-41c5-94a3-57aca842c386\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tqkgn" Jan 26 13:23:59 crc kubenswrapper[4844]: I0126 13:23:59.185511 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d135fda9-894e-41c5-94a3-57aca842c386-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-tqkgn\" (UID: \"d135fda9-894e-41c5-94a3-57aca842c386\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tqkgn" Jan 26 13:23:59 crc kubenswrapper[4844]: I0126 13:23:59.192220 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d135fda9-894e-41c5-94a3-57aca842c386-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-tqkgn\" (UID: \"d135fda9-894e-41c5-94a3-57aca842c386\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tqkgn" Jan 26 13:23:59 crc kubenswrapper[4844]: I0126 13:23:59.193333 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d135fda9-894e-41c5-94a3-57aca842c386-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-tqkgn\" (UID: \"d135fda9-894e-41c5-94a3-57aca842c386\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tqkgn" Jan 26 13:23:59 crc kubenswrapper[4844]: I0126 13:23:59.196781 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d135fda9-894e-41c5-94a3-57aca842c386-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-tqkgn\" (UID: \"d135fda9-894e-41c5-94a3-57aca842c386\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tqkgn" Jan 26 13:23:59 crc kubenswrapper[4844]: I0126 13:23:59.205977 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t74gl\" (UniqueName: \"kubernetes.io/projected/d135fda9-894e-41c5-94a3-57aca842c386-kube-api-access-t74gl\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-tqkgn\" (UID: \"d135fda9-894e-41c5-94a3-57aca842c386\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tqkgn" Jan 26 13:23:59 crc kubenswrapper[4844]: I0126 13:23:59.330936 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tqkgn" Jan 26 13:23:59 crc kubenswrapper[4844]: I0126 13:23:59.816932 4844 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="38e1fc4a-33a4-443e-95bb-3e653d3f1a59" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.228:5671: connect: connection refused" Jan 26 13:23:59 crc kubenswrapper[4844]: I0126 13:23:59.817957 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tqkgn"] Jan 26 13:24:00 crc kubenswrapper[4844]: I0126 13:24:00.769418 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tqkgn" event={"ID":"d135fda9-894e-41c5-94a3-57aca842c386","Type":"ContainerStarted","Data":"26a9bd1cfcfc2645c3ca79d927bbcebf5a45830ca22c370c59e5114e78475b8c"} Jan 26 13:24:01 crc kubenswrapper[4844]: I0126 13:24:01.856801 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 26 13:24:02 crc kubenswrapper[4844]: I0126 13:24:02.314198 4844 scope.go:117] "RemoveContainer" containerID="003e8a783231d0610c61a79899be5429104525b8053b82bc16d011d8b1eff87d" Jan 26 13:24:02 crc kubenswrapper[4844]: E0126 13:24:02.314938 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 13:24:09 crc kubenswrapper[4844]: I0126 13:24:09.815807 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 26 13:24:12 crc kubenswrapper[4844]: I0126 13:24:12.911898 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tqkgn" event={"ID":"d135fda9-894e-41c5-94a3-57aca842c386","Type":"ContainerStarted","Data":"8ec6925f6ca41ee55dc40f6b24cf0af40d7163a0e186106a9423a8d0ed9c4157"} Jan 26 13:24:12 crc kubenswrapper[4844]: I0126 13:24:12.930938 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tqkgn" podStartSLOduration=3.02815299 podStartE2EDuration="14.930918977s" podCreationTimestamp="2026-01-26 13:23:58 +0000 UTC" firstStartedPulling="2026-01-26 13:23:59.819727553 +0000 UTC m=+2416.753095165" lastFinishedPulling="2026-01-26 13:24:11.72249354 +0000 UTC m=+2428.655861152" observedRunningTime="2026-01-26 13:24:12.928108909 +0000 UTC m=+2429.861476521" watchObservedRunningTime="2026-01-26 13:24:12.930918977 +0000 UTC m=+2429.864286589" Jan 26 13:24:17 crc kubenswrapper[4844]: I0126 13:24:17.313587 4844 scope.go:117] "RemoveContainer" containerID="003e8a783231d0610c61a79899be5429104525b8053b82bc16d011d8b1eff87d" Jan 26 13:24:17 crc kubenswrapper[4844]: E0126 13:24:17.314433 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 13:24:24 crc kubenswrapper[4844]: I0126 13:24:24.033023 4844 generic.go:334] "Generic (PLEG): container finished" podID="d135fda9-894e-41c5-94a3-57aca842c386" containerID="8ec6925f6ca41ee55dc40f6b24cf0af40d7163a0e186106a9423a8d0ed9c4157" exitCode=0 Jan 26 13:24:24 crc kubenswrapper[4844]: I0126 13:24:24.033095 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tqkgn" event={"ID":"d135fda9-894e-41c5-94a3-57aca842c386","Type":"ContainerDied","Data":"8ec6925f6ca41ee55dc40f6b24cf0af40d7163a0e186106a9423a8d0ed9c4157"} Jan 26 13:24:25 crc kubenswrapper[4844]: I0126 13:24:25.505504 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tqkgn" Jan 26 13:24:25 crc kubenswrapper[4844]: I0126 13:24:25.631650 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t74gl\" (UniqueName: \"kubernetes.io/projected/d135fda9-894e-41c5-94a3-57aca842c386-kube-api-access-t74gl\") pod \"d135fda9-894e-41c5-94a3-57aca842c386\" (UID: \"d135fda9-894e-41c5-94a3-57aca842c386\") " Jan 26 13:24:25 crc kubenswrapper[4844]: I0126 13:24:25.631711 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d135fda9-894e-41c5-94a3-57aca842c386-inventory\") pod \"d135fda9-894e-41c5-94a3-57aca842c386\" (UID: \"d135fda9-894e-41c5-94a3-57aca842c386\") " Jan 26 13:24:25 crc kubenswrapper[4844]: I0126 13:24:25.631748 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d135fda9-894e-41c5-94a3-57aca842c386-repo-setup-combined-ca-bundle\") pod \"d135fda9-894e-41c5-94a3-57aca842c386\" (UID: \"d135fda9-894e-41c5-94a3-57aca842c386\") " Jan 26 13:24:25 crc kubenswrapper[4844]: I0126 13:24:25.631899 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d135fda9-894e-41c5-94a3-57aca842c386-ssh-key-openstack-edpm-ipam\") pod \"d135fda9-894e-41c5-94a3-57aca842c386\" (UID: \"d135fda9-894e-41c5-94a3-57aca842c386\") " Jan 26 13:24:25 crc kubenswrapper[4844]: I0126 13:24:25.637546 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d135fda9-894e-41c5-94a3-57aca842c386-kube-api-access-t74gl" (OuterVolumeSpecName: "kube-api-access-t74gl") pod "d135fda9-894e-41c5-94a3-57aca842c386" (UID: "d135fda9-894e-41c5-94a3-57aca842c386"). InnerVolumeSpecName "kube-api-access-t74gl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:24:25 crc kubenswrapper[4844]: I0126 13:24:25.639665 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d135fda9-894e-41c5-94a3-57aca842c386-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "d135fda9-894e-41c5-94a3-57aca842c386" (UID: "d135fda9-894e-41c5-94a3-57aca842c386"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:24:25 crc kubenswrapper[4844]: I0126 13:24:25.660516 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d135fda9-894e-41c5-94a3-57aca842c386-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "d135fda9-894e-41c5-94a3-57aca842c386" (UID: "d135fda9-894e-41c5-94a3-57aca842c386"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:24:25 crc kubenswrapper[4844]: I0126 13:24:25.660580 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d135fda9-894e-41c5-94a3-57aca842c386-inventory" (OuterVolumeSpecName: "inventory") pod "d135fda9-894e-41c5-94a3-57aca842c386" (UID: "d135fda9-894e-41c5-94a3-57aca842c386"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:24:25 crc kubenswrapper[4844]: I0126 13:24:25.734298 4844 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d135fda9-894e-41c5-94a3-57aca842c386-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 13:24:25 crc kubenswrapper[4844]: I0126 13:24:25.734347 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t74gl\" (UniqueName: \"kubernetes.io/projected/d135fda9-894e-41c5-94a3-57aca842c386-kube-api-access-t74gl\") on node \"crc\" DevicePath \"\"" Jan 26 13:24:25 crc kubenswrapper[4844]: I0126 13:24:25.734363 4844 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d135fda9-894e-41c5-94a3-57aca842c386-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 13:24:25 crc kubenswrapper[4844]: I0126 13:24:25.734377 4844 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d135fda9-894e-41c5-94a3-57aca842c386-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 13:24:26 crc kubenswrapper[4844]: I0126 13:24:26.057190 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tqkgn" event={"ID":"d135fda9-894e-41c5-94a3-57aca842c386","Type":"ContainerDied","Data":"26a9bd1cfcfc2645c3ca79d927bbcebf5a45830ca22c370c59e5114e78475b8c"} Jan 26 13:24:26 crc kubenswrapper[4844]: I0126 13:24:26.057237 4844 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="26a9bd1cfcfc2645c3ca79d927bbcebf5a45830ca22c370c59e5114e78475b8c" Jan 26 13:24:26 crc kubenswrapper[4844]: I0126 13:24:26.057250 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tqkgn" Jan 26 13:24:26 crc kubenswrapper[4844]: I0126 13:24:26.133099 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-4z6gd"] Jan 26 13:24:26 crc kubenswrapper[4844]: E0126 13:24:26.133641 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d135fda9-894e-41c5-94a3-57aca842c386" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 26 13:24:26 crc kubenswrapper[4844]: I0126 13:24:26.133668 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="d135fda9-894e-41c5-94a3-57aca842c386" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 26 13:24:26 crc kubenswrapper[4844]: I0126 13:24:26.133927 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="d135fda9-894e-41c5-94a3-57aca842c386" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 26 13:24:26 crc kubenswrapper[4844]: I0126 13:24:26.134786 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4z6gd" Jan 26 13:24:26 crc kubenswrapper[4844]: I0126 13:24:26.137082 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 13:24:26 crc kubenswrapper[4844]: I0126 13:24:26.137339 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r4j2z" Jan 26 13:24:26 crc kubenswrapper[4844]: I0126 13:24:26.137532 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 13:24:26 crc kubenswrapper[4844]: I0126 13:24:26.137824 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 13:24:26 crc kubenswrapper[4844]: I0126 13:24:26.142967 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-4z6gd"] Jan 26 13:24:26 crc kubenswrapper[4844]: I0126 13:24:26.243939 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e02f083a-8dcb-4454-8050-752c996dadd7-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-4z6gd\" (UID: \"e02f083a-8dcb-4454-8050-752c996dadd7\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4z6gd" Jan 26 13:24:26 crc kubenswrapper[4844]: I0126 13:24:26.244012 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e02f083a-8dcb-4454-8050-752c996dadd7-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-4z6gd\" (UID: \"e02f083a-8dcb-4454-8050-752c996dadd7\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4z6gd" Jan 26 13:24:26 crc kubenswrapper[4844]: I0126 13:24:26.244396 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hgvb\" (UniqueName: \"kubernetes.io/projected/e02f083a-8dcb-4454-8050-752c996dadd7-kube-api-access-5hgvb\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-4z6gd\" (UID: \"e02f083a-8dcb-4454-8050-752c996dadd7\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4z6gd" Jan 26 13:24:26 crc kubenswrapper[4844]: I0126 13:24:26.346163 4844 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-5hgvb\" (UniqueName: \"kubernetes.io/projected/e02f083a-8dcb-4454-8050-752c996dadd7-kube-api-access-5hgvb\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-4z6gd\" (UID: \"e02f083a-8dcb-4454-8050-752c996dadd7\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4z6gd" Jan 26 13:24:26 crc kubenswrapper[4844]: I0126 13:24:26.346248 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e02f083a-8dcb-4454-8050-752c996dadd7-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-4z6gd\" (UID: \"e02f083a-8dcb-4454-8050-752c996dadd7\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4z6gd" Jan 26 13:24:26 crc kubenswrapper[4844]: I0126 13:24:26.346293 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e02f083a-8dcb-4454-8050-752c996dadd7-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-4z6gd\" (UID: \"e02f083a-8dcb-4454-8050-752c996dadd7\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4z6gd" Jan 26 13:24:26 crc kubenswrapper[4844]: I0126 13:24:26.350941 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e02f083a-8dcb-4454-8050-752c996dadd7-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-4z6gd\" (UID: \"e02f083a-8dcb-4454-8050-752c996dadd7\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4z6gd" Jan 26 13:24:26 crc kubenswrapper[4844]: I0126 13:24:26.352892 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e02f083a-8dcb-4454-8050-752c996dadd7-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-4z6gd\" (UID: \"e02f083a-8dcb-4454-8050-752c996dadd7\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4z6gd" Jan 26 13:24:26 crc kubenswrapper[4844]: I0126 13:24:26.363465 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5hgvb\" (UniqueName: \"kubernetes.io/projected/e02f083a-8dcb-4454-8050-752c996dadd7-kube-api-access-5hgvb\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-4z6gd\" (UID: \"e02f083a-8dcb-4454-8050-752c996dadd7\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4z6gd" Jan 26 13:24:26 crc kubenswrapper[4844]: I0126 13:24:26.457558 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4z6gd" Jan 26 13:24:27 crc kubenswrapper[4844]: I0126 13:24:27.030242 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-4z6gd"] Jan 26 13:24:27 crc kubenswrapper[4844]: I0126 13:24:27.068856 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4z6gd" event={"ID":"e02f083a-8dcb-4454-8050-752c996dadd7","Type":"ContainerStarted","Data":"1e0735af0d60000655b26353decbb2bb29fb3915d266024467a44b2d149672e1"} Jan 26 13:24:29 crc kubenswrapper[4844]: I0126 13:24:29.096454 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4z6gd" event={"ID":"e02f083a-8dcb-4454-8050-752c996dadd7","Type":"ContainerStarted","Data":"5a2a43b97c071a9a56da1c3013a0a2b3d0bc9258c6f34f4b4bb60c515836efb0"} Jan 26 13:24:29 crc kubenswrapper[4844]: I0126 13:24:29.117167 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4z6gd" podStartSLOduration=1.356913734 podStartE2EDuration="3.117148511s" podCreationTimestamp="2026-01-26 13:24:26 +0000 UTC" firstStartedPulling="2026-01-26 13:24:27.034278117 +0000 UTC m=+2443.967645729" lastFinishedPulling="2026-01-26 13:24:28.794512894 +0000 UTC m=+2445.727880506" observedRunningTime="2026-01-26 13:24:29.111572546 +0000 UTC m=+2446.044940168" watchObservedRunningTime="2026-01-26 13:24:29.117148511 +0000 UTC m=+2446.050516123" Jan 26 13:24:29 crc kubenswrapper[4844]: I0126 13:24:29.312961 4844 scope.go:117] "RemoveContainer" containerID="003e8a783231d0610c61a79899be5429104525b8053b82bc16d011d8b1eff87d" Jan 26 13:24:29 crc kubenswrapper[4844]: E0126 13:24:29.313522 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 13:24:31 crc kubenswrapper[4844]: I0126 13:24:31.815525 4844 scope.go:117] "RemoveContainer" containerID="bd32517abd4acb8935f148381ac2fddb1286aab021f5e612d6dcb8e9b83e200d" Jan 26 13:24:31 crc kubenswrapper[4844]: I0126 13:24:31.842017 4844 scope.go:117] "RemoveContainer" containerID="48c908f51c718cfc35dcf190e6e8b770e5bf2784368ebc0fa2fc41dd8c86f055" Jan 26 13:24:32 crc kubenswrapper[4844]: I0126 13:24:32.142569 4844 generic.go:334] "Generic (PLEG): container finished" podID="e02f083a-8dcb-4454-8050-752c996dadd7" containerID="5a2a43b97c071a9a56da1c3013a0a2b3d0bc9258c6f34f4b4bb60c515836efb0" exitCode=0 Jan 26 13:24:32 crc kubenswrapper[4844]: I0126 13:24:32.142699 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4z6gd" event={"ID":"e02f083a-8dcb-4454-8050-752c996dadd7","Type":"ContainerDied","Data":"5a2a43b97c071a9a56da1c3013a0a2b3d0bc9258c6f34f4b4bb60c515836efb0"} Jan 26 13:24:33 crc kubenswrapper[4844]: I0126 13:24:33.565521 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4z6gd" Jan 26 13:24:33 crc kubenswrapper[4844]: I0126 13:24:33.598979 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e02f083a-8dcb-4454-8050-752c996dadd7-inventory\") pod \"e02f083a-8dcb-4454-8050-752c996dadd7\" (UID: \"e02f083a-8dcb-4454-8050-752c996dadd7\") " Jan 26 13:24:33 crc kubenswrapper[4844]: I0126 13:24:33.599367 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e02f083a-8dcb-4454-8050-752c996dadd7-ssh-key-openstack-edpm-ipam\") pod \"e02f083a-8dcb-4454-8050-752c996dadd7\" (UID: \"e02f083a-8dcb-4454-8050-752c996dadd7\") " Jan 26 13:24:33 crc kubenswrapper[4844]: I0126 13:24:33.599472 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5hgvb\" (UniqueName: \"kubernetes.io/projected/e02f083a-8dcb-4454-8050-752c996dadd7-kube-api-access-5hgvb\") pod \"e02f083a-8dcb-4454-8050-752c996dadd7\" (UID: \"e02f083a-8dcb-4454-8050-752c996dadd7\") " Jan 26 13:24:33 crc kubenswrapper[4844]: I0126 13:24:33.611969 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e02f083a-8dcb-4454-8050-752c996dadd7-kube-api-access-5hgvb" (OuterVolumeSpecName: "kube-api-access-5hgvb") pod "e02f083a-8dcb-4454-8050-752c996dadd7" (UID: "e02f083a-8dcb-4454-8050-752c996dadd7"). InnerVolumeSpecName "kube-api-access-5hgvb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:24:33 crc kubenswrapper[4844]: I0126 13:24:33.633179 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e02f083a-8dcb-4454-8050-752c996dadd7-inventory" (OuterVolumeSpecName: "inventory") pod "e02f083a-8dcb-4454-8050-752c996dadd7" (UID: "e02f083a-8dcb-4454-8050-752c996dadd7"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:24:33 crc kubenswrapper[4844]: I0126 13:24:33.633453 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e02f083a-8dcb-4454-8050-752c996dadd7-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e02f083a-8dcb-4454-8050-752c996dadd7" (UID: "e02f083a-8dcb-4454-8050-752c996dadd7"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:24:33 crc kubenswrapper[4844]: I0126 13:24:33.701437 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5hgvb\" (UniqueName: \"kubernetes.io/projected/e02f083a-8dcb-4454-8050-752c996dadd7-kube-api-access-5hgvb\") on node \"crc\" DevicePath \"\"" Jan 26 13:24:33 crc kubenswrapper[4844]: I0126 13:24:33.701485 4844 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e02f083a-8dcb-4454-8050-752c996dadd7-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 13:24:33 crc kubenswrapper[4844]: I0126 13:24:33.701501 4844 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e02f083a-8dcb-4454-8050-752c996dadd7-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 13:24:34 crc kubenswrapper[4844]: I0126 13:24:34.161157 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4z6gd" event={"ID":"e02f083a-8dcb-4454-8050-752c996dadd7","Type":"ContainerDied","Data":"1e0735af0d60000655b26353decbb2bb29fb3915d266024467a44b2d149672e1"} Jan 26 13:24:34 crc kubenswrapper[4844]: I0126 13:24:34.161198 4844 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1e0735af0d60000655b26353decbb2bb29fb3915d266024467a44b2d149672e1" Jan 26 13:24:34 crc kubenswrapper[4844]: I0126 13:24:34.161253 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4z6gd" Jan 26 13:24:34 crc kubenswrapper[4844]: I0126 13:24:34.225794 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-88p79"] Jan 26 13:24:34 crc kubenswrapper[4844]: E0126 13:24:34.226868 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e02f083a-8dcb-4454-8050-752c996dadd7" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 26 13:24:34 crc kubenswrapper[4844]: I0126 13:24:34.226965 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="e02f083a-8dcb-4454-8050-752c996dadd7" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 26 13:24:34 crc kubenswrapper[4844]: I0126 13:24:34.227315 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="e02f083a-8dcb-4454-8050-752c996dadd7" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 26 13:24:34 crc kubenswrapper[4844]: I0126 13:24:34.228257 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-88p79" Jan 26 13:24:34 crc kubenswrapper[4844]: I0126 13:24:34.232330 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 13:24:34 crc kubenswrapper[4844]: I0126 13:24:34.232848 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 13:24:34 crc kubenswrapper[4844]: I0126 13:24:34.233301 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 13:24:34 crc kubenswrapper[4844]: I0126 13:24:34.233479 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r4j2z" Jan 26 13:24:34 crc kubenswrapper[4844]: I0126 13:24:34.240278 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-88p79"] Jan 26 13:24:34 crc kubenswrapper[4844]: I0126 13:24:34.314619 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c1079155-3798-4f39-ab56-dffea2038df8-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-88p79\" (UID: \"c1079155-3798-4f39-ab56-dffea2038df8\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-88p79" Jan 26 13:24:34 crc kubenswrapper[4844]: I0126 13:24:34.314944 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pj5j\" (UniqueName: \"kubernetes.io/projected/c1079155-3798-4f39-ab56-dffea2038df8-kube-api-access-5pj5j\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-88p79\" (UID: \"c1079155-3798-4f39-ab56-dffea2038df8\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-88p79" Jan 26 13:24:34 crc kubenswrapper[4844]: I0126 13:24:34.315514 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c1079155-3798-4f39-ab56-dffea2038df8-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-88p79\" (UID: \"c1079155-3798-4f39-ab56-dffea2038df8\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-88p79" Jan 26 13:24:34 crc kubenswrapper[4844]: I0126 13:24:34.315696 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1079155-3798-4f39-ab56-dffea2038df8-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-88p79\" (UID: \"c1079155-3798-4f39-ab56-dffea2038df8\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-88p79" Jan 26 13:24:34 crc kubenswrapper[4844]: I0126 13:24:34.417167 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5pj5j\" (UniqueName: \"kubernetes.io/projected/c1079155-3798-4f39-ab56-dffea2038df8-kube-api-access-5pj5j\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-88p79\" (UID: \"c1079155-3798-4f39-ab56-dffea2038df8\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-88p79" Jan 26 13:24:34 crc kubenswrapper[4844]: I0126 13:24:34.417229 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/c1079155-3798-4f39-ab56-dffea2038df8-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-88p79\" (UID: \"c1079155-3798-4f39-ab56-dffea2038df8\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-88p79" Jan 26 13:24:34 crc kubenswrapper[4844]: I0126 13:24:34.417272 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1079155-3798-4f39-ab56-dffea2038df8-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-88p79\" (UID: \"c1079155-3798-4f39-ab56-dffea2038df8\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-88p79" Jan 26 13:24:34 crc kubenswrapper[4844]: I0126 13:24:34.417407 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c1079155-3798-4f39-ab56-dffea2038df8-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-88p79\" (UID: \"c1079155-3798-4f39-ab56-dffea2038df8\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-88p79" Jan 26 13:24:34 crc kubenswrapper[4844]: I0126 13:24:34.421974 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c1079155-3798-4f39-ab56-dffea2038df8-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-88p79\" (UID: \"c1079155-3798-4f39-ab56-dffea2038df8\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-88p79" Jan 26 13:24:34 crc kubenswrapper[4844]: I0126 13:24:34.423674 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c1079155-3798-4f39-ab56-dffea2038df8-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-88p79\" (UID: \"c1079155-3798-4f39-ab56-dffea2038df8\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-88p79" Jan 26 13:24:34 crc kubenswrapper[4844]: I0126 13:24:34.424252 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1079155-3798-4f39-ab56-dffea2038df8-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-88p79\" (UID: \"c1079155-3798-4f39-ab56-dffea2038df8\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-88p79" Jan 26 13:24:34 crc kubenswrapper[4844]: I0126 13:24:34.437431 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5pj5j\" (UniqueName: \"kubernetes.io/projected/c1079155-3798-4f39-ab56-dffea2038df8-kube-api-access-5pj5j\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-88p79\" (UID: \"c1079155-3798-4f39-ab56-dffea2038df8\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-88p79" Jan 26 13:24:34 crc kubenswrapper[4844]: I0126 13:24:34.613754 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-88p79" Jan 26 13:24:35 crc kubenswrapper[4844]: I0126 13:24:35.149089 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-88p79"] Jan 26 13:24:35 crc kubenswrapper[4844]: I0126 13:24:35.171710 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-88p79" event={"ID":"c1079155-3798-4f39-ab56-dffea2038df8","Type":"ContainerStarted","Data":"37d54a1580286fb86f7d5a6182ccb76a006053b77dea27a7bfe64add510f104c"} Jan 26 13:24:36 crc kubenswrapper[4844]: I0126 13:24:36.181561 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-88p79" event={"ID":"c1079155-3798-4f39-ab56-dffea2038df8","Type":"ContainerStarted","Data":"375fcbc9e9ce500f7935c0373a1331986f1d90191544b224b1f547dbc49ee957"} Jan 26 13:24:36 crc kubenswrapper[4844]: I0126 13:24:36.208899 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-88p79" podStartSLOduration=1.670458332 podStartE2EDuration="2.208875979s" podCreationTimestamp="2026-01-26 13:24:34 +0000 UTC" firstStartedPulling="2026-01-26 13:24:35.14708736 +0000 UTC m=+2452.080454972" lastFinishedPulling="2026-01-26 13:24:35.685505007 +0000 UTC m=+2452.618872619" observedRunningTime="2026-01-26 13:24:36.201340087 +0000 UTC m=+2453.134707709" watchObservedRunningTime="2026-01-26 13:24:36.208875979 +0000 UTC m=+2453.142243591" Jan 26 13:24:42 crc kubenswrapper[4844]: I0126 13:24:42.313799 4844 scope.go:117] "RemoveContainer" containerID="003e8a783231d0610c61a79899be5429104525b8053b82bc16d011d8b1eff87d" Jan 26 13:24:42 crc kubenswrapper[4844]: E0126 13:24:42.315835 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 13:24:57 crc kubenswrapper[4844]: I0126 13:24:57.314288 4844 scope.go:117] "RemoveContainer" containerID="003e8a783231d0610c61a79899be5429104525b8053b82bc16d011d8b1eff87d" Jan 26 13:24:57 crc kubenswrapper[4844]: E0126 13:24:57.315716 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 13:25:08 crc kubenswrapper[4844]: I0126 13:25:08.314575 4844 scope.go:117] "RemoveContainer" containerID="003e8a783231d0610c61a79899be5429104525b8053b82bc16d011d8b1eff87d" Jan 26 13:25:08 crc kubenswrapper[4844]: E0126 13:25:08.315893 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 13:25:19 crc kubenswrapper[4844]: I0126 13:25:19.313759 4844 scope.go:117] "RemoveContainer" containerID="003e8a783231d0610c61a79899be5429104525b8053b82bc16d011d8b1eff87d" Jan 26 13:25:19 crc kubenswrapper[4844]: E0126 13:25:19.314363 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 13:25:31 crc kubenswrapper[4844]: I0126 13:25:31.964007 4844 scope.go:117] "RemoveContainer" containerID="2e9a84ce2b53137dcc0b605e1c8934f3ec81c8d1af469de9901b79a1914dbeb8" Jan 26 13:25:32 crc kubenswrapper[4844]: I0126 13:25:32.313555 4844 scope.go:117] "RemoveContainer" containerID="003e8a783231d0610c61a79899be5429104525b8053b82bc16d011d8b1eff87d" Jan 26 13:25:32 crc kubenswrapper[4844]: E0126 13:25:32.313972 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 13:25:45 crc kubenswrapper[4844]: I0126 13:25:45.312834 4844 scope.go:117] "RemoveContainer" containerID="003e8a783231d0610c61a79899be5429104525b8053b82bc16d011d8b1eff87d" Jan 26 13:25:45 crc kubenswrapper[4844]: E0126 13:25:45.313517 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 13:25:59 crc kubenswrapper[4844]: I0126 13:25:59.313693 4844 scope.go:117] "RemoveContainer" containerID="003e8a783231d0610c61a79899be5429104525b8053b82bc16d011d8b1eff87d" Jan 26 13:25:59 crc kubenswrapper[4844]: E0126 13:25:59.314404 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 13:26:13 crc kubenswrapper[4844]: I0126 13:26:13.338779 4844 scope.go:117] "RemoveContainer" containerID="003e8a783231d0610c61a79899be5429104525b8053b82bc16d011d8b1eff87d" Jan 26 13:26:14 crc kubenswrapper[4844]: I0126 13:26:14.411987 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" event={"ID":"e3602fc7-397b-4d73-ab0c-45acc047397b","Type":"ContainerStarted","Data":"a82b801a0f9019b696e73b93e7bd511e023d38ac840f413770a1b3ad588c4466"} Jan 26 13:26:32 crc kubenswrapper[4844]: I0126 
13:26:32.056952 4844 scope.go:117] "RemoveContainer" containerID="dd4ef9896a032c4f099137976f07aecb620fb6a4975a0ab3dfd0a22073c86bdc" Jan 26 13:26:32 crc kubenswrapper[4844]: I0126 13:26:32.097120 4844 scope.go:117] "RemoveContainer" containerID="e69fcb823d2f2ba4ebd708ec19d6a0178f2c712a5a302116f165693cdaf5ad60" Jan 26 13:26:47 crc kubenswrapper[4844]: I0126 13:26:47.304674 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-k8gvx"] Jan 26 13:26:47 crc kubenswrapper[4844]: I0126 13:26:47.306965 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-k8gvx" Jan 26 13:26:47 crc kubenswrapper[4844]: I0126 13:26:47.324036 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-k8gvx"] Jan 26 13:26:47 crc kubenswrapper[4844]: I0126 13:26:47.465828 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24bdfc81-39c4-4776-8342-73d62b114c19-catalog-content\") pod \"community-operators-k8gvx\" (UID: \"24bdfc81-39c4-4776-8342-73d62b114c19\") " pod="openshift-marketplace/community-operators-k8gvx" Jan 26 13:26:47 crc kubenswrapper[4844]: I0126 13:26:47.465931 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxdv5\" (UniqueName: \"kubernetes.io/projected/24bdfc81-39c4-4776-8342-73d62b114c19-kube-api-access-gxdv5\") pod \"community-operators-k8gvx\" (UID: \"24bdfc81-39c4-4776-8342-73d62b114c19\") " pod="openshift-marketplace/community-operators-k8gvx" Jan 26 13:26:47 crc kubenswrapper[4844]: I0126 13:26:47.465987 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24bdfc81-39c4-4776-8342-73d62b114c19-utilities\") pod \"community-operators-k8gvx\" (UID: \"24bdfc81-39c4-4776-8342-73d62b114c19\") " pod="openshift-marketplace/community-operators-k8gvx" Jan 26 13:26:47 crc kubenswrapper[4844]: I0126 13:26:47.567827 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24bdfc81-39c4-4776-8342-73d62b114c19-catalog-content\") pod \"community-operators-k8gvx\" (UID: \"24bdfc81-39c4-4776-8342-73d62b114c19\") " pod="openshift-marketplace/community-operators-k8gvx" Jan 26 13:26:47 crc kubenswrapper[4844]: I0126 13:26:47.567902 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxdv5\" (UniqueName: \"kubernetes.io/projected/24bdfc81-39c4-4776-8342-73d62b114c19-kube-api-access-gxdv5\") pod \"community-operators-k8gvx\" (UID: \"24bdfc81-39c4-4776-8342-73d62b114c19\") " pod="openshift-marketplace/community-operators-k8gvx" Jan 26 13:26:47 crc kubenswrapper[4844]: I0126 13:26:47.567937 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24bdfc81-39c4-4776-8342-73d62b114c19-utilities\") pod \"community-operators-k8gvx\" (UID: \"24bdfc81-39c4-4776-8342-73d62b114c19\") " pod="openshift-marketplace/community-operators-k8gvx" Jan 26 13:26:47 crc kubenswrapper[4844]: I0126 13:26:47.568312 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24bdfc81-39c4-4776-8342-73d62b114c19-catalog-content\") pod 
\"community-operators-k8gvx\" (UID: \"24bdfc81-39c4-4776-8342-73d62b114c19\") " pod="openshift-marketplace/community-operators-k8gvx" Jan 26 13:26:47 crc kubenswrapper[4844]: I0126 13:26:47.568426 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24bdfc81-39c4-4776-8342-73d62b114c19-utilities\") pod \"community-operators-k8gvx\" (UID: \"24bdfc81-39c4-4776-8342-73d62b114c19\") " pod="openshift-marketplace/community-operators-k8gvx" Jan 26 13:26:47 crc kubenswrapper[4844]: I0126 13:26:47.586926 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxdv5\" (UniqueName: \"kubernetes.io/projected/24bdfc81-39c4-4776-8342-73d62b114c19-kube-api-access-gxdv5\") pod \"community-operators-k8gvx\" (UID: \"24bdfc81-39c4-4776-8342-73d62b114c19\") " pod="openshift-marketplace/community-operators-k8gvx" Jan 26 13:26:47 crc kubenswrapper[4844]: I0126 13:26:47.642377 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-k8gvx" Jan 26 13:26:48 crc kubenswrapper[4844]: I0126 13:26:48.152892 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-k8gvx"] Jan 26 13:26:48 crc kubenswrapper[4844]: I0126 13:26:48.777668 4844 generic.go:334] "Generic (PLEG): container finished" podID="24bdfc81-39c4-4776-8342-73d62b114c19" containerID="099c3c4cd1517a011441e2e0f0cea860bea33322e649b1f4f3dfb42d5f5b3352" exitCode=0 Jan 26 13:26:48 crc kubenswrapper[4844]: I0126 13:26:48.777721 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k8gvx" event={"ID":"24bdfc81-39c4-4776-8342-73d62b114c19","Type":"ContainerDied","Data":"099c3c4cd1517a011441e2e0f0cea860bea33322e649b1f4f3dfb42d5f5b3352"} Jan 26 13:26:48 crc kubenswrapper[4844]: I0126 13:26:48.777902 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k8gvx" event={"ID":"24bdfc81-39c4-4776-8342-73d62b114c19","Type":"ContainerStarted","Data":"73ba0afa88379e4cd37080ce898b6f417b88c0022201d352717362565ef5d796"} Jan 26 13:26:48 crc kubenswrapper[4844]: I0126 13:26:48.779807 4844 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 13:26:49 crc kubenswrapper[4844]: I0126 13:26:49.790615 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k8gvx" event={"ID":"24bdfc81-39c4-4776-8342-73d62b114c19","Type":"ContainerStarted","Data":"8251eafba61441fc1e45cdb9d6f189f4c4161ca1040862b4693ed1d817e82b7b"} Jan 26 13:26:50 crc kubenswrapper[4844]: I0126 13:26:50.802235 4844 generic.go:334] "Generic (PLEG): container finished" podID="24bdfc81-39c4-4776-8342-73d62b114c19" containerID="8251eafba61441fc1e45cdb9d6f189f4c4161ca1040862b4693ed1d817e82b7b" exitCode=0 Jan 26 13:26:50 crc kubenswrapper[4844]: I0126 13:26:50.802309 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k8gvx" event={"ID":"24bdfc81-39c4-4776-8342-73d62b114c19","Type":"ContainerDied","Data":"8251eafba61441fc1e45cdb9d6f189f4c4161ca1040862b4693ed1d817e82b7b"} Jan 26 13:26:53 crc kubenswrapper[4844]: I0126 13:26:53.835428 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k8gvx" 
event={"ID":"24bdfc81-39c4-4776-8342-73d62b114c19","Type":"ContainerStarted","Data":"f121c19af082de034f61ce692a65316195780563fcbaca170b06a7122d25a9c4"} Jan 26 13:26:53 crc kubenswrapper[4844]: I0126 13:26:53.867521 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-k8gvx" podStartSLOduration=2.5038097009999998 podStartE2EDuration="6.867504928s" podCreationTimestamp="2026-01-26 13:26:47 +0000 UTC" firstStartedPulling="2026-01-26 13:26:48.77958802 +0000 UTC m=+2585.712955632" lastFinishedPulling="2026-01-26 13:26:53.143283247 +0000 UTC m=+2590.076650859" observedRunningTime="2026-01-26 13:26:53.858281786 +0000 UTC m=+2590.791649418" watchObservedRunningTime="2026-01-26 13:26:53.867504928 +0000 UTC m=+2590.800872540" Jan 26 13:26:57 crc kubenswrapper[4844]: I0126 13:26:57.643391 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-k8gvx" Jan 26 13:26:57 crc kubenswrapper[4844]: I0126 13:26:57.643941 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-k8gvx" Jan 26 13:26:57 crc kubenswrapper[4844]: I0126 13:26:57.696763 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-k8gvx" Jan 26 13:27:07 crc kubenswrapper[4844]: I0126 13:27:07.718849 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-k8gvx" Jan 26 13:27:07 crc kubenswrapper[4844]: I0126 13:27:07.773230 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-k8gvx"] Jan 26 13:27:07 crc kubenswrapper[4844]: I0126 13:27:07.980291 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-k8gvx" podUID="24bdfc81-39c4-4776-8342-73d62b114c19" containerName="registry-server" containerID="cri-o://f121c19af082de034f61ce692a65316195780563fcbaca170b06a7122d25a9c4" gracePeriod=2 Jan 26 13:27:08 crc kubenswrapper[4844]: I0126 13:27:08.389089 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-k8gvx" Jan 26 13:27:08 crc kubenswrapper[4844]: I0126 13:27:08.495786 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gxdv5\" (UniqueName: \"kubernetes.io/projected/24bdfc81-39c4-4776-8342-73d62b114c19-kube-api-access-gxdv5\") pod \"24bdfc81-39c4-4776-8342-73d62b114c19\" (UID: \"24bdfc81-39c4-4776-8342-73d62b114c19\") " Jan 26 13:27:08 crc kubenswrapper[4844]: I0126 13:27:08.496021 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24bdfc81-39c4-4776-8342-73d62b114c19-catalog-content\") pod \"24bdfc81-39c4-4776-8342-73d62b114c19\" (UID: \"24bdfc81-39c4-4776-8342-73d62b114c19\") " Jan 26 13:27:08 crc kubenswrapper[4844]: I0126 13:27:08.496069 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24bdfc81-39c4-4776-8342-73d62b114c19-utilities\") pod \"24bdfc81-39c4-4776-8342-73d62b114c19\" (UID: \"24bdfc81-39c4-4776-8342-73d62b114c19\") " Jan 26 13:27:08 crc kubenswrapper[4844]: I0126 13:27:08.496871 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24bdfc81-39c4-4776-8342-73d62b114c19-utilities" (OuterVolumeSpecName: "utilities") pod "24bdfc81-39c4-4776-8342-73d62b114c19" (UID: "24bdfc81-39c4-4776-8342-73d62b114c19"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:27:08 crc kubenswrapper[4844]: I0126 13:27:08.502633 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24bdfc81-39c4-4776-8342-73d62b114c19-kube-api-access-gxdv5" (OuterVolumeSpecName: "kube-api-access-gxdv5") pod "24bdfc81-39c4-4776-8342-73d62b114c19" (UID: "24bdfc81-39c4-4776-8342-73d62b114c19"). InnerVolumeSpecName "kube-api-access-gxdv5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:27:08 crc kubenswrapper[4844]: I0126 13:27:08.542125 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24bdfc81-39c4-4776-8342-73d62b114c19-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "24bdfc81-39c4-4776-8342-73d62b114c19" (UID: "24bdfc81-39c4-4776-8342-73d62b114c19"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:27:08 crc kubenswrapper[4844]: I0126 13:27:08.600178 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gxdv5\" (UniqueName: \"kubernetes.io/projected/24bdfc81-39c4-4776-8342-73d62b114c19-kube-api-access-gxdv5\") on node \"crc\" DevicePath \"\"" Jan 26 13:27:08 crc kubenswrapper[4844]: I0126 13:27:08.600243 4844 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24bdfc81-39c4-4776-8342-73d62b114c19-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 13:27:08 crc kubenswrapper[4844]: I0126 13:27:08.600258 4844 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24bdfc81-39c4-4776-8342-73d62b114c19-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 13:27:08 crc kubenswrapper[4844]: I0126 13:27:08.999651 4844 generic.go:334] "Generic (PLEG): container finished" podID="24bdfc81-39c4-4776-8342-73d62b114c19" containerID="f121c19af082de034f61ce692a65316195780563fcbaca170b06a7122d25a9c4" exitCode=0 Jan 26 13:27:08 crc kubenswrapper[4844]: I0126 13:27:08.999702 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k8gvx" event={"ID":"24bdfc81-39c4-4776-8342-73d62b114c19","Type":"ContainerDied","Data":"f121c19af082de034f61ce692a65316195780563fcbaca170b06a7122d25a9c4"} Jan 26 13:27:08 crc kubenswrapper[4844]: I0126 13:27:08.999735 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k8gvx" event={"ID":"24bdfc81-39c4-4776-8342-73d62b114c19","Type":"ContainerDied","Data":"73ba0afa88379e4cd37080ce898b6f417b88c0022201d352717362565ef5d796"} Jan 26 13:27:08 crc kubenswrapper[4844]: I0126 13:27:08.999786 4844 scope.go:117] "RemoveContainer" containerID="f121c19af082de034f61ce692a65316195780563fcbaca170b06a7122d25a9c4" Jan 26 13:27:08 crc kubenswrapper[4844]: I0126 13:27:08.999899 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-k8gvx" Jan 26 13:27:09 crc kubenswrapper[4844]: I0126 13:27:09.041926 4844 scope.go:117] "RemoveContainer" containerID="8251eafba61441fc1e45cdb9d6f189f4c4161ca1040862b4693ed1d817e82b7b" Jan 26 13:27:09 crc kubenswrapper[4844]: I0126 13:27:09.073531 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-k8gvx"] Jan 26 13:27:09 crc kubenswrapper[4844]: I0126 13:27:09.089742 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-k8gvx"] Jan 26 13:27:09 crc kubenswrapper[4844]: I0126 13:27:09.102073 4844 scope.go:117] "RemoveContainer" containerID="099c3c4cd1517a011441e2e0f0cea860bea33322e649b1f4f3dfb42d5f5b3352" Jan 26 13:27:09 crc kubenswrapper[4844]: I0126 13:27:09.129611 4844 scope.go:117] "RemoveContainer" containerID="f121c19af082de034f61ce692a65316195780563fcbaca170b06a7122d25a9c4" Jan 26 13:27:09 crc kubenswrapper[4844]: E0126 13:27:09.130103 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f121c19af082de034f61ce692a65316195780563fcbaca170b06a7122d25a9c4\": container with ID starting with f121c19af082de034f61ce692a65316195780563fcbaca170b06a7122d25a9c4 not found: ID does not exist" containerID="f121c19af082de034f61ce692a65316195780563fcbaca170b06a7122d25a9c4" Jan 26 13:27:09 crc kubenswrapper[4844]: I0126 13:27:09.130143 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f121c19af082de034f61ce692a65316195780563fcbaca170b06a7122d25a9c4"} err="failed to get container status \"f121c19af082de034f61ce692a65316195780563fcbaca170b06a7122d25a9c4\": rpc error: code = NotFound desc = could not find container \"f121c19af082de034f61ce692a65316195780563fcbaca170b06a7122d25a9c4\": container with ID starting with f121c19af082de034f61ce692a65316195780563fcbaca170b06a7122d25a9c4 not found: ID does not exist" Jan 26 13:27:09 crc kubenswrapper[4844]: I0126 13:27:09.130168 4844 scope.go:117] "RemoveContainer" containerID="8251eafba61441fc1e45cdb9d6f189f4c4161ca1040862b4693ed1d817e82b7b" Jan 26 13:27:09 crc kubenswrapper[4844]: E0126 13:27:09.130415 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8251eafba61441fc1e45cdb9d6f189f4c4161ca1040862b4693ed1d817e82b7b\": container with ID starting with 8251eafba61441fc1e45cdb9d6f189f4c4161ca1040862b4693ed1d817e82b7b not found: ID does not exist" containerID="8251eafba61441fc1e45cdb9d6f189f4c4161ca1040862b4693ed1d817e82b7b" Jan 26 13:27:09 crc kubenswrapper[4844]: I0126 13:27:09.130444 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8251eafba61441fc1e45cdb9d6f189f4c4161ca1040862b4693ed1d817e82b7b"} err="failed to get container status \"8251eafba61441fc1e45cdb9d6f189f4c4161ca1040862b4693ed1d817e82b7b\": rpc error: code = NotFound desc = could not find container \"8251eafba61441fc1e45cdb9d6f189f4c4161ca1040862b4693ed1d817e82b7b\": container with ID starting with 8251eafba61441fc1e45cdb9d6f189f4c4161ca1040862b4693ed1d817e82b7b not found: ID does not exist" Jan 26 13:27:09 crc kubenswrapper[4844]: I0126 13:27:09.130462 4844 scope.go:117] "RemoveContainer" containerID="099c3c4cd1517a011441e2e0f0cea860bea33322e649b1f4f3dfb42d5f5b3352" Jan 26 13:27:09 crc kubenswrapper[4844]: E0126 13:27:09.130731 4844 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"099c3c4cd1517a011441e2e0f0cea860bea33322e649b1f4f3dfb42d5f5b3352\": container with ID starting with 099c3c4cd1517a011441e2e0f0cea860bea33322e649b1f4f3dfb42d5f5b3352 not found: ID does not exist" containerID="099c3c4cd1517a011441e2e0f0cea860bea33322e649b1f4f3dfb42d5f5b3352" Jan 26 13:27:09 crc kubenswrapper[4844]: I0126 13:27:09.130759 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"099c3c4cd1517a011441e2e0f0cea860bea33322e649b1f4f3dfb42d5f5b3352"} err="failed to get container status \"099c3c4cd1517a011441e2e0f0cea860bea33322e649b1f4f3dfb42d5f5b3352\": rpc error: code = NotFound desc = could not find container \"099c3c4cd1517a011441e2e0f0cea860bea33322e649b1f4f3dfb42d5f5b3352\": container with ID starting with 099c3c4cd1517a011441e2e0f0cea860bea33322e649b1f4f3dfb42d5f5b3352 not found: ID does not exist" Jan 26 13:27:09 crc kubenswrapper[4844]: I0126 13:27:09.325830 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24bdfc81-39c4-4776-8342-73d62b114c19" path="/var/lib/kubelet/pods/24bdfc81-39c4-4776-8342-73d62b114c19/volumes" Jan 26 13:27:32 crc kubenswrapper[4844]: I0126 13:27:32.176046 4844 scope.go:117] "RemoveContainer" containerID="d56047903967d5cce23e20c92cae8ddad5f39ac4f2cd51ecde31da6e601d1ff6" Jan 26 13:27:32 crc kubenswrapper[4844]: I0126 13:27:32.201393 4844 scope.go:117] "RemoveContainer" containerID="74f7af7c9d5379d337106062b055dd88f5a20191180577a90d2a22c5d34c333c" Jan 26 13:27:32 crc kubenswrapper[4844]: I0126 13:27:32.222379 4844 scope.go:117] "RemoveContainer" containerID="f4ed873d07844e5d8877f033b1347e4e2cd4b447cf390ba46d048b6bd2c7028f" Jan 26 13:27:32 crc kubenswrapper[4844]: I0126 13:27:32.247021 4844 scope.go:117] "RemoveContainer" containerID="08185805e86068bdcb89060f5bf0ed51e131aa2a717b2d82d6b647ab1a7895fd" Jan 26 13:27:32 crc kubenswrapper[4844]: I0126 13:27:32.270716 4844 scope.go:117] "RemoveContainer" containerID="f778593c77f19cd971369cd93f107ce9557b6ff677fcdb7bf966fe9cde611212" Jan 26 13:27:33 crc kubenswrapper[4844]: I0126 13:27:33.621570 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-dpxlp"] Jan 26 13:27:33 crc kubenswrapper[4844]: E0126 13:27:33.622991 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24bdfc81-39c4-4776-8342-73d62b114c19" containerName="extract-content" Jan 26 13:27:33 crc kubenswrapper[4844]: I0126 13:27:33.623010 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="24bdfc81-39c4-4776-8342-73d62b114c19" containerName="extract-content" Jan 26 13:27:33 crc kubenswrapper[4844]: E0126 13:27:33.623050 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24bdfc81-39c4-4776-8342-73d62b114c19" containerName="registry-server" Jan 26 13:27:33 crc kubenswrapper[4844]: I0126 13:27:33.623058 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="24bdfc81-39c4-4776-8342-73d62b114c19" containerName="registry-server" Jan 26 13:27:33 crc kubenswrapper[4844]: E0126 13:27:33.623094 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24bdfc81-39c4-4776-8342-73d62b114c19" containerName="extract-utilities" Jan 26 13:27:33 crc kubenswrapper[4844]: I0126 13:27:33.623103 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="24bdfc81-39c4-4776-8342-73d62b114c19" containerName="extract-utilities" Jan 26 13:27:33 crc kubenswrapper[4844]: I0126 13:27:33.623617 4844 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="24bdfc81-39c4-4776-8342-73d62b114c19" containerName="registry-server" Jan 26 13:27:33 crc kubenswrapper[4844]: I0126 13:27:33.628752 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dpxlp" Jan 26 13:27:33 crc kubenswrapper[4844]: I0126 13:27:33.673701 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dpxlp"] Jan 26 13:27:33 crc kubenswrapper[4844]: I0126 13:27:33.824001 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bxdm\" (UniqueName: \"kubernetes.io/projected/bde2be4c-34fd-4810-8e29-05bfde8feda0-kube-api-access-9bxdm\") pod \"certified-operators-dpxlp\" (UID: \"bde2be4c-34fd-4810-8e29-05bfde8feda0\") " pod="openshift-marketplace/certified-operators-dpxlp" Jan 26 13:27:33 crc kubenswrapper[4844]: I0126 13:27:33.824059 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bde2be4c-34fd-4810-8e29-05bfde8feda0-utilities\") pod \"certified-operators-dpxlp\" (UID: \"bde2be4c-34fd-4810-8e29-05bfde8feda0\") " pod="openshift-marketplace/certified-operators-dpxlp" Jan 26 13:27:33 crc kubenswrapper[4844]: I0126 13:27:33.824109 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bde2be4c-34fd-4810-8e29-05bfde8feda0-catalog-content\") pod \"certified-operators-dpxlp\" (UID: \"bde2be4c-34fd-4810-8e29-05bfde8feda0\") " pod="openshift-marketplace/certified-operators-dpxlp" Jan 26 13:27:33 crc kubenswrapper[4844]: I0126 13:27:33.926013 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bxdm\" (UniqueName: \"kubernetes.io/projected/bde2be4c-34fd-4810-8e29-05bfde8feda0-kube-api-access-9bxdm\") pod \"certified-operators-dpxlp\" (UID: \"bde2be4c-34fd-4810-8e29-05bfde8feda0\") " pod="openshift-marketplace/certified-operators-dpxlp" Jan 26 13:27:33 crc kubenswrapper[4844]: I0126 13:27:33.926093 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bde2be4c-34fd-4810-8e29-05bfde8feda0-utilities\") pod \"certified-operators-dpxlp\" (UID: \"bde2be4c-34fd-4810-8e29-05bfde8feda0\") " pod="openshift-marketplace/certified-operators-dpxlp" Jan 26 13:27:33 crc kubenswrapper[4844]: I0126 13:27:33.926156 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bde2be4c-34fd-4810-8e29-05bfde8feda0-catalog-content\") pod \"certified-operators-dpxlp\" (UID: \"bde2be4c-34fd-4810-8e29-05bfde8feda0\") " pod="openshift-marketplace/certified-operators-dpxlp" Jan 26 13:27:33 crc kubenswrapper[4844]: I0126 13:27:33.926685 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bde2be4c-34fd-4810-8e29-05bfde8feda0-utilities\") pod \"certified-operators-dpxlp\" (UID: \"bde2be4c-34fd-4810-8e29-05bfde8feda0\") " pod="openshift-marketplace/certified-operators-dpxlp" Jan 26 13:27:33 crc kubenswrapper[4844]: I0126 13:27:33.926752 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bde2be4c-34fd-4810-8e29-05bfde8feda0-catalog-content\") pod 
\"certified-operators-dpxlp\" (UID: \"bde2be4c-34fd-4810-8e29-05bfde8feda0\") " pod="openshift-marketplace/certified-operators-dpxlp" Jan 26 13:27:33 crc kubenswrapper[4844]: I0126 13:27:33.950401 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bxdm\" (UniqueName: \"kubernetes.io/projected/bde2be4c-34fd-4810-8e29-05bfde8feda0-kube-api-access-9bxdm\") pod \"certified-operators-dpxlp\" (UID: \"bde2be4c-34fd-4810-8e29-05bfde8feda0\") " pod="openshift-marketplace/certified-operators-dpxlp" Jan 26 13:27:33 crc kubenswrapper[4844]: I0126 13:27:33.971641 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dpxlp" Jan 26 13:27:34 crc kubenswrapper[4844]: I0126 13:27:34.491496 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dpxlp"] Jan 26 13:27:35 crc kubenswrapper[4844]: I0126 13:27:35.301194 4844 generic.go:334] "Generic (PLEG): container finished" podID="bde2be4c-34fd-4810-8e29-05bfde8feda0" containerID="6440dbbdc677b69f20d36d2b627b3af8260145adec21e1f6152cfb0df5e424a1" exitCode=0 Jan 26 13:27:35 crc kubenswrapper[4844]: I0126 13:27:35.301237 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dpxlp" event={"ID":"bde2be4c-34fd-4810-8e29-05bfde8feda0","Type":"ContainerDied","Data":"6440dbbdc677b69f20d36d2b627b3af8260145adec21e1f6152cfb0df5e424a1"} Jan 26 13:27:35 crc kubenswrapper[4844]: I0126 13:27:35.301265 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dpxlp" event={"ID":"bde2be4c-34fd-4810-8e29-05bfde8feda0","Type":"ContainerStarted","Data":"0ab65f47a92c163b49a22149abe038bd354c1cfee6bd383b6ac8c05cb01c44fc"} Jan 26 13:27:37 crc kubenswrapper[4844]: I0126 13:27:37.346693 4844 generic.go:334] "Generic (PLEG): container finished" podID="bde2be4c-34fd-4810-8e29-05bfde8feda0" containerID="efa84074c1bad4763b7b95cf2b26573828faf0da880df918d869b295de8f498d" exitCode=0 Jan 26 13:27:37 crc kubenswrapper[4844]: I0126 13:27:37.346783 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dpxlp" event={"ID":"bde2be4c-34fd-4810-8e29-05bfde8feda0","Type":"ContainerDied","Data":"efa84074c1bad4763b7b95cf2b26573828faf0da880df918d869b295de8f498d"} Jan 26 13:27:38 crc kubenswrapper[4844]: I0126 13:27:38.051349 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-561e-account-create-update-9g4xg"] Jan 26 13:27:38 crc kubenswrapper[4844]: I0126 13:27:38.061038 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-561e-account-create-update-9g4xg"] Jan 26 13:27:38 crc kubenswrapper[4844]: I0126 13:27:38.358719 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dpxlp" event={"ID":"bde2be4c-34fd-4810-8e29-05bfde8feda0","Type":"ContainerStarted","Data":"ce95b0a6457e98586ec34d5ea681cbd04d26f3065161bca1be9213aeefd636ec"} Jan 26 13:27:38 crc kubenswrapper[4844]: I0126 13:27:38.378287 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-dpxlp" podStartSLOduration=2.801188919 podStartE2EDuration="5.37826613s" podCreationTimestamp="2026-01-26 13:27:33 +0000 UTC" firstStartedPulling="2026-01-26 13:27:35.303306614 +0000 UTC m=+2632.236674236" lastFinishedPulling="2026-01-26 13:27:37.880383835 +0000 UTC m=+2634.813751447" 
observedRunningTime="2026-01-26 13:27:38.37738715 +0000 UTC m=+2635.310754782" watchObservedRunningTime="2026-01-26 13:27:38.37826613 +0000 UTC m=+2635.311633742" Jan 26 13:27:39 crc kubenswrapper[4844]: I0126 13:27:39.048925 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-ab2d-account-create-update-fjgzg"] Jan 26 13:27:39 crc kubenswrapper[4844]: I0126 13:27:39.063384 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-jq7ln"] Jan 26 13:27:39 crc kubenswrapper[4844]: I0126 13:27:39.077646 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-hstpj"] Jan 26 13:27:39 crc kubenswrapper[4844]: I0126 13:27:39.088417 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-hstpj"] Jan 26 13:27:39 crc kubenswrapper[4844]: I0126 13:27:39.098295 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-jq7ln"] Jan 26 13:27:39 crc kubenswrapper[4844]: I0126 13:27:39.107106 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-db-create-lrffj"] Jan 26 13:27:39 crc kubenswrapper[4844]: I0126 13:27:39.116968 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-db-create-lrffj"] Jan 26 13:27:39 crc kubenswrapper[4844]: I0126 13:27:39.128757 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-ab2d-account-create-update-fjgzg"] Jan 26 13:27:39 crc kubenswrapper[4844]: I0126 13:27:39.138732 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-eca9-account-create-update-8q2q2"] Jan 26 13:27:39 crc kubenswrapper[4844]: I0126 13:27:39.146566 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-eca9-account-create-update-8q2q2"] Jan 26 13:27:39 crc kubenswrapper[4844]: I0126 13:27:39.327820 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="185bd916-a6be-4d5f-851b-260ad742e54e" path="/var/lib/kubelet/pods/185bd916-a6be-4d5f-851b-260ad742e54e/volumes" Jan 26 13:27:39 crc kubenswrapper[4844]: I0126 13:27:39.329044 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6fb93e78-de86-442b-b44d-6b3281ca3618" path="/var/lib/kubelet/pods/6fb93e78-de86-442b-b44d-6b3281ca3618/volumes" Jan 26 13:27:39 crc kubenswrapper[4844]: I0126 13:27:39.330378 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ad24d6d-9838-4344-be0c-777f0c6c6246" path="/var/lib/kubelet/pods/8ad24d6d-9838-4344-be0c-777f0c6c6246/volumes" Jan 26 13:27:39 crc kubenswrapper[4844]: I0126 13:27:39.331559 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a80cb87d-d461-4f90-8727-d6958eb5dac2" path="/var/lib/kubelet/pods/a80cb87d-d461-4f90-8727-d6958eb5dac2/volumes" Jan 26 13:27:39 crc kubenswrapper[4844]: I0126 13:27:39.333012 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="babcb55b-51b8-4031-a9e6-49df01680aa5" path="/var/lib/kubelet/pods/babcb55b-51b8-4031-a9e6-49df01680aa5/volumes" Jan 26 13:27:39 crc kubenswrapper[4844]: I0126 13:27:39.334256 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bceea47e-5bf5-412a-a8d9-9c50e01d4c76" path="/var/lib/kubelet/pods/bceea47e-5bf5-412a-a8d9-9c50e01d4c76/volumes" Jan 26 13:27:43 crc kubenswrapper[4844]: I0126 13:27:43.972153 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-dpxlp" Jan 26 13:27:43 crc 
kubenswrapper[4844]: I0126 13:27:43.972647 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-dpxlp" Jan 26 13:27:44 crc kubenswrapper[4844]: I0126 13:27:44.061027 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-dpxlp" Jan 26 13:27:44 crc kubenswrapper[4844]: I0126 13:27:44.477287 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-dpxlp" Jan 26 13:27:44 crc kubenswrapper[4844]: I0126 13:27:44.532366 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dpxlp"] Jan 26 13:27:46 crc kubenswrapper[4844]: I0126 13:27:46.448047 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-dpxlp" podUID="bde2be4c-34fd-4810-8e29-05bfde8feda0" containerName="registry-server" containerID="cri-o://ce95b0a6457e98586ec34d5ea681cbd04d26f3065161bca1be9213aeefd636ec" gracePeriod=2 Jan 26 13:27:47 crc kubenswrapper[4844]: I0126 13:27:47.458803 4844 generic.go:334] "Generic (PLEG): container finished" podID="bde2be4c-34fd-4810-8e29-05bfde8feda0" containerID="ce95b0a6457e98586ec34d5ea681cbd04d26f3065161bca1be9213aeefd636ec" exitCode=0 Jan 26 13:27:47 crc kubenswrapper[4844]: I0126 13:27:47.458875 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dpxlp" event={"ID":"bde2be4c-34fd-4810-8e29-05bfde8feda0","Type":"ContainerDied","Data":"ce95b0a6457e98586ec34d5ea681cbd04d26f3065161bca1be9213aeefd636ec"} Jan 26 13:27:47 crc kubenswrapper[4844]: I0126 13:27:47.459025 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dpxlp" event={"ID":"bde2be4c-34fd-4810-8e29-05bfde8feda0","Type":"ContainerDied","Data":"0ab65f47a92c163b49a22149abe038bd354c1cfee6bd383b6ac8c05cb01c44fc"} Jan 26 13:27:47 crc kubenswrapper[4844]: I0126 13:27:47.459037 4844 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ab65f47a92c163b49a22149abe038bd354c1cfee6bd383b6ac8c05cb01c44fc" Jan 26 13:27:47 crc kubenswrapper[4844]: I0126 13:27:47.480974 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dpxlp" Jan 26 13:27:47 crc kubenswrapper[4844]: I0126 13:27:47.615401 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bde2be4c-34fd-4810-8e29-05bfde8feda0-utilities\") pod \"bde2be4c-34fd-4810-8e29-05bfde8feda0\" (UID: \"bde2be4c-34fd-4810-8e29-05bfde8feda0\") " Jan 26 13:27:47 crc kubenswrapper[4844]: I0126 13:27:47.615646 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9bxdm\" (UniqueName: \"kubernetes.io/projected/bde2be4c-34fd-4810-8e29-05bfde8feda0-kube-api-access-9bxdm\") pod \"bde2be4c-34fd-4810-8e29-05bfde8feda0\" (UID: \"bde2be4c-34fd-4810-8e29-05bfde8feda0\") " Jan 26 13:27:47 crc kubenswrapper[4844]: I0126 13:27:47.615694 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bde2be4c-34fd-4810-8e29-05bfde8feda0-catalog-content\") pod \"bde2be4c-34fd-4810-8e29-05bfde8feda0\" (UID: \"bde2be4c-34fd-4810-8e29-05bfde8feda0\") " Jan 26 13:27:47 crc kubenswrapper[4844]: I0126 13:27:47.616638 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bde2be4c-34fd-4810-8e29-05bfde8feda0-utilities" (OuterVolumeSpecName: "utilities") pod "bde2be4c-34fd-4810-8e29-05bfde8feda0" (UID: "bde2be4c-34fd-4810-8e29-05bfde8feda0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:27:47 crc kubenswrapper[4844]: I0126 13:27:47.622119 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bde2be4c-34fd-4810-8e29-05bfde8feda0-kube-api-access-9bxdm" (OuterVolumeSpecName: "kube-api-access-9bxdm") pod "bde2be4c-34fd-4810-8e29-05bfde8feda0" (UID: "bde2be4c-34fd-4810-8e29-05bfde8feda0"). InnerVolumeSpecName "kube-api-access-9bxdm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:27:47 crc kubenswrapper[4844]: I0126 13:27:47.662886 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bde2be4c-34fd-4810-8e29-05bfde8feda0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bde2be4c-34fd-4810-8e29-05bfde8feda0" (UID: "bde2be4c-34fd-4810-8e29-05bfde8feda0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:27:47 crc kubenswrapper[4844]: I0126 13:27:47.718130 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9bxdm\" (UniqueName: \"kubernetes.io/projected/bde2be4c-34fd-4810-8e29-05bfde8feda0-kube-api-access-9bxdm\") on node \"crc\" DevicePath \"\"" Jan 26 13:27:47 crc kubenswrapper[4844]: I0126 13:27:47.718162 4844 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bde2be4c-34fd-4810-8e29-05bfde8feda0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 13:27:47 crc kubenswrapper[4844]: I0126 13:27:47.718172 4844 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bde2be4c-34fd-4810-8e29-05bfde8feda0-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 13:27:48 crc kubenswrapper[4844]: I0126 13:27:48.468234 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dpxlp" Jan 26 13:27:48 crc kubenswrapper[4844]: I0126 13:27:48.511326 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dpxlp"] Jan 26 13:27:48 crc kubenswrapper[4844]: I0126 13:27:48.524125 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-dpxlp"] Jan 26 13:27:49 crc kubenswrapper[4844]: I0126 13:27:49.324894 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bde2be4c-34fd-4810-8e29-05bfde8feda0" path="/var/lib/kubelet/pods/bde2be4c-34fd-4810-8e29-05bfde8feda0/volumes" Jan 26 13:27:52 crc kubenswrapper[4844]: I0126 13:27:52.043343 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-s92kk"] Jan 26 13:27:52 crc kubenswrapper[4844]: I0126 13:27:52.058610 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-s92kk"] Jan 26 13:27:53 crc kubenswrapper[4844]: I0126 13:27:53.344458 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08a4b367-08ab-438d-867c-dc0752837f18" path="/var/lib/kubelet/pods/08a4b367-08ab-438d-867c-dc0752837f18/volumes" Jan 26 13:28:11 crc kubenswrapper[4844]: I0126 13:28:11.692817 4844 generic.go:334] "Generic (PLEG): container finished" podID="c1079155-3798-4f39-ab56-dffea2038df8" containerID="375fcbc9e9ce500f7935c0373a1331986f1d90191544b224b1f547dbc49ee957" exitCode=0 Jan 26 13:28:11 crc kubenswrapper[4844]: I0126 13:28:11.692891 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-88p79" event={"ID":"c1079155-3798-4f39-ab56-dffea2038df8","Type":"ContainerDied","Data":"375fcbc9e9ce500f7935c0373a1331986f1d90191544b224b1f547dbc49ee957"} Jan 26 13:28:13 crc kubenswrapper[4844]: I0126 13:28:13.083073 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-88p79" Jan 26 13:28:13 crc kubenswrapper[4844]: I0126 13:28:13.223284 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c1079155-3798-4f39-ab56-dffea2038df8-inventory\") pod \"c1079155-3798-4f39-ab56-dffea2038df8\" (UID: \"c1079155-3798-4f39-ab56-dffea2038df8\") " Jan 26 13:28:13 crc kubenswrapper[4844]: I0126 13:28:13.223783 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5pj5j\" (UniqueName: \"kubernetes.io/projected/c1079155-3798-4f39-ab56-dffea2038df8-kube-api-access-5pj5j\") pod \"c1079155-3798-4f39-ab56-dffea2038df8\" (UID: \"c1079155-3798-4f39-ab56-dffea2038df8\") " Jan 26 13:28:13 crc kubenswrapper[4844]: I0126 13:28:13.223903 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c1079155-3798-4f39-ab56-dffea2038df8-ssh-key-openstack-edpm-ipam\") pod \"c1079155-3798-4f39-ab56-dffea2038df8\" (UID: \"c1079155-3798-4f39-ab56-dffea2038df8\") " Jan 26 13:28:13 crc kubenswrapper[4844]: I0126 13:28:13.224028 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1079155-3798-4f39-ab56-dffea2038df8-bootstrap-combined-ca-bundle\") pod \"c1079155-3798-4f39-ab56-dffea2038df8\" (UID: \"c1079155-3798-4f39-ab56-dffea2038df8\") " Jan 26 13:28:13 crc kubenswrapper[4844]: I0126 13:28:13.228710 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1079155-3798-4f39-ab56-dffea2038df8-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "c1079155-3798-4f39-ab56-dffea2038df8" (UID: "c1079155-3798-4f39-ab56-dffea2038df8"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:28:13 crc kubenswrapper[4844]: I0126 13:28:13.228836 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1079155-3798-4f39-ab56-dffea2038df8-kube-api-access-5pj5j" (OuterVolumeSpecName: "kube-api-access-5pj5j") pod "c1079155-3798-4f39-ab56-dffea2038df8" (UID: "c1079155-3798-4f39-ab56-dffea2038df8"). InnerVolumeSpecName "kube-api-access-5pj5j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:28:13 crc kubenswrapper[4844]: I0126 13:28:13.250157 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1079155-3798-4f39-ab56-dffea2038df8-inventory" (OuterVolumeSpecName: "inventory") pod "c1079155-3798-4f39-ab56-dffea2038df8" (UID: "c1079155-3798-4f39-ab56-dffea2038df8"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:28:13 crc kubenswrapper[4844]: I0126 13:28:13.258414 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1079155-3798-4f39-ab56-dffea2038df8-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c1079155-3798-4f39-ab56-dffea2038df8" (UID: "c1079155-3798-4f39-ab56-dffea2038df8"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:28:13 crc kubenswrapper[4844]: I0126 13:28:13.326298 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5pj5j\" (UniqueName: \"kubernetes.io/projected/c1079155-3798-4f39-ab56-dffea2038df8-kube-api-access-5pj5j\") on node \"crc\" DevicePath \"\"" Jan 26 13:28:13 crc kubenswrapper[4844]: I0126 13:28:13.326332 4844 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c1079155-3798-4f39-ab56-dffea2038df8-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 13:28:13 crc kubenswrapper[4844]: I0126 13:28:13.326342 4844 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1079155-3798-4f39-ab56-dffea2038df8-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 13:28:13 crc kubenswrapper[4844]: I0126 13:28:13.326353 4844 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c1079155-3798-4f39-ab56-dffea2038df8-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 13:28:13 crc kubenswrapper[4844]: I0126 13:28:13.716352 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-88p79" event={"ID":"c1079155-3798-4f39-ab56-dffea2038df8","Type":"ContainerDied","Data":"37d54a1580286fb86f7d5a6182ccb76a006053b77dea27a7bfe64add510f104c"} Jan 26 13:28:13 crc kubenswrapper[4844]: I0126 13:28:13.716399 4844 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="37d54a1580286fb86f7d5a6182ccb76a006053b77dea27a7bfe64add510f104c" Jan 26 13:28:13 crc kubenswrapper[4844]: I0126 13:28:13.716461 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-88p79" Jan 26 13:28:13 crc kubenswrapper[4844]: I0126 13:28:13.804303 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gkkmx"] Jan 26 13:28:13 crc kubenswrapper[4844]: E0126 13:28:13.804818 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bde2be4c-34fd-4810-8e29-05bfde8feda0" containerName="extract-utilities" Jan 26 13:28:13 crc kubenswrapper[4844]: I0126 13:28:13.804844 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="bde2be4c-34fd-4810-8e29-05bfde8feda0" containerName="extract-utilities" Jan 26 13:28:13 crc kubenswrapper[4844]: E0126 13:28:13.804857 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1079155-3798-4f39-ab56-dffea2038df8" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 26 13:28:13 crc kubenswrapper[4844]: I0126 13:28:13.804866 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1079155-3798-4f39-ab56-dffea2038df8" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 26 13:28:13 crc kubenswrapper[4844]: E0126 13:28:13.804920 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bde2be4c-34fd-4810-8e29-05bfde8feda0" containerName="extract-content" Jan 26 13:28:13 crc kubenswrapper[4844]: I0126 13:28:13.804928 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="bde2be4c-34fd-4810-8e29-05bfde8feda0" containerName="extract-content" Jan 26 13:28:13 crc kubenswrapper[4844]: E0126 13:28:13.804964 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bde2be4c-34fd-4810-8e29-05bfde8feda0" containerName="registry-server" Jan 26 13:28:13 crc kubenswrapper[4844]: I0126 13:28:13.804971 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="bde2be4c-34fd-4810-8e29-05bfde8feda0" containerName="registry-server" Jan 26 13:28:13 crc kubenswrapper[4844]: I0126 13:28:13.805152 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="bde2be4c-34fd-4810-8e29-05bfde8feda0" containerName="registry-server" Jan 26 13:28:13 crc kubenswrapper[4844]: I0126 13:28:13.805174 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1079155-3798-4f39-ab56-dffea2038df8" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 26 13:28:13 crc kubenswrapper[4844]: I0126 13:28:13.805964 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gkkmx" Jan 26 13:28:13 crc kubenswrapper[4844]: I0126 13:28:13.807471 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 13:28:13 crc kubenswrapper[4844]: I0126 13:28:13.808243 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 13:28:13 crc kubenswrapper[4844]: I0126 13:28:13.808441 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 13:28:13 crc kubenswrapper[4844]: I0126 13:28:13.808637 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r4j2z" Jan 26 13:28:13 crc kubenswrapper[4844]: I0126 13:28:13.818774 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gkkmx"] Jan 26 13:28:13 crc kubenswrapper[4844]: I0126 13:28:13.950805 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/27022163-5166-48e2-afc4-e984baa40303-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-gkkmx\" (UID: \"27022163-5166-48e2-afc4-e984baa40303\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gkkmx" Jan 26 13:28:13 crc kubenswrapper[4844]: I0126 13:28:13.950897 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlrx8\" (UniqueName: \"kubernetes.io/projected/27022163-5166-48e2-afc4-e984baa40303-kube-api-access-zlrx8\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-gkkmx\" (UID: \"27022163-5166-48e2-afc4-e984baa40303\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gkkmx" Jan 26 13:28:13 crc kubenswrapper[4844]: I0126 13:28:13.951284 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/27022163-5166-48e2-afc4-e984baa40303-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-gkkmx\" (UID: \"27022163-5166-48e2-afc4-e984baa40303\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gkkmx" Jan 26 13:28:14 crc kubenswrapper[4844]: I0126 13:28:14.053563 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/27022163-5166-48e2-afc4-e984baa40303-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-gkkmx\" (UID: \"27022163-5166-48e2-afc4-e984baa40303\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gkkmx" Jan 26 13:28:14 crc kubenswrapper[4844]: I0126 13:28:14.053919 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/27022163-5166-48e2-afc4-e984baa40303-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-gkkmx\" (UID: \"27022163-5166-48e2-afc4-e984baa40303\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gkkmx" Jan 26 13:28:14 crc kubenswrapper[4844]: I0126 13:28:14.054061 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zlrx8\" (UniqueName: 
\"kubernetes.io/projected/27022163-5166-48e2-afc4-e984baa40303-kube-api-access-zlrx8\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-gkkmx\" (UID: \"27022163-5166-48e2-afc4-e984baa40303\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gkkmx" Jan 26 13:28:14 crc kubenswrapper[4844]: I0126 13:28:14.056786 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/27022163-5166-48e2-afc4-e984baa40303-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-gkkmx\" (UID: \"27022163-5166-48e2-afc4-e984baa40303\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gkkmx" Jan 26 13:28:14 crc kubenswrapper[4844]: I0126 13:28:14.060334 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/27022163-5166-48e2-afc4-e984baa40303-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-gkkmx\" (UID: \"27022163-5166-48e2-afc4-e984baa40303\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gkkmx" Jan 26 13:28:14 crc kubenswrapper[4844]: I0126 13:28:14.071537 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zlrx8\" (UniqueName: \"kubernetes.io/projected/27022163-5166-48e2-afc4-e984baa40303-kube-api-access-zlrx8\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-gkkmx\" (UID: \"27022163-5166-48e2-afc4-e984baa40303\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gkkmx" Jan 26 13:28:14 crc kubenswrapper[4844]: I0126 13:28:14.127410 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gkkmx" Jan 26 13:28:14 crc kubenswrapper[4844]: I0126 13:28:14.619651 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gkkmx"] Jan 26 13:28:14 crc kubenswrapper[4844]: I0126 13:28:14.725680 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gkkmx" event={"ID":"27022163-5166-48e2-afc4-e984baa40303","Type":"ContainerStarted","Data":"223254dd980111602c6aeab038aa30d3b79ae73d6d120f2a8e98573323896ffc"} Jan 26 13:28:15 crc kubenswrapper[4844]: I0126 13:28:15.048537 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-467jd"] Jan 26 13:28:15 crc kubenswrapper[4844]: I0126 13:28:15.060370 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-467jd"] Jan 26 13:28:15 crc kubenswrapper[4844]: I0126 13:28:15.348382 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c21381d1-dcd0-4f0e-a4e1-ef54e9c3cc55" path="/var/lib/kubelet/pods/c21381d1-dcd0-4f0e-a4e1-ef54e9c3cc55/volumes" Jan 26 13:28:15 crc kubenswrapper[4844]: I0126 13:28:15.734657 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gkkmx" event={"ID":"27022163-5166-48e2-afc4-e984baa40303","Type":"ContainerStarted","Data":"a8320984348f6efcf22b671ac1af115de55ff6ab25b6517ddf30e63a5a182ab8"} Jan 26 13:28:15 crc kubenswrapper[4844]: I0126 13:28:15.758619 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gkkmx" podStartSLOduration=2.296348635 podStartE2EDuration="2.758583148s" 
podCreationTimestamp="2026-01-26 13:28:13 +0000 UTC" firstStartedPulling="2026-01-26 13:28:14.624421158 +0000 UTC m=+2671.557788770" lastFinishedPulling="2026-01-26 13:28:15.086655671 +0000 UTC m=+2672.020023283" observedRunningTime="2026-01-26 13:28:15.747994231 +0000 UTC m=+2672.681361843" watchObservedRunningTime="2026-01-26 13:28:15.758583148 +0000 UTC m=+2672.691950760" Jan 26 13:28:18 crc kubenswrapper[4844]: I0126 13:28:18.029335 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-7nthx"] Jan 26 13:28:18 crc kubenswrapper[4844]: I0126 13:28:18.038372 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-7nthx"] Jan 26 13:28:19 crc kubenswrapper[4844]: I0126 13:28:19.054909 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-4339-account-create-update-lgkll"] Jan 26 13:28:19 crc kubenswrapper[4844]: I0126 13:28:19.069762 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-e0ab-account-create-update-d7qtp"] Jan 26 13:28:19 crc kubenswrapper[4844]: I0126 13:28:19.077957 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-81e2-account-create-update-8bfjh"] Jan 26 13:28:19 crc kubenswrapper[4844]: I0126 13:28:19.097079 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-4339-account-create-update-lgkll"] Jan 26 13:28:19 crc kubenswrapper[4844]: I0126 13:28:19.112036 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-91b1-account-create-update-5b86b"] Jan 26 13:28:19 crc kubenswrapper[4844]: I0126 13:28:19.120650 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-v22sn"] Jan 26 13:28:19 crc kubenswrapper[4844]: I0126 13:28:19.128985 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-81e2-account-create-update-8bfjh"] Jan 26 13:28:19 crc kubenswrapper[4844]: I0126 13:28:19.136790 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-e0ab-account-create-update-d7qtp"] Jan 26 13:28:19 crc kubenswrapper[4844]: I0126 13:28:19.146086 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-91b1-account-create-update-5b86b"] Jan 26 13:28:19 crc kubenswrapper[4844]: I0126 13:28:19.155703 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-v22sn"] Jan 26 13:28:19 crc kubenswrapper[4844]: I0126 13:28:19.165228 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-pgdkm"] Jan 26 13:28:19 crc kubenswrapper[4844]: I0126 13:28:19.175470 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-pgdkm"] Jan 26 13:28:19 crc kubenswrapper[4844]: I0126 13:28:19.407990 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="021fb8fd-810b-4042-adfd-6ce50bcacbf0" path="/var/lib/kubelet/pods/021fb8fd-810b-4042-adfd-6ce50bcacbf0/volumes" Jan 26 13:28:19 crc kubenswrapper[4844]: I0126 13:28:19.409637 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2654c2cc-3479-4c0c-89e3-26ecfeedb613" path="/var/lib/kubelet/pods/2654c2cc-3479-4c0c-89e3-26ecfeedb613/volumes" Jan 26 13:28:19 crc kubenswrapper[4844]: I0126 13:28:19.434538 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26c50a55-5ec7-41d8-a69a-607f0331039a" path="/var/lib/kubelet/pods/26c50a55-5ec7-41d8-a69a-607f0331039a/volumes" Jan 26 13:28:19 crc kubenswrapper[4844]: I0126 13:28:19.436128 4844 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0033ca5-7b7d-464e-ba26-a59ca8f226fe" path="/var/lib/kubelet/pods/c0033ca5-7b7d-464e-ba26-a59ca8f226fe/volumes" Jan 26 13:28:19 crc kubenswrapper[4844]: I0126 13:28:19.437041 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e08e4d13-48d4-434c-a816-b64d161f09be" path="/var/lib/kubelet/pods/e08e4d13-48d4-434c-a816-b64d161f09be/volumes" Jan 26 13:28:19 crc kubenswrapper[4844]: I0126 13:28:19.437964 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2eae26a-a2cb-4a25-b77c-021951cf33b3" path="/var/lib/kubelet/pods/e2eae26a-a2cb-4a25-b77c-021951cf33b3/volumes" Jan 26 13:28:19 crc kubenswrapper[4844]: I0126 13:28:19.439076 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fbe5f771-2b02-4d1d-93bb-9e59aa3723ad" path="/var/lib/kubelet/pods/fbe5f771-2b02-4d1d-93bb-9e59aa3723ad/volumes" Jan 26 13:28:32 crc kubenswrapper[4844]: I0126 13:28:32.367375 4844 scope.go:117] "RemoveContainer" containerID="930b3b4675cfc68af0f2bc5357fa1c12aea62c99fd40c4fee09bcc2da4fdeb7d" Jan 26 13:28:32 crc kubenswrapper[4844]: I0126 13:28:32.399777 4844 scope.go:117] "RemoveContainer" containerID="6edfe5ab404bbe2c7e6e6c5bf1ae4235bf4c7059fc21e85b47e5e94d611ba096" Jan 26 13:28:32 crc kubenswrapper[4844]: I0126 13:28:32.459131 4844 scope.go:117] "RemoveContainer" containerID="93948a365f063de896fea97ba0e5d8a70050a44cced46c3ffc82e7d4a783412d" Jan 26 13:28:32 crc kubenswrapper[4844]: I0126 13:28:32.514530 4844 scope.go:117] "RemoveContainer" containerID="32dcbb0c5d0ec630a857a852e8c41f505f3ffbfb3033a0261aec48207394718c" Jan 26 13:28:32 crc kubenswrapper[4844]: I0126 13:28:32.556720 4844 scope.go:117] "RemoveContainer" containerID="8b04fcc51494e0b878c0902ec7083e55dd8b0a00193f973070a361bda6c60a24" Jan 26 13:28:32 crc kubenswrapper[4844]: I0126 13:28:32.614992 4844 scope.go:117] "RemoveContainer" containerID="0e7de2ceafa9c3a048c8d86a9129c054789c053516a3573e6315e0e7e971482e" Jan 26 13:28:32 crc kubenswrapper[4844]: I0126 13:28:32.666252 4844 scope.go:117] "RemoveContainer" containerID="f8a1ef6b46b0ad8c3cee0ccb59b771e7bce23387e86d33395e4ff38a1b5c67aa" Jan 26 13:28:32 crc kubenswrapper[4844]: I0126 13:28:32.695528 4844 scope.go:117] "RemoveContainer" containerID="8fa46a77ed651b1eb9404c5da7583979d0f8b9cf7c06b27dc98d255698e3464f" Jan 26 13:28:32 crc kubenswrapper[4844]: I0126 13:28:32.722152 4844 scope.go:117] "RemoveContainer" containerID="bb28fb2eb48fb25eb9f1f034eb6ce340e7baa01fb38d54643c04c2815f25b5b8" Jan 26 13:28:32 crc kubenswrapper[4844]: I0126 13:28:32.746865 4844 scope.go:117] "RemoveContainer" containerID="784225fe1aacf0f914c500b18da5c4ea54167172e85edc992bd755835d16030c" Jan 26 13:28:32 crc kubenswrapper[4844]: I0126 13:28:32.776149 4844 scope.go:117] "RemoveContainer" containerID="de8d0d169b4d697da01169da87bb3a5a63f75c2051cadb542947fec8e02cbfa5" Jan 26 13:28:32 crc kubenswrapper[4844]: I0126 13:28:32.807303 4844 scope.go:117] "RemoveContainer" containerID="284d132795c468d96b362fb5e87efe1c64b1f7c4020b2ee50a2f4545f7862208" Jan 26 13:28:32 crc kubenswrapper[4844]: I0126 13:28:32.834512 4844 scope.go:117] "RemoveContainer" containerID="6017853340d8abcef8840a5f2b6a4e39e10b2e8269431cf428633fe0ebeb52f6" Jan 26 13:28:32 crc kubenswrapper[4844]: I0126 13:28:32.865186 4844 scope.go:117] "RemoveContainer" containerID="4bb5edb2d0e964fbcd8f310bd7609872e6ca5523f7e8900cb617a0f7b8254f07" Jan 26 13:28:32 crc kubenswrapper[4844]: I0126 13:28:32.890626 4844 scope.go:117] "RemoveContainer" 
containerID="3cdf1b00e1f4d43c8d3ed116513dd344d1931a3d900971060ca69f659e05ce90" Jan 26 13:28:36 crc kubenswrapper[4844]: I0126 13:28:36.364920 4844 patch_prober.go:28] interesting pod/machine-config-daemon-j7r9j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 13:28:36 crc kubenswrapper[4844]: I0126 13:28:36.365454 4844 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 13:28:46 crc kubenswrapper[4844]: I0126 13:28:46.057087 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-td22t"] Jan 26 13:28:46 crc kubenswrapper[4844]: I0126 13:28:46.071975 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-td22t"] Jan 26 13:28:47 crc kubenswrapper[4844]: I0126 13:28:47.667161 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ca9f483-dabf-40a9-be25-312db82ffd23" path="/var/lib/kubelet/pods/0ca9f483-dabf-40a9-be25-312db82ffd23/volumes" Jan 26 13:29:06 crc kubenswrapper[4844]: I0126 13:29:06.365061 4844 patch_prober.go:28] interesting pod/machine-config-daemon-j7r9j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 13:29:06 crc kubenswrapper[4844]: I0126 13:29:06.365637 4844 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 13:29:31 crc kubenswrapper[4844]: I0126 13:29:31.069531 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-db-sync-5w9q7"] Jan 26 13:29:31 crc kubenswrapper[4844]: I0126 13:29:31.080742 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-db-sync-5w9q7"] Jan 26 13:29:31 crc kubenswrapper[4844]: I0126 13:29:31.330236 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db436f05-9b6d-4342-82d0-524c18fe6079" path="/var/lib/kubelet/pods/db436f05-9b6d-4342-82d0-524c18fe6079/volumes" Jan 26 13:29:33 crc kubenswrapper[4844]: I0126 13:29:33.276892 4844 scope.go:117] "RemoveContainer" containerID="3cc359ea290c4a1ba3f4026d4e28a17fb1253b3a06d9147e6b61b211af705ac2" Jan 26 13:29:33 crc kubenswrapper[4844]: I0126 13:29:33.311697 4844 scope.go:117] "RemoveContainer" containerID="fdabb2e88956090cbd85f11661adfe1cbcfa07e35d6820a13b09795455864443" Jan 26 13:29:35 crc kubenswrapper[4844]: I0126 13:29:35.918312 4844 scope.go:117] "RemoveContainer" containerID="435a540d0a169e47db4e9ee371b75ef04e541d1c3989937dc65c7d0d5c99f2fb" Jan 26 13:29:35 crc kubenswrapper[4844]: I0126 13:29:35.958296 4844 scope.go:117] "RemoveContainer" containerID="b01bde1b77e6b4012bd36c236ff5cf164902b763ff25a61357efefa4c71f214c" Jan 26 13:29:36 crc kubenswrapper[4844]: I0126 13:29:36.364952 4844 patch_prober.go:28] interesting pod/machine-config-daemon-j7r9j 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 13:29:36 crc kubenswrapper[4844]: I0126 13:29:36.365019 4844 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 13:29:36 crc kubenswrapper[4844]: I0126 13:29:36.365085 4844 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" Jan 26 13:29:36 crc kubenswrapper[4844]: I0126 13:29:36.366022 4844 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a82b801a0f9019b696e73b93e7bd511e023d38ac840f413770a1b3ad588c4466"} pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 13:29:36 crc kubenswrapper[4844]: I0126 13:29:36.366101 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" containerName="machine-config-daemon" containerID="cri-o://a82b801a0f9019b696e73b93e7bd511e023d38ac840f413770a1b3ad588c4466" gracePeriod=600 Jan 26 13:29:37 crc kubenswrapper[4844]: I0126 13:29:37.203899 4844 generic.go:334] "Generic (PLEG): container finished" podID="e3602fc7-397b-4d73-ab0c-45acc047397b" containerID="a82b801a0f9019b696e73b93e7bd511e023d38ac840f413770a1b3ad588c4466" exitCode=0 Jan 26 13:29:37 crc kubenswrapper[4844]: I0126 13:29:37.203969 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" event={"ID":"e3602fc7-397b-4d73-ab0c-45acc047397b","Type":"ContainerDied","Data":"a82b801a0f9019b696e73b93e7bd511e023d38ac840f413770a1b3ad588c4466"} Jan 26 13:29:37 crc kubenswrapper[4844]: I0126 13:29:37.204752 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" event={"ID":"e3602fc7-397b-4d73-ab0c-45acc047397b","Type":"ContainerStarted","Data":"1e5683696cedef0e24e380cd8f5a01d2be8ea7dd57619e21972834c26754b83e"} Jan 26 13:29:37 crc kubenswrapper[4844]: I0126 13:29:37.204781 4844 scope.go:117] "RemoveContainer" containerID="003e8a783231d0610c61a79899be5429104525b8053b82bc16d011d8b1eff87d" Jan 26 13:29:43 crc kubenswrapper[4844]: I0126 13:29:43.055193 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-ln4pq"] Jan 26 13:29:43 crc kubenswrapper[4844]: I0126 13:29:43.075042 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-ln4pq"] Jan 26 13:29:43 crc kubenswrapper[4844]: I0126 13:29:43.326889 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef403703-395e-4db1-a9f5-a8e011e39ff2" path="/var/lib/kubelet/pods/ef403703-395e-4db1-a9f5-a8e011e39ff2/volumes" Jan 26 13:29:45 crc kubenswrapper[4844]: I0126 13:29:45.043885 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-bt68v"] Jan 26 13:29:45 crc kubenswrapper[4844]: I0126 13:29:45.055555 4844 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-bt68v"] Jan 26 13:29:45 crc kubenswrapper[4844]: I0126 13:29:45.332935 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="847c2c6b-16a5-4c1d-9122-81accf513fb4" path="/var/lib/kubelet/pods/847c2c6b-16a5-4c1d-9122-81accf513fb4/volumes" Jan 26 13:29:48 crc kubenswrapper[4844]: I0126 13:29:48.038699 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-2xnzf"] Jan 26 13:29:48 crc kubenswrapper[4844]: I0126 13:29:48.049720 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-2xnzf"] Jan 26 13:29:49 crc kubenswrapper[4844]: I0126 13:29:49.324922 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43fe5130-0714-4f40-9d6a-9384eb72fa0a" path="/var/lib/kubelet/pods/43fe5130-0714-4f40-9d6a-9384eb72fa0a/volumes" Jan 26 13:29:55 crc kubenswrapper[4844]: I0126 13:29:55.487102 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qsmht"] Jan 26 13:29:55 crc kubenswrapper[4844]: I0126 13:29:55.494514 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qsmht" Jan 26 13:29:55 crc kubenswrapper[4844]: I0126 13:29:55.506221 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qsmht"] Jan 26 13:29:55 crc kubenswrapper[4844]: I0126 13:29:55.619064 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8f8a55a-3a35-4983-9d60-7cb1c8ebae5d-utilities\") pod \"redhat-marketplace-qsmht\" (UID: \"e8f8a55a-3a35-4983-9d60-7cb1c8ebae5d\") " pod="openshift-marketplace/redhat-marketplace-qsmht" Jan 26 13:29:55 crc kubenswrapper[4844]: I0126 13:29:55.619140 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbzqf\" (UniqueName: \"kubernetes.io/projected/e8f8a55a-3a35-4983-9d60-7cb1c8ebae5d-kube-api-access-zbzqf\") pod \"redhat-marketplace-qsmht\" (UID: \"e8f8a55a-3a35-4983-9d60-7cb1c8ebae5d\") " pod="openshift-marketplace/redhat-marketplace-qsmht" Jan 26 13:29:55 crc kubenswrapper[4844]: I0126 13:29:55.619198 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8f8a55a-3a35-4983-9d60-7cb1c8ebae5d-catalog-content\") pod \"redhat-marketplace-qsmht\" (UID: \"e8f8a55a-3a35-4983-9d60-7cb1c8ebae5d\") " pod="openshift-marketplace/redhat-marketplace-qsmht" Jan 26 13:29:55 crc kubenswrapper[4844]: I0126 13:29:55.720311 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8f8a55a-3a35-4983-9d60-7cb1c8ebae5d-utilities\") pod \"redhat-marketplace-qsmht\" (UID: \"e8f8a55a-3a35-4983-9d60-7cb1c8ebae5d\") " pod="openshift-marketplace/redhat-marketplace-qsmht" Jan 26 13:29:55 crc kubenswrapper[4844]: I0126 13:29:55.720372 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbzqf\" (UniqueName: \"kubernetes.io/projected/e8f8a55a-3a35-4983-9d60-7cb1c8ebae5d-kube-api-access-zbzqf\") pod \"redhat-marketplace-qsmht\" (UID: \"e8f8a55a-3a35-4983-9d60-7cb1c8ebae5d\") " pod="openshift-marketplace/redhat-marketplace-qsmht" Jan 26 13:29:55 crc kubenswrapper[4844]: I0126 13:29:55.720421 4844 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8f8a55a-3a35-4983-9d60-7cb1c8ebae5d-catalog-content\") pod \"redhat-marketplace-qsmht\" (UID: \"e8f8a55a-3a35-4983-9d60-7cb1c8ebae5d\") " pod="openshift-marketplace/redhat-marketplace-qsmht" Jan 26 13:29:55 crc kubenswrapper[4844]: I0126 13:29:55.721123 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8f8a55a-3a35-4983-9d60-7cb1c8ebae5d-catalog-content\") pod \"redhat-marketplace-qsmht\" (UID: \"e8f8a55a-3a35-4983-9d60-7cb1c8ebae5d\") " pod="openshift-marketplace/redhat-marketplace-qsmht" Jan 26 13:29:55 crc kubenswrapper[4844]: I0126 13:29:55.721234 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8f8a55a-3a35-4983-9d60-7cb1c8ebae5d-utilities\") pod \"redhat-marketplace-qsmht\" (UID: \"e8f8a55a-3a35-4983-9d60-7cb1c8ebae5d\") " pod="openshift-marketplace/redhat-marketplace-qsmht" Jan 26 13:29:55 crc kubenswrapper[4844]: I0126 13:29:55.738671 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbzqf\" (UniqueName: \"kubernetes.io/projected/e8f8a55a-3a35-4983-9d60-7cb1c8ebae5d-kube-api-access-zbzqf\") pod \"redhat-marketplace-qsmht\" (UID: \"e8f8a55a-3a35-4983-9d60-7cb1c8ebae5d\") " pod="openshift-marketplace/redhat-marketplace-qsmht" Jan 26 13:29:55 crc kubenswrapper[4844]: I0126 13:29:55.824902 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qsmht" Jan 26 13:29:56 crc kubenswrapper[4844]: I0126 13:29:56.312583 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qsmht"] Jan 26 13:29:56 crc kubenswrapper[4844]: W0126 13:29:56.313802 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode8f8a55a_3a35_4983_9d60_7cb1c8ebae5d.slice/crio-5dea2df916e543f64f8991803cfbd0c808417dc6c91158b3128921bf1bd7b9b7 WatchSource:0}: Error finding container 5dea2df916e543f64f8991803cfbd0c808417dc6c91158b3128921bf1bd7b9b7: Status 404 returned error can't find the container with id 5dea2df916e543f64f8991803cfbd0c808417dc6c91158b3128921bf1bd7b9b7 Jan 26 13:29:56 crc kubenswrapper[4844]: I0126 13:29:56.392694 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qsmht" event={"ID":"e8f8a55a-3a35-4983-9d60-7cb1c8ebae5d","Type":"ContainerStarted","Data":"5dea2df916e543f64f8991803cfbd0c808417dc6c91158b3128921bf1bd7b9b7"} Jan 26 13:29:57 crc kubenswrapper[4844]: I0126 13:29:57.406410 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qsmht" event={"ID":"e8f8a55a-3a35-4983-9d60-7cb1c8ebae5d","Type":"ContainerStarted","Data":"d5a8b297b154d2345e09d9d22669ce019ca682f8fa2c1eab3906f3582c081ba8"} Jan 26 13:29:58 crc kubenswrapper[4844]: I0126 13:29:58.050464 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-dcfgm"] Jan 26 13:29:58 crc kubenswrapper[4844]: I0126 13:29:58.070502 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-dcfgm"] Jan 26 13:29:59 crc kubenswrapper[4844]: I0126 13:29:59.065568 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-q74n8"] Jan 26 13:29:59 crc kubenswrapper[4844]: I0126 
13:29:59.084666 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-q74n8"] Jan 26 13:29:59 crc kubenswrapper[4844]: I0126 13:29:59.330995 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bdef7de-9499-45b9-b41e-a59882aa4423" path="/var/lib/kubelet/pods/4bdef7de-9499-45b9-b41e-a59882aa4423/volumes" Jan 26 13:29:59 crc kubenswrapper[4844]: I0126 13:29:59.332178 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f82260f-cde4-4197-8718-d7adebadeddb" path="/var/lib/kubelet/pods/5f82260f-cde4-4197-8718-d7adebadeddb/volumes" Jan 26 13:29:59 crc kubenswrapper[4844]: I0126 13:29:59.431625 4844 generic.go:334] "Generic (PLEG): container finished" podID="e8f8a55a-3a35-4983-9d60-7cb1c8ebae5d" containerID="d5a8b297b154d2345e09d9d22669ce019ca682f8fa2c1eab3906f3582c081ba8" exitCode=0 Jan 26 13:29:59 crc kubenswrapper[4844]: I0126 13:29:59.431677 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qsmht" event={"ID":"e8f8a55a-3a35-4983-9d60-7cb1c8ebae5d","Type":"ContainerDied","Data":"d5a8b297b154d2345e09d9d22669ce019ca682f8fa2c1eab3906f3582c081ba8"} Jan 26 13:30:00 crc kubenswrapper[4844]: I0126 13:30:00.158784 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490570-jn8qj"] Jan 26 13:30:00 crc kubenswrapper[4844]: I0126 13:30:00.160388 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490570-jn8qj" Jan 26 13:30:00 crc kubenswrapper[4844]: I0126 13:30:00.163998 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 26 13:30:00 crc kubenswrapper[4844]: I0126 13:30:00.165431 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 26 13:30:00 crc kubenswrapper[4844]: I0126 13:30:00.167483 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490570-jn8qj"] Jan 26 13:30:00 crc kubenswrapper[4844]: I0126 13:30:00.348728 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/72d96d87-2177-4714-8ca6-e9e4f4192f3b-secret-volume\") pod \"collect-profiles-29490570-jn8qj\" (UID: \"72d96d87-2177-4714-8ca6-e9e4f4192f3b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490570-jn8qj" Jan 26 13:30:00 crc kubenswrapper[4844]: I0126 13:30:00.349107 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t54cw\" (UniqueName: \"kubernetes.io/projected/72d96d87-2177-4714-8ca6-e9e4f4192f3b-kube-api-access-t54cw\") pod \"collect-profiles-29490570-jn8qj\" (UID: \"72d96d87-2177-4714-8ca6-e9e4f4192f3b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490570-jn8qj" Jan 26 13:30:00 crc kubenswrapper[4844]: I0126 13:30:00.349157 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/72d96d87-2177-4714-8ca6-e9e4f4192f3b-config-volume\") pod \"collect-profiles-29490570-jn8qj\" (UID: \"72d96d87-2177-4714-8ca6-e9e4f4192f3b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490570-jn8qj" Jan 26 13:30:00 crc 
kubenswrapper[4844]: I0126 13:30:00.450747 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/72d96d87-2177-4714-8ca6-e9e4f4192f3b-secret-volume\") pod \"collect-profiles-29490570-jn8qj\" (UID: \"72d96d87-2177-4714-8ca6-e9e4f4192f3b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490570-jn8qj" Jan 26 13:30:00 crc kubenswrapper[4844]: I0126 13:30:00.450817 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t54cw\" (UniqueName: \"kubernetes.io/projected/72d96d87-2177-4714-8ca6-e9e4f4192f3b-kube-api-access-t54cw\") pod \"collect-profiles-29490570-jn8qj\" (UID: \"72d96d87-2177-4714-8ca6-e9e4f4192f3b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490570-jn8qj" Jan 26 13:30:00 crc kubenswrapper[4844]: I0126 13:30:00.450860 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/72d96d87-2177-4714-8ca6-e9e4f4192f3b-config-volume\") pod \"collect-profiles-29490570-jn8qj\" (UID: \"72d96d87-2177-4714-8ca6-e9e4f4192f3b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490570-jn8qj" Jan 26 13:30:00 crc kubenswrapper[4844]: I0126 13:30:00.452551 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/72d96d87-2177-4714-8ca6-e9e4f4192f3b-config-volume\") pod \"collect-profiles-29490570-jn8qj\" (UID: \"72d96d87-2177-4714-8ca6-e9e4f4192f3b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490570-jn8qj" Jan 26 13:30:00 crc kubenswrapper[4844]: I0126 13:30:00.461226 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/72d96d87-2177-4714-8ca6-e9e4f4192f3b-secret-volume\") pod \"collect-profiles-29490570-jn8qj\" (UID: \"72d96d87-2177-4714-8ca6-e9e4f4192f3b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490570-jn8qj" Jan 26 13:30:00 crc kubenswrapper[4844]: I0126 13:30:00.470254 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t54cw\" (UniqueName: \"kubernetes.io/projected/72d96d87-2177-4714-8ca6-e9e4f4192f3b-kube-api-access-t54cw\") pod \"collect-profiles-29490570-jn8qj\" (UID: \"72d96d87-2177-4714-8ca6-e9e4f4192f3b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490570-jn8qj" Jan 26 13:30:00 crc kubenswrapper[4844]: I0126 13:30:00.488533 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490570-jn8qj" Jan 26 13:30:00 crc kubenswrapper[4844]: W0126 13:30:00.995573 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod72d96d87_2177_4714_8ca6_e9e4f4192f3b.slice/crio-f07ce6ae38500b526c826c310c6629c76e2f4d467f6ddc226c3818283d17ff39 WatchSource:0}: Error finding container f07ce6ae38500b526c826c310c6629c76e2f4d467f6ddc226c3818283d17ff39: Status 404 returned error can't find the container with id f07ce6ae38500b526c826c310c6629c76e2f4d467f6ddc226c3818283d17ff39 Jan 26 13:30:00 crc kubenswrapper[4844]: I0126 13:30:00.996290 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490570-jn8qj"] Jan 26 13:30:01 crc kubenswrapper[4844]: I0126 13:30:01.453312 4844 generic.go:334] "Generic (PLEG): container finished" podID="72d96d87-2177-4714-8ca6-e9e4f4192f3b" containerID="01fcb8f1b34b695ddc0e349c4093834025a5fa9a9b9c2aa13f5cbdd436b18671" exitCode=0 Jan 26 13:30:01 crc kubenswrapper[4844]: I0126 13:30:01.453434 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490570-jn8qj" event={"ID":"72d96d87-2177-4714-8ca6-e9e4f4192f3b","Type":"ContainerDied","Data":"01fcb8f1b34b695ddc0e349c4093834025a5fa9a9b9c2aa13f5cbdd436b18671"} Jan 26 13:30:01 crc kubenswrapper[4844]: I0126 13:30:01.453840 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490570-jn8qj" event={"ID":"72d96d87-2177-4714-8ca6-e9e4f4192f3b","Type":"ContainerStarted","Data":"f07ce6ae38500b526c826c310c6629c76e2f4d467f6ddc226c3818283d17ff39"} Jan 26 13:30:01 crc kubenswrapper[4844]: I0126 13:30:01.456147 4844 generic.go:334] "Generic (PLEG): container finished" podID="e8f8a55a-3a35-4983-9d60-7cb1c8ebae5d" containerID="47ebbedfbe4f15550bc0f716cc364c7ae9d313b298c002fb5fca4e20a14abdfd" exitCode=0 Jan 26 13:30:01 crc kubenswrapper[4844]: I0126 13:30:01.456198 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qsmht" event={"ID":"e8f8a55a-3a35-4983-9d60-7cb1c8ebae5d","Type":"ContainerDied","Data":"47ebbedfbe4f15550bc0f716cc364c7ae9d313b298c002fb5fca4e20a14abdfd"} Jan 26 13:30:01 crc kubenswrapper[4844]: E0126 13:30:01.538085 4844 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod72d96d87_2177_4714_8ca6_e9e4f4192f3b.slice/crio-01fcb8f1b34b695ddc0e349c4093834025a5fa9a9b9c2aa13f5cbdd436b18671.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod72d96d87_2177_4714_8ca6_e9e4f4192f3b.slice/crio-conmon-01fcb8f1b34b695ddc0e349c4093834025a5fa9a9b9c2aa13f5cbdd436b18671.scope\": RecentStats: unable to find data in memory cache]" Jan 26 13:30:02 crc kubenswrapper[4844]: I0126 13:30:02.895422 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490570-jn8qj" Jan 26 13:30:03 crc kubenswrapper[4844]: I0126 13:30:03.011948 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t54cw\" (UniqueName: \"kubernetes.io/projected/72d96d87-2177-4714-8ca6-e9e4f4192f3b-kube-api-access-t54cw\") pod \"72d96d87-2177-4714-8ca6-e9e4f4192f3b\" (UID: \"72d96d87-2177-4714-8ca6-e9e4f4192f3b\") " Jan 26 13:30:03 crc kubenswrapper[4844]: I0126 13:30:03.012103 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/72d96d87-2177-4714-8ca6-e9e4f4192f3b-secret-volume\") pod \"72d96d87-2177-4714-8ca6-e9e4f4192f3b\" (UID: \"72d96d87-2177-4714-8ca6-e9e4f4192f3b\") " Jan 26 13:30:03 crc kubenswrapper[4844]: I0126 13:30:03.012226 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/72d96d87-2177-4714-8ca6-e9e4f4192f3b-config-volume\") pod \"72d96d87-2177-4714-8ca6-e9e4f4192f3b\" (UID: \"72d96d87-2177-4714-8ca6-e9e4f4192f3b\") " Jan 26 13:30:03 crc kubenswrapper[4844]: I0126 13:30:03.013269 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/72d96d87-2177-4714-8ca6-e9e4f4192f3b-config-volume" (OuterVolumeSpecName: "config-volume") pod "72d96d87-2177-4714-8ca6-e9e4f4192f3b" (UID: "72d96d87-2177-4714-8ca6-e9e4f4192f3b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:30:03 crc kubenswrapper[4844]: I0126 13:30:03.021290 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72d96d87-2177-4714-8ca6-e9e4f4192f3b-kube-api-access-t54cw" (OuterVolumeSpecName: "kube-api-access-t54cw") pod "72d96d87-2177-4714-8ca6-e9e4f4192f3b" (UID: "72d96d87-2177-4714-8ca6-e9e4f4192f3b"). InnerVolumeSpecName "kube-api-access-t54cw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:30:03 crc kubenswrapper[4844]: I0126 13:30:03.022380 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72d96d87-2177-4714-8ca6-e9e4f4192f3b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "72d96d87-2177-4714-8ca6-e9e4f4192f3b" (UID: "72d96d87-2177-4714-8ca6-e9e4f4192f3b"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:30:03 crc kubenswrapper[4844]: I0126 13:30:03.029714 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-9jq8s"] Jan 26 13:30:03 crc kubenswrapper[4844]: I0126 13:30:03.043131 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-9jq8s"] Jan 26 13:30:03 crc kubenswrapper[4844]: I0126 13:30:03.114659 4844 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/72d96d87-2177-4714-8ca6-e9e4f4192f3b-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 13:30:03 crc kubenswrapper[4844]: I0126 13:30:03.114701 4844 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/72d96d87-2177-4714-8ca6-e9e4f4192f3b-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 13:30:03 crc kubenswrapper[4844]: I0126 13:30:03.114715 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t54cw\" (UniqueName: \"kubernetes.io/projected/72d96d87-2177-4714-8ca6-e9e4f4192f3b-kube-api-access-t54cw\") on node \"crc\" DevicePath \"\"" Jan 26 13:30:03 crc kubenswrapper[4844]: I0126 13:30:03.330426 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce0ed764-c6f0-4580-89dd-4f6826df258d" path="/var/lib/kubelet/pods/ce0ed764-c6f0-4580-89dd-4f6826df258d/volumes" Jan 26 13:30:03 crc kubenswrapper[4844]: I0126 13:30:03.483508 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490570-jn8qj" event={"ID":"72d96d87-2177-4714-8ca6-e9e4f4192f3b","Type":"ContainerDied","Data":"f07ce6ae38500b526c826c310c6629c76e2f4d467f6ddc226c3818283d17ff39"} Jan 26 13:30:03 crc kubenswrapper[4844]: I0126 13:30:03.483549 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490570-jn8qj" Jan 26 13:30:03 crc kubenswrapper[4844]: I0126 13:30:03.483574 4844 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f07ce6ae38500b526c826c310c6629c76e2f4d467f6ddc226c3818283d17ff39" Jan 26 13:30:03 crc kubenswrapper[4844]: I0126 13:30:03.488429 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qsmht" event={"ID":"e8f8a55a-3a35-4983-9d60-7cb1c8ebae5d","Type":"ContainerStarted","Data":"f75338bf56ad26738c5d111ab612725b68b3a1bcdb2ed41b57b539b648c2b6bf"} Jan 26 13:30:03 crc kubenswrapper[4844]: I0126 13:30:03.511718 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qsmht" podStartSLOduration=5.419730641 podStartE2EDuration="8.511702558s" podCreationTimestamp="2026-01-26 13:29:55 +0000 UTC" firstStartedPulling="2026-01-26 13:29:59.435738064 +0000 UTC m=+2776.369105676" lastFinishedPulling="2026-01-26 13:30:02.527709981 +0000 UTC m=+2779.461077593" observedRunningTime="2026-01-26 13:30:03.50684532 +0000 UTC m=+2780.440212952" watchObservedRunningTime="2026-01-26 13:30:03.511702558 +0000 UTC m=+2780.445070170" Jan 26 13:30:03 crc kubenswrapper[4844]: I0126 13:30:03.971472 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490525-mqbpl"] Jan 26 13:30:03 crc kubenswrapper[4844]: I0126 13:30:03.983168 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490525-mqbpl"] Jan 26 13:30:05 crc kubenswrapper[4844]: I0126 13:30:05.327412 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b95a697-eeb9-444d-83ed-3484a41f5dd1" path="/var/lib/kubelet/pods/0b95a697-eeb9-444d-83ed-3484a41f5dd1/volumes" Jan 26 13:30:05 crc kubenswrapper[4844]: I0126 13:30:05.825465 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qsmht" Jan 26 13:30:05 crc kubenswrapper[4844]: I0126 13:30:05.825821 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qsmht" Jan 26 13:30:05 crc kubenswrapper[4844]: I0126 13:30:05.891414 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qsmht" Jan 26 13:30:07 crc kubenswrapper[4844]: I0126 13:30:07.539747 4844 generic.go:334] "Generic (PLEG): container finished" podID="27022163-5166-48e2-afc4-e984baa40303" containerID="a8320984348f6efcf22b671ac1af115de55ff6ab25b6517ddf30e63a5a182ab8" exitCode=0 Jan 26 13:30:07 crc kubenswrapper[4844]: I0126 13:30:07.539823 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gkkmx" event={"ID":"27022163-5166-48e2-afc4-e984baa40303","Type":"ContainerDied","Data":"a8320984348f6efcf22b671ac1af115de55ff6ab25b6517ddf30e63a5a182ab8"} Jan 26 13:30:07 crc kubenswrapper[4844]: I0126 13:30:07.630438 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qsmht" Jan 26 13:30:07 crc kubenswrapper[4844]: I0126 13:30:07.707989 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qsmht"] Jan 26 13:30:09 crc kubenswrapper[4844]: I0126 13:30:09.062861 4844 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gkkmx" Jan 26 13:30:09 crc kubenswrapper[4844]: I0126 13:30:09.140294 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/27022163-5166-48e2-afc4-e984baa40303-ssh-key-openstack-edpm-ipam\") pod \"27022163-5166-48e2-afc4-e984baa40303\" (UID: \"27022163-5166-48e2-afc4-e984baa40303\") " Jan 26 13:30:09 crc kubenswrapper[4844]: I0126 13:30:09.140365 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zlrx8\" (UniqueName: \"kubernetes.io/projected/27022163-5166-48e2-afc4-e984baa40303-kube-api-access-zlrx8\") pod \"27022163-5166-48e2-afc4-e984baa40303\" (UID: \"27022163-5166-48e2-afc4-e984baa40303\") " Jan 26 13:30:09 crc kubenswrapper[4844]: I0126 13:30:09.140399 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/27022163-5166-48e2-afc4-e984baa40303-inventory\") pod \"27022163-5166-48e2-afc4-e984baa40303\" (UID: \"27022163-5166-48e2-afc4-e984baa40303\") " Jan 26 13:30:09 crc kubenswrapper[4844]: I0126 13:30:09.151013 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27022163-5166-48e2-afc4-e984baa40303-kube-api-access-zlrx8" (OuterVolumeSpecName: "kube-api-access-zlrx8") pod "27022163-5166-48e2-afc4-e984baa40303" (UID: "27022163-5166-48e2-afc4-e984baa40303"). InnerVolumeSpecName "kube-api-access-zlrx8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:30:09 crc kubenswrapper[4844]: I0126 13:30:09.170381 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27022163-5166-48e2-afc4-e984baa40303-inventory" (OuterVolumeSpecName: "inventory") pod "27022163-5166-48e2-afc4-e984baa40303" (UID: "27022163-5166-48e2-afc4-e984baa40303"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:30:09 crc kubenswrapper[4844]: I0126 13:30:09.188548 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27022163-5166-48e2-afc4-e984baa40303-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "27022163-5166-48e2-afc4-e984baa40303" (UID: "27022163-5166-48e2-afc4-e984baa40303"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:30:09 crc kubenswrapper[4844]: I0126 13:30:09.242584 4844 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/27022163-5166-48e2-afc4-e984baa40303-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 13:30:09 crc kubenswrapper[4844]: I0126 13:30:09.242635 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zlrx8\" (UniqueName: \"kubernetes.io/projected/27022163-5166-48e2-afc4-e984baa40303-kube-api-access-zlrx8\") on node \"crc\" DevicePath \"\"" Jan 26 13:30:09 crc kubenswrapper[4844]: I0126 13:30:09.242647 4844 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/27022163-5166-48e2-afc4-e984baa40303-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 13:30:09 crc kubenswrapper[4844]: I0126 13:30:09.563066 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gkkmx" event={"ID":"27022163-5166-48e2-afc4-e984baa40303","Type":"ContainerDied","Data":"223254dd980111602c6aeab038aa30d3b79ae73d6d120f2a8e98573323896ffc"} Jan 26 13:30:09 crc kubenswrapper[4844]: I0126 13:30:09.563124 4844 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="223254dd980111602c6aeab038aa30d3b79ae73d6d120f2a8e98573323896ffc" Jan 26 13:30:09 crc kubenswrapper[4844]: I0126 13:30:09.563092 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gkkmx" Jan 26 13:30:09 crc kubenswrapper[4844]: I0126 13:30:09.563224 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-qsmht" podUID="e8f8a55a-3a35-4983-9d60-7cb1c8ebae5d" containerName="registry-server" containerID="cri-o://f75338bf56ad26738c5d111ab612725b68b3a1bcdb2ed41b57b539b648c2b6bf" gracePeriod=2 Jan 26 13:30:09 crc kubenswrapper[4844]: I0126 13:30:09.649943 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sjwwh"] Jan 26 13:30:09 crc kubenswrapper[4844]: E0126 13:30:09.650409 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72d96d87-2177-4714-8ca6-e9e4f4192f3b" containerName="collect-profiles" Jan 26 13:30:09 crc kubenswrapper[4844]: I0126 13:30:09.650427 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="72d96d87-2177-4714-8ca6-e9e4f4192f3b" containerName="collect-profiles" Jan 26 13:30:09 crc kubenswrapper[4844]: E0126 13:30:09.650444 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27022163-5166-48e2-afc4-e984baa40303" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 26 13:30:09 crc kubenswrapper[4844]: I0126 13:30:09.650451 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="27022163-5166-48e2-afc4-e984baa40303" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 26 13:30:09 crc kubenswrapper[4844]: I0126 13:30:09.650675 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="72d96d87-2177-4714-8ca6-e9e4f4192f3b" containerName="collect-profiles" Jan 26 13:30:09 crc kubenswrapper[4844]: I0126 13:30:09.650697 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="27022163-5166-48e2-afc4-e984baa40303" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 26 13:30:09 crc 
kubenswrapper[4844]: I0126 13:30:09.651382 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sjwwh" Jan 26 13:30:09 crc kubenswrapper[4844]: I0126 13:30:09.654364 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 13:30:09 crc kubenswrapper[4844]: I0126 13:30:09.654571 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 13:30:09 crc kubenswrapper[4844]: I0126 13:30:09.654705 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r4j2z" Jan 26 13:30:09 crc kubenswrapper[4844]: I0126 13:30:09.654893 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 13:30:09 crc kubenswrapper[4844]: I0126 13:30:09.661737 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sjwwh"] Jan 26 13:30:09 crc kubenswrapper[4844]: I0126 13:30:09.754901 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/174270d5-d84e-4b4c-8602-31e455da67db-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-sjwwh\" (UID: \"174270d5-d84e-4b4c-8602-31e455da67db\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sjwwh" Jan 26 13:30:09 crc kubenswrapper[4844]: I0126 13:30:09.755029 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/174270d5-d84e-4b4c-8602-31e455da67db-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-sjwwh\" (UID: \"174270d5-d84e-4b4c-8602-31e455da67db\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sjwwh" Jan 26 13:30:09 crc kubenswrapper[4844]: I0126 13:30:09.755169 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6lk5\" (UniqueName: \"kubernetes.io/projected/174270d5-d84e-4b4c-8602-31e455da67db-kube-api-access-f6lk5\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-sjwwh\" (UID: \"174270d5-d84e-4b4c-8602-31e455da67db\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sjwwh" Jan 26 13:30:09 crc kubenswrapper[4844]: I0126 13:30:09.856586 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6lk5\" (UniqueName: \"kubernetes.io/projected/174270d5-d84e-4b4c-8602-31e455da67db-kube-api-access-f6lk5\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-sjwwh\" (UID: \"174270d5-d84e-4b4c-8602-31e455da67db\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sjwwh" Jan 26 13:30:09 crc kubenswrapper[4844]: I0126 13:30:09.857031 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/174270d5-d84e-4b4c-8602-31e455da67db-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-sjwwh\" (UID: \"174270d5-d84e-4b4c-8602-31e455da67db\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sjwwh" Jan 26 13:30:09 crc kubenswrapper[4844]: I0126 13:30:09.857115 4844 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/174270d5-d84e-4b4c-8602-31e455da67db-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-sjwwh\" (UID: \"174270d5-d84e-4b4c-8602-31e455da67db\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sjwwh" Jan 26 13:30:09 crc kubenswrapper[4844]: I0126 13:30:09.865377 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/174270d5-d84e-4b4c-8602-31e455da67db-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-sjwwh\" (UID: \"174270d5-d84e-4b4c-8602-31e455da67db\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sjwwh" Jan 26 13:30:09 crc kubenswrapper[4844]: I0126 13:30:09.867300 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/174270d5-d84e-4b4c-8602-31e455da67db-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-sjwwh\" (UID: \"174270d5-d84e-4b4c-8602-31e455da67db\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sjwwh" Jan 26 13:30:09 crc kubenswrapper[4844]: I0126 13:30:09.874548 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6lk5\" (UniqueName: \"kubernetes.io/projected/174270d5-d84e-4b4c-8602-31e455da67db-kube-api-access-f6lk5\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-sjwwh\" (UID: \"174270d5-d84e-4b4c-8602-31e455da67db\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sjwwh" Jan 26 13:30:10 crc kubenswrapper[4844]: I0126 13:30:10.038444 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sjwwh" Jan 26 13:30:10 crc kubenswrapper[4844]: I0126 13:30:10.053204 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qsmht" Jan 26 13:30:10 crc kubenswrapper[4844]: I0126 13:30:10.165583 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8f8a55a-3a35-4983-9d60-7cb1c8ebae5d-utilities\") pod \"e8f8a55a-3a35-4983-9d60-7cb1c8ebae5d\" (UID: \"e8f8a55a-3a35-4983-9d60-7cb1c8ebae5d\") " Jan 26 13:30:10 crc kubenswrapper[4844]: I0126 13:30:10.165904 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zbzqf\" (UniqueName: \"kubernetes.io/projected/e8f8a55a-3a35-4983-9d60-7cb1c8ebae5d-kube-api-access-zbzqf\") pod \"e8f8a55a-3a35-4983-9d60-7cb1c8ebae5d\" (UID: \"e8f8a55a-3a35-4983-9d60-7cb1c8ebae5d\") " Jan 26 13:30:10 crc kubenswrapper[4844]: I0126 13:30:10.165942 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8f8a55a-3a35-4983-9d60-7cb1c8ebae5d-catalog-content\") pod \"e8f8a55a-3a35-4983-9d60-7cb1c8ebae5d\" (UID: \"e8f8a55a-3a35-4983-9d60-7cb1c8ebae5d\") " Jan 26 13:30:10 crc kubenswrapper[4844]: I0126 13:30:10.168788 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e8f8a55a-3a35-4983-9d60-7cb1c8ebae5d-utilities" (OuterVolumeSpecName: "utilities") pod "e8f8a55a-3a35-4983-9d60-7cb1c8ebae5d" (UID: "e8f8a55a-3a35-4983-9d60-7cb1c8ebae5d"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:30:10 crc kubenswrapper[4844]: I0126 13:30:10.172859 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8f8a55a-3a35-4983-9d60-7cb1c8ebae5d-kube-api-access-zbzqf" (OuterVolumeSpecName: "kube-api-access-zbzqf") pod "e8f8a55a-3a35-4983-9d60-7cb1c8ebae5d" (UID: "e8f8a55a-3a35-4983-9d60-7cb1c8ebae5d"). InnerVolumeSpecName "kube-api-access-zbzqf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:30:10 crc kubenswrapper[4844]: I0126 13:30:10.202310 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e8f8a55a-3a35-4983-9d60-7cb1c8ebae5d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e8f8a55a-3a35-4983-9d60-7cb1c8ebae5d" (UID: "e8f8a55a-3a35-4983-9d60-7cb1c8ebae5d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:30:10 crc kubenswrapper[4844]: I0126 13:30:10.269046 4844 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8f8a55a-3a35-4983-9d60-7cb1c8ebae5d-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 13:30:10 crc kubenswrapper[4844]: I0126 13:30:10.269088 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zbzqf\" (UniqueName: \"kubernetes.io/projected/e8f8a55a-3a35-4983-9d60-7cb1c8ebae5d-kube-api-access-zbzqf\") on node \"crc\" DevicePath \"\"" Jan 26 13:30:10 crc kubenswrapper[4844]: I0126 13:30:10.269102 4844 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8f8a55a-3a35-4983-9d60-7cb1c8ebae5d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 13:30:10 crc kubenswrapper[4844]: I0126 13:30:10.584076 4844 generic.go:334] "Generic (PLEG): container finished" podID="e8f8a55a-3a35-4983-9d60-7cb1c8ebae5d" containerID="f75338bf56ad26738c5d111ab612725b68b3a1bcdb2ed41b57b539b648c2b6bf" exitCode=0 Jan 26 13:30:10 crc kubenswrapper[4844]: I0126 13:30:10.584126 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qsmht" event={"ID":"e8f8a55a-3a35-4983-9d60-7cb1c8ebae5d","Type":"ContainerDied","Data":"f75338bf56ad26738c5d111ab612725b68b3a1bcdb2ed41b57b539b648c2b6bf"} Jan 26 13:30:10 crc kubenswrapper[4844]: I0126 13:30:10.584158 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qsmht" event={"ID":"e8f8a55a-3a35-4983-9d60-7cb1c8ebae5d","Type":"ContainerDied","Data":"5dea2df916e543f64f8991803cfbd0c808417dc6c91158b3128921bf1bd7b9b7"} Jan 26 13:30:10 crc kubenswrapper[4844]: I0126 13:30:10.584179 4844 scope.go:117] "RemoveContainer" containerID="f75338bf56ad26738c5d111ab612725b68b3a1bcdb2ed41b57b539b648c2b6bf" Jan 26 13:30:10 crc kubenswrapper[4844]: I0126 13:30:10.584393 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qsmht" Jan 26 13:30:10 crc kubenswrapper[4844]: I0126 13:30:10.601332 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sjwwh"] Jan 26 13:30:10 crc kubenswrapper[4844]: I0126 13:30:10.622272 4844 scope.go:117] "RemoveContainer" containerID="47ebbedfbe4f15550bc0f716cc364c7ae9d313b298c002fb5fca4e20a14abdfd" Jan 26 13:30:10 crc kubenswrapper[4844]: I0126 13:30:10.646766 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qsmht"] Jan 26 13:30:10 crc kubenswrapper[4844]: I0126 13:30:10.674141 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-qsmht"] Jan 26 13:30:10 crc kubenswrapper[4844]: I0126 13:30:10.676025 4844 scope.go:117] "RemoveContainer" containerID="d5a8b297b154d2345e09d9d22669ce019ca682f8fa2c1eab3906f3582c081ba8" Jan 26 13:30:10 crc kubenswrapper[4844]: I0126 13:30:10.701648 4844 scope.go:117] "RemoveContainer" containerID="f75338bf56ad26738c5d111ab612725b68b3a1bcdb2ed41b57b539b648c2b6bf" Jan 26 13:30:10 crc kubenswrapper[4844]: E0126 13:30:10.702272 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f75338bf56ad26738c5d111ab612725b68b3a1bcdb2ed41b57b539b648c2b6bf\": container with ID starting with f75338bf56ad26738c5d111ab612725b68b3a1bcdb2ed41b57b539b648c2b6bf not found: ID does not exist" containerID="f75338bf56ad26738c5d111ab612725b68b3a1bcdb2ed41b57b539b648c2b6bf" Jan 26 13:30:10 crc kubenswrapper[4844]: I0126 13:30:10.702346 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f75338bf56ad26738c5d111ab612725b68b3a1bcdb2ed41b57b539b648c2b6bf"} err="failed to get container status \"f75338bf56ad26738c5d111ab612725b68b3a1bcdb2ed41b57b539b648c2b6bf\": rpc error: code = NotFound desc = could not find container \"f75338bf56ad26738c5d111ab612725b68b3a1bcdb2ed41b57b539b648c2b6bf\": container with ID starting with f75338bf56ad26738c5d111ab612725b68b3a1bcdb2ed41b57b539b648c2b6bf not found: ID does not exist" Jan 26 13:30:10 crc kubenswrapper[4844]: I0126 13:30:10.702388 4844 scope.go:117] "RemoveContainer" containerID="47ebbedfbe4f15550bc0f716cc364c7ae9d313b298c002fb5fca4e20a14abdfd" Jan 26 13:30:10 crc kubenswrapper[4844]: E0126 13:30:10.703026 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47ebbedfbe4f15550bc0f716cc364c7ae9d313b298c002fb5fca4e20a14abdfd\": container with ID starting with 47ebbedfbe4f15550bc0f716cc364c7ae9d313b298c002fb5fca4e20a14abdfd not found: ID does not exist" containerID="47ebbedfbe4f15550bc0f716cc364c7ae9d313b298c002fb5fca4e20a14abdfd" Jan 26 13:30:10 crc kubenswrapper[4844]: I0126 13:30:10.703080 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47ebbedfbe4f15550bc0f716cc364c7ae9d313b298c002fb5fca4e20a14abdfd"} err="failed to get container status \"47ebbedfbe4f15550bc0f716cc364c7ae9d313b298c002fb5fca4e20a14abdfd\": rpc error: code = NotFound desc = could not find container \"47ebbedfbe4f15550bc0f716cc364c7ae9d313b298c002fb5fca4e20a14abdfd\": container with ID starting with 47ebbedfbe4f15550bc0f716cc364c7ae9d313b298c002fb5fca4e20a14abdfd not found: ID does not exist" Jan 26 13:30:10 crc kubenswrapper[4844]: I0126 13:30:10.703120 4844 scope.go:117] "RemoveContainer" 
containerID="d5a8b297b154d2345e09d9d22669ce019ca682f8fa2c1eab3906f3582c081ba8" Jan 26 13:30:10 crc kubenswrapper[4844]: E0126 13:30:10.703678 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d5a8b297b154d2345e09d9d22669ce019ca682f8fa2c1eab3906f3582c081ba8\": container with ID starting with d5a8b297b154d2345e09d9d22669ce019ca682f8fa2c1eab3906f3582c081ba8 not found: ID does not exist" containerID="d5a8b297b154d2345e09d9d22669ce019ca682f8fa2c1eab3906f3582c081ba8" Jan 26 13:30:10 crc kubenswrapper[4844]: I0126 13:30:10.703707 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d5a8b297b154d2345e09d9d22669ce019ca682f8fa2c1eab3906f3582c081ba8"} err="failed to get container status \"d5a8b297b154d2345e09d9d22669ce019ca682f8fa2c1eab3906f3582c081ba8\": rpc error: code = NotFound desc = could not find container \"d5a8b297b154d2345e09d9d22669ce019ca682f8fa2c1eab3906f3582c081ba8\": container with ID starting with d5a8b297b154d2345e09d9d22669ce019ca682f8fa2c1eab3906f3582c081ba8 not found: ID does not exist" Jan 26 13:30:11 crc kubenswrapper[4844]: I0126 13:30:11.335710 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8f8a55a-3a35-4983-9d60-7cb1c8ebae5d" path="/var/lib/kubelet/pods/e8f8a55a-3a35-4983-9d60-7cb1c8ebae5d/volumes" Jan 26 13:30:11 crc kubenswrapper[4844]: I0126 13:30:11.602093 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sjwwh" event={"ID":"174270d5-d84e-4b4c-8602-31e455da67db","Type":"ContainerStarted","Data":"d584ff5752c8f2b5f4a4b7a70ef1cd6654fccb4a3e1dff06132ec5725c642a74"} Jan 26 13:30:11 crc kubenswrapper[4844]: I0126 13:30:11.602447 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sjwwh" event={"ID":"174270d5-d84e-4b4c-8602-31e455da67db","Type":"ContainerStarted","Data":"faa9ee3595e5a3cbc3b1e16054b876f74c1803967a814a081f1bc2ae68f48a58"} Jan 26 13:30:11 crc kubenswrapper[4844]: I0126 13:30:11.633847 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sjwwh" podStartSLOduration=1.888831602 podStartE2EDuration="2.633817786s" podCreationTimestamp="2026-01-26 13:30:09 +0000 UTC" firstStartedPulling="2026-01-26 13:30:10.599111582 +0000 UTC m=+2787.532479224" lastFinishedPulling="2026-01-26 13:30:11.344097786 +0000 UTC m=+2788.277465408" observedRunningTime="2026-01-26 13:30:11.622709327 +0000 UTC m=+2788.556076969" watchObservedRunningTime="2026-01-26 13:30:11.633817786 +0000 UTC m=+2788.567185418" Jan 26 13:30:36 crc kubenswrapper[4844]: I0126 13:30:36.135424 4844 scope.go:117] "RemoveContainer" containerID="61e9961bff931182a8012ad8856adbf430f38dc7f5ddea2b78bd38ec3bc96a2b" Jan 26 13:30:36 crc kubenswrapper[4844]: I0126 13:30:36.176284 4844 scope.go:117] "RemoveContainer" containerID="2949d309e80d3a15df54de2b1eef2a3f1d14c1d816a1ac2a78e45f1b801c0ae9" Jan 26 13:30:36 crc kubenswrapper[4844]: I0126 13:30:36.236311 4844 scope.go:117] "RemoveContainer" containerID="e46349bcce0b54334384e3d03bad2749ab306c1b6ca6446909a73481cb61b1fe" Jan 26 13:30:36 crc kubenswrapper[4844]: I0126 13:30:36.340259 4844 scope.go:117] "RemoveContainer" containerID="1b85fee309ae0e4dbc8b160f74806d6d702e7676b68d662560a47c021cd5f8a1" Jan 26 13:30:36 crc kubenswrapper[4844]: I0126 13:30:36.382914 4844 scope.go:117] "RemoveContainer" 
containerID="5077f0e26a12144f58d459cbf7f199370b10cdd16c8f8cfa2de83245276a6c35" Jan 26 13:30:36 crc kubenswrapper[4844]: I0126 13:30:36.411723 4844 scope.go:117] "RemoveContainer" containerID="174c56e0839b5e5dce7465d4fb7c8f05272878d2f83732f894eaf8713e0f80db" Jan 26 13:30:36 crc kubenswrapper[4844]: I0126 13:30:36.468552 4844 scope.go:117] "RemoveContainer" containerID="e691abdd8667adb115d62dd072d4441593a9750fc8e01125dc49f5b64d4a7274" Jan 26 13:30:41 crc kubenswrapper[4844]: I0126 13:30:41.061787 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0030-account-create-update-7kfzr"] Jan 26 13:30:41 crc kubenswrapper[4844]: I0126 13:30:41.073852 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-xg8dt"] Jan 26 13:30:41 crc kubenswrapper[4844]: I0126 13:30:41.087146 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-283b-account-create-update-9wvm2"] Jan 26 13:30:41 crc kubenswrapper[4844]: I0126 13:30:41.095289 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0030-account-create-update-7kfzr"] Jan 26 13:30:41 crc kubenswrapper[4844]: I0126 13:30:41.103258 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-tmtvd"] Jan 26 13:30:41 crc kubenswrapper[4844]: I0126 13:30:41.111003 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-283b-account-create-update-9wvm2"] Jan 26 13:30:41 crc kubenswrapper[4844]: I0126 13:30:41.118369 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-xg8dt"] Jan 26 13:30:41 crc kubenswrapper[4844]: I0126 13:30:41.125378 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-tmtvd"] Jan 26 13:30:41 crc kubenswrapper[4844]: I0126 13:30:41.341797 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="128a7603-8c83-4c8f-8484-031abaa6bc9a" path="/var/lib/kubelet/pods/128a7603-8c83-4c8f-8484-031abaa6bc9a/volumes" Jan 26 13:30:41 crc kubenswrapper[4844]: I0126 13:30:41.343015 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45ab14b0-33a9-4364-a552-16b57b9826c5" path="/var/lib/kubelet/pods/45ab14b0-33a9-4364-a552-16b57b9826c5/volumes" Jan 26 13:30:41 crc kubenswrapper[4844]: I0126 13:30:41.343733 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1e6f9c3-de48-4504-9b94-bbabcc87fc45" path="/var/lib/kubelet/pods/e1e6f9c3-de48-4504-9b94-bbabcc87fc45/volumes" Jan 26 13:30:41 crc kubenswrapper[4844]: I0126 13:30:41.344392 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2f773df-1a60-4d98-aaf9-25edd517e2e7" path="/var/lib/kubelet/pods/f2f773df-1a60-4d98-aaf9-25edd517e2e7/volumes" Jan 26 13:30:42 crc kubenswrapper[4844]: I0126 13:30:42.030843 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-b7qvz"] Jan 26 13:30:42 crc kubenswrapper[4844]: I0126 13:30:42.041706 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-d54f-account-create-update-vkjxw"] Jan 26 13:30:42 crc kubenswrapper[4844]: I0126 13:30:42.052989 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-d54f-account-create-update-vkjxw"] Jan 26 13:30:42 crc kubenswrapper[4844]: I0126 13:30:42.062963 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-b7qvz"] Jan 26 13:30:43 crc kubenswrapper[4844]: I0126 13:30:43.333459 4844 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="350afd25-a535-4c5c-9b45-85b457255769" path="/var/lib/kubelet/pods/350afd25-a535-4c5c-9b45-85b457255769/volumes" Jan 26 13:30:43 crc kubenswrapper[4844]: I0126 13:30:43.334855 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3c75b85-b9e8-4d45-93de-018fa9e10eb8" path="/var/lib/kubelet/pods/d3c75b85-b9e8-4d45-93de-018fa9e10eb8/volumes" Jan 26 13:31:20 crc kubenswrapper[4844]: I0126 13:31:20.049257 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-zzp9q"] Jan 26 13:31:20 crc kubenswrapper[4844]: I0126 13:31:20.061024 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-zzp9q"] Jan 26 13:31:21 crc kubenswrapper[4844]: I0126 13:31:21.334390 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe51c360-570b-4e53-9594-271a306efe47" path="/var/lib/kubelet/pods/fe51c360-570b-4e53-9594-271a306efe47/volumes" Jan 26 13:31:29 crc kubenswrapper[4844]: I0126 13:31:29.502784 4844 generic.go:334] "Generic (PLEG): container finished" podID="174270d5-d84e-4b4c-8602-31e455da67db" containerID="d584ff5752c8f2b5f4a4b7a70ef1cd6654fccb4a3e1dff06132ec5725c642a74" exitCode=0 Jan 26 13:31:29 crc kubenswrapper[4844]: I0126 13:31:29.502930 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sjwwh" event={"ID":"174270d5-d84e-4b4c-8602-31e455da67db","Type":"ContainerDied","Data":"d584ff5752c8f2b5f4a4b7a70ef1cd6654fccb4a3e1dff06132ec5725c642a74"} Jan 26 13:31:30 crc kubenswrapper[4844]: I0126 13:31:30.935486 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sjwwh" Jan 26 13:31:31 crc kubenswrapper[4844]: I0126 13:31:31.072865 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/174270d5-d84e-4b4c-8602-31e455da67db-ssh-key-openstack-edpm-ipam\") pod \"174270d5-d84e-4b4c-8602-31e455da67db\" (UID: \"174270d5-d84e-4b4c-8602-31e455da67db\") " Jan 26 13:31:31 crc kubenswrapper[4844]: I0126 13:31:31.072907 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f6lk5\" (UniqueName: \"kubernetes.io/projected/174270d5-d84e-4b4c-8602-31e455da67db-kube-api-access-f6lk5\") pod \"174270d5-d84e-4b4c-8602-31e455da67db\" (UID: \"174270d5-d84e-4b4c-8602-31e455da67db\") " Jan 26 13:31:31 crc kubenswrapper[4844]: I0126 13:31:31.073031 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/174270d5-d84e-4b4c-8602-31e455da67db-inventory\") pod \"174270d5-d84e-4b4c-8602-31e455da67db\" (UID: \"174270d5-d84e-4b4c-8602-31e455da67db\") " Jan 26 13:31:31 crc kubenswrapper[4844]: I0126 13:31:31.080865 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/174270d5-d84e-4b4c-8602-31e455da67db-kube-api-access-f6lk5" (OuterVolumeSpecName: "kube-api-access-f6lk5") pod "174270d5-d84e-4b4c-8602-31e455da67db" (UID: "174270d5-d84e-4b4c-8602-31e455da67db"). InnerVolumeSpecName "kube-api-access-f6lk5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:31:31 crc kubenswrapper[4844]: I0126 13:31:31.112336 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/174270d5-d84e-4b4c-8602-31e455da67db-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "174270d5-d84e-4b4c-8602-31e455da67db" (UID: "174270d5-d84e-4b4c-8602-31e455da67db"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:31:31 crc kubenswrapper[4844]: I0126 13:31:31.112965 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/174270d5-d84e-4b4c-8602-31e455da67db-inventory" (OuterVolumeSpecName: "inventory") pod "174270d5-d84e-4b4c-8602-31e455da67db" (UID: "174270d5-d84e-4b4c-8602-31e455da67db"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:31:31 crc kubenswrapper[4844]: I0126 13:31:31.176967 4844 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/174270d5-d84e-4b4c-8602-31e455da67db-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 13:31:31 crc kubenswrapper[4844]: I0126 13:31:31.177022 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f6lk5\" (UniqueName: \"kubernetes.io/projected/174270d5-d84e-4b4c-8602-31e455da67db-kube-api-access-f6lk5\") on node \"crc\" DevicePath \"\"" Jan 26 13:31:31 crc kubenswrapper[4844]: I0126 13:31:31.177043 4844 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/174270d5-d84e-4b4c-8602-31e455da67db-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 13:31:31 crc kubenswrapper[4844]: I0126 13:31:31.526170 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sjwwh" event={"ID":"174270d5-d84e-4b4c-8602-31e455da67db","Type":"ContainerDied","Data":"faa9ee3595e5a3cbc3b1e16054b876f74c1803967a814a081f1bc2ae68f48a58"} Jan 26 13:31:31 crc kubenswrapper[4844]: I0126 13:31:31.526208 4844 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="faa9ee3595e5a3cbc3b1e16054b876f74c1803967a814a081f1bc2ae68f48a58" Jan 26 13:31:31 crc kubenswrapper[4844]: I0126 13:31:31.526287 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sjwwh" Jan 26 13:31:31 crc kubenswrapper[4844]: I0126 13:31:31.633942 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-br56n"] Jan 26 13:31:31 crc kubenswrapper[4844]: E0126 13:31:31.634460 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8f8a55a-3a35-4983-9d60-7cb1c8ebae5d" containerName="extract-utilities" Jan 26 13:31:31 crc kubenswrapper[4844]: I0126 13:31:31.634482 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8f8a55a-3a35-4983-9d60-7cb1c8ebae5d" containerName="extract-utilities" Jan 26 13:31:31 crc kubenswrapper[4844]: E0126 13:31:31.634513 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8f8a55a-3a35-4983-9d60-7cb1c8ebae5d" containerName="extract-content" Jan 26 13:31:31 crc kubenswrapper[4844]: I0126 13:31:31.634524 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8f8a55a-3a35-4983-9d60-7cb1c8ebae5d" containerName="extract-content" Jan 26 13:31:31 crc kubenswrapper[4844]: E0126 13:31:31.634533 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8f8a55a-3a35-4983-9d60-7cb1c8ebae5d" containerName="registry-server" Jan 26 13:31:31 crc kubenswrapper[4844]: I0126 13:31:31.634542 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8f8a55a-3a35-4983-9d60-7cb1c8ebae5d" containerName="registry-server" Jan 26 13:31:31 crc kubenswrapper[4844]: E0126 13:31:31.634559 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="174270d5-d84e-4b4c-8602-31e455da67db" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 26 13:31:31 crc kubenswrapper[4844]: I0126 13:31:31.634567 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="174270d5-d84e-4b4c-8602-31e455da67db" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 26 13:31:31 crc kubenswrapper[4844]: I0126 13:31:31.634920 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8f8a55a-3a35-4983-9d60-7cb1c8ebae5d" containerName="registry-server" Jan 26 13:31:31 crc kubenswrapper[4844]: I0126 13:31:31.634949 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="174270d5-d84e-4b4c-8602-31e455da67db" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 26 13:31:31 crc kubenswrapper[4844]: I0126 13:31:31.635756 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-br56n" Jan 26 13:31:31 crc kubenswrapper[4844]: I0126 13:31:31.638281 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 13:31:31 crc kubenswrapper[4844]: I0126 13:31:31.638540 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 13:31:31 crc kubenswrapper[4844]: I0126 13:31:31.638921 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r4j2z" Jan 26 13:31:31 crc kubenswrapper[4844]: I0126 13:31:31.639409 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 13:31:31 crc kubenswrapper[4844]: I0126 13:31:31.651034 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-br56n"] Jan 26 13:31:31 crc kubenswrapper[4844]: I0126 13:31:31.788991 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5a2f9b87-b8bf-456e-84a4-6e1736d30419-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-br56n\" (UID: \"5a2f9b87-b8bf-456e-84a4-6e1736d30419\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-br56n" Jan 26 13:31:31 crc kubenswrapper[4844]: I0126 13:31:31.789053 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5a2f9b87-b8bf-456e-84a4-6e1736d30419-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-br56n\" (UID: \"5a2f9b87-b8bf-456e-84a4-6e1736d30419\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-br56n" Jan 26 13:31:31 crc kubenswrapper[4844]: I0126 13:31:31.789133 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxqr4\" (UniqueName: \"kubernetes.io/projected/5a2f9b87-b8bf-456e-84a4-6e1736d30419-kube-api-access-sxqr4\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-br56n\" (UID: \"5a2f9b87-b8bf-456e-84a4-6e1736d30419\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-br56n" Jan 26 13:31:31 crc kubenswrapper[4844]: I0126 13:31:31.891676 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5a2f9b87-b8bf-456e-84a4-6e1736d30419-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-br56n\" (UID: \"5a2f9b87-b8bf-456e-84a4-6e1736d30419\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-br56n" Jan 26 13:31:31 crc kubenswrapper[4844]: I0126 13:31:31.891751 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5a2f9b87-b8bf-456e-84a4-6e1736d30419-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-br56n\" (UID: \"5a2f9b87-b8bf-456e-84a4-6e1736d30419\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-br56n" Jan 26 13:31:31 crc kubenswrapper[4844]: I0126 13:31:31.891854 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxqr4\" (UniqueName: 
\"kubernetes.io/projected/5a2f9b87-b8bf-456e-84a4-6e1736d30419-kube-api-access-sxqr4\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-br56n\" (UID: \"5a2f9b87-b8bf-456e-84a4-6e1736d30419\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-br56n" Jan 26 13:31:31 crc kubenswrapper[4844]: I0126 13:31:31.895460 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5a2f9b87-b8bf-456e-84a4-6e1736d30419-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-br56n\" (UID: \"5a2f9b87-b8bf-456e-84a4-6e1736d30419\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-br56n" Jan 26 13:31:31 crc kubenswrapper[4844]: I0126 13:31:31.896012 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5a2f9b87-b8bf-456e-84a4-6e1736d30419-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-br56n\" (UID: \"5a2f9b87-b8bf-456e-84a4-6e1736d30419\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-br56n" Jan 26 13:31:31 crc kubenswrapper[4844]: I0126 13:31:31.907709 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxqr4\" (UniqueName: \"kubernetes.io/projected/5a2f9b87-b8bf-456e-84a4-6e1736d30419-kube-api-access-sxqr4\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-br56n\" (UID: \"5a2f9b87-b8bf-456e-84a4-6e1736d30419\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-br56n" Jan 26 13:31:31 crc kubenswrapper[4844]: I0126 13:31:31.962655 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-br56n" Jan 26 13:31:32 crc kubenswrapper[4844]: I0126 13:31:32.565495 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-br56n"] Jan 26 13:31:33 crc kubenswrapper[4844]: I0126 13:31:33.552454 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-br56n" event={"ID":"5a2f9b87-b8bf-456e-84a4-6e1736d30419","Type":"ContainerStarted","Data":"b825a8de76dbc3782c692e62bf294353384e43d8aa97979237c4bd2525329507"} Jan 26 13:31:33 crc kubenswrapper[4844]: I0126 13:31:33.552883 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-br56n" event={"ID":"5a2f9b87-b8bf-456e-84a4-6e1736d30419","Type":"ContainerStarted","Data":"6d41e7b902686fb51782693d612d1056e3975f902061beaabe590f008f208ccd"} Jan 26 13:31:33 crc kubenswrapper[4844]: I0126 13:31:33.572395 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-br56n" podStartSLOduration=2.0999632679999998 podStartE2EDuration="2.572377004s" podCreationTimestamp="2026-01-26 13:31:31 +0000 UTC" firstStartedPulling="2026-01-26 13:31:32.574742469 +0000 UTC m=+2869.508110091" lastFinishedPulling="2026-01-26 13:31:33.047156175 +0000 UTC m=+2869.980523827" observedRunningTime="2026-01-26 13:31:33.571566925 +0000 UTC m=+2870.504934547" watchObservedRunningTime="2026-01-26 13:31:33.572377004 +0000 UTC m=+2870.505744626" Jan 26 13:31:36 crc kubenswrapper[4844]: I0126 13:31:36.365176 4844 patch_prober.go:28] interesting pod/machine-config-daemon-j7r9j container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 13:31:36 crc kubenswrapper[4844]: I0126 13:31:36.365770 4844 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 13:31:36 crc kubenswrapper[4844]: I0126 13:31:36.693932 4844 scope.go:117] "RemoveContainer" containerID="53d31a38f20640160be24f81f12b860c4fb49014a90855c2be74dd8c724ca30f" Jan 26 13:31:36 crc kubenswrapper[4844]: I0126 13:31:36.742002 4844 scope.go:117] "RemoveContainer" containerID="f3799052a6007ff7000f2f5af51fbb50a7629f3e69822b58170bbe78e47f1778" Jan 26 13:31:36 crc kubenswrapper[4844]: I0126 13:31:36.790157 4844 scope.go:117] "RemoveContainer" containerID="e0262519b155b73755b64de131f5e0324b481c529587ff763040d7d536c1b239" Jan 26 13:31:36 crc kubenswrapper[4844]: I0126 13:31:36.835902 4844 scope.go:117] "RemoveContainer" containerID="6c7b03e86844b459b44e8486544be870ded31af9c0b80856aeb6e609961f8293" Jan 26 13:31:36 crc kubenswrapper[4844]: I0126 13:31:36.876828 4844 scope.go:117] "RemoveContainer" containerID="6986d618c1b78ec057f4069c455ebf61fee56a5b5cea6f809543eb33afd56ea3" Jan 26 13:31:36 crc kubenswrapper[4844]: I0126 13:31:36.940177 4844 scope.go:117] "RemoveContainer" containerID="980d12d51bd9c2c7f0ccf62a8c48bfc35a9dd560ca475a82fbf79ddc4c794690" Jan 26 13:31:36 crc kubenswrapper[4844]: I0126 13:31:36.970469 4844 scope.go:117] "RemoveContainer" containerID="ceb926bf0aa70465619da3341e9a87d889aa8d8db7ac32233c5911ac147e0e45" Jan 26 13:31:38 crc kubenswrapper[4844]: I0126 13:31:38.612380 4844 generic.go:334] "Generic (PLEG): container finished" podID="5a2f9b87-b8bf-456e-84a4-6e1736d30419" containerID="b825a8de76dbc3782c692e62bf294353384e43d8aa97979237c4bd2525329507" exitCode=0 Jan 26 13:31:38 crc kubenswrapper[4844]: I0126 13:31:38.612458 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-br56n" event={"ID":"5a2f9b87-b8bf-456e-84a4-6e1736d30419","Type":"ContainerDied","Data":"b825a8de76dbc3782c692e62bf294353384e43d8aa97979237c4bd2525329507"} Jan 26 13:31:40 crc kubenswrapper[4844]: I0126 13:31:40.076523 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-br56n" Jan 26 13:31:40 crc kubenswrapper[4844]: I0126 13:31:40.110083 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5a2f9b87-b8bf-456e-84a4-6e1736d30419-ssh-key-openstack-edpm-ipam\") pod \"5a2f9b87-b8bf-456e-84a4-6e1736d30419\" (UID: \"5a2f9b87-b8bf-456e-84a4-6e1736d30419\") " Jan 26 13:31:40 crc kubenswrapper[4844]: I0126 13:31:40.110195 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5a2f9b87-b8bf-456e-84a4-6e1736d30419-inventory\") pod \"5a2f9b87-b8bf-456e-84a4-6e1736d30419\" (UID: \"5a2f9b87-b8bf-456e-84a4-6e1736d30419\") " Jan 26 13:31:40 crc kubenswrapper[4844]: I0126 13:31:40.110259 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sxqr4\" (UniqueName: \"kubernetes.io/projected/5a2f9b87-b8bf-456e-84a4-6e1736d30419-kube-api-access-sxqr4\") pod \"5a2f9b87-b8bf-456e-84a4-6e1736d30419\" (UID: \"5a2f9b87-b8bf-456e-84a4-6e1736d30419\") " Jan 26 13:31:40 crc kubenswrapper[4844]: I0126 13:31:40.116745 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a2f9b87-b8bf-456e-84a4-6e1736d30419-kube-api-access-sxqr4" (OuterVolumeSpecName: "kube-api-access-sxqr4") pod "5a2f9b87-b8bf-456e-84a4-6e1736d30419" (UID: "5a2f9b87-b8bf-456e-84a4-6e1736d30419"). InnerVolumeSpecName "kube-api-access-sxqr4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:31:40 crc kubenswrapper[4844]: I0126 13:31:40.144777 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a2f9b87-b8bf-456e-84a4-6e1736d30419-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "5a2f9b87-b8bf-456e-84a4-6e1736d30419" (UID: "5a2f9b87-b8bf-456e-84a4-6e1736d30419"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:31:40 crc kubenswrapper[4844]: I0126 13:31:40.162402 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a2f9b87-b8bf-456e-84a4-6e1736d30419-inventory" (OuterVolumeSpecName: "inventory") pod "5a2f9b87-b8bf-456e-84a4-6e1736d30419" (UID: "5a2f9b87-b8bf-456e-84a4-6e1736d30419"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:31:40 crc kubenswrapper[4844]: I0126 13:31:40.213307 4844 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5a2f9b87-b8bf-456e-84a4-6e1736d30419-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 13:31:40 crc kubenswrapper[4844]: I0126 13:31:40.213536 4844 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5a2f9b87-b8bf-456e-84a4-6e1736d30419-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 13:31:40 crc kubenswrapper[4844]: I0126 13:31:40.213650 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sxqr4\" (UniqueName: \"kubernetes.io/projected/5a2f9b87-b8bf-456e-84a4-6e1736d30419-kube-api-access-sxqr4\") on node \"crc\" DevicePath \"\"" Jan 26 13:31:40 crc kubenswrapper[4844]: I0126 13:31:40.640538 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-br56n" event={"ID":"5a2f9b87-b8bf-456e-84a4-6e1736d30419","Type":"ContainerDied","Data":"6d41e7b902686fb51782693d612d1056e3975f902061beaabe590f008f208ccd"} Jan 26 13:31:40 crc kubenswrapper[4844]: I0126 13:31:40.640593 4844 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6d41e7b902686fb51782693d612d1056e3975f902061beaabe590f008f208ccd" Jan 26 13:31:40 crc kubenswrapper[4844]: I0126 13:31:40.640956 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-br56n" Jan 26 13:31:40 crc kubenswrapper[4844]: I0126 13:31:40.800850 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-wvxxg"] Jan 26 13:31:40 crc kubenswrapper[4844]: E0126 13:31:40.805007 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a2f9b87-b8bf-456e-84a4-6e1736d30419" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 26 13:31:40 crc kubenswrapper[4844]: I0126 13:31:40.805227 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a2f9b87-b8bf-456e-84a4-6e1736d30419" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 26 13:31:40 crc kubenswrapper[4844]: I0126 13:31:40.805909 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a2f9b87-b8bf-456e-84a4-6e1736d30419" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 26 13:31:40 crc kubenswrapper[4844]: I0126 13:31:40.807974 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-wvxxg" Jan 26 13:31:40 crc kubenswrapper[4844]: I0126 13:31:40.812081 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-wvxxg"] Jan 26 13:31:40 crc kubenswrapper[4844]: I0126 13:31:40.818753 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r4j2z" Jan 26 13:31:40 crc kubenswrapper[4844]: I0126 13:31:40.818795 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 13:31:40 crc kubenswrapper[4844]: I0126 13:31:40.819009 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 13:31:40 crc kubenswrapper[4844]: I0126 13:31:40.819035 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 13:31:40 crc kubenswrapper[4844]: I0126 13:31:40.826521 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-js9wh\" (UniqueName: \"kubernetes.io/projected/5ecdea0f-9b03-400a-a835-4f93cd02b1de-kube-api-access-js9wh\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-wvxxg\" (UID: \"5ecdea0f-9b03-400a-a835-4f93cd02b1de\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-wvxxg" Jan 26 13:31:40 crc kubenswrapper[4844]: I0126 13:31:40.834834 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5ecdea0f-9b03-400a-a835-4f93cd02b1de-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-wvxxg\" (UID: \"5ecdea0f-9b03-400a-a835-4f93cd02b1de\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-wvxxg" Jan 26 13:31:40 crc kubenswrapper[4844]: I0126 13:31:40.835017 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5ecdea0f-9b03-400a-a835-4f93cd02b1de-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-wvxxg\" (UID: \"5ecdea0f-9b03-400a-a835-4f93cd02b1de\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-wvxxg" Jan 26 13:31:40 crc kubenswrapper[4844]: I0126 13:31:40.937614 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5ecdea0f-9b03-400a-a835-4f93cd02b1de-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-wvxxg\" (UID: \"5ecdea0f-9b03-400a-a835-4f93cd02b1de\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-wvxxg" Jan 26 13:31:40 crc kubenswrapper[4844]: I0126 13:31:40.937797 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5ecdea0f-9b03-400a-a835-4f93cd02b1de-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-wvxxg\" (UID: \"5ecdea0f-9b03-400a-a835-4f93cd02b1de\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-wvxxg" Jan 26 13:31:40 crc kubenswrapper[4844]: I0126 13:31:40.937892 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-js9wh\" (UniqueName: \"kubernetes.io/projected/5ecdea0f-9b03-400a-a835-4f93cd02b1de-kube-api-access-js9wh\") pod 
\"install-os-edpm-deployment-openstack-edpm-ipam-wvxxg\" (UID: \"5ecdea0f-9b03-400a-a835-4f93cd02b1de\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-wvxxg" Jan 26 13:31:40 crc kubenswrapper[4844]: I0126 13:31:40.942799 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5ecdea0f-9b03-400a-a835-4f93cd02b1de-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-wvxxg\" (UID: \"5ecdea0f-9b03-400a-a835-4f93cd02b1de\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-wvxxg" Jan 26 13:31:40 crc kubenswrapper[4844]: I0126 13:31:40.947320 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5ecdea0f-9b03-400a-a835-4f93cd02b1de-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-wvxxg\" (UID: \"5ecdea0f-9b03-400a-a835-4f93cd02b1de\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-wvxxg" Jan 26 13:31:40 crc kubenswrapper[4844]: I0126 13:31:40.962405 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-js9wh\" (UniqueName: \"kubernetes.io/projected/5ecdea0f-9b03-400a-a835-4f93cd02b1de-kube-api-access-js9wh\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-wvxxg\" (UID: \"5ecdea0f-9b03-400a-a835-4f93cd02b1de\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-wvxxg" Jan 26 13:31:41 crc kubenswrapper[4844]: I0126 13:31:41.142956 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-wvxxg" Jan 26 13:31:41 crc kubenswrapper[4844]: I0126 13:31:41.545498 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-wvxxg"] Jan 26 13:31:41 crc kubenswrapper[4844]: I0126 13:31:41.649877 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-wvxxg" event={"ID":"5ecdea0f-9b03-400a-a835-4f93cd02b1de","Type":"ContainerStarted","Data":"1bd5f73455e9d7cd1c12de511ed0c4d1396137b7648275fb3ba8c2fff0e057cb"} Jan 26 13:31:42 crc kubenswrapper[4844]: I0126 13:31:42.666037 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-wvxxg" event={"ID":"5ecdea0f-9b03-400a-a835-4f93cd02b1de","Type":"ContainerStarted","Data":"6562e145588dcdee51e77094b0b9667e5662f5ab673b2d5ac1f9823aabb444ea"} Jan 26 13:31:42 crc kubenswrapper[4844]: I0126 13:31:42.686786 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-wvxxg" podStartSLOduration=2.26275692 podStartE2EDuration="2.686764971s" podCreationTimestamp="2026-01-26 13:31:40 +0000 UTC" firstStartedPulling="2026-01-26 13:31:41.548353095 +0000 UTC m=+2878.481720707" lastFinishedPulling="2026-01-26 13:31:41.972361116 +0000 UTC m=+2878.905728758" observedRunningTime="2026-01-26 13:31:42.684374543 +0000 UTC m=+2879.617742185" watchObservedRunningTime="2026-01-26 13:31:42.686764971 +0000 UTC m=+2879.620132593" Jan 26 13:31:44 crc kubenswrapper[4844]: I0126 13:31:44.047418 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-btlm2"] Jan 26 13:31:44 crc kubenswrapper[4844]: I0126 13:31:44.062786 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-btlm2"] Jan 26 
13:31:45 crc kubenswrapper[4844]: I0126 13:31:45.328872 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ac64bcd-c0e5-44c8-9c11-abede4806663" path="/var/lib/kubelet/pods/1ac64bcd-c0e5-44c8-9c11-abede4806663/volumes" Jan 26 13:31:48 crc kubenswrapper[4844]: I0126 13:31:48.028012 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-9gsdl"] Jan 26 13:31:48 crc kubenswrapper[4844]: I0126 13:31:48.036075 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-9gsdl"] Jan 26 13:31:49 crc kubenswrapper[4844]: I0126 13:31:49.327342 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f37882c-17e3-4c70-a309-ee70392fed88" path="/var/lib/kubelet/pods/0f37882c-17e3-4c70-a309-ee70392fed88/volumes" Jan 26 13:32:06 crc kubenswrapper[4844]: I0126 13:32:06.365391 4844 patch_prober.go:28] interesting pod/machine-config-daemon-j7r9j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 13:32:06 crc kubenswrapper[4844]: I0126 13:32:06.365845 4844 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 13:32:26 crc kubenswrapper[4844]: I0126 13:32:26.115055 4844 generic.go:334] "Generic (PLEG): container finished" podID="5ecdea0f-9b03-400a-a835-4f93cd02b1de" containerID="6562e145588dcdee51e77094b0b9667e5662f5ab673b2d5ac1f9823aabb444ea" exitCode=0 Jan 26 13:32:26 crc kubenswrapper[4844]: I0126 13:32:26.115180 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-wvxxg" event={"ID":"5ecdea0f-9b03-400a-a835-4f93cd02b1de","Type":"ContainerDied","Data":"6562e145588dcdee51e77094b0b9667e5662f5ab673b2d5ac1f9823aabb444ea"} Jan 26 13:32:27 crc kubenswrapper[4844]: I0126 13:32:27.577823 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-wvxxg" Jan 26 13:32:27 crc kubenswrapper[4844]: I0126 13:32:27.664641 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5ecdea0f-9b03-400a-a835-4f93cd02b1de-inventory\") pod \"5ecdea0f-9b03-400a-a835-4f93cd02b1de\" (UID: \"5ecdea0f-9b03-400a-a835-4f93cd02b1de\") " Jan 26 13:32:27 crc kubenswrapper[4844]: I0126 13:32:27.664723 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5ecdea0f-9b03-400a-a835-4f93cd02b1de-ssh-key-openstack-edpm-ipam\") pod \"5ecdea0f-9b03-400a-a835-4f93cd02b1de\" (UID: \"5ecdea0f-9b03-400a-a835-4f93cd02b1de\") " Jan 26 13:32:27 crc kubenswrapper[4844]: I0126 13:32:27.664776 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-js9wh\" (UniqueName: \"kubernetes.io/projected/5ecdea0f-9b03-400a-a835-4f93cd02b1de-kube-api-access-js9wh\") pod \"5ecdea0f-9b03-400a-a835-4f93cd02b1de\" (UID: \"5ecdea0f-9b03-400a-a835-4f93cd02b1de\") " Jan 26 13:32:27 crc kubenswrapper[4844]: I0126 13:32:27.670806 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ecdea0f-9b03-400a-a835-4f93cd02b1de-kube-api-access-js9wh" (OuterVolumeSpecName: "kube-api-access-js9wh") pod "5ecdea0f-9b03-400a-a835-4f93cd02b1de" (UID: "5ecdea0f-9b03-400a-a835-4f93cd02b1de"). InnerVolumeSpecName "kube-api-access-js9wh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:32:27 crc kubenswrapper[4844]: I0126 13:32:27.693527 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ecdea0f-9b03-400a-a835-4f93cd02b1de-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "5ecdea0f-9b03-400a-a835-4f93cd02b1de" (UID: "5ecdea0f-9b03-400a-a835-4f93cd02b1de"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:32:27 crc kubenswrapper[4844]: I0126 13:32:27.713799 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ecdea0f-9b03-400a-a835-4f93cd02b1de-inventory" (OuterVolumeSpecName: "inventory") pod "5ecdea0f-9b03-400a-a835-4f93cd02b1de" (UID: "5ecdea0f-9b03-400a-a835-4f93cd02b1de"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:32:27 crc kubenswrapper[4844]: I0126 13:32:27.767168 4844 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5ecdea0f-9b03-400a-a835-4f93cd02b1de-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 13:32:27 crc kubenswrapper[4844]: I0126 13:32:27.767208 4844 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5ecdea0f-9b03-400a-a835-4f93cd02b1de-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 13:32:27 crc kubenswrapper[4844]: I0126 13:32:27.767219 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-js9wh\" (UniqueName: \"kubernetes.io/projected/5ecdea0f-9b03-400a-a835-4f93cd02b1de-kube-api-access-js9wh\") on node \"crc\" DevicePath \"\"" Jan 26 13:32:28 crc kubenswrapper[4844]: I0126 13:32:28.141760 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-wvxxg" event={"ID":"5ecdea0f-9b03-400a-a835-4f93cd02b1de","Type":"ContainerDied","Data":"1bd5f73455e9d7cd1c12de511ed0c4d1396137b7648275fb3ba8c2fff0e057cb"} Jan 26 13:32:28 crc kubenswrapper[4844]: I0126 13:32:28.142064 4844 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1bd5f73455e9d7cd1c12de511ed0c4d1396137b7648275fb3ba8c2fff0e057cb" Jan 26 13:32:28 crc kubenswrapper[4844]: I0126 13:32:28.141821 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-wvxxg" Jan 26 13:32:28 crc kubenswrapper[4844]: I0126 13:32:28.231204 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kw9mt"] Jan 26 13:32:28 crc kubenswrapper[4844]: E0126 13:32:28.231829 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ecdea0f-9b03-400a-a835-4f93cd02b1de" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 26 13:32:28 crc kubenswrapper[4844]: I0126 13:32:28.231852 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ecdea0f-9b03-400a-a835-4f93cd02b1de" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 26 13:32:28 crc kubenswrapper[4844]: I0126 13:32:28.232094 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ecdea0f-9b03-400a-a835-4f93cd02b1de" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 26 13:32:28 crc kubenswrapper[4844]: I0126 13:32:28.232884 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kw9mt" Jan 26 13:32:28 crc kubenswrapper[4844]: I0126 13:32:28.234550 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r4j2z" Jan 26 13:32:28 crc kubenswrapper[4844]: I0126 13:32:28.235140 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 13:32:28 crc kubenswrapper[4844]: I0126 13:32:28.235901 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 13:32:28 crc kubenswrapper[4844]: I0126 13:32:28.236190 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 13:32:28 crc kubenswrapper[4844]: I0126 13:32:28.287588 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kw9mt"] Jan 26 13:32:28 crc kubenswrapper[4844]: I0126 13:32:28.378327 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d3c8b898-d97e-461f-85df-f33653e393f7-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-kw9mt\" (UID: \"d3c8b898-d97e-461f-85df-f33653e393f7\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kw9mt" Jan 26 13:32:28 crc kubenswrapper[4844]: I0126 13:32:28.378449 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d3c8b898-d97e-461f-85df-f33653e393f7-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-kw9mt\" (UID: \"d3c8b898-d97e-461f-85df-f33653e393f7\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kw9mt" Jan 26 13:32:28 crc kubenswrapper[4844]: I0126 13:32:28.378507 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5hv8\" (UniqueName: \"kubernetes.io/projected/d3c8b898-d97e-461f-85df-f33653e393f7-kube-api-access-v5hv8\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-kw9mt\" (UID: \"d3c8b898-d97e-461f-85df-f33653e393f7\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kw9mt" Jan 26 13:32:28 crc kubenswrapper[4844]: I0126 13:32:28.480377 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d3c8b898-d97e-461f-85df-f33653e393f7-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-kw9mt\" (UID: \"d3c8b898-d97e-461f-85df-f33653e393f7\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kw9mt" Jan 26 13:32:28 crc kubenswrapper[4844]: I0126 13:32:28.480579 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5hv8\" (UniqueName: \"kubernetes.io/projected/d3c8b898-d97e-461f-85df-f33653e393f7-kube-api-access-v5hv8\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-kw9mt\" (UID: \"d3c8b898-d97e-461f-85df-f33653e393f7\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kw9mt" Jan 26 13:32:28 crc kubenswrapper[4844]: I0126 13:32:28.480880 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/d3c8b898-d97e-461f-85df-f33653e393f7-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-kw9mt\" (UID: \"d3c8b898-d97e-461f-85df-f33653e393f7\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kw9mt" Jan 26 13:32:28 crc kubenswrapper[4844]: I0126 13:32:28.485952 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d3c8b898-d97e-461f-85df-f33653e393f7-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-kw9mt\" (UID: \"d3c8b898-d97e-461f-85df-f33653e393f7\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kw9mt" Jan 26 13:32:28 crc kubenswrapper[4844]: I0126 13:32:28.486281 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d3c8b898-d97e-461f-85df-f33653e393f7-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-kw9mt\" (UID: \"d3c8b898-d97e-461f-85df-f33653e393f7\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kw9mt" Jan 26 13:32:28 crc kubenswrapper[4844]: I0126 13:32:28.508901 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5hv8\" (UniqueName: \"kubernetes.io/projected/d3c8b898-d97e-461f-85df-f33653e393f7-kube-api-access-v5hv8\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-kw9mt\" (UID: \"d3c8b898-d97e-461f-85df-f33653e393f7\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kw9mt" Jan 26 13:32:28 crc kubenswrapper[4844]: I0126 13:32:28.556967 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kw9mt" Jan 26 13:32:29 crc kubenswrapper[4844]: I0126 13:32:29.135116 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kw9mt"] Jan 26 13:32:29 crc kubenswrapper[4844]: I0126 13:32:29.144442 4844 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 13:32:29 crc kubenswrapper[4844]: I0126 13:32:29.159356 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kw9mt" event={"ID":"d3c8b898-d97e-461f-85df-f33653e393f7","Type":"ContainerStarted","Data":"e4556f0cc1345fb1913c314476d96fa35a926dba71582ad9b3d340f14e1ccc1b"} Jan 26 13:32:30 crc kubenswrapper[4844]: I0126 13:32:30.046655 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-qqmng"] Jan 26 13:32:30 crc kubenswrapper[4844]: I0126 13:32:30.057152 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-qqmng"] Jan 26 13:32:31 crc kubenswrapper[4844]: I0126 13:32:31.179231 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kw9mt" event={"ID":"d3c8b898-d97e-461f-85df-f33653e393f7","Type":"ContainerStarted","Data":"965458695ae4f4b7fa04b516e8c16e340bc51783acb95af9caf9756d49cfa817"} Jan 26 13:32:31 crc kubenswrapper[4844]: I0126 13:32:31.329420 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9fddb4ee-fddd-45f3-bc91-21073647af94" path="/var/lib/kubelet/pods/9fddb4ee-fddd-45f3-bc91-21073647af94/volumes" Jan 26 13:32:32 crc kubenswrapper[4844]: I0126 13:32:32.818420 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kw9mt" podStartSLOduration=3.985534266 podStartE2EDuration="4.818406148s" podCreationTimestamp="2026-01-26 13:32:28 +0000 UTC" firstStartedPulling="2026-01-26 13:32:29.144147359 +0000 UTC m=+2926.077514981" lastFinishedPulling="2026-01-26 13:32:29.977019251 +0000 UTC m=+2926.910386863" observedRunningTime="2026-01-26 13:32:31.206160731 +0000 UTC m=+2928.139528343" watchObservedRunningTime="2026-01-26 13:32:32.818406148 +0000 UTC m=+2929.751773760" Jan 26 13:32:32 crc kubenswrapper[4844]: I0126 13:32:32.822343 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-sz829"] Jan 26 13:32:32 crc kubenswrapper[4844]: I0126 13:32:32.824237 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sz829" Jan 26 13:32:32 crc kubenswrapper[4844]: I0126 13:32:32.848113 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sz829"] Jan 26 13:32:32 crc kubenswrapper[4844]: I0126 13:32:32.875656 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qr6zh\" (UniqueName: \"kubernetes.io/projected/b9425964-da05-4f59-af70-c907a4256532-kube-api-access-qr6zh\") pod \"redhat-operators-sz829\" (UID: \"b9425964-da05-4f59-af70-c907a4256532\") " pod="openshift-marketplace/redhat-operators-sz829" Jan 26 13:32:32 crc kubenswrapper[4844]: I0126 13:32:32.875730 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9425964-da05-4f59-af70-c907a4256532-utilities\") pod \"redhat-operators-sz829\" (UID: \"b9425964-da05-4f59-af70-c907a4256532\") " pod="openshift-marketplace/redhat-operators-sz829" Jan 26 13:32:32 crc kubenswrapper[4844]: I0126 13:32:32.875979 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9425964-da05-4f59-af70-c907a4256532-catalog-content\") pod \"redhat-operators-sz829\" (UID: \"b9425964-da05-4f59-af70-c907a4256532\") " pod="openshift-marketplace/redhat-operators-sz829" Jan 26 13:32:32 crc kubenswrapper[4844]: I0126 13:32:32.978208 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qr6zh\" (UniqueName: \"kubernetes.io/projected/b9425964-da05-4f59-af70-c907a4256532-kube-api-access-qr6zh\") pod \"redhat-operators-sz829\" (UID: \"b9425964-da05-4f59-af70-c907a4256532\") " pod="openshift-marketplace/redhat-operators-sz829" Jan 26 13:32:32 crc kubenswrapper[4844]: I0126 13:32:32.978292 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9425964-da05-4f59-af70-c907a4256532-utilities\") pod \"redhat-operators-sz829\" (UID: \"b9425964-da05-4f59-af70-c907a4256532\") " pod="openshift-marketplace/redhat-operators-sz829" Jan 26 13:32:32 crc kubenswrapper[4844]: I0126 13:32:32.978452 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9425964-da05-4f59-af70-c907a4256532-catalog-content\") pod \"redhat-operators-sz829\" (UID: \"b9425964-da05-4f59-af70-c907a4256532\") " pod="openshift-marketplace/redhat-operators-sz829" Jan 26 13:32:32 crc kubenswrapper[4844]: I0126 13:32:32.978843 4844 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9425964-da05-4f59-af70-c907a4256532-utilities\") pod \"redhat-operators-sz829\" (UID: \"b9425964-da05-4f59-af70-c907a4256532\") " pod="openshift-marketplace/redhat-operators-sz829" Jan 26 13:32:32 crc kubenswrapper[4844]: I0126 13:32:32.978880 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9425964-da05-4f59-af70-c907a4256532-catalog-content\") pod \"redhat-operators-sz829\" (UID: \"b9425964-da05-4f59-af70-c907a4256532\") " pod="openshift-marketplace/redhat-operators-sz829" Jan 26 13:32:32 crc kubenswrapper[4844]: I0126 13:32:32.998968 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qr6zh\" (UniqueName: \"kubernetes.io/projected/b9425964-da05-4f59-af70-c907a4256532-kube-api-access-qr6zh\") pod \"redhat-operators-sz829\" (UID: \"b9425964-da05-4f59-af70-c907a4256532\") " pod="openshift-marketplace/redhat-operators-sz829" Jan 26 13:32:33 crc kubenswrapper[4844]: I0126 13:32:33.157930 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sz829" Jan 26 13:32:33 crc kubenswrapper[4844]: W0126 13:32:33.642825 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb9425964_da05_4f59_af70_c907a4256532.slice/crio-a3921b711461efa302edef97e8136c79664d5c971ac81a4062cf8cbd73710122 WatchSource:0}: Error finding container a3921b711461efa302edef97e8136c79664d5c971ac81a4062cf8cbd73710122: Status 404 returned error can't find the container with id a3921b711461efa302edef97e8136c79664d5c971ac81a4062cf8cbd73710122 Jan 26 13:32:33 crc kubenswrapper[4844]: I0126 13:32:33.642935 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sz829"] Jan 26 13:32:34 crc kubenswrapper[4844]: I0126 13:32:34.208001 4844 generic.go:334] "Generic (PLEG): container finished" podID="b9425964-da05-4f59-af70-c907a4256532" containerID="39b5b1000e3d45d74047e780517e9d433554ba5a540d1e86eb2738d480e99ebb" exitCode=0 Jan 26 13:32:34 crc kubenswrapper[4844]: I0126 13:32:34.208052 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sz829" event={"ID":"b9425964-da05-4f59-af70-c907a4256532","Type":"ContainerDied","Data":"39b5b1000e3d45d74047e780517e9d433554ba5a540d1e86eb2738d480e99ebb"} Jan 26 13:32:34 crc kubenswrapper[4844]: I0126 13:32:34.208080 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sz829" event={"ID":"b9425964-da05-4f59-af70-c907a4256532","Type":"ContainerStarted","Data":"a3921b711461efa302edef97e8136c79664d5c971ac81a4062cf8cbd73710122"} Jan 26 13:32:35 crc kubenswrapper[4844]: I0126 13:32:35.216493 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sz829" event={"ID":"b9425964-da05-4f59-af70-c907a4256532","Type":"ContainerStarted","Data":"f95b45ab22452c1873ad283237a8e2a18688309f53b5828380ef17447bf62be1"} Jan 26 13:32:36 crc kubenswrapper[4844]: I0126 13:32:36.364883 4844 patch_prober.go:28] interesting pod/machine-config-daemon-j7r9j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 13:32:36 crc 
kubenswrapper[4844]: I0126 13:32:36.365228 4844 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 13:32:36 crc kubenswrapper[4844]: I0126 13:32:36.365277 4844 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" Jan 26 13:32:36 crc kubenswrapper[4844]: I0126 13:32:36.366027 4844 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1e5683696cedef0e24e380cd8f5a01d2be8ea7dd57619e21972834c26754b83e"} pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 13:32:36 crc kubenswrapper[4844]: I0126 13:32:36.366086 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" containerName="machine-config-daemon" containerID="cri-o://1e5683696cedef0e24e380cd8f5a01d2be8ea7dd57619e21972834c26754b83e" gracePeriod=600 Jan 26 13:32:37 crc kubenswrapper[4844]: I0126 13:32:37.124688 4844 scope.go:117] "RemoveContainer" containerID="ad802f8ed2a654a2cd9bad0b9806289567cc77e1509066e980825a5b53f5aa16" Jan 26 13:32:37 crc kubenswrapper[4844]: I0126 13:32:37.241131 4844 generic.go:334] "Generic (PLEG): container finished" podID="b9425964-da05-4f59-af70-c907a4256532" containerID="f95b45ab22452c1873ad283237a8e2a18688309f53b5828380ef17447bf62be1" exitCode=0 Jan 26 13:32:37 crc kubenswrapper[4844]: I0126 13:32:37.241185 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sz829" event={"ID":"b9425964-da05-4f59-af70-c907a4256532","Type":"ContainerDied","Data":"f95b45ab22452c1873ad283237a8e2a18688309f53b5828380ef17447bf62be1"} Jan 26 13:32:37 crc kubenswrapper[4844]: I0126 13:32:37.315291 4844 scope.go:117] "RemoveContainer" containerID="1ad177eb0e519c75e9d75bcdb7b4a0fdeceb08a4f5b1a961b3c0c6567ed1d6f5" Jan 26 13:32:37 crc kubenswrapper[4844]: I0126 13:32:37.367819 4844 scope.go:117] "RemoveContainer" containerID="1bb4993fb205800439b0a8823ccb1d8840270fab753df601ee0cb69703f656d8" Jan 26 13:32:38 crc kubenswrapper[4844]: I0126 13:32:38.251821 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sz829" event={"ID":"b9425964-da05-4f59-af70-c907a4256532","Type":"ContainerStarted","Data":"9a89e88c7fa6899b3ff24e392bac8551ea6dcba1b306044238f39c2da67a7400"} Jan 26 13:32:38 crc kubenswrapper[4844]: I0126 13:32:38.255744 4844 generic.go:334] "Generic (PLEG): container finished" podID="e3602fc7-397b-4d73-ab0c-45acc047397b" containerID="1e5683696cedef0e24e380cd8f5a01d2be8ea7dd57619e21972834c26754b83e" exitCode=0 Jan 26 13:32:38 crc kubenswrapper[4844]: I0126 13:32:38.255783 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" event={"ID":"e3602fc7-397b-4d73-ab0c-45acc047397b","Type":"ContainerDied","Data":"1e5683696cedef0e24e380cd8f5a01d2be8ea7dd57619e21972834c26754b83e"} Jan 26 13:32:38 crc kubenswrapper[4844]: I0126 13:32:38.255833 4844 scope.go:117] "RemoveContainer" 
containerID="a82b801a0f9019b696e73b93e7bd511e023d38ac840f413770a1b3ad588c4466" Jan 26 13:32:38 crc kubenswrapper[4844]: I0126 13:32:38.273366 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-sz829" podStartSLOduration=2.488217563 podStartE2EDuration="6.273349575s" podCreationTimestamp="2026-01-26 13:32:32 +0000 UTC" firstStartedPulling="2026-01-26 13:32:34.210031055 +0000 UTC m=+2931.143398667" lastFinishedPulling="2026-01-26 13:32:37.995163057 +0000 UTC m=+2934.928530679" observedRunningTime="2026-01-26 13:32:38.270226919 +0000 UTC m=+2935.203594531" watchObservedRunningTime="2026-01-26 13:32:38.273349575 +0000 UTC m=+2935.206717187" Jan 26 13:32:38 crc kubenswrapper[4844]: E0126 13:32:38.311294 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 13:32:39 crc kubenswrapper[4844]: I0126 13:32:39.268045 4844 scope.go:117] "RemoveContainer" containerID="1e5683696cedef0e24e380cd8f5a01d2be8ea7dd57619e21972834c26754b83e" Jan 26 13:32:39 crc kubenswrapper[4844]: E0126 13:32:39.268608 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 13:32:43 crc kubenswrapper[4844]: I0126 13:32:43.158676 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-sz829" Jan 26 13:32:43 crc kubenswrapper[4844]: I0126 13:32:43.159191 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-sz829" Jan 26 13:32:44 crc kubenswrapper[4844]: I0126 13:32:44.209622 4844 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-sz829" podUID="b9425964-da05-4f59-af70-c907a4256532" containerName="registry-server" probeResult="failure" output=< Jan 26 13:32:44 crc kubenswrapper[4844]: timeout: failed to connect service ":50051" within 1s Jan 26 13:32:44 crc kubenswrapper[4844]: > Jan 26 13:32:52 crc kubenswrapper[4844]: I0126 13:32:52.313711 4844 scope.go:117] "RemoveContainer" containerID="1e5683696cedef0e24e380cd8f5a01d2be8ea7dd57619e21972834c26754b83e" Jan 26 13:32:52 crc kubenswrapper[4844]: E0126 13:32:52.315041 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 13:32:53 crc kubenswrapper[4844]: I0126 13:32:53.213023 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-sz829" Jan 26 13:32:53 crc 
kubenswrapper[4844]: I0126 13:32:53.259442 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-sz829" Jan 26 13:32:53 crc kubenswrapper[4844]: I0126 13:32:53.452724 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sz829"] Jan 26 13:32:54 crc kubenswrapper[4844]: I0126 13:32:54.425627 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-sz829" podUID="b9425964-da05-4f59-af70-c907a4256532" containerName="registry-server" containerID="cri-o://9a89e88c7fa6899b3ff24e392bac8551ea6dcba1b306044238f39c2da67a7400" gracePeriod=2 Jan 26 13:32:54 crc kubenswrapper[4844]: I0126 13:32:54.868075 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sz829" Jan 26 13:32:55 crc kubenswrapper[4844]: I0126 13:32:55.051254 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9425964-da05-4f59-af70-c907a4256532-catalog-content\") pod \"b9425964-da05-4f59-af70-c907a4256532\" (UID: \"b9425964-da05-4f59-af70-c907a4256532\") " Jan 26 13:32:55 crc kubenswrapper[4844]: I0126 13:32:55.051549 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9425964-da05-4f59-af70-c907a4256532-utilities\") pod \"b9425964-da05-4f59-af70-c907a4256532\" (UID: \"b9425964-da05-4f59-af70-c907a4256532\") " Jan 26 13:32:55 crc kubenswrapper[4844]: I0126 13:32:55.051760 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qr6zh\" (UniqueName: \"kubernetes.io/projected/b9425964-da05-4f59-af70-c907a4256532-kube-api-access-qr6zh\") pod \"b9425964-da05-4f59-af70-c907a4256532\" (UID: \"b9425964-da05-4f59-af70-c907a4256532\") " Jan 26 13:32:55 crc kubenswrapper[4844]: I0126 13:32:55.052808 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9425964-da05-4f59-af70-c907a4256532-utilities" (OuterVolumeSpecName: "utilities") pod "b9425964-da05-4f59-af70-c907a4256532" (UID: "b9425964-da05-4f59-af70-c907a4256532"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:32:55 crc kubenswrapper[4844]: I0126 13:32:55.058740 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9425964-da05-4f59-af70-c907a4256532-kube-api-access-qr6zh" (OuterVolumeSpecName: "kube-api-access-qr6zh") pod "b9425964-da05-4f59-af70-c907a4256532" (UID: "b9425964-da05-4f59-af70-c907a4256532"). InnerVolumeSpecName "kube-api-access-qr6zh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:32:55 crc kubenswrapper[4844]: I0126 13:32:55.158978 4844 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9425964-da05-4f59-af70-c907a4256532-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 13:32:55 crc kubenswrapper[4844]: I0126 13:32:55.159039 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qr6zh\" (UniqueName: \"kubernetes.io/projected/b9425964-da05-4f59-af70-c907a4256532-kube-api-access-qr6zh\") on node \"crc\" DevicePath \"\"" Jan 26 13:32:55 crc kubenswrapper[4844]: I0126 13:32:55.181288 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9425964-da05-4f59-af70-c907a4256532-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b9425964-da05-4f59-af70-c907a4256532" (UID: "b9425964-da05-4f59-af70-c907a4256532"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:32:55 crc kubenswrapper[4844]: I0126 13:32:55.260759 4844 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9425964-da05-4f59-af70-c907a4256532-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 13:32:55 crc kubenswrapper[4844]: I0126 13:32:55.439795 4844 generic.go:334] "Generic (PLEG): container finished" podID="b9425964-da05-4f59-af70-c907a4256532" containerID="9a89e88c7fa6899b3ff24e392bac8551ea6dcba1b306044238f39c2da67a7400" exitCode=0 Jan 26 13:32:55 crc kubenswrapper[4844]: I0126 13:32:55.439843 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sz829" event={"ID":"b9425964-da05-4f59-af70-c907a4256532","Type":"ContainerDied","Data":"9a89e88c7fa6899b3ff24e392bac8551ea6dcba1b306044238f39c2da67a7400"} Jan 26 13:32:55 crc kubenswrapper[4844]: I0126 13:32:55.439881 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sz829" event={"ID":"b9425964-da05-4f59-af70-c907a4256532","Type":"ContainerDied","Data":"a3921b711461efa302edef97e8136c79664d5c971ac81a4062cf8cbd73710122"} Jan 26 13:32:55 crc kubenswrapper[4844]: I0126 13:32:55.439903 4844 scope.go:117] "RemoveContainer" containerID="9a89e88c7fa6899b3ff24e392bac8551ea6dcba1b306044238f39c2da67a7400" Jan 26 13:32:55 crc kubenswrapper[4844]: I0126 13:32:55.439903 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-sz829" Jan 26 13:32:55 crc kubenswrapper[4844]: I0126 13:32:55.468960 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sz829"] Jan 26 13:32:55 crc kubenswrapper[4844]: I0126 13:32:55.471908 4844 scope.go:117] "RemoveContainer" containerID="f95b45ab22452c1873ad283237a8e2a18688309f53b5828380ef17447bf62be1" Jan 26 13:32:55 crc kubenswrapper[4844]: I0126 13:32:55.477178 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-sz829"] Jan 26 13:32:55 crc kubenswrapper[4844]: I0126 13:32:55.497128 4844 scope.go:117] "RemoveContainer" containerID="39b5b1000e3d45d74047e780517e9d433554ba5a540d1e86eb2738d480e99ebb" Jan 26 13:32:55 crc kubenswrapper[4844]: I0126 13:32:55.551309 4844 scope.go:117] "RemoveContainer" containerID="9a89e88c7fa6899b3ff24e392bac8551ea6dcba1b306044238f39c2da67a7400" Jan 26 13:32:55 crc kubenswrapper[4844]: E0126 13:32:55.552208 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a89e88c7fa6899b3ff24e392bac8551ea6dcba1b306044238f39c2da67a7400\": container with ID starting with 9a89e88c7fa6899b3ff24e392bac8551ea6dcba1b306044238f39c2da67a7400 not found: ID does not exist" containerID="9a89e88c7fa6899b3ff24e392bac8551ea6dcba1b306044238f39c2da67a7400" Jan 26 13:32:55 crc kubenswrapper[4844]: I0126 13:32:55.552341 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a89e88c7fa6899b3ff24e392bac8551ea6dcba1b306044238f39c2da67a7400"} err="failed to get container status \"9a89e88c7fa6899b3ff24e392bac8551ea6dcba1b306044238f39c2da67a7400\": rpc error: code = NotFound desc = could not find container \"9a89e88c7fa6899b3ff24e392bac8551ea6dcba1b306044238f39c2da67a7400\": container with ID starting with 9a89e88c7fa6899b3ff24e392bac8551ea6dcba1b306044238f39c2da67a7400 not found: ID does not exist" Jan 26 13:32:55 crc kubenswrapper[4844]: I0126 13:32:55.552454 4844 scope.go:117] "RemoveContainer" containerID="f95b45ab22452c1873ad283237a8e2a18688309f53b5828380ef17447bf62be1" Jan 26 13:32:55 crc kubenswrapper[4844]: E0126 13:32:55.552947 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f95b45ab22452c1873ad283237a8e2a18688309f53b5828380ef17447bf62be1\": container with ID starting with f95b45ab22452c1873ad283237a8e2a18688309f53b5828380ef17447bf62be1 not found: ID does not exist" containerID="f95b45ab22452c1873ad283237a8e2a18688309f53b5828380ef17447bf62be1" Jan 26 13:32:55 crc kubenswrapper[4844]: I0126 13:32:55.552996 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f95b45ab22452c1873ad283237a8e2a18688309f53b5828380ef17447bf62be1"} err="failed to get container status \"f95b45ab22452c1873ad283237a8e2a18688309f53b5828380ef17447bf62be1\": rpc error: code = NotFound desc = could not find container \"f95b45ab22452c1873ad283237a8e2a18688309f53b5828380ef17447bf62be1\": container with ID starting with f95b45ab22452c1873ad283237a8e2a18688309f53b5828380ef17447bf62be1 not found: ID does not exist" Jan 26 13:32:55 crc kubenswrapper[4844]: I0126 13:32:55.553035 4844 scope.go:117] "RemoveContainer" containerID="39b5b1000e3d45d74047e780517e9d433554ba5a540d1e86eb2738d480e99ebb" Jan 26 13:32:55 crc kubenswrapper[4844]: E0126 13:32:55.553387 4844 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"39b5b1000e3d45d74047e780517e9d433554ba5a540d1e86eb2738d480e99ebb\": container with ID starting with 39b5b1000e3d45d74047e780517e9d433554ba5a540d1e86eb2738d480e99ebb not found: ID does not exist" containerID="39b5b1000e3d45d74047e780517e9d433554ba5a540d1e86eb2738d480e99ebb" Jan 26 13:32:55 crc kubenswrapper[4844]: I0126 13:32:55.553443 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39b5b1000e3d45d74047e780517e9d433554ba5a540d1e86eb2738d480e99ebb"} err="failed to get container status \"39b5b1000e3d45d74047e780517e9d433554ba5a540d1e86eb2738d480e99ebb\": rpc error: code = NotFound desc = could not find container \"39b5b1000e3d45d74047e780517e9d433554ba5a540d1e86eb2738d480e99ebb\": container with ID starting with 39b5b1000e3d45d74047e780517e9d433554ba5a540d1e86eb2738d480e99ebb not found: ID does not exist" Jan 26 13:32:57 crc kubenswrapper[4844]: I0126 13:32:57.325797 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9425964-da05-4f59-af70-c907a4256532" path="/var/lib/kubelet/pods/b9425964-da05-4f59-af70-c907a4256532/volumes" Jan 26 13:33:07 crc kubenswrapper[4844]: I0126 13:33:07.316344 4844 scope.go:117] "RemoveContainer" containerID="1e5683696cedef0e24e380cd8f5a01d2be8ea7dd57619e21972834c26754b83e" Jan 26 13:33:07 crc kubenswrapper[4844]: E0126 13:33:07.318222 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 13:33:19 crc kubenswrapper[4844]: I0126 13:33:19.314534 4844 scope.go:117] "RemoveContainer" containerID="1e5683696cedef0e24e380cd8f5a01d2be8ea7dd57619e21972834c26754b83e" Jan 26 13:33:19 crc kubenswrapper[4844]: E0126 13:33:19.315379 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 13:33:26 crc kubenswrapper[4844]: I0126 13:33:26.766293 4844 generic.go:334] "Generic (PLEG): container finished" podID="d3c8b898-d97e-461f-85df-f33653e393f7" containerID="965458695ae4f4b7fa04b516e8c16e340bc51783acb95af9caf9756d49cfa817" exitCode=0 Jan 26 13:33:26 crc kubenswrapper[4844]: I0126 13:33:26.766801 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kw9mt" event={"ID":"d3c8b898-d97e-461f-85df-f33653e393f7","Type":"ContainerDied","Data":"965458695ae4f4b7fa04b516e8c16e340bc51783acb95af9caf9756d49cfa817"} Jan 26 13:33:28 crc kubenswrapper[4844]: I0126 13:33:28.242452 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kw9mt" Jan 26 13:33:28 crc kubenswrapper[4844]: I0126 13:33:28.395305 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d3c8b898-d97e-461f-85df-f33653e393f7-ssh-key-openstack-edpm-ipam\") pod \"d3c8b898-d97e-461f-85df-f33653e393f7\" (UID: \"d3c8b898-d97e-461f-85df-f33653e393f7\") " Jan 26 13:33:28 crc kubenswrapper[4844]: I0126 13:33:28.395386 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d3c8b898-d97e-461f-85df-f33653e393f7-inventory\") pod \"d3c8b898-d97e-461f-85df-f33653e393f7\" (UID: \"d3c8b898-d97e-461f-85df-f33653e393f7\") " Jan 26 13:33:28 crc kubenswrapper[4844]: I0126 13:33:28.395515 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v5hv8\" (UniqueName: \"kubernetes.io/projected/d3c8b898-d97e-461f-85df-f33653e393f7-kube-api-access-v5hv8\") pod \"d3c8b898-d97e-461f-85df-f33653e393f7\" (UID: \"d3c8b898-d97e-461f-85df-f33653e393f7\") " Jan 26 13:33:28 crc kubenswrapper[4844]: I0126 13:33:28.400786 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3c8b898-d97e-461f-85df-f33653e393f7-kube-api-access-v5hv8" (OuterVolumeSpecName: "kube-api-access-v5hv8") pod "d3c8b898-d97e-461f-85df-f33653e393f7" (UID: "d3c8b898-d97e-461f-85df-f33653e393f7"). InnerVolumeSpecName "kube-api-access-v5hv8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:33:28 crc kubenswrapper[4844]: I0126 13:33:28.437475 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3c8b898-d97e-461f-85df-f33653e393f7-inventory" (OuterVolumeSpecName: "inventory") pod "d3c8b898-d97e-461f-85df-f33653e393f7" (UID: "d3c8b898-d97e-461f-85df-f33653e393f7"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:33:28 crc kubenswrapper[4844]: I0126 13:33:28.440195 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3c8b898-d97e-461f-85df-f33653e393f7-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "d3c8b898-d97e-461f-85df-f33653e393f7" (UID: "d3c8b898-d97e-461f-85df-f33653e393f7"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:33:28 crc kubenswrapper[4844]: I0126 13:33:28.499033 4844 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d3c8b898-d97e-461f-85df-f33653e393f7-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 13:33:28 crc kubenswrapper[4844]: I0126 13:33:28.499073 4844 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d3c8b898-d97e-461f-85df-f33653e393f7-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 13:33:28 crc kubenswrapper[4844]: I0126 13:33:28.499083 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v5hv8\" (UniqueName: \"kubernetes.io/projected/d3c8b898-d97e-461f-85df-f33653e393f7-kube-api-access-v5hv8\") on node \"crc\" DevicePath \"\"" Jan 26 13:33:28 crc kubenswrapper[4844]: I0126 13:33:28.792509 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kw9mt" event={"ID":"d3c8b898-d97e-461f-85df-f33653e393f7","Type":"ContainerDied","Data":"e4556f0cc1345fb1913c314476d96fa35a926dba71582ad9b3d340f14e1ccc1b"} Jan 26 13:33:28 crc kubenswrapper[4844]: I0126 13:33:28.792547 4844 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e4556f0cc1345fb1913c314476d96fa35a926dba71582ad9b3d340f14e1ccc1b" Jan 26 13:33:28 crc kubenswrapper[4844]: I0126 13:33:28.792562 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kw9mt" Jan 26 13:33:28 crc kubenswrapper[4844]: I0126 13:33:28.872534 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-4fkj8"] Jan 26 13:33:28 crc kubenswrapper[4844]: E0126 13:33:28.873008 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3c8b898-d97e-461f-85df-f33653e393f7" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 26 13:33:28 crc kubenswrapper[4844]: I0126 13:33:28.873035 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3c8b898-d97e-461f-85df-f33653e393f7" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 26 13:33:28 crc kubenswrapper[4844]: E0126 13:33:28.873052 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9425964-da05-4f59-af70-c907a4256532" containerName="extract-content" Jan 26 13:33:28 crc kubenswrapper[4844]: I0126 13:33:28.873061 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9425964-da05-4f59-af70-c907a4256532" containerName="extract-content" Jan 26 13:33:28 crc kubenswrapper[4844]: E0126 13:33:28.873082 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9425964-da05-4f59-af70-c907a4256532" containerName="extract-utilities" Jan 26 13:33:28 crc kubenswrapper[4844]: I0126 13:33:28.873091 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9425964-da05-4f59-af70-c907a4256532" containerName="extract-utilities" Jan 26 13:33:28 crc kubenswrapper[4844]: E0126 13:33:28.873106 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9425964-da05-4f59-af70-c907a4256532" containerName="registry-server" Jan 26 13:33:28 crc kubenswrapper[4844]: I0126 13:33:28.873114 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9425964-da05-4f59-af70-c907a4256532" containerName="registry-server" Jan 26 13:33:28 crc kubenswrapper[4844]: I0126 13:33:28.873358 4844 
memory_manager.go:354] "RemoveStaleState removing state" podUID="b9425964-da05-4f59-af70-c907a4256532" containerName="registry-server" Jan 26 13:33:28 crc kubenswrapper[4844]: I0126 13:33:28.873388 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3c8b898-d97e-461f-85df-f33653e393f7" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 26 13:33:28 crc kubenswrapper[4844]: I0126 13:33:28.874574 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-4fkj8" Jan 26 13:33:28 crc kubenswrapper[4844]: I0126 13:33:28.877075 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 13:33:28 crc kubenswrapper[4844]: I0126 13:33:28.877299 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 13:33:28 crc kubenswrapper[4844]: I0126 13:33:28.877458 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r4j2z" Jan 26 13:33:28 crc kubenswrapper[4844]: I0126 13:33:28.878677 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 13:33:28 crc kubenswrapper[4844]: I0126 13:33:28.882523 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-4fkj8"] Jan 26 13:33:29 crc kubenswrapper[4844]: I0126 13:33:29.010563 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d45310a6-48b5-455c-960c-5aaaa0a5b469-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-4fkj8\" (UID: \"d45310a6-48b5-455c-960c-5aaaa0a5b469\") " pod="openstack/ssh-known-hosts-edpm-deployment-4fkj8" Jan 26 13:33:29 crc kubenswrapper[4844]: I0126 13:33:29.010642 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7m6pg\" (UniqueName: \"kubernetes.io/projected/d45310a6-48b5-455c-960c-5aaaa0a5b469-kube-api-access-7m6pg\") pod \"ssh-known-hosts-edpm-deployment-4fkj8\" (UID: \"d45310a6-48b5-455c-960c-5aaaa0a5b469\") " pod="openstack/ssh-known-hosts-edpm-deployment-4fkj8" Jan 26 13:33:29 crc kubenswrapper[4844]: I0126 13:33:29.011171 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/d45310a6-48b5-455c-960c-5aaaa0a5b469-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-4fkj8\" (UID: \"d45310a6-48b5-455c-960c-5aaaa0a5b469\") " pod="openstack/ssh-known-hosts-edpm-deployment-4fkj8" Jan 26 13:33:29 crc kubenswrapper[4844]: I0126 13:33:29.113555 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/d45310a6-48b5-455c-960c-5aaaa0a5b469-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-4fkj8\" (UID: \"d45310a6-48b5-455c-960c-5aaaa0a5b469\") " pod="openstack/ssh-known-hosts-edpm-deployment-4fkj8" Jan 26 13:33:29 crc kubenswrapper[4844]: I0126 13:33:29.113944 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d45310a6-48b5-455c-960c-5aaaa0a5b469-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-4fkj8\" (UID: \"d45310a6-48b5-455c-960c-5aaaa0a5b469\") " 
pod="openstack/ssh-known-hosts-edpm-deployment-4fkj8" Jan 26 13:33:29 crc kubenswrapper[4844]: I0126 13:33:29.114146 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7m6pg\" (UniqueName: \"kubernetes.io/projected/d45310a6-48b5-455c-960c-5aaaa0a5b469-kube-api-access-7m6pg\") pod \"ssh-known-hosts-edpm-deployment-4fkj8\" (UID: \"d45310a6-48b5-455c-960c-5aaaa0a5b469\") " pod="openstack/ssh-known-hosts-edpm-deployment-4fkj8" Jan 26 13:33:29 crc kubenswrapper[4844]: I0126 13:33:29.117923 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d45310a6-48b5-455c-960c-5aaaa0a5b469-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-4fkj8\" (UID: \"d45310a6-48b5-455c-960c-5aaaa0a5b469\") " pod="openstack/ssh-known-hosts-edpm-deployment-4fkj8" Jan 26 13:33:29 crc kubenswrapper[4844]: I0126 13:33:29.118376 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/d45310a6-48b5-455c-960c-5aaaa0a5b469-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-4fkj8\" (UID: \"d45310a6-48b5-455c-960c-5aaaa0a5b469\") " pod="openstack/ssh-known-hosts-edpm-deployment-4fkj8" Jan 26 13:33:29 crc kubenswrapper[4844]: I0126 13:33:29.134516 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7m6pg\" (UniqueName: \"kubernetes.io/projected/d45310a6-48b5-455c-960c-5aaaa0a5b469-kube-api-access-7m6pg\") pod \"ssh-known-hosts-edpm-deployment-4fkj8\" (UID: \"d45310a6-48b5-455c-960c-5aaaa0a5b469\") " pod="openstack/ssh-known-hosts-edpm-deployment-4fkj8" Jan 26 13:33:29 crc kubenswrapper[4844]: I0126 13:33:29.225032 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-4fkj8" Jan 26 13:33:29 crc kubenswrapper[4844]: I0126 13:33:29.760090 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-4fkj8"] Jan 26 13:33:29 crc kubenswrapper[4844]: W0126 13:33:29.771024 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd45310a6_48b5_455c_960c_5aaaa0a5b469.slice/crio-ae41ac74b739c17a8d6b71e585b58dd50522f398b9eceaf3d82c2b3527d4749b WatchSource:0}: Error finding container ae41ac74b739c17a8d6b71e585b58dd50522f398b9eceaf3d82c2b3527d4749b: Status 404 returned error can't find the container with id ae41ac74b739c17a8d6b71e585b58dd50522f398b9eceaf3d82c2b3527d4749b Jan 26 13:33:29 crc kubenswrapper[4844]: I0126 13:33:29.803765 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-4fkj8" event={"ID":"d45310a6-48b5-455c-960c-5aaaa0a5b469","Type":"ContainerStarted","Data":"ae41ac74b739c17a8d6b71e585b58dd50522f398b9eceaf3d82c2b3527d4749b"} Jan 26 13:33:30 crc kubenswrapper[4844]: I0126 13:33:30.816540 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-4fkj8" event={"ID":"d45310a6-48b5-455c-960c-5aaaa0a5b469","Type":"ContainerStarted","Data":"85ce255ee039360f9b2c4598f4915d3dd271479021eceff39420a766d30ad469"} Jan 26 13:33:30 crc kubenswrapper[4844]: I0126 13:33:30.871212 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-4fkj8" podStartSLOduration=2.418939318 podStartE2EDuration="2.871188115s" podCreationTimestamp="2026-01-26 13:33:28 +0000 UTC" firstStartedPulling="2026-01-26 13:33:29.774531793 +0000 UTC m=+2986.707899405" lastFinishedPulling="2026-01-26 13:33:30.22678058 +0000 UTC m=+2987.160148202" observedRunningTime="2026-01-26 13:33:30.863105028 +0000 UTC m=+2987.796472690" watchObservedRunningTime="2026-01-26 13:33:30.871188115 +0000 UTC m=+2987.804555727" Jan 26 13:33:34 crc kubenswrapper[4844]: I0126 13:33:34.313855 4844 scope.go:117] "RemoveContainer" containerID="1e5683696cedef0e24e380cd8f5a01d2be8ea7dd57619e21972834c26754b83e" Jan 26 13:33:34 crc kubenswrapper[4844]: E0126 13:33:34.314632 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 13:33:37 crc kubenswrapper[4844]: I0126 13:33:37.697078 4844 scope.go:117] "RemoveContainer" containerID="6440dbbdc677b69f20d36d2b627b3af8260145adec21e1f6152cfb0df5e424a1" Jan 26 13:33:37 crc kubenswrapper[4844]: I0126 13:33:37.735192 4844 scope.go:117] "RemoveContainer" containerID="efa84074c1bad4763b7b95cf2b26573828faf0da880df918d869b295de8f498d" Jan 26 13:33:37 crc kubenswrapper[4844]: I0126 13:33:37.877208 4844 generic.go:334] "Generic (PLEG): container finished" podID="d45310a6-48b5-455c-960c-5aaaa0a5b469" containerID="85ce255ee039360f9b2c4598f4915d3dd271479021eceff39420a766d30ad469" exitCode=0 Jan 26 13:33:37 crc kubenswrapper[4844]: I0126 13:33:37.877291 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-4fkj8" 
event={"ID":"d45310a6-48b5-455c-960c-5aaaa0a5b469","Type":"ContainerDied","Data":"85ce255ee039360f9b2c4598f4915d3dd271479021eceff39420a766d30ad469"} Jan 26 13:33:39 crc kubenswrapper[4844]: I0126 13:33:39.323822 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-4fkj8" Jan 26 13:33:39 crc kubenswrapper[4844]: I0126 13:33:39.435050 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d45310a6-48b5-455c-960c-5aaaa0a5b469-ssh-key-openstack-edpm-ipam\") pod \"d45310a6-48b5-455c-960c-5aaaa0a5b469\" (UID: \"d45310a6-48b5-455c-960c-5aaaa0a5b469\") " Jan 26 13:33:39 crc kubenswrapper[4844]: I0126 13:33:39.435153 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7m6pg\" (UniqueName: \"kubernetes.io/projected/d45310a6-48b5-455c-960c-5aaaa0a5b469-kube-api-access-7m6pg\") pod \"d45310a6-48b5-455c-960c-5aaaa0a5b469\" (UID: \"d45310a6-48b5-455c-960c-5aaaa0a5b469\") " Jan 26 13:33:39 crc kubenswrapper[4844]: I0126 13:33:39.435277 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/d45310a6-48b5-455c-960c-5aaaa0a5b469-inventory-0\") pod \"d45310a6-48b5-455c-960c-5aaaa0a5b469\" (UID: \"d45310a6-48b5-455c-960c-5aaaa0a5b469\") " Jan 26 13:33:39 crc kubenswrapper[4844]: I0126 13:33:39.441705 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d45310a6-48b5-455c-960c-5aaaa0a5b469-kube-api-access-7m6pg" (OuterVolumeSpecName: "kube-api-access-7m6pg") pod "d45310a6-48b5-455c-960c-5aaaa0a5b469" (UID: "d45310a6-48b5-455c-960c-5aaaa0a5b469"). InnerVolumeSpecName "kube-api-access-7m6pg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:33:39 crc kubenswrapper[4844]: I0126 13:33:39.467816 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d45310a6-48b5-455c-960c-5aaaa0a5b469-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "d45310a6-48b5-455c-960c-5aaaa0a5b469" (UID: "d45310a6-48b5-455c-960c-5aaaa0a5b469"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:33:39 crc kubenswrapper[4844]: I0126 13:33:39.493996 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d45310a6-48b5-455c-960c-5aaaa0a5b469-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "d45310a6-48b5-455c-960c-5aaaa0a5b469" (UID: "d45310a6-48b5-455c-960c-5aaaa0a5b469"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:33:39 crc kubenswrapper[4844]: I0126 13:33:39.537154 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7m6pg\" (UniqueName: \"kubernetes.io/projected/d45310a6-48b5-455c-960c-5aaaa0a5b469-kube-api-access-7m6pg\") on node \"crc\" DevicePath \"\"" Jan 26 13:33:39 crc kubenswrapper[4844]: I0126 13:33:39.537193 4844 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/d45310a6-48b5-455c-960c-5aaaa0a5b469-inventory-0\") on node \"crc\" DevicePath \"\"" Jan 26 13:33:39 crc kubenswrapper[4844]: I0126 13:33:39.537206 4844 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d45310a6-48b5-455c-960c-5aaaa0a5b469-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 13:33:39 crc kubenswrapper[4844]: I0126 13:33:39.907171 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-4fkj8" event={"ID":"d45310a6-48b5-455c-960c-5aaaa0a5b469","Type":"ContainerDied","Data":"ae41ac74b739c17a8d6b71e585b58dd50522f398b9eceaf3d82c2b3527d4749b"} Jan 26 13:33:39 crc kubenswrapper[4844]: I0126 13:33:39.907243 4844 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ae41ac74b739c17a8d6b71e585b58dd50522f398b9eceaf3d82c2b3527d4749b" Jan 26 13:33:39 crc kubenswrapper[4844]: I0126 13:33:39.907240 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-4fkj8" Jan 26 13:33:40 crc kubenswrapper[4844]: I0126 13:33:40.001833 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-8qp5q"] Jan 26 13:33:40 crc kubenswrapper[4844]: E0126 13:33:40.002511 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d45310a6-48b5-455c-960c-5aaaa0a5b469" containerName="ssh-known-hosts-edpm-deployment" Jan 26 13:33:40 crc kubenswrapper[4844]: I0126 13:33:40.002542 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="d45310a6-48b5-455c-960c-5aaaa0a5b469" containerName="ssh-known-hosts-edpm-deployment" Jan 26 13:33:40 crc kubenswrapper[4844]: I0126 13:33:40.002948 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="d45310a6-48b5-455c-960c-5aaaa0a5b469" containerName="ssh-known-hosts-edpm-deployment" Jan 26 13:33:40 crc kubenswrapper[4844]: I0126 13:33:40.004193 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8qp5q" Jan 26 13:33:40 crc kubenswrapper[4844]: I0126 13:33:40.006936 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 13:33:40 crc kubenswrapper[4844]: I0126 13:33:40.007874 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 13:33:40 crc kubenswrapper[4844]: I0126 13:33:40.009094 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 13:33:40 crc kubenswrapper[4844]: I0126 13:33:40.009975 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r4j2z" Jan 26 13:33:40 crc kubenswrapper[4844]: I0126 13:33:40.017381 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-8qp5q"] Jan 26 13:33:40 crc kubenswrapper[4844]: I0126 13:33:40.151631 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3ff365e7-065a-41e7-a3cc-642e66989dc9-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8qp5q\" (UID: \"3ff365e7-065a-41e7-a3cc-642e66989dc9\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8qp5q" Jan 26 13:33:40 crc kubenswrapper[4844]: I0126 13:33:40.151891 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3ff365e7-065a-41e7-a3cc-642e66989dc9-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8qp5q\" (UID: \"3ff365e7-065a-41e7-a3cc-642e66989dc9\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8qp5q" Jan 26 13:33:40 crc kubenswrapper[4844]: I0126 13:33:40.151947 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8c92\" (UniqueName: \"kubernetes.io/projected/3ff365e7-065a-41e7-a3cc-642e66989dc9-kube-api-access-n8c92\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8qp5q\" (UID: \"3ff365e7-065a-41e7-a3cc-642e66989dc9\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8qp5q" Jan 26 13:33:40 crc kubenswrapper[4844]: I0126 13:33:40.254721 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3ff365e7-065a-41e7-a3cc-642e66989dc9-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8qp5q\" (UID: \"3ff365e7-065a-41e7-a3cc-642e66989dc9\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8qp5q" Jan 26 13:33:40 crc kubenswrapper[4844]: I0126 13:33:40.254796 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8c92\" (UniqueName: \"kubernetes.io/projected/3ff365e7-065a-41e7-a3cc-642e66989dc9-kube-api-access-n8c92\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8qp5q\" (UID: \"3ff365e7-065a-41e7-a3cc-642e66989dc9\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8qp5q" Jan 26 13:33:40 crc kubenswrapper[4844]: I0126 13:33:40.255020 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3ff365e7-065a-41e7-a3cc-642e66989dc9-ssh-key-openstack-edpm-ipam\") pod 
\"run-os-edpm-deployment-openstack-edpm-ipam-8qp5q\" (UID: \"3ff365e7-065a-41e7-a3cc-642e66989dc9\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8qp5q" Jan 26 13:33:40 crc kubenswrapper[4844]: I0126 13:33:40.260467 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3ff365e7-065a-41e7-a3cc-642e66989dc9-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8qp5q\" (UID: \"3ff365e7-065a-41e7-a3cc-642e66989dc9\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8qp5q" Jan 26 13:33:40 crc kubenswrapper[4844]: I0126 13:33:40.260579 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3ff365e7-065a-41e7-a3cc-642e66989dc9-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8qp5q\" (UID: \"3ff365e7-065a-41e7-a3cc-642e66989dc9\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8qp5q" Jan 26 13:33:40 crc kubenswrapper[4844]: I0126 13:33:40.288116 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8c92\" (UniqueName: \"kubernetes.io/projected/3ff365e7-065a-41e7-a3cc-642e66989dc9-kube-api-access-n8c92\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8qp5q\" (UID: \"3ff365e7-065a-41e7-a3cc-642e66989dc9\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8qp5q" Jan 26 13:33:40 crc kubenswrapper[4844]: I0126 13:33:40.330172 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8qp5q" Jan 26 13:33:40 crc kubenswrapper[4844]: I0126 13:33:40.669103 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-8qp5q"] Jan 26 13:33:40 crc kubenswrapper[4844]: I0126 13:33:40.917770 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8qp5q" event={"ID":"3ff365e7-065a-41e7-a3cc-642e66989dc9","Type":"ContainerStarted","Data":"36bee8bae68789528a1b1aa5709a8ff16392567a43156434f7602daf816a99d3"} Jan 26 13:33:41 crc kubenswrapper[4844]: I0126 13:33:41.927305 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8qp5q" event={"ID":"3ff365e7-065a-41e7-a3cc-642e66989dc9","Type":"ContainerStarted","Data":"ee815a9afd118f71eb3578245abd510e65118b655ee242be4334d337bfcb271f"} Jan 26 13:33:41 crc kubenswrapper[4844]: I0126 13:33:41.959234 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8qp5q" podStartSLOduration=2.54196767 podStartE2EDuration="2.959218496s" podCreationTimestamp="2026-01-26 13:33:39 +0000 UTC" firstStartedPulling="2026-01-26 13:33:40.665260292 +0000 UTC m=+2997.598627924" lastFinishedPulling="2026-01-26 13:33:41.082511118 +0000 UTC m=+2998.015878750" observedRunningTime="2026-01-26 13:33:41.948755172 +0000 UTC m=+2998.882122784" watchObservedRunningTime="2026-01-26 13:33:41.959218496 +0000 UTC m=+2998.892586108" Jan 26 13:33:46 crc kubenswrapper[4844]: I0126 13:33:46.313816 4844 scope.go:117] "RemoveContainer" containerID="1e5683696cedef0e24e380cd8f5a01d2be8ea7dd57619e21972834c26754b83e" Jan 26 13:33:46 crc kubenswrapper[4844]: E0126 13:33:46.315019 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 13:33:52 crc kubenswrapper[4844]: I0126 13:33:52.030962 4844 generic.go:334] "Generic (PLEG): container finished" podID="3ff365e7-065a-41e7-a3cc-642e66989dc9" containerID="ee815a9afd118f71eb3578245abd510e65118b655ee242be4334d337bfcb271f" exitCode=0 Jan 26 13:33:52 crc kubenswrapper[4844]: I0126 13:33:52.031520 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8qp5q" event={"ID":"3ff365e7-065a-41e7-a3cc-642e66989dc9","Type":"ContainerDied","Data":"ee815a9afd118f71eb3578245abd510e65118b655ee242be4334d337bfcb271f"} Jan 26 13:33:53 crc kubenswrapper[4844]: I0126 13:33:53.506208 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8qp5q" Jan 26 13:33:53 crc kubenswrapper[4844]: I0126 13:33:53.652642 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n8c92\" (UniqueName: \"kubernetes.io/projected/3ff365e7-065a-41e7-a3cc-642e66989dc9-kube-api-access-n8c92\") pod \"3ff365e7-065a-41e7-a3cc-642e66989dc9\" (UID: \"3ff365e7-065a-41e7-a3cc-642e66989dc9\") " Jan 26 13:33:53 crc kubenswrapper[4844]: I0126 13:33:53.652803 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3ff365e7-065a-41e7-a3cc-642e66989dc9-ssh-key-openstack-edpm-ipam\") pod \"3ff365e7-065a-41e7-a3cc-642e66989dc9\" (UID: \"3ff365e7-065a-41e7-a3cc-642e66989dc9\") " Jan 26 13:33:53 crc kubenswrapper[4844]: I0126 13:33:53.653016 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3ff365e7-065a-41e7-a3cc-642e66989dc9-inventory\") pod \"3ff365e7-065a-41e7-a3cc-642e66989dc9\" (UID: \"3ff365e7-065a-41e7-a3cc-642e66989dc9\") " Jan 26 13:33:53 crc kubenswrapper[4844]: I0126 13:33:53.662890 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ff365e7-065a-41e7-a3cc-642e66989dc9-kube-api-access-n8c92" (OuterVolumeSpecName: "kube-api-access-n8c92") pod "3ff365e7-065a-41e7-a3cc-642e66989dc9" (UID: "3ff365e7-065a-41e7-a3cc-642e66989dc9"). InnerVolumeSpecName "kube-api-access-n8c92". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:33:53 crc kubenswrapper[4844]: I0126 13:33:53.694172 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ff365e7-065a-41e7-a3cc-642e66989dc9-inventory" (OuterVolumeSpecName: "inventory") pod "3ff365e7-065a-41e7-a3cc-642e66989dc9" (UID: "3ff365e7-065a-41e7-a3cc-642e66989dc9"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:33:53 crc kubenswrapper[4844]: I0126 13:33:53.694658 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ff365e7-065a-41e7-a3cc-642e66989dc9-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "3ff365e7-065a-41e7-a3cc-642e66989dc9" (UID: "3ff365e7-065a-41e7-a3cc-642e66989dc9"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:33:53 crc kubenswrapper[4844]: I0126 13:33:53.757266 4844 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3ff365e7-065a-41e7-a3cc-642e66989dc9-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 13:33:53 crc kubenswrapper[4844]: I0126 13:33:53.757446 4844 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3ff365e7-065a-41e7-a3cc-642e66989dc9-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 13:33:53 crc kubenswrapper[4844]: I0126 13:33:53.757560 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n8c92\" (UniqueName: \"kubernetes.io/projected/3ff365e7-065a-41e7-a3cc-642e66989dc9-kube-api-access-n8c92\") on node \"crc\" DevicePath \"\"" Jan 26 13:33:54 crc kubenswrapper[4844]: I0126 13:33:54.060274 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8qp5q" event={"ID":"3ff365e7-065a-41e7-a3cc-642e66989dc9","Type":"ContainerDied","Data":"36bee8bae68789528a1b1aa5709a8ff16392567a43156434f7602daf816a99d3"} Jan 26 13:33:54 crc kubenswrapper[4844]: I0126 13:33:54.060345 4844 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="36bee8bae68789528a1b1aa5709a8ff16392567a43156434f7602daf816a99d3" Jan 26 13:33:54 crc kubenswrapper[4844]: I0126 13:33:54.060446 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8qp5q" Jan 26 13:33:54 crc kubenswrapper[4844]: I0126 13:33:54.197457 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-k8c2z"] Jan 26 13:33:54 crc kubenswrapper[4844]: E0126 13:33:54.198418 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ff365e7-065a-41e7-a3cc-642e66989dc9" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 26 13:33:54 crc kubenswrapper[4844]: I0126 13:33:54.198451 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ff365e7-065a-41e7-a3cc-642e66989dc9" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 26 13:33:54 crc kubenswrapper[4844]: I0126 13:33:54.213825 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ff365e7-065a-41e7-a3cc-642e66989dc9" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 26 13:33:54 crc kubenswrapper[4844]: I0126 13:33:54.215722 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-k8c2z" Jan 26 13:33:54 crc kubenswrapper[4844]: I0126 13:33:54.218979 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-k8c2z"] Jan 26 13:33:54 crc kubenswrapper[4844]: I0126 13:33:54.219061 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 13:33:54 crc kubenswrapper[4844]: I0126 13:33:54.221079 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 13:33:54 crc kubenswrapper[4844]: I0126 13:33:54.221102 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r4j2z" Jan 26 13:33:54 crc kubenswrapper[4844]: I0126 13:33:54.221279 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 13:33:54 crc kubenswrapper[4844]: I0126 13:33:54.371109 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97nm2\" (UniqueName: \"kubernetes.io/projected/342e7682-6393-4c70-9c22-5108b5473dc0-kube-api-access-97nm2\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-k8c2z\" (UID: \"342e7682-6393-4c70-9c22-5108b5473dc0\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-k8c2z" Jan 26 13:33:54 crc kubenswrapper[4844]: I0126 13:33:54.371758 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/342e7682-6393-4c70-9c22-5108b5473dc0-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-k8c2z\" (UID: \"342e7682-6393-4c70-9c22-5108b5473dc0\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-k8c2z" Jan 26 13:33:54 crc kubenswrapper[4844]: I0126 13:33:54.371800 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/342e7682-6393-4c70-9c22-5108b5473dc0-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-k8c2z\" (UID: \"342e7682-6393-4c70-9c22-5108b5473dc0\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-k8c2z" Jan 26 13:33:54 crc kubenswrapper[4844]: I0126 13:33:54.475248 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/342e7682-6393-4c70-9c22-5108b5473dc0-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-k8c2z\" (UID: \"342e7682-6393-4c70-9c22-5108b5473dc0\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-k8c2z" Jan 26 13:33:54 crc kubenswrapper[4844]: I0126 13:33:54.475326 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/342e7682-6393-4c70-9c22-5108b5473dc0-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-k8c2z\" (UID: \"342e7682-6393-4c70-9c22-5108b5473dc0\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-k8c2z" Jan 26 13:33:54 crc kubenswrapper[4844]: I0126 13:33:54.475498 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97nm2\" (UniqueName: \"kubernetes.io/projected/342e7682-6393-4c70-9c22-5108b5473dc0-kube-api-access-97nm2\") pod 
\"reboot-os-edpm-deployment-openstack-edpm-ipam-k8c2z\" (UID: \"342e7682-6393-4c70-9c22-5108b5473dc0\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-k8c2z" Jan 26 13:33:54 crc kubenswrapper[4844]: I0126 13:33:54.480902 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/342e7682-6393-4c70-9c22-5108b5473dc0-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-k8c2z\" (UID: \"342e7682-6393-4c70-9c22-5108b5473dc0\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-k8c2z" Jan 26 13:33:54 crc kubenswrapper[4844]: I0126 13:33:54.482010 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/342e7682-6393-4c70-9c22-5108b5473dc0-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-k8c2z\" (UID: \"342e7682-6393-4c70-9c22-5108b5473dc0\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-k8c2z" Jan 26 13:33:54 crc kubenswrapper[4844]: I0126 13:33:54.501443 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97nm2\" (UniqueName: \"kubernetes.io/projected/342e7682-6393-4c70-9c22-5108b5473dc0-kube-api-access-97nm2\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-k8c2z\" (UID: \"342e7682-6393-4c70-9c22-5108b5473dc0\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-k8c2z" Jan 26 13:33:54 crc kubenswrapper[4844]: I0126 13:33:54.541094 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-k8c2z" Jan 26 13:33:55 crc kubenswrapper[4844]: I0126 13:33:55.152310 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-k8c2z"] Jan 26 13:33:55 crc kubenswrapper[4844]: W0126 13:33:55.160366 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod342e7682_6393_4c70_9c22_5108b5473dc0.slice/crio-106f568bb38c41cc0f546b690d68bdcaba61888ced207d711aae692bb0a4de5b WatchSource:0}: Error finding container 106f568bb38c41cc0f546b690d68bdcaba61888ced207d711aae692bb0a4de5b: Status 404 returned error can't find the container with id 106f568bb38c41cc0f546b690d68bdcaba61888ced207d711aae692bb0a4de5b Jan 26 13:33:56 crc kubenswrapper[4844]: I0126 13:33:56.087895 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-k8c2z" event={"ID":"342e7682-6393-4c70-9c22-5108b5473dc0","Type":"ContainerStarted","Data":"6b579a8c29e8eb3e9e91a699dcccac86eaf8c64ee01acfb5ab5621ba0b87a49c"} Jan 26 13:33:56 crc kubenswrapper[4844]: I0126 13:33:56.088313 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-k8c2z" event={"ID":"342e7682-6393-4c70-9c22-5108b5473dc0","Type":"ContainerStarted","Data":"106f568bb38c41cc0f546b690d68bdcaba61888ced207d711aae692bb0a4de5b"} Jan 26 13:33:56 crc kubenswrapper[4844]: I0126 13:33:56.116634 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-k8c2z" podStartSLOduration=1.580436135 podStartE2EDuration="2.116612791s" podCreationTimestamp="2026-01-26 13:33:54 +0000 UTC" firstStartedPulling="2026-01-26 13:33:55.164740358 +0000 UTC m=+3012.098107980" lastFinishedPulling="2026-01-26 13:33:55.700917024 +0000 UTC 
m=+3012.634284636" observedRunningTime="2026-01-26 13:33:56.109207041 +0000 UTC m=+3013.042574763" watchObservedRunningTime="2026-01-26 13:33:56.116612791 +0000 UTC m=+3013.049980413" Jan 26 13:33:57 crc kubenswrapper[4844]: I0126 13:33:57.313561 4844 scope.go:117] "RemoveContainer" containerID="1e5683696cedef0e24e380cd8f5a01d2be8ea7dd57619e21972834c26754b83e" Jan 26 13:33:57 crc kubenswrapper[4844]: E0126 13:33:57.314096 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 13:34:06 crc kubenswrapper[4844]: I0126 13:34:06.190536 4844 generic.go:334] "Generic (PLEG): container finished" podID="342e7682-6393-4c70-9c22-5108b5473dc0" containerID="6b579a8c29e8eb3e9e91a699dcccac86eaf8c64ee01acfb5ab5621ba0b87a49c" exitCode=0 Jan 26 13:34:06 crc kubenswrapper[4844]: I0126 13:34:06.190642 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-k8c2z" event={"ID":"342e7682-6393-4c70-9c22-5108b5473dc0","Type":"ContainerDied","Data":"6b579a8c29e8eb3e9e91a699dcccac86eaf8c64ee01acfb5ab5621ba0b87a49c"} Jan 26 13:34:07 crc kubenswrapper[4844]: I0126 13:34:07.707207 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-k8c2z" Jan 26 13:34:07 crc kubenswrapper[4844]: I0126 13:34:07.851238 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/342e7682-6393-4c70-9c22-5108b5473dc0-inventory\") pod \"342e7682-6393-4c70-9c22-5108b5473dc0\" (UID: \"342e7682-6393-4c70-9c22-5108b5473dc0\") " Jan 26 13:34:07 crc kubenswrapper[4844]: I0126 13:34:07.851400 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-97nm2\" (UniqueName: \"kubernetes.io/projected/342e7682-6393-4c70-9c22-5108b5473dc0-kube-api-access-97nm2\") pod \"342e7682-6393-4c70-9c22-5108b5473dc0\" (UID: \"342e7682-6393-4c70-9c22-5108b5473dc0\") " Jan 26 13:34:07 crc kubenswrapper[4844]: I0126 13:34:07.851508 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/342e7682-6393-4c70-9c22-5108b5473dc0-ssh-key-openstack-edpm-ipam\") pod \"342e7682-6393-4c70-9c22-5108b5473dc0\" (UID: \"342e7682-6393-4c70-9c22-5108b5473dc0\") " Jan 26 13:34:07 crc kubenswrapper[4844]: I0126 13:34:07.857259 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/342e7682-6393-4c70-9c22-5108b5473dc0-kube-api-access-97nm2" (OuterVolumeSpecName: "kube-api-access-97nm2") pod "342e7682-6393-4c70-9c22-5108b5473dc0" (UID: "342e7682-6393-4c70-9c22-5108b5473dc0"). InnerVolumeSpecName "kube-api-access-97nm2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:34:07 crc kubenswrapper[4844]: I0126 13:34:07.878557 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/342e7682-6393-4c70-9c22-5108b5473dc0-inventory" (OuterVolumeSpecName: "inventory") pod "342e7682-6393-4c70-9c22-5108b5473dc0" (UID: "342e7682-6393-4c70-9c22-5108b5473dc0"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:34:07 crc kubenswrapper[4844]: I0126 13:34:07.880518 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/342e7682-6393-4c70-9c22-5108b5473dc0-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "342e7682-6393-4c70-9c22-5108b5473dc0" (UID: "342e7682-6393-4c70-9c22-5108b5473dc0"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:34:07 crc kubenswrapper[4844]: I0126 13:34:07.953814 4844 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/342e7682-6393-4c70-9c22-5108b5473dc0-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 13:34:07 crc kubenswrapper[4844]: I0126 13:34:07.953899 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-97nm2\" (UniqueName: \"kubernetes.io/projected/342e7682-6393-4c70-9c22-5108b5473dc0-kube-api-access-97nm2\") on node \"crc\" DevicePath \"\"" Jan 26 13:34:07 crc kubenswrapper[4844]: I0126 13:34:07.953911 4844 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/342e7682-6393-4c70-9c22-5108b5473dc0-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 13:34:08 crc kubenswrapper[4844]: I0126 13:34:08.217361 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-k8c2z" event={"ID":"342e7682-6393-4c70-9c22-5108b5473dc0","Type":"ContainerDied","Data":"106f568bb38c41cc0f546b690d68bdcaba61888ced207d711aae692bb0a4de5b"} Jan 26 13:34:08 crc kubenswrapper[4844]: I0126 13:34:08.218762 4844 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="106f568bb38c41cc0f546b690d68bdcaba61888ced207d711aae692bb0a4de5b" Jan 26 13:34:08 crc kubenswrapper[4844]: I0126 13:34:08.217460 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-k8c2z" Jan 26 13:34:08 crc kubenswrapper[4844]: I0126 13:34:08.307339 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9d8r4"] Jan 26 13:34:08 crc kubenswrapper[4844]: E0126 13:34:08.308669 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="342e7682-6393-4c70-9c22-5108b5473dc0" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 26 13:34:08 crc kubenswrapper[4844]: I0126 13:34:08.308705 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="342e7682-6393-4c70-9c22-5108b5473dc0" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 26 13:34:08 crc kubenswrapper[4844]: I0126 13:34:08.309015 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="342e7682-6393-4c70-9c22-5108b5473dc0" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 26 13:34:08 crc kubenswrapper[4844]: I0126 13:34:08.310069 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9d8r4" Jan 26 13:34:08 crc kubenswrapper[4844]: I0126 13:34:08.316433 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Jan 26 13:34:08 crc kubenswrapper[4844]: I0126 13:34:08.316545 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 13:34:08 crc kubenswrapper[4844]: I0126 13:34:08.316658 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 13:34:08 crc kubenswrapper[4844]: I0126 13:34:08.316778 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r4j2z" Jan 26 13:34:08 crc kubenswrapper[4844]: I0126 13:34:08.316888 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 13:34:08 crc kubenswrapper[4844]: I0126 13:34:08.317053 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Jan 26 13:34:08 crc kubenswrapper[4844]: I0126 13:34:08.317171 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Jan 26 13:34:08 crc kubenswrapper[4844]: I0126 13:34:08.317371 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Jan 26 13:34:08 crc kubenswrapper[4844]: I0126 13:34:08.330264 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9d8r4"] Jan 26 13:34:08 crc kubenswrapper[4844]: I0126 13:34:08.464761 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e7abb699-d024-4829-8882-7272c3313c67-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9d8r4\" (UID: \"e7abb699-d024-4829-8882-7272c3313c67\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9d8r4" Jan 26 13:34:08 crc kubenswrapper[4844]: I0126 13:34:08.465123 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7abb699-d024-4829-8882-7272c3313c67-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9d8r4\" (UID: \"e7abb699-d024-4829-8882-7272c3313c67\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9d8r4" Jan 26 13:34:08 crc kubenswrapper[4844]: I0126 13:34:08.465259 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndnqc\" (UniqueName: \"kubernetes.io/projected/e7abb699-d024-4829-8882-7272c3313c67-kube-api-access-ndnqc\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9d8r4\" (UID: \"e7abb699-d024-4829-8882-7272c3313c67\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9d8r4" Jan 26 13:34:08 crc kubenswrapper[4844]: I0126 13:34:08.465458 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7abb699-d024-4829-8882-7272c3313c67-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9d8r4\" (UID: 
\"e7abb699-d024-4829-8882-7272c3313c67\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9d8r4" Jan 26 13:34:08 crc kubenswrapper[4844]: I0126 13:34:08.465679 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e7abb699-d024-4829-8882-7272c3313c67-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9d8r4\" (UID: \"e7abb699-d024-4829-8882-7272c3313c67\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9d8r4" Jan 26 13:34:08 crc kubenswrapper[4844]: I0126 13:34:08.465877 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7abb699-d024-4829-8882-7272c3313c67-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9d8r4\" (UID: \"e7abb699-d024-4829-8882-7272c3313c67\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9d8r4" Jan 26 13:34:08 crc kubenswrapper[4844]: I0126 13:34:08.466040 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7abb699-d024-4829-8882-7272c3313c67-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9d8r4\" (UID: \"e7abb699-d024-4829-8882-7272c3313c67\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9d8r4" Jan 26 13:34:08 crc kubenswrapper[4844]: I0126 13:34:08.466210 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e7abb699-d024-4829-8882-7272c3313c67-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9d8r4\" (UID: \"e7abb699-d024-4829-8882-7272c3313c67\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9d8r4" Jan 26 13:34:08 crc kubenswrapper[4844]: I0126 13:34:08.466388 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7abb699-d024-4829-8882-7272c3313c67-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9d8r4\" (UID: \"e7abb699-d024-4829-8882-7272c3313c67\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9d8r4" Jan 26 13:34:08 crc kubenswrapper[4844]: I0126 13:34:08.466582 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7abb699-d024-4829-8882-7272c3313c67-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9d8r4\" (UID: \"e7abb699-d024-4829-8882-7272c3313c67\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9d8r4" Jan 26 13:34:08 crc kubenswrapper[4844]: I0126 13:34:08.466772 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e7abb699-d024-4829-8882-7272c3313c67-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9d8r4\" (UID: \"e7abb699-d024-4829-8882-7272c3313c67\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9d8r4" Jan 26 13:34:08 crc kubenswrapper[4844]: I0126 13:34:08.466946 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e7abb699-d024-4829-8882-7272c3313c67-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9d8r4\" (UID: \"e7abb699-d024-4829-8882-7272c3313c67\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9d8r4" Jan 26 13:34:08 crc kubenswrapper[4844]: I0126 13:34:08.467224 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7abb699-d024-4829-8882-7272c3313c67-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9d8r4\" (UID: \"e7abb699-d024-4829-8882-7272c3313c67\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9d8r4" Jan 26 13:34:08 crc kubenswrapper[4844]: I0126 13:34:08.467374 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e7abb699-d024-4829-8882-7272c3313c67-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9d8r4\" (UID: \"e7abb699-d024-4829-8882-7272c3313c67\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9d8r4" Jan 26 13:34:08 crc kubenswrapper[4844]: I0126 13:34:08.570071 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e7abb699-d024-4829-8882-7272c3313c67-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9d8r4\" (UID: \"e7abb699-d024-4829-8882-7272c3313c67\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9d8r4" Jan 26 13:34:08 crc kubenswrapper[4844]: I0126 13:34:08.570722 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7abb699-d024-4829-8882-7272c3313c67-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9d8r4\" (UID: \"e7abb699-d024-4829-8882-7272c3313c67\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9d8r4" Jan 26 13:34:08 crc kubenswrapper[4844]: I0126 13:34:08.571051 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7abb699-d024-4829-8882-7272c3313c67-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9d8r4\" (UID: \"e7abb699-d024-4829-8882-7272c3313c67\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9d8r4" Jan 26 13:34:08 crc kubenswrapper[4844]: I0126 13:34:08.571351 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e7abb699-d024-4829-8882-7272c3313c67-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9d8r4\" (UID: \"e7abb699-d024-4829-8882-7272c3313c67\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9d8r4" Jan 26 13:34:08 crc kubenswrapper[4844]: I0126 
13:34:08.571695 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7abb699-d024-4829-8882-7272c3313c67-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9d8r4\" (UID: \"e7abb699-d024-4829-8882-7272c3313c67\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9d8r4" Jan 26 13:34:08 crc kubenswrapper[4844]: I0126 13:34:08.572058 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7abb699-d024-4829-8882-7272c3313c67-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9d8r4\" (UID: \"e7abb699-d024-4829-8882-7272c3313c67\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9d8r4" Jan 26 13:34:08 crc kubenswrapper[4844]: I0126 13:34:08.572874 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e7abb699-d024-4829-8882-7272c3313c67-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9d8r4\" (UID: \"e7abb699-d024-4829-8882-7272c3313c67\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9d8r4" Jan 26 13:34:08 crc kubenswrapper[4844]: I0126 13:34:08.573169 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e7abb699-d024-4829-8882-7272c3313c67-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9d8r4\" (UID: \"e7abb699-d024-4829-8882-7272c3313c67\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9d8r4" Jan 26 13:34:08 crc kubenswrapper[4844]: I0126 13:34:08.574117 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7abb699-d024-4829-8882-7272c3313c67-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9d8r4\" (UID: \"e7abb699-d024-4829-8882-7272c3313c67\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9d8r4" Jan 26 13:34:08 crc kubenswrapper[4844]: I0126 13:34:08.574469 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e7abb699-d024-4829-8882-7272c3313c67-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9d8r4\" (UID: \"e7abb699-d024-4829-8882-7272c3313c67\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9d8r4" Jan 26 13:34:08 crc kubenswrapper[4844]: I0126 13:34:08.574913 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e7abb699-d024-4829-8882-7272c3313c67-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9d8r4\" (UID: \"e7abb699-d024-4829-8882-7272c3313c67\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9d8r4" Jan 26 13:34:08 crc kubenswrapper[4844]: I0126 13:34:08.575232 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7abb699-d024-4829-8882-7272c3313c67-nova-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-9d8r4\" (UID: \"e7abb699-d024-4829-8882-7272c3313c67\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9d8r4" Jan 26 13:34:08 crc kubenswrapper[4844]: I0126 13:34:08.575499 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndnqc\" (UniqueName: \"kubernetes.io/projected/e7abb699-d024-4829-8882-7272c3313c67-kube-api-access-ndnqc\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9d8r4\" (UID: \"e7abb699-d024-4829-8882-7272c3313c67\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9d8r4" Jan 26 13:34:08 crc kubenswrapper[4844]: I0126 13:34:08.576049 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7abb699-d024-4829-8882-7272c3313c67-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9d8r4\" (UID: \"e7abb699-d024-4829-8882-7272c3313c67\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9d8r4" Jan 26 13:34:08 crc kubenswrapper[4844]: I0126 13:34:08.578535 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7abb699-d024-4829-8882-7272c3313c67-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9d8r4\" (UID: \"e7abb699-d024-4829-8882-7272c3313c67\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9d8r4" Jan 26 13:34:08 crc kubenswrapper[4844]: I0126 13:34:08.578702 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e7abb699-d024-4829-8882-7272c3313c67-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9d8r4\" (UID: \"e7abb699-d024-4829-8882-7272c3313c67\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9d8r4" Jan 26 13:34:08 crc kubenswrapper[4844]: I0126 13:34:08.579844 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7abb699-d024-4829-8882-7272c3313c67-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9d8r4\" (UID: \"e7abb699-d024-4829-8882-7272c3313c67\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9d8r4" Jan 26 13:34:08 crc kubenswrapper[4844]: I0126 13:34:08.581715 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e7abb699-d024-4829-8882-7272c3313c67-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9d8r4\" (UID: \"e7abb699-d024-4829-8882-7272c3313c67\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9d8r4" Jan 26 13:34:08 crc kubenswrapper[4844]: I0126 13:34:08.581752 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7abb699-d024-4829-8882-7272c3313c67-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9d8r4\" (UID: \"e7abb699-d024-4829-8882-7272c3313c67\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9d8r4" Jan 26 13:34:08 crc kubenswrapper[4844]: I0126 13:34:08.582949 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e7abb699-d024-4829-8882-7272c3313c67-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9d8r4\" (UID: \"e7abb699-d024-4829-8882-7272c3313c67\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9d8r4" Jan 26 13:34:08 crc kubenswrapper[4844]: I0126 13:34:08.583501 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e7abb699-d024-4829-8882-7272c3313c67-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9d8r4\" (UID: \"e7abb699-d024-4829-8882-7272c3313c67\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9d8r4" Jan 26 13:34:08 crc kubenswrapper[4844]: I0126 13:34:08.584298 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7abb699-d024-4829-8882-7272c3313c67-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9d8r4\" (UID: \"e7abb699-d024-4829-8882-7272c3313c67\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9d8r4" Jan 26 13:34:08 crc kubenswrapper[4844]: I0126 13:34:08.584759 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e7abb699-d024-4829-8882-7272c3313c67-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9d8r4\" (UID: \"e7abb699-d024-4829-8882-7272c3313c67\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9d8r4" Jan 26 13:34:08 crc kubenswrapper[4844]: I0126 13:34:08.584855 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7abb699-d024-4829-8882-7272c3313c67-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9d8r4\" (UID: \"e7abb699-d024-4829-8882-7272c3313c67\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9d8r4" Jan 26 13:34:08 crc kubenswrapper[4844]: I0126 13:34:08.584437 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7abb699-d024-4829-8882-7272c3313c67-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9d8r4\" (UID: \"e7abb699-d024-4829-8882-7272c3313c67\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9d8r4" Jan 26 13:34:08 crc kubenswrapper[4844]: I0126 13:34:08.585358 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e7abb699-d024-4829-8882-7272c3313c67-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9d8r4\" (UID: \"e7abb699-d024-4829-8882-7272c3313c67\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9d8r4" Jan 26 13:34:08 crc kubenswrapper[4844]: I0126 13:34:08.585870 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7abb699-d024-4829-8882-7272c3313c67-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9d8r4\" (UID: 
\"e7abb699-d024-4829-8882-7272c3313c67\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9d8r4" Jan 26 13:34:08 crc kubenswrapper[4844]: I0126 13:34:08.604560 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndnqc\" (UniqueName: \"kubernetes.io/projected/e7abb699-d024-4829-8882-7272c3313c67-kube-api-access-ndnqc\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9d8r4\" (UID: \"e7abb699-d024-4829-8882-7272c3313c67\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9d8r4" Jan 26 13:34:08 crc kubenswrapper[4844]: I0126 13:34:08.679124 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9d8r4" Jan 26 13:34:09 crc kubenswrapper[4844]: I0126 13:34:09.216148 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9d8r4"] Jan 26 13:34:09 crc kubenswrapper[4844]: W0126 13:34:09.217065 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode7abb699_d024_4829_8882_7272c3313c67.slice/crio-d67e22d9bf8e8ddd7ebd6227edff07f60625dc63b707f738e494c6c9b913eefd WatchSource:0}: Error finding container d67e22d9bf8e8ddd7ebd6227edff07f60625dc63b707f738e494c6c9b913eefd: Status 404 returned error can't find the container with id d67e22d9bf8e8ddd7ebd6227edff07f60625dc63b707f738e494c6c9b913eefd Jan 26 13:34:09 crc kubenswrapper[4844]: I0126 13:34:09.229787 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9d8r4" event={"ID":"e7abb699-d024-4829-8882-7272c3313c67","Type":"ContainerStarted","Data":"d67e22d9bf8e8ddd7ebd6227edff07f60625dc63b707f738e494c6c9b913eefd"} Jan 26 13:34:10 crc kubenswrapper[4844]: I0126 13:34:10.241351 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9d8r4" event={"ID":"e7abb699-d024-4829-8882-7272c3313c67","Type":"ContainerStarted","Data":"91515be8e6e59afef54a1fd7ea1347056a34032f93ba2a0b771811dbde119656"} Jan 26 13:34:11 crc kubenswrapper[4844]: I0126 13:34:11.313546 4844 scope.go:117] "RemoveContainer" containerID="1e5683696cedef0e24e380cd8f5a01d2be8ea7dd57619e21972834c26754b83e" Jan 26 13:34:11 crc kubenswrapper[4844]: E0126 13:34:11.314231 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 13:34:24 crc kubenswrapper[4844]: I0126 13:34:24.314111 4844 scope.go:117] "RemoveContainer" containerID="1e5683696cedef0e24e380cd8f5a01d2be8ea7dd57619e21972834c26754b83e" Jan 26 13:34:24 crc kubenswrapper[4844]: E0126 13:34:24.315422 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 13:34:36 
crc kubenswrapper[4844]: I0126 13:34:36.313935 4844 scope.go:117] "RemoveContainer" containerID="1e5683696cedef0e24e380cd8f5a01d2be8ea7dd57619e21972834c26754b83e" Jan 26 13:34:36 crc kubenswrapper[4844]: E0126 13:34:36.315877 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 13:34:37 crc kubenswrapper[4844]: I0126 13:34:37.837473 4844 scope.go:117] "RemoveContainer" containerID="ce95b0a6457e98586ec34d5ea681cbd04d26f3065161bca1be9213aeefd636ec" Jan 26 13:34:51 crc kubenswrapper[4844]: I0126 13:34:51.314335 4844 scope.go:117] "RemoveContainer" containerID="1e5683696cedef0e24e380cd8f5a01d2be8ea7dd57619e21972834c26754b83e" Jan 26 13:34:51 crc kubenswrapper[4844]: E0126 13:34:51.315900 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 13:34:52 crc kubenswrapper[4844]: I0126 13:34:52.686076 4844 generic.go:334] "Generic (PLEG): container finished" podID="e7abb699-d024-4829-8882-7272c3313c67" containerID="91515be8e6e59afef54a1fd7ea1347056a34032f93ba2a0b771811dbde119656" exitCode=0 Jan 26 13:34:52 crc kubenswrapper[4844]: I0126 13:34:52.686148 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9d8r4" event={"ID":"e7abb699-d024-4829-8882-7272c3313c67","Type":"ContainerDied","Data":"91515be8e6e59afef54a1fd7ea1347056a34032f93ba2a0b771811dbde119656"} Jan 26 13:34:54 crc kubenswrapper[4844]: I0126 13:34:54.114825 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9d8r4" Jan 26 13:34:54 crc kubenswrapper[4844]: I0126 13:34:54.254685 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e7abb699-d024-4829-8882-7272c3313c67-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"e7abb699-d024-4829-8882-7272c3313c67\" (UID: \"e7abb699-d024-4829-8882-7272c3313c67\") " Jan 26 13:34:54 crc kubenswrapper[4844]: I0126 13:34:54.255055 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e7abb699-d024-4829-8882-7272c3313c67-openstack-edpm-ipam-ovn-default-certs-0\") pod \"e7abb699-d024-4829-8882-7272c3313c67\" (UID: \"e7abb699-d024-4829-8882-7272c3313c67\") " Jan 26 13:34:54 crc kubenswrapper[4844]: I0126 13:34:54.255094 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7abb699-d024-4829-8882-7272c3313c67-ovn-combined-ca-bundle\") pod \"e7abb699-d024-4829-8882-7272c3313c67\" (UID: \"e7abb699-d024-4829-8882-7272c3313c67\") " Jan 26 13:34:54 crc kubenswrapper[4844]: I0126 13:34:54.255196 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7abb699-d024-4829-8882-7272c3313c67-nova-combined-ca-bundle\") pod \"e7abb699-d024-4829-8882-7272c3313c67\" (UID: \"e7abb699-d024-4829-8882-7272c3313c67\") " Jan 26 13:34:54 crc kubenswrapper[4844]: I0126 13:34:54.255238 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7abb699-d024-4829-8882-7272c3313c67-libvirt-combined-ca-bundle\") pod \"e7abb699-d024-4829-8882-7272c3313c67\" (UID: \"e7abb699-d024-4829-8882-7272c3313c67\") " Jan 26 13:34:54 crc kubenswrapper[4844]: I0126 13:34:54.255284 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e7abb699-d024-4829-8882-7272c3313c67-ssh-key-openstack-edpm-ipam\") pod \"e7abb699-d024-4829-8882-7272c3313c67\" (UID: \"e7abb699-d024-4829-8882-7272c3313c67\") " Jan 26 13:34:54 crc kubenswrapper[4844]: I0126 13:34:54.255391 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ndnqc\" (UniqueName: \"kubernetes.io/projected/e7abb699-d024-4829-8882-7272c3313c67-kube-api-access-ndnqc\") pod \"e7abb699-d024-4829-8882-7272c3313c67\" (UID: \"e7abb699-d024-4829-8882-7272c3313c67\") " Jan 26 13:34:54 crc kubenswrapper[4844]: I0126 13:34:54.255453 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e7abb699-d024-4829-8882-7272c3313c67-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"e7abb699-d024-4829-8882-7272c3313c67\" (UID: \"e7abb699-d024-4829-8882-7272c3313c67\") " Jan 26 13:34:54 crc kubenswrapper[4844]: I0126 13:34:54.255490 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7abb699-d024-4829-8882-7272c3313c67-neutron-metadata-combined-ca-bundle\") pod \"e7abb699-d024-4829-8882-7272c3313c67\" (UID: 
\"e7abb699-d024-4829-8882-7272c3313c67\") " Jan 26 13:34:54 crc kubenswrapper[4844]: I0126 13:34:54.255568 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7abb699-d024-4829-8882-7272c3313c67-bootstrap-combined-ca-bundle\") pod \"e7abb699-d024-4829-8882-7272c3313c67\" (UID: \"e7abb699-d024-4829-8882-7272c3313c67\") " Jan 26 13:34:54 crc kubenswrapper[4844]: I0126 13:34:54.255631 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7abb699-d024-4829-8882-7272c3313c67-telemetry-combined-ca-bundle\") pod \"e7abb699-d024-4829-8882-7272c3313c67\" (UID: \"e7abb699-d024-4829-8882-7272c3313c67\") " Jan 26 13:34:54 crc kubenswrapper[4844]: I0126 13:34:54.255662 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e7abb699-d024-4829-8882-7272c3313c67-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"e7abb699-d024-4829-8882-7272c3313c67\" (UID: \"e7abb699-d024-4829-8882-7272c3313c67\") " Jan 26 13:34:54 crc kubenswrapper[4844]: I0126 13:34:54.255731 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e7abb699-d024-4829-8882-7272c3313c67-inventory\") pod \"e7abb699-d024-4829-8882-7272c3313c67\" (UID: \"e7abb699-d024-4829-8882-7272c3313c67\") " Jan 26 13:34:54 crc kubenswrapper[4844]: I0126 13:34:54.255790 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7abb699-d024-4829-8882-7272c3313c67-repo-setup-combined-ca-bundle\") pod \"e7abb699-d024-4829-8882-7272c3313c67\" (UID: \"e7abb699-d024-4829-8882-7272c3313c67\") " Jan 26 13:34:54 crc kubenswrapper[4844]: I0126 13:34:54.262378 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7abb699-d024-4829-8882-7272c3313c67-kube-api-access-ndnqc" (OuterVolumeSpecName: "kube-api-access-ndnqc") pod "e7abb699-d024-4829-8882-7272c3313c67" (UID: "e7abb699-d024-4829-8882-7272c3313c67"). InnerVolumeSpecName "kube-api-access-ndnqc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:34:54 crc kubenswrapper[4844]: I0126 13:34:54.262693 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7abb699-d024-4829-8882-7272c3313c67-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "e7abb699-d024-4829-8882-7272c3313c67" (UID: "e7abb699-d024-4829-8882-7272c3313c67"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:34:54 crc kubenswrapper[4844]: I0126 13:34:54.262736 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7abb699-d024-4829-8882-7272c3313c67-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "e7abb699-d024-4829-8882-7272c3313c67" (UID: "e7abb699-d024-4829-8882-7272c3313c67"). InnerVolumeSpecName "ovn-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:34:54 crc kubenswrapper[4844]: I0126 13:34:54.263638 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7abb699-d024-4829-8882-7272c3313c67-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "e7abb699-d024-4829-8882-7272c3313c67" (UID: "e7abb699-d024-4829-8882-7272c3313c67"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:34:54 crc kubenswrapper[4844]: I0126 13:34:54.263758 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7abb699-d024-4829-8882-7272c3313c67-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "e7abb699-d024-4829-8882-7272c3313c67" (UID: "e7abb699-d024-4829-8882-7272c3313c67"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:34:54 crc kubenswrapper[4844]: I0126 13:34:54.264189 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7abb699-d024-4829-8882-7272c3313c67-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "e7abb699-d024-4829-8882-7272c3313c67" (UID: "e7abb699-d024-4829-8882-7272c3313c67"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:34:54 crc kubenswrapper[4844]: I0126 13:34:54.265168 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7abb699-d024-4829-8882-7272c3313c67-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "e7abb699-d024-4829-8882-7272c3313c67" (UID: "e7abb699-d024-4829-8882-7272c3313c67"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:34:54 crc kubenswrapper[4844]: I0126 13:34:54.266263 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7abb699-d024-4829-8882-7272c3313c67-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "e7abb699-d024-4829-8882-7272c3313c67" (UID: "e7abb699-d024-4829-8882-7272c3313c67"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:34:54 crc kubenswrapper[4844]: I0126 13:34:54.266768 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7abb699-d024-4829-8882-7272c3313c67-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "e7abb699-d024-4829-8882-7272c3313c67" (UID: "e7abb699-d024-4829-8882-7272c3313c67"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:34:54 crc kubenswrapper[4844]: I0126 13:34:54.266917 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7abb699-d024-4829-8882-7272c3313c67-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "e7abb699-d024-4829-8882-7272c3313c67" (UID: "e7abb699-d024-4829-8882-7272c3313c67"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:34:54 crc kubenswrapper[4844]: I0126 13:34:54.267121 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7abb699-d024-4829-8882-7272c3313c67-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "e7abb699-d024-4829-8882-7272c3313c67" (UID: "e7abb699-d024-4829-8882-7272c3313c67"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:34:54 crc kubenswrapper[4844]: I0126 13:34:54.267815 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7abb699-d024-4829-8882-7272c3313c67-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "e7abb699-d024-4829-8882-7272c3313c67" (UID: "e7abb699-d024-4829-8882-7272c3313c67"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:34:54 crc kubenswrapper[4844]: I0126 13:34:54.295688 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7abb699-d024-4829-8882-7272c3313c67-inventory" (OuterVolumeSpecName: "inventory") pod "e7abb699-d024-4829-8882-7272c3313c67" (UID: "e7abb699-d024-4829-8882-7272c3313c67"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:34:54 crc kubenswrapper[4844]: I0126 13:34:54.312353 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7abb699-d024-4829-8882-7272c3313c67-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e7abb699-d024-4829-8882-7272c3313c67" (UID: "e7abb699-d024-4829-8882-7272c3313c67"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:34:54 crc kubenswrapper[4844]: I0126 13:34:54.358799 4844 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e7abb699-d024-4829-8882-7272c3313c67-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 13:34:54 crc kubenswrapper[4844]: I0126 13:34:54.358842 4844 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7abb699-d024-4829-8882-7272c3313c67-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 13:34:54 crc kubenswrapper[4844]: I0126 13:34:54.358856 4844 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e7abb699-d024-4829-8882-7272c3313c67-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 26 13:34:54 crc kubenswrapper[4844]: I0126 13:34:54.358869 4844 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e7abb699-d024-4829-8882-7272c3313c67-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 26 13:34:54 crc kubenswrapper[4844]: I0126 13:34:54.358884 4844 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7abb699-d024-4829-8882-7272c3313c67-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 13:34:54 crc kubenswrapper[4844]: I0126 13:34:54.358896 4844 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7abb699-d024-4829-8882-7272c3313c67-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 13:34:54 crc kubenswrapper[4844]: I0126 13:34:54.358907 4844 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7abb699-d024-4829-8882-7272c3313c67-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 13:34:54 crc kubenswrapper[4844]: I0126 13:34:54.358918 4844 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e7abb699-d024-4829-8882-7272c3313c67-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 13:34:54 crc kubenswrapper[4844]: I0126 13:34:54.358929 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ndnqc\" (UniqueName: \"kubernetes.io/projected/e7abb699-d024-4829-8882-7272c3313c67-kube-api-access-ndnqc\") on node \"crc\" DevicePath \"\"" Jan 26 13:34:54 crc kubenswrapper[4844]: I0126 13:34:54.358940 4844 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e7abb699-d024-4829-8882-7272c3313c67-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 26 13:34:54 crc kubenswrapper[4844]: I0126 13:34:54.358956 4844 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7abb699-d024-4829-8882-7272c3313c67-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 13:34:54 crc kubenswrapper[4844]: I0126 13:34:54.358968 4844 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e7abb699-d024-4829-8882-7272c3313c67-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 13:34:54 crc kubenswrapper[4844]: I0126 13:34:54.358979 4844 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e7abb699-d024-4829-8882-7272c3313c67-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 26 13:34:54 crc kubenswrapper[4844]: I0126 13:34:54.358991 4844 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7abb699-d024-4829-8882-7272c3313c67-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 13:34:54 crc kubenswrapper[4844]: I0126 13:34:54.711524 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9d8r4" event={"ID":"e7abb699-d024-4829-8882-7272c3313c67","Type":"ContainerDied","Data":"d67e22d9bf8e8ddd7ebd6227edff07f60625dc63b707f738e494c6c9b913eefd"} Jan 26 13:34:54 crc kubenswrapper[4844]: I0126 13:34:54.711565 4844 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d67e22d9bf8e8ddd7ebd6227edff07f60625dc63b707f738e494c6c9b913eefd" Jan 26 13:34:54 crc kubenswrapper[4844]: I0126 13:34:54.711660 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9d8r4" Jan 26 13:34:54 crc kubenswrapper[4844]: I0126 13:34:54.912845 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-svbzh"] Jan 26 13:34:54 crc kubenswrapper[4844]: E0126 13:34:54.913432 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7abb699-d024-4829-8882-7272c3313c67" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 26 13:34:54 crc kubenswrapper[4844]: I0126 13:34:54.913457 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7abb699-d024-4829-8882-7272c3313c67" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 26 13:34:54 crc kubenswrapper[4844]: I0126 13:34:54.913869 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7abb699-d024-4829-8882-7272c3313c67" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 26 13:34:54 crc kubenswrapper[4844]: I0126 13:34:54.914962 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-svbzh" Jan 26 13:34:54 crc kubenswrapper[4844]: I0126 13:34:54.917705 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 13:34:54 crc kubenswrapper[4844]: I0126 13:34:54.918550 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r4j2z" Jan 26 13:34:54 crc kubenswrapper[4844]: I0126 13:34:54.919058 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 13:34:54 crc kubenswrapper[4844]: I0126 13:34:54.919753 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Jan 26 13:34:54 crc kubenswrapper[4844]: I0126 13:34:54.920492 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 13:34:54 crc kubenswrapper[4844]: I0126 13:34:54.943348 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-svbzh"] Jan 26 13:34:55 crc kubenswrapper[4844]: I0126 13:34:55.073584 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5161eb41-8d1f-405a-b40f-630aad7d1925-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-svbzh\" (UID: \"5161eb41-8d1f-405a-b40f-630aad7d1925\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-svbzh" Jan 26 13:34:55 crc kubenswrapper[4844]: I0126 13:34:55.073699 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5161eb41-8d1f-405a-b40f-630aad7d1925-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-svbzh\" (UID: \"5161eb41-8d1f-405a-b40f-630aad7d1925\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-svbzh" Jan 26 13:34:55 crc kubenswrapper[4844]: I0126 13:34:55.073753 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/5161eb41-8d1f-405a-b40f-630aad7d1925-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-svbzh\" (UID: \"5161eb41-8d1f-405a-b40f-630aad7d1925\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-svbzh" Jan 26 13:34:55 crc kubenswrapper[4844]: I0126 13:34:55.073880 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5v2j\" (UniqueName: \"kubernetes.io/projected/5161eb41-8d1f-405a-b40f-630aad7d1925-kube-api-access-n5v2j\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-svbzh\" (UID: \"5161eb41-8d1f-405a-b40f-630aad7d1925\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-svbzh" Jan 26 13:34:55 crc kubenswrapper[4844]: I0126 13:34:55.073946 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5161eb41-8d1f-405a-b40f-630aad7d1925-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-svbzh\" (UID: \"5161eb41-8d1f-405a-b40f-630aad7d1925\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-svbzh" Jan 26 13:34:55 crc kubenswrapper[4844]: I0126 13:34:55.176034 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-n5v2j\" (UniqueName: \"kubernetes.io/projected/5161eb41-8d1f-405a-b40f-630aad7d1925-kube-api-access-n5v2j\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-svbzh\" (UID: \"5161eb41-8d1f-405a-b40f-630aad7d1925\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-svbzh" Jan 26 13:34:55 crc kubenswrapper[4844]: I0126 13:34:55.176129 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5161eb41-8d1f-405a-b40f-630aad7d1925-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-svbzh\" (UID: \"5161eb41-8d1f-405a-b40f-630aad7d1925\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-svbzh" Jan 26 13:34:55 crc kubenswrapper[4844]: I0126 13:34:55.176216 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5161eb41-8d1f-405a-b40f-630aad7d1925-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-svbzh\" (UID: \"5161eb41-8d1f-405a-b40f-630aad7d1925\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-svbzh" Jan 26 13:34:55 crc kubenswrapper[4844]: I0126 13:34:55.176257 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5161eb41-8d1f-405a-b40f-630aad7d1925-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-svbzh\" (UID: \"5161eb41-8d1f-405a-b40f-630aad7d1925\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-svbzh" Jan 26 13:34:55 crc kubenswrapper[4844]: I0126 13:34:55.176302 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/5161eb41-8d1f-405a-b40f-630aad7d1925-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-svbzh\" (UID: \"5161eb41-8d1f-405a-b40f-630aad7d1925\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-svbzh" Jan 26 13:34:55 crc kubenswrapper[4844]: I0126 13:34:55.177419 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/5161eb41-8d1f-405a-b40f-630aad7d1925-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-svbzh\" (UID: \"5161eb41-8d1f-405a-b40f-630aad7d1925\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-svbzh" Jan 26 13:34:55 crc kubenswrapper[4844]: I0126 13:34:55.179824 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5161eb41-8d1f-405a-b40f-630aad7d1925-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-svbzh\" (UID: \"5161eb41-8d1f-405a-b40f-630aad7d1925\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-svbzh" Jan 26 13:34:55 crc kubenswrapper[4844]: I0126 13:34:55.181702 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5161eb41-8d1f-405a-b40f-630aad7d1925-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-svbzh\" (UID: \"5161eb41-8d1f-405a-b40f-630aad7d1925\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-svbzh" Jan 26 13:34:55 crc kubenswrapper[4844]: I0126 13:34:55.182294 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/5161eb41-8d1f-405a-b40f-630aad7d1925-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-svbzh\" (UID: \"5161eb41-8d1f-405a-b40f-630aad7d1925\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-svbzh" Jan 26 13:34:55 crc kubenswrapper[4844]: I0126 13:34:55.192980 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5v2j\" (UniqueName: \"kubernetes.io/projected/5161eb41-8d1f-405a-b40f-630aad7d1925-kube-api-access-n5v2j\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-svbzh\" (UID: \"5161eb41-8d1f-405a-b40f-630aad7d1925\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-svbzh" Jan 26 13:34:55 crc kubenswrapper[4844]: I0126 13:34:55.260966 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-svbzh" Jan 26 13:34:55 crc kubenswrapper[4844]: I0126 13:34:55.576985 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-svbzh"] Jan 26 13:34:55 crc kubenswrapper[4844]: W0126 13:34:55.584227 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5161eb41_8d1f_405a_b40f_630aad7d1925.slice/crio-e756f2e8abca992eaf8ef8772aff071f6154b105bc52cdea89a8c65d8c4c9fa5 WatchSource:0}: Error finding container e756f2e8abca992eaf8ef8772aff071f6154b105bc52cdea89a8c65d8c4c9fa5: Status 404 returned error can't find the container with id e756f2e8abca992eaf8ef8772aff071f6154b105bc52cdea89a8c65d8c4c9fa5 Jan 26 13:34:55 crc kubenswrapper[4844]: I0126 13:34:55.721688 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-svbzh" event={"ID":"5161eb41-8d1f-405a-b40f-630aad7d1925","Type":"ContainerStarted","Data":"e756f2e8abca992eaf8ef8772aff071f6154b105bc52cdea89a8c65d8c4c9fa5"} Jan 26 13:34:56 crc kubenswrapper[4844]: I0126 13:34:56.735168 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-svbzh" event={"ID":"5161eb41-8d1f-405a-b40f-630aad7d1925","Type":"ContainerStarted","Data":"41e04891030bbf90a5a1198f73431ade89bba1bc33dafb4a6bb4be2c21d94a84"} Jan 26 13:34:56 crc kubenswrapper[4844]: I0126 13:34:56.771754 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-svbzh" podStartSLOduration=2.209292781 podStartE2EDuration="2.771734374s" podCreationTimestamp="2026-01-26 13:34:54 +0000 UTC" firstStartedPulling="2026-01-26 13:34:55.592368326 +0000 UTC m=+3072.525735958" lastFinishedPulling="2026-01-26 13:34:56.154809929 +0000 UTC m=+3073.088177551" observedRunningTime="2026-01-26 13:34:56.757916819 +0000 UTC m=+3073.691284451" watchObservedRunningTime="2026-01-26 13:34:56.771734374 +0000 UTC m=+3073.705101986" Jan 26 13:35:03 crc kubenswrapper[4844]: I0126 13:35:03.319555 4844 scope.go:117] "RemoveContainer" containerID="1e5683696cedef0e24e380cd8f5a01d2be8ea7dd57619e21972834c26754b83e" Jan 26 13:35:03 crc kubenswrapper[4844]: E0126 13:35:03.320226 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" 
podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 13:35:16 crc kubenswrapper[4844]: I0126 13:35:16.313948 4844 scope.go:117] "RemoveContainer" containerID="1e5683696cedef0e24e380cd8f5a01d2be8ea7dd57619e21972834c26754b83e" Jan 26 13:35:16 crc kubenswrapper[4844]: E0126 13:35:16.314908 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 13:35:28 crc kubenswrapper[4844]: I0126 13:35:28.313285 4844 scope.go:117] "RemoveContainer" containerID="1e5683696cedef0e24e380cd8f5a01d2be8ea7dd57619e21972834c26754b83e" Jan 26 13:35:28 crc kubenswrapper[4844]: E0126 13:35:28.314193 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 13:35:43 crc kubenswrapper[4844]: I0126 13:35:43.320114 4844 scope.go:117] "RemoveContainer" containerID="1e5683696cedef0e24e380cd8f5a01d2be8ea7dd57619e21972834c26754b83e" Jan 26 13:35:43 crc kubenswrapper[4844]: E0126 13:35:43.320869 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 13:35:58 crc kubenswrapper[4844]: I0126 13:35:58.314448 4844 scope.go:117] "RemoveContainer" containerID="1e5683696cedef0e24e380cd8f5a01d2be8ea7dd57619e21972834c26754b83e" Jan 26 13:35:58 crc kubenswrapper[4844]: E0126 13:35:58.315725 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 13:36:08 crc kubenswrapper[4844]: I0126 13:36:08.968987 4844 generic.go:334] "Generic (PLEG): container finished" podID="5161eb41-8d1f-405a-b40f-630aad7d1925" containerID="41e04891030bbf90a5a1198f73431ade89bba1bc33dafb4a6bb4be2c21d94a84" exitCode=0 Jan 26 13:36:08 crc kubenswrapper[4844]: I0126 13:36:08.969427 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-svbzh" event={"ID":"5161eb41-8d1f-405a-b40f-630aad7d1925","Type":"ContainerDied","Data":"41e04891030bbf90a5a1198f73431ade89bba1bc33dafb4a6bb4be2c21d94a84"} Jan 26 13:36:10 crc kubenswrapper[4844]: I0126 13:36:10.502154 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-svbzh" Jan 26 13:36:10 crc kubenswrapper[4844]: I0126 13:36:10.620788 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5161eb41-8d1f-405a-b40f-630aad7d1925-inventory\") pod \"5161eb41-8d1f-405a-b40f-630aad7d1925\" (UID: \"5161eb41-8d1f-405a-b40f-630aad7d1925\") " Jan 26 13:36:10 crc kubenswrapper[4844]: I0126 13:36:10.620933 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5161eb41-8d1f-405a-b40f-630aad7d1925-ssh-key-openstack-edpm-ipam\") pod \"5161eb41-8d1f-405a-b40f-630aad7d1925\" (UID: \"5161eb41-8d1f-405a-b40f-630aad7d1925\") " Jan 26 13:36:10 crc kubenswrapper[4844]: I0126 13:36:10.621025 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/5161eb41-8d1f-405a-b40f-630aad7d1925-ovncontroller-config-0\") pod \"5161eb41-8d1f-405a-b40f-630aad7d1925\" (UID: \"5161eb41-8d1f-405a-b40f-630aad7d1925\") " Jan 26 13:36:10 crc kubenswrapper[4844]: I0126 13:36:10.621074 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n5v2j\" (UniqueName: \"kubernetes.io/projected/5161eb41-8d1f-405a-b40f-630aad7d1925-kube-api-access-n5v2j\") pod \"5161eb41-8d1f-405a-b40f-630aad7d1925\" (UID: \"5161eb41-8d1f-405a-b40f-630aad7d1925\") " Jan 26 13:36:10 crc kubenswrapper[4844]: I0126 13:36:10.621106 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5161eb41-8d1f-405a-b40f-630aad7d1925-ovn-combined-ca-bundle\") pod \"5161eb41-8d1f-405a-b40f-630aad7d1925\" (UID: \"5161eb41-8d1f-405a-b40f-630aad7d1925\") " Jan 26 13:36:10 crc kubenswrapper[4844]: I0126 13:36:10.627887 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5161eb41-8d1f-405a-b40f-630aad7d1925-kube-api-access-n5v2j" (OuterVolumeSpecName: "kube-api-access-n5v2j") pod "5161eb41-8d1f-405a-b40f-630aad7d1925" (UID: "5161eb41-8d1f-405a-b40f-630aad7d1925"). InnerVolumeSpecName "kube-api-access-n5v2j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:36:10 crc kubenswrapper[4844]: I0126 13:36:10.628182 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5161eb41-8d1f-405a-b40f-630aad7d1925-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "5161eb41-8d1f-405a-b40f-630aad7d1925" (UID: "5161eb41-8d1f-405a-b40f-630aad7d1925"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:36:10 crc kubenswrapper[4844]: I0126 13:36:10.652479 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5161eb41-8d1f-405a-b40f-630aad7d1925-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "5161eb41-8d1f-405a-b40f-630aad7d1925" (UID: "5161eb41-8d1f-405a-b40f-630aad7d1925"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:36:10 crc kubenswrapper[4844]: I0126 13:36:10.664747 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5161eb41-8d1f-405a-b40f-630aad7d1925-inventory" (OuterVolumeSpecName: "inventory") pod "5161eb41-8d1f-405a-b40f-630aad7d1925" (UID: "5161eb41-8d1f-405a-b40f-630aad7d1925"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:36:10 crc kubenswrapper[4844]: I0126 13:36:10.666842 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5161eb41-8d1f-405a-b40f-630aad7d1925-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "5161eb41-8d1f-405a-b40f-630aad7d1925" (UID: "5161eb41-8d1f-405a-b40f-630aad7d1925"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:36:10 crc kubenswrapper[4844]: I0126 13:36:10.723518 4844 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5161eb41-8d1f-405a-b40f-630aad7d1925-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 13:36:10 crc kubenswrapper[4844]: I0126 13:36:10.723560 4844 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/5161eb41-8d1f-405a-b40f-630aad7d1925-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Jan 26 13:36:10 crc kubenswrapper[4844]: I0126 13:36:10.723574 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n5v2j\" (UniqueName: \"kubernetes.io/projected/5161eb41-8d1f-405a-b40f-630aad7d1925-kube-api-access-n5v2j\") on node \"crc\" DevicePath \"\"" Jan 26 13:36:10 crc kubenswrapper[4844]: I0126 13:36:10.723586 4844 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5161eb41-8d1f-405a-b40f-630aad7d1925-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 13:36:10 crc kubenswrapper[4844]: I0126 13:36:10.723644 4844 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5161eb41-8d1f-405a-b40f-630aad7d1925-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 13:36:10 crc kubenswrapper[4844]: I0126 13:36:10.995339 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-svbzh" event={"ID":"5161eb41-8d1f-405a-b40f-630aad7d1925","Type":"ContainerDied","Data":"e756f2e8abca992eaf8ef8772aff071f6154b105bc52cdea89a8c65d8c4c9fa5"} Jan 26 13:36:10 crc kubenswrapper[4844]: I0126 13:36:10.995386 4844 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e756f2e8abca992eaf8ef8772aff071f6154b105bc52cdea89a8c65d8c4c9fa5" Jan 26 13:36:10 crc kubenswrapper[4844]: I0126 13:36:10.995432 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-svbzh" Jan 26 13:36:11 crc kubenswrapper[4844]: I0126 13:36:11.078933 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jjfm4"] Jan 26 13:36:11 crc kubenswrapper[4844]: E0126 13:36:11.079536 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5161eb41-8d1f-405a-b40f-630aad7d1925" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 26 13:36:11 crc kubenswrapper[4844]: I0126 13:36:11.079557 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="5161eb41-8d1f-405a-b40f-630aad7d1925" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 26 13:36:11 crc kubenswrapper[4844]: I0126 13:36:11.079819 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="5161eb41-8d1f-405a-b40f-630aad7d1925" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 26 13:36:11 crc kubenswrapper[4844]: I0126 13:36:11.080790 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jjfm4" Jan 26 13:36:11 crc kubenswrapper[4844]: I0126 13:36:11.085094 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 13:36:11 crc kubenswrapper[4844]: I0126 13:36:11.085129 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Jan 26 13:36:11 crc kubenswrapper[4844]: I0126 13:36:11.085530 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 13:36:11 crc kubenswrapper[4844]: I0126 13:36:11.085629 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Jan 26 13:36:11 crc kubenswrapper[4844]: I0126 13:36:11.085862 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 13:36:11 crc kubenswrapper[4844]: I0126 13:36:11.086137 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r4j2z" Jan 26 13:36:11 crc kubenswrapper[4844]: I0126 13:36:11.090835 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jjfm4"] Jan 26 13:36:11 crc kubenswrapper[4844]: I0126 13:36:11.133835 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38602c96-9d47-46f7-b299-c5bfc616ba99-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jjfm4\" (UID: \"38602c96-9d47-46f7-b299-c5bfc616ba99\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jjfm4" Jan 26 13:36:11 crc kubenswrapper[4844]: I0126 13:36:11.133901 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/38602c96-9d47-46f7-b299-c5bfc616ba99-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jjfm4\" (UID: \"38602c96-9d47-46f7-b299-c5bfc616ba99\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jjfm4" Jan 26 13:36:11 crc kubenswrapper[4844]: I0126 13:36:11.133921 4844 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/38602c96-9d47-46f7-b299-c5bfc616ba99-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jjfm4\" (UID: \"38602c96-9d47-46f7-b299-c5bfc616ba99\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jjfm4" Jan 26 13:36:11 crc kubenswrapper[4844]: I0126 13:36:11.133970 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/38602c96-9d47-46f7-b299-c5bfc616ba99-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jjfm4\" (UID: \"38602c96-9d47-46f7-b299-c5bfc616ba99\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jjfm4" Jan 26 13:36:11 crc kubenswrapper[4844]: I0126 13:36:11.134008 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/38602c96-9d47-46f7-b299-c5bfc616ba99-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jjfm4\" (UID: \"38602c96-9d47-46f7-b299-c5bfc616ba99\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jjfm4" Jan 26 13:36:11 crc kubenswrapper[4844]: I0126 13:36:11.134026 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tc42b\" (UniqueName: \"kubernetes.io/projected/38602c96-9d47-46f7-b299-c5bfc616ba99-kube-api-access-tc42b\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jjfm4\" (UID: \"38602c96-9d47-46f7-b299-c5bfc616ba99\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jjfm4" Jan 26 13:36:11 crc kubenswrapper[4844]: I0126 13:36:11.235624 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tc42b\" (UniqueName: \"kubernetes.io/projected/38602c96-9d47-46f7-b299-c5bfc616ba99-kube-api-access-tc42b\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jjfm4\" (UID: \"38602c96-9d47-46f7-b299-c5bfc616ba99\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jjfm4" Jan 26 13:36:11 crc kubenswrapper[4844]: I0126 13:36:11.235758 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38602c96-9d47-46f7-b299-c5bfc616ba99-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jjfm4\" (UID: \"38602c96-9d47-46f7-b299-c5bfc616ba99\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jjfm4" Jan 26 13:36:11 crc kubenswrapper[4844]: I0126 13:36:11.235797 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/38602c96-9d47-46f7-b299-c5bfc616ba99-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jjfm4\" (UID: \"38602c96-9d47-46f7-b299-c5bfc616ba99\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jjfm4" Jan 26 13:36:11 crc kubenswrapper[4844]: I0126 13:36:11.235815 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/38602c96-9d47-46f7-b299-c5bfc616ba99-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jjfm4\" (UID: \"38602c96-9d47-46f7-b299-c5bfc616ba99\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jjfm4" Jan 26 13:36:11 crc kubenswrapper[4844]: I0126 13:36:11.235862 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/38602c96-9d47-46f7-b299-c5bfc616ba99-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jjfm4\" (UID: \"38602c96-9d47-46f7-b299-c5bfc616ba99\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jjfm4" Jan 26 13:36:11 crc kubenswrapper[4844]: I0126 13:36:11.235897 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/38602c96-9d47-46f7-b299-c5bfc616ba99-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jjfm4\" (UID: \"38602c96-9d47-46f7-b299-c5bfc616ba99\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jjfm4" Jan 26 13:36:11 crc kubenswrapper[4844]: I0126 13:36:11.239723 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/38602c96-9d47-46f7-b299-c5bfc616ba99-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jjfm4\" (UID: \"38602c96-9d47-46f7-b299-c5bfc616ba99\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jjfm4" Jan 26 13:36:11 crc kubenswrapper[4844]: I0126 13:36:11.240531 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/38602c96-9d47-46f7-b299-c5bfc616ba99-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jjfm4\" (UID: \"38602c96-9d47-46f7-b299-c5bfc616ba99\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jjfm4" Jan 26 13:36:11 crc kubenswrapper[4844]: I0126 13:36:11.241056 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/38602c96-9d47-46f7-b299-c5bfc616ba99-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jjfm4\" (UID: \"38602c96-9d47-46f7-b299-c5bfc616ba99\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jjfm4" Jan 26 13:36:11 crc kubenswrapper[4844]: I0126 13:36:11.241919 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/38602c96-9d47-46f7-b299-c5bfc616ba99-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jjfm4\" (UID: \"38602c96-9d47-46f7-b299-c5bfc616ba99\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jjfm4" Jan 26 13:36:11 crc kubenswrapper[4844]: I0126 13:36:11.242490 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38602c96-9d47-46f7-b299-c5bfc616ba99-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jjfm4\" (UID: \"38602c96-9d47-46f7-b299-c5bfc616ba99\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jjfm4" Jan 26 13:36:11 crc 
kubenswrapper[4844]: I0126 13:36:11.255209 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tc42b\" (UniqueName: \"kubernetes.io/projected/38602c96-9d47-46f7-b299-c5bfc616ba99-kube-api-access-tc42b\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jjfm4\" (UID: \"38602c96-9d47-46f7-b299-c5bfc616ba99\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jjfm4" Jan 26 13:36:11 crc kubenswrapper[4844]: I0126 13:36:11.415479 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jjfm4" Jan 26 13:36:12 crc kubenswrapper[4844]: I0126 13:36:12.027066 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jjfm4"] Jan 26 13:36:13 crc kubenswrapper[4844]: I0126 13:36:13.016712 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jjfm4" event={"ID":"38602c96-9d47-46f7-b299-c5bfc616ba99","Type":"ContainerStarted","Data":"6c4fe43a909eacb21174d8673b9ab6ee654683aae5c4b09053fe49e9b3f42ede"} Jan 26 13:36:13 crc kubenswrapper[4844]: I0126 13:36:13.329079 4844 scope.go:117] "RemoveContainer" containerID="1e5683696cedef0e24e380cd8f5a01d2be8ea7dd57619e21972834c26754b83e" Jan 26 13:36:13 crc kubenswrapper[4844]: E0126 13:36:13.330147 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 13:36:14 crc kubenswrapper[4844]: I0126 13:36:14.028185 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jjfm4" event={"ID":"38602c96-9d47-46f7-b299-c5bfc616ba99","Type":"ContainerStarted","Data":"7de94391d71d0eb85e86e993197afc26cfc138c8936348d7fbc2c83717668a58"} Jan 26 13:36:14 crc kubenswrapper[4844]: I0126 13:36:14.052281 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jjfm4" podStartSLOduration=2.356893796 podStartE2EDuration="3.052256879s" podCreationTimestamp="2026-01-26 13:36:11 +0000 UTC" firstStartedPulling="2026-01-26 13:36:12.029551741 +0000 UTC m=+3148.962919353" lastFinishedPulling="2026-01-26 13:36:12.724914814 +0000 UTC m=+3149.658282436" observedRunningTime="2026-01-26 13:36:14.045203048 +0000 UTC m=+3150.978570660" watchObservedRunningTime="2026-01-26 13:36:14.052256879 +0000 UTC m=+3150.985624491" Jan 26 13:36:25 crc kubenswrapper[4844]: I0126 13:36:25.314249 4844 scope.go:117] "RemoveContainer" containerID="1e5683696cedef0e24e380cd8f5a01d2be8ea7dd57619e21972834c26754b83e" Jan 26 13:36:25 crc kubenswrapper[4844]: E0126 13:36:25.315337 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 13:36:37 crc 
kubenswrapper[4844]: I0126 13:36:37.313649 4844 scope.go:117] "RemoveContainer" containerID="1e5683696cedef0e24e380cd8f5a01d2be8ea7dd57619e21972834c26754b83e" Jan 26 13:36:37 crc kubenswrapper[4844]: E0126 13:36:37.314495 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 13:36:49 crc kubenswrapper[4844]: I0126 13:36:49.313419 4844 scope.go:117] "RemoveContainer" containerID="1e5683696cedef0e24e380cd8f5a01d2be8ea7dd57619e21972834c26754b83e" Jan 26 13:36:49 crc kubenswrapper[4844]: E0126 13:36:49.314313 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 13:37:03 crc kubenswrapper[4844]: I0126 13:37:03.322853 4844 scope.go:117] "RemoveContainer" containerID="1e5683696cedef0e24e380cd8f5a01d2be8ea7dd57619e21972834c26754b83e" Jan 26 13:37:03 crc kubenswrapper[4844]: E0126 13:37:03.324294 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 13:37:07 crc kubenswrapper[4844]: I0126 13:37:07.622227 4844 generic.go:334] "Generic (PLEG): container finished" podID="38602c96-9d47-46f7-b299-c5bfc616ba99" containerID="7de94391d71d0eb85e86e993197afc26cfc138c8936348d7fbc2c83717668a58" exitCode=0 Jan 26 13:37:07 crc kubenswrapper[4844]: I0126 13:37:07.622306 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jjfm4" event={"ID":"38602c96-9d47-46f7-b299-c5bfc616ba99","Type":"ContainerDied","Data":"7de94391d71d0eb85e86e993197afc26cfc138c8936348d7fbc2c83717668a58"} Jan 26 13:37:09 crc kubenswrapper[4844]: I0126 13:37:09.070571 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jjfm4" Jan 26 13:37:09 crc kubenswrapper[4844]: I0126 13:37:09.257734 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tc42b\" (UniqueName: \"kubernetes.io/projected/38602c96-9d47-46f7-b299-c5bfc616ba99-kube-api-access-tc42b\") pod \"38602c96-9d47-46f7-b299-c5bfc616ba99\" (UID: \"38602c96-9d47-46f7-b299-c5bfc616ba99\") " Jan 26 13:37:09 crc kubenswrapper[4844]: I0126 13:37:09.257825 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38602c96-9d47-46f7-b299-c5bfc616ba99-neutron-metadata-combined-ca-bundle\") pod \"38602c96-9d47-46f7-b299-c5bfc616ba99\" (UID: \"38602c96-9d47-46f7-b299-c5bfc616ba99\") " Jan 26 13:37:09 crc kubenswrapper[4844]: I0126 13:37:09.257928 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/38602c96-9d47-46f7-b299-c5bfc616ba99-nova-metadata-neutron-config-0\") pod \"38602c96-9d47-46f7-b299-c5bfc616ba99\" (UID: \"38602c96-9d47-46f7-b299-c5bfc616ba99\") " Jan 26 13:37:09 crc kubenswrapper[4844]: I0126 13:37:09.257971 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/38602c96-9d47-46f7-b299-c5bfc616ba99-ssh-key-openstack-edpm-ipam\") pod \"38602c96-9d47-46f7-b299-c5bfc616ba99\" (UID: \"38602c96-9d47-46f7-b299-c5bfc616ba99\") " Jan 26 13:37:09 crc kubenswrapper[4844]: I0126 13:37:09.258030 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/38602c96-9d47-46f7-b299-c5bfc616ba99-inventory\") pod \"38602c96-9d47-46f7-b299-c5bfc616ba99\" (UID: \"38602c96-9d47-46f7-b299-c5bfc616ba99\") " Jan 26 13:37:09 crc kubenswrapper[4844]: I0126 13:37:09.258118 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/38602c96-9d47-46f7-b299-c5bfc616ba99-neutron-ovn-metadata-agent-neutron-config-0\") pod \"38602c96-9d47-46f7-b299-c5bfc616ba99\" (UID: \"38602c96-9d47-46f7-b299-c5bfc616ba99\") " Jan 26 13:37:09 crc kubenswrapper[4844]: I0126 13:37:09.264182 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38602c96-9d47-46f7-b299-c5bfc616ba99-kube-api-access-tc42b" (OuterVolumeSpecName: "kube-api-access-tc42b") pod "38602c96-9d47-46f7-b299-c5bfc616ba99" (UID: "38602c96-9d47-46f7-b299-c5bfc616ba99"). InnerVolumeSpecName "kube-api-access-tc42b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:37:09 crc kubenswrapper[4844]: I0126 13:37:09.269669 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38602c96-9d47-46f7-b299-c5bfc616ba99-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "38602c96-9d47-46f7-b299-c5bfc616ba99" (UID: "38602c96-9d47-46f7-b299-c5bfc616ba99"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:37:09 crc kubenswrapper[4844]: I0126 13:37:09.293197 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38602c96-9d47-46f7-b299-c5bfc616ba99-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "38602c96-9d47-46f7-b299-c5bfc616ba99" (UID: "38602c96-9d47-46f7-b299-c5bfc616ba99"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:37:09 crc kubenswrapper[4844]: I0126 13:37:09.296666 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38602c96-9d47-46f7-b299-c5bfc616ba99-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "38602c96-9d47-46f7-b299-c5bfc616ba99" (UID: "38602c96-9d47-46f7-b299-c5bfc616ba99"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:37:09 crc kubenswrapper[4844]: I0126 13:37:09.322405 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38602c96-9d47-46f7-b299-c5bfc616ba99-inventory" (OuterVolumeSpecName: "inventory") pod "38602c96-9d47-46f7-b299-c5bfc616ba99" (UID: "38602c96-9d47-46f7-b299-c5bfc616ba99"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:37:09 crc kubenswrapper[4844]: I0126 13:37:09.323198 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38602c96-9d47-46f7-b299-c5bfc616ba99-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "38602c96-9d47-46f7-b299-c5bfc616ba99" (UID: "38602c96-9d47-46f7-b299-c5bfc616ba99"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:37:09 crc kubenswrapper[4844]: I0126 13:37:09.360839 4844 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/38602c96-9d47-46f7-b299-c5bfc616ba99-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 13:37:09 crc kubenswrapper[4844]: I0126 13:37:09.360871 4844 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/38602c96-9d47-46f7-b299-c5bfc616ba99-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 13:37:09 crc kubenswrapper[4844]: I0126 13:37:09.360885 4844 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/38602c96-9d47-46f7-b299-c5bfc616ba99-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 26 13:37:09 crc kubenswrapper[4844]: I0126 13:37:09.360899 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tc42b\" (UniqueName: \"kubernetes.io/projected/38602c96-9d47-46f7-b299-c5bfc616ba99-kube-api-access-tc42b\") on node \"crc\" DevicePath \"\"" Jan 26 13:37:09 crc kubenswrapper[4844]: I0126 13:37:09.360913 4844 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38602c96-9d47-46f7-b299-c5bfc616ba99-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 13:37:09 crc kubenswrapper[4844]: I0126 13:37:09.360927 4844 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/38602c96-9d47-46f7-b299-c5bfc616ba99-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 26 13:37:09 crc kubenswrapper[4844]: I0126 13:37:09.650547 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jjfm4" event={"ID":"38602c96-9d47-46f7-b299-c5bfc616ba99","Type":"ContainerDied","Data":"6c4fe43a909eacb21174d8673b9ab6ee654683aae5c4b09053fe49e9b3f42ede"} Jan 26 13:37:09 crc kubenswrapper[4844]: I0126 13:37:09.650835 4844 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6c4fe43a909eacb21174d8673b9ab6ee654683aae5c4b09053fe49e9b3f42ede" Jan 26 13:37:09 crc kubenswrapper[4844]: I0126 13:37:09.650631 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jjfm4" Jan 26 13:37:09 crc kubenswrapper[4844]: I0126 13:37:09.898565 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sttdt"] Jan 26 13:37:09 crc kubenswrapper[4844]: E0126 13:37:09.898968 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38602c96-9d47-46f7-b299-c5bfc616ba99" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 26 13:37:09 crc kubenswrapper[4844]: I0126 13:37:09.898986 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="38602c96-9d47-46f7-b299-c5bfc616ba99" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 26 13:37:09 crc kubenswrapper[4844]: I0126 13:37:09.899175 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="38602c96-9d47-46f7-b299-c5bfc616ba99" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 26 13:37:09 crc kubenswrapper[4844]: I0126 13:37:09.899793 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sttdt" Jan 26 13:37:09 crc kubenswrapper[4844]: I0126 13:37:09.905082 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 13:37:09 crc kubenswrapper[4844]: I0126 13:37:09.905232 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 13:37:09 crc kubenswrapper[4844]: I0126 13:37:09.906778 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Jan 26 13:37:09 crc kubenswrapper[4844]: I0126 13:37:09.907032 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 13:37:09 crc kubenswrapper[4844]: I0126 13:37:09.907237 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r4j2z" Jan 26 13:37:09 crc kubenswrapper[4844]: I0126 13:37:09.910626 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sttdt"] Jan 26 13:37:10 crc kubenswrapper[4844]: I0126 13:37:10.078985 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/2d88214a-d4b9-4885-ac32-cae7c7dcd3ba-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-sttdt\" (UID: \"2d88214a-d4b9-4885-ac32-cae7c7dcd3ba\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sttdt" Jan 26 13:37:10 crc kubenswrapper[4844]: I0126 13:37:10.079080 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2d88214a-d4b9-4885-ac32-cae7c7dcd3ba-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-sttdt\" (UID: \"2d88214a-d4b9-4885-ac32-cae7c7dcd3ba\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sttdt" Jan 26 13:37:10 crc kubenswrapper[4844]: I0126 13:37:10.079104 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2d88214a-d4b9-4885-ac32-cae7c7dcd3ba-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-sttdt\" (UID: \"2d88214a-d4b9-4885-ac32-cae7c7dcd3ba\") 
" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sttdt" Jan 26 13:37:10 crc kubenswrapper[4844]: I0126 13:37:10.079155 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d88214a-d4b9-4885-ac32-cae7c7dcd3ba-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-sttdt\" (UID: \"2d88214a-d4b9-4885-ac32-cae7c7dcd3ba\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sttdt" Jan 26 13:37:10 crc kubenswrapper[4844]: I0126 13:37:10.079180 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2m4k\" (UniqueName: \"kubernetes.io/projected/2d88214a-d4b9-4885-ac32-cae7c7dcd3ba-kube-api-access-z2m4k\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-sttdt\" (UID: \"2d88214a-d4b9-4885-ac32-cae7c7dcd3ba\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sttdt" Jan 26 13:37:10 crc kubenswrapper[4844]: I0126 13:37:10.180464 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/2d88214a-d4b9-4885-ac32-cae7c7dcd3ba-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-sttdt\" (UID: \"2d88214a-d4b9-4885-ac32-cae7c7dcd3ba\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sttdt" Jan 26 13:37:10 crc kubenswrapper[4844]: I0126 13:37:10.180559 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2d88214a-d4b9-4885-ac32-cae7c7dcd3ba-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-sttdt\" (UID: \"2d88214a-d4b9-4885-ac32-cae7c7dcd3ba\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sttdt" Jan 26 13:37:10 crc kubenswrapper[4844]: I0126 13:37:10.180581 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2d88214a-d4b9-4885-ac32-cae7c7dcd3ba-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-sttdt\" (UID: \"2d88214a-d4b9-4885-ac32-cae7c7dcd3ba\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sttdt" Jan 26 13:37:10 crc kubenswrapper[4844]: I0126 13:37:10.180677 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d88214a-d4b9-4885-ac32-cae7c7dcd3ba-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-sttdt\" (UID: \"2d88214a-d4b9-4885-ac32-cae7c7dcd3ba\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sttdt" Jan 26 13:37:10 crc kubenswrapper[4844]: I0126 13:37:10.180706 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2m4k\" (UniqueName: \"kubernetes.io/projected/2d88214a-d4b9-4885-ac32-cae7c7dcd3ba-kube-api-access-z2m4k\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-sttdt\" (UID: \"2d88214a-d4b9-4885-ac32-cae7c7dcd3ba\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sttdt" Jan 26 13:37:10 crc kubenswrapper[4844]: I0126 13:37:10.184819 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d88214a-d4b9-4885-ac32-cae7c7dcd3ba-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-sttdt\" (UID: 
\"2d88214a-d4b9-4885-ac32-cae7c7dcd3ba\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sttdt" Jan 26 13:37:10 crc kubenswrapper[4844]: I0126 13:37:10.184897 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2d88214a-d4b9-4885-ac32-cae7c7dcd3ba-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-sttdt\" (UID: \"2d88214a-d4b9-4885-ac32-cae7c7dcd3ba\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sttdt" Jan 26 13:37:10 crc kubenswrapper[4844]: I0126 13:37:10.185206 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/2d88214a-d4b9-4885-ac32-cae7c7dcd3ba-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-sttdt\" (UID: \"2d88214a-d4b9-4885-ac32-cae7c7dcd3ba\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sttdt" Jan 26 13:37:10 crc kubenswrapper[4844]: I0126 13:37:10.185693 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2d88214a-d4b9-4885-ac32-cae7c7dcd3ba-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-sttdt\" (UID: \"2d88214a-d4b9-4885-ac32-cae7c7dcd3ba\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sttdt" Jan 26 13:37:10 crc kubenswrapper[4844]: I0126 13:37:10.212144 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2m4k\" (UniqueName: \"kubernetes.io/projected/2d88214a-d4b9-4885-ac32-cae7c7dcd3ba-kube-api-access-z2m4k\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-sttdt\" (UID: \"2d88214a-d4b9-4885-ac32-cae7c7dcd3ba\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sttdt" Jan 26 13:37:10 crc kubenswrapper[4844]: I0126 13:37:10.284783 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sttdt" Jan 26 13:37:10 crc kubenswrapper[4844]: I0126 13:37:10.873122 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sttdt"] Jan 26 13:37:11 crc kubenswrapper[4844]: I0126 13:37:11.668014 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sttdt" event={"ID":"2d88214a-d4b9-4885-ac32-cae7c7dcd3ba","Type":"ContainerStarted","Data":"2fc60d06c7909c26958e2509ce3b00908af31761320ad113715778d88207da11"} Jan 26 13:37:11 crc kubenswrapper[4844]: I0126 13:37:11.668333 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sttdt" event={"ID":"2d88214a-d4b9-4885-ac32-cae7c7dcd3ba","Type":"ContainerStarted","Data":"150caf3d8cd7227931dd113670c698a2cf65eb2396140022b426116bdb158784"} Jan 26 13:37:11 crc kubenswrapper[4844]: I0126 13:37:11.694016 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sttdt" podStartSLOduration=2.240400189 podStartE2EDuration="2.693994578s" podCreationTimestamp="2026-01-26 13:37:09 +0000 UTC" firstStartedPulling="2026-01-26 13:37:10.87530669 +0000 UTC m=+3207.808674302" lastFinishedPulling="2026-01-26 13:37:11.328901079 +0000 UTC m=+3208.262268691" observedRunningTime="2026-01-26 13:37:11.687289035 +0000 UTC m=+3208.620656647" watchObservedRunningTime="2026-01-26 13:37:11.693994578 +0000 UTC m=+3208.627362190" Jan 26 13:37:18 crc kubenswrapper[4844]: I0126 13:37:18.313230 4844 scope.go:117] "RemoveContainer" containerID="1e5683696cedef0e24e380cd8f5a01d2be8ea7dd57619e21972834c26754b83e" Jan 26 13:37:18 crc kubenswrapper[4844]: E0126 13:37:18.314306 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 13:37:33 crc kubenswrapper[4844]: I0126 13:37:33.329222 4844 scope.go:117] "RemoveContainer" containerID="1e5683696cedef0e24e380cd8f5a01d2be8ea7dd57619e21972834c26754b83e" Jan 26 13:37:33 crc kubenswrapper[4844]: E0126 13:37:33.330141 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 13:37:45 crc kubenswrapper[4844]: I0126 13:37:45.313305 4844 scope.go:117] "RemoveContainer" containerID="1e5683696cedef0e24e380cd8f5a01d2be8ea7dd57619e21972834c26754b83e" Jan 26 13:37:46 crc kubenswrapper[4844]: I0126 13:37:46.032986 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" event={"ID":"e3602fc7-397b-4d73-ab0c-45acc047397b","Type":"ContainerStarted","Data":"7f2e320dd842af2d2fc73841752821ae2e65a052ea9aa96d77b135f1559a71cd"} Jan 26 13:37:56 crc kubenswrapper[4844]: I0126 13:37:56.057744 4844 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-marketplace/community-operators-hqdmc"] Jan 26 13:37:56 crc kubenswrapper[4844]: I0126 13:37:56.062285 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hqdmc" Jan 26 13:37:56 crc kubenswrapper[4844]: I0126 13:37:56.080834 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hqdmc"] Jan 26 13:37:56 crc kubenswrapper[4844]: I0126 13:37:56.260495 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tzm4\" (UniqueName: \"kubernetes.io/projected/066dc227-5be8-415c-a18c-107f8da1559b-kube-api-access-7tzm4\") pod \"community-operators-hqdmc\" (UID: \"066dc227-5be8-415c-a18c-107f8da1559b\") " pod="openshift-marketplace/community-operators-hqdmc" Jan 26 13:37:56 crc kubenswrapper[4844]: I0126 13:37:56.260928 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/066dc227-5be8-415c-a18c-107f8da1559b-utilities\") pod \"community-operators-hqdmc\" (UID: \"066dc227-5be8-415c-a18c-107f8da1559b\") " pod="openshift-marketplace/community-operators-hqdmc" Jan 26 13:37:56 crc kubenswrapper[4844]: I0126 13:37:56.261106 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/066dc227-5be8-415c-a18c-107f8da1559b-catalog-content\") pod \"community-operators-hqdmc\" (UID: \"066dc227-5be8-415c-a18c-107f8da1559b\") " pod="openshift-marketplace/community-operators-hqdmc" Jan 26 13:37:56 crc kubenswrapper[4844]: I0126 13:37:56.363430 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7tzm4\" (UniqueName: \"kubernetes.io/projected/066dc227-5be8-415c-a18c-107f8da1559b-kube-api-access-7tzm4\") pod \"community-operators-hqdmc\" (UID: \"066dc227-5be8-415c-a18c-107f8da1559b\") " pod="openshift-marketplace/community-operators-hqdmc" Jan 26 13:37:56 crc kubenswrapper[4844]: I0126 13:37:56.365031 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/066dc227-5be8-415c-a18c-107f8da1559b-utilities\") pod \"community-operators-hqdmc\" (UID: \"066dc227-5be8-415c-a18c-107f8da1559b\") " pod="openshift-marketplace/community-operators-hqdmc" Jan 26 13:37:56 crc kubenswrapper[4844]: I0126 13:37:56.365153 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/066dc227-5be8-415c-a18c-107f8da1559b-catalog-content\") pod \"community-operators-hqdmc\" (UID: \"066dc227-5be8-415c-a18c-107f8da1559b\") " pod="openshift-marketplace/community-operators-hqdmc" Jan 26 13:37:56 crc kubenswrapper[4844]: I0126 13:37:56.366456 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/066dc227-5be8-415c-a18c-107f8da1559b-catalog-content\") pod \"community-operators-hqdmc\" (UID: \"066dc227-5be8-415c-a18c-107f8da1559b\") " pod="openshift-marketplace/community-operators-hqdmc" Jan 26 13:37:56 crc kubenswrapper[4844]: I0126 13:37:56.366873 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/066dc227-5be8-415c-a18c-107f8da1559b-utilities\") pod \"community-operators-hqdmc\" (UID: 
\"066dc227-5be8-415c-a18c-107f8da1559b\") " pod="openshift-marketplace/community-operators-hqdmc" Jan 26 13:37:56 crc kubenswrapper[4844]: I0126 13:37:56.401313 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7tzm4\" (UniqueName: \"kubernetes.io/projected/066dc227-5be8-415c-a18c-107f8da1559b-kube-api-access-7tzm4\") pod \"community-operators-hqdmc\" (UID: \"066dc227-5be8-415c-a18c-107f8da1559b\") " pod="openshift-marketplace/community-operators-hqdmc" Jan 26 13:37:56 crc kubenswrapper[4844]: I0126 13:37:56.412945 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hqdmc" Jan 26 13:37:56 crc kubenswrapper[4844]: I0126 13:37:56.916009 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hqdmc"] Jan 26 13:37:57 crc kubenswrapper[4844]: I0126 13:37:57.178825 4844 generic.go:334] "Generic (PLEG): container finished" podID="066dc227-5be8-415c-a18c-107f8da1559b" containerID="c90d139d708527958fd389f98ea017323f3f477e395eaebd3baefd4ad9ad6156" exitCode=0 Jan 26 13:37:57 crc kubenswrapper[4844]: I0126 13:37:57.178885 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hqdmc" event={"ID":"066dc227-5be8-415c-a18c-107f8da1559b","Type":"ContainerDied","Data":"c90d139d708527958fd389f98ea017323f3f477e395eaebd3baefd4ad9ad6156"} Jan 26 13:37:57 crc kubenswrapper[4844]: I0126 13:37:57.179127 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hqdmc" event={"ID":"066dc227-5be8-415c-a18c-107f8da1559b","Type":"ContainerStarted","Data":"5bd49916b87e0ffadfec6e7084f3a1fd79686b8b546d4f74ad938037dd9b528c"} Jan 26 13:37:57 crc kubenswrapper[4844]: I0126 13:37:57.180533 4844 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 13:37:58 crc kubenswrapper[4844]: I0126 13:37:58.196347 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hqdmc" event={"ID":"066dc227-5be8-415c-a18c-107f8da1559b","Type":"ContainerStarted","Data":"bf1c5d48a2a33b5b04f501e375f95797c316b1c8a0b604d586e79b1434423a75"} Jan 26 13:37:59 crc kubenswrapper[4844]: I0126 13:37:59.219956 4844 generic.go:334] "Generic (PLEG): container finished" podID="066dc227-5be8-415c-a18c-107f8da1559b" containerID="bf1c5d48a2a33b5b04f501e375f95797c316b1c8a0b604d586e79b1434423a75" exitCode=0 Jan 26 13:37:59 crc kubenswrapper[4844]: I0126 13:37:59.220056 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hqdmc" event={"ID":"066dc227-5be8-415c-a18c-107f8da1559b","Type":"ContainerDied","Data":"bf1c5d48a2a33b5b04f501e375f95797c316b1c8a0b604d586e79b1434423a75"} Jan 26 13:38:00 crc kubenswrapper[4844]: I0126 13:38:00.231510 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hqdmc" event={"ID":"066dc227-5be8-415c-a18c-107f8da1559b","Type":"ContainerStarted","Data":"06fe033a6238c7291524d3b52aa2ce20ce3dc01356c218042f83db21ff48bca4"} Jan 26 13:38:00 crc kubenswrapper[4844]: I0126 13:38:00.248072 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-hqdmc" podStartSLOduration=1.676618035 podStartE2EDuration="4.248049992s" podCreationTimestamp="2026-01-26 13:37:56 +0000 UTC" firstStartedPulling="2026-01-26 13:37:57.180313258 +0000 UTC m=+3254.113680870" 
lastFinishedPulling="2026-01-26 13:37:59.751745195 +0000 UTC m=+3256.685112827" observedRunningTime="2026-01-26 13:38:00.246298479 +0000 UTC m=+3257.179666111" watchObservedRunningTime="2026-01-26 13:38:00.248049992 +0000 UTC m=+3257.181417604" Jan 26 13:38:03 crc kubenswrapper[4844]: I0126 13:38:03.213473 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-m6qf4"] Jan 26 13:38:03 crc kubenswrapper[4844]: I0126 13:38:03.216138 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-m6qf4" Jan 26 13:38:03 crc kubenswrapper[4844]: I0126 13:38:03.236733 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-m6qf4"] Jan 26 13:38:03 crc kubenswrapper[4844]: I0126 13:38:03.418444 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f59a487-ecab-4470-992a-738d76779ed6-utilities\") pod \"certified-operators-m6qf4\" (UID: \"4f59a487-ecab-4470-992a-738d76779ed6\") " pod="openshift-marketplace/certified-operators-m6qf4" Jan 26 13:38:03 crc kubenswrapper[4844]: I0126 13:38:03.418699 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htj4v\" (UniqueName: \"kubernetes.io/projected/4f59a487-ecab-4470-992a-738d76779ed6-kube-api-access-htj4v\") pod \"certified-operators-m6qf4\" (UID: \"4f59a487-ecab-4470-992a-738d76779ed6\") " pod="openshift-marketplace/certified-operators-m6qf4" Jan 26 13:38:03 crc kubenswrapper[4844]: I0126 13:38:03.419289 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f59a487-ecab-4470-992a-738d76779ed6-catalog-content\") pod \"certified-operators-m6qf4\" (UID: \"4f59a487-ecab-4470-992a-738d76779ed6\") " pod="openshift-marketplace/certified-operators-m6qf4" Jan 26 13:38:03 crc kubenswrapper[4844]: I0126 13:38:03.521645 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f59a487-ecab-4470-992a-738d76779ed6-utilities\") pod \"certified-operators-m6qf4\" (UID: \"4f59a487-ecab-4470-992a-738d76779ed6\") " pod="openshift-marketplace/certified-operators-m6qf4" Jan 26 13:38:03 crc kubenswrapper[4844]: I0126 13:38:03.521724 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htj4v\" (UniqueName: \"kubernetes.io/projected/4f59a487-ecab-4470-992a-738d76779ed6-kube-api-access-htj4v\") pod \"certified-operators-m6qf4\" (UID: \"4f59a487-ecab-4470-992a-738d76779ed6\") " pod="openshift-marketplace/certified-operators-m6qf4" Jan 26 13:38:03 crc kubenswrapper[4844]: I0126 13:38:03.521859 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f59a487-ecab-4470-992a-738d76779ed6-catalog-content\") pod \"certified-operators-m6qf4\" (UID: \"4f59a487-ecab-4470-992a-738d76779ed6\") " pod="openshift-marketplace/certified-operators-m6qf4" Jan 26 13:38:03 crc kubenswrapper[4844]: I0126 13:38:03.522677 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f59a487-ecab-4470-992a-738d76779ed6-catalog-content\") pod \"certified-operators-m6qf4\" (UID: \"4f59a487-ecab-4470-992a-738d76779ed6\") " 
pod="openshift-marketplace/certified-operators-m6qf4" Jan 26 13:38:03 crc kubenswrapper[4844]: I0126 13:38:03.522907 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f59a487-ecab-4470-992a-738d76779ed6-utilities\") pod \"certified-operators-m6qf4\" (UID: \"4f59a487-ecab-4470-992a-738d76779ed6\") " pod="openshift-marketplace/certified-operators-m6qf4" Jan 26 13:38:03 crc kubenswrapper[4844]: I0126 13:38:03.542110 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htj4v\" (UniqueName: \"kubernetes.io/projected/4f59a487-ecab-4470-992a-738d76779ed6-kube-api-access-htj4v\") pod \"certified-operators-m6qf4\" (UID: \"4f59a487-ecab-4470-992a-738d76779ed6\") " pod="openshift-marketplace/certified-operators-m6qf4" Jan 26 13:38:03 crc kubenswrapper[4844]: I0126 13:38:03.837064 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-m6qf4" Jan 26 13:38:04 crc kubenswrapper[4844]: I0126 13:38:04.379817 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-m6qf4"] Jan 26 13:38:05 crc kubenswrapper[4844]: I0126 13:38:05.288544 4844 generic.go:334] "Generic (PLEG): container finished" podID="4f59a487-ecab-4470-992a-738d76779ed6" containerID="83a8b47b48880043bda1c643ef0de0cd202df40560fc95063bbde1cc5ec52def" exitCode=0 Jan 26 13:38:05 crc kubenswrapper[4844]: I0126 13:38:05.288640 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m6qf4" event={"ID":"4f59a487-ecab-4470-992a-738d76779ed6","Type":"ContainerDied","Data":"83a8b47b48880043bda1c643ef0de0cd202df40560fc95063bbde1cc5ec52def"} Jan 26 13:38:05 crc kubenswrapper[4844]: I0126 13:38:05.288851 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m6qf4" event={"ID":"4f59a487-ecab-4470-992a-738d76779ed6","Type":"ContainerStarted","Data":"f6918892b5540bbf9985ff5207c77c3cc7dc037808c07c990b1a316e25818149"} Jan 26 13:38:06 crc kubenswrapper[4844]: I0126 13:38:06.303812 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m6qf4" event={"ID":"4f59a487-ecab-4470-992a-738d76779ed6","Type":"ContainerStarted","Data":"9c71a8294dd81274eb83b5ee83659543b3b6d7a8453128b25440e7aad79e1adb"} Jan 26 13:38:06 crc kubenswrapper[4844]: I0126 13:38:06.413822 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-hqdmc" Jan 26 13:38:06 crc kubenswrapper[4844]: I0126 13:38:06.413884 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-hqdmc" Jan 26 13:38:06 crc kubenswrapper[4844]: I0126 13:38:06.474221 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-hqdmc" Jan 26 13:38:07 crc kubenswrapper[4844]: I0126 13:38:07.317933 4844 generic.go:334] "Generic (PLEG): container finished" podID="4f59a487-ecab-4470-992a-738d76779ed6" containerID="9c71a8294dd81274eb83b5ee83659543b3b6d7a8453128b25440e7aad79e1adb" exitCode=0 Jan 26 13:38:07 crc kubenswrapper[4844]: I0126 13:38:07.345911 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m6qf4" 
event={"ID":"4f59a487-ecab-4470-992a-738d76779ed6","Type":"ContainerDied","Data":"9c71a8294dd81274eb83b5ee83659543b3b6d7a8453128b25440e7aad79e1adb"} Jan 26 13:38:07 crc kubenswrapper[4844]: I0126 13:38:07.380168 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-hqdmc" Jan 26 13:38:08 crc kubenswrapper[4844]: I0126 13:38:08.333950 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m6qf4" event={"ID":"4f59a487-ecab-4470-992a-738d76779ed6","Type":"ContainerStarted","Data":"893d9af96d6e19be675f0fb913a6eaea43466a227ab5847527be0fe10deefa47"} Jan 26 13:38:08 crc kubenswrapper[4844]: I0126 13:38:08.368685 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-m6qf4" podStartSLOduration=2.947303033 podStartE2EDuration="5.368652015s" podCreationTimestamp="2026-01-26 13:38:03 +0000 UTC" firstStartedPulling="2026-01-26 13:38:05.290447146 +0000 UTC m=+3262.223814778" lastFinishedPulling="2026-01-26 13:38:07.711796148 +0000 UTC m=+3264.645163760" observedRunningTime="2026-01-26 13:38:08.357856133 +0000 UTC m=+3265.291223815" watchObservedRunningTime="2026-01-26 13:38:08.368652015 +0000 UTC m=+3265.302019667" Jan 26 13:38:08 crc kubenswrapper[4844]: I0126 13:38:08.801999 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hqdmc"] Jan 26 13:38:09 crc kubenswrapper[4844]: I0126 13:38:09.342753 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-hqdmc" podUID="066dc227-5be8-415c-a18c-107f8da1559b" containerName="registry-server" containerID="cri-o://06fe033a6238c7291524d3b52aa2ce20ce3dc01356c218042f83db21ff48bca4" gracePeriod=2 Jan 26 13:38:10 crc kubenswrapper[4844]: I0126 13:38:10.363458 4844 generic.go:334] "Generic (PLEG): container finished" podID="066dc227-5be8-415c-a18c-107f8da1559b" containerID="06fe033a6238c7291524d3b52aa2ce20ce3dc01356c218042f83db21ff48bca4" exitCode=0 Jan 26 13:38:10 crc kubenswrapper[4844]: I0126 13:38:10.363560 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hqdmc" event={"ID":"066dc227-5be8-415c-a18c-107f8da1559b","Type":"ContainerDied","Data":"06fe033a6238c7291524d3b52aa2ce20ce3dc01356c218042f83db21ff48bca4"} Jan 26 13:38:10 crc kubenswrapper[4844]: I0126 13:38:10.363903 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hqdmc" event={"ID":"066dc227-5be8-415c-a18c-107f8da1559b","Type":"ContainerDied","Data":"5bd49916b87e0ffadfec6e7084f3a1fd79686b8b546d4f74ad938037dd9b528c"} Jan 26 13:38:10 crc kubenswrapper[4844]: I0126 13:38:10.363930 4844 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5bd49916b87e0ffadfec6e7084f3a1fd79686b8b546d4f74ad938037dd9b528c" Jan 26 13:38:10 crc kubenswrapper[4844]: I0126 13:38:10.394879 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hqdmc" Jan 26 13:38:10 crc kubenswrapper[4844]: I0126 13:38:10.578739 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/066dc227-5be8-415c-a18c-107f8da1559b-catalog-content\") pod \"066dc227-5be8-415c-a18c-107f8da1559b\" (UID: \"066dc227-5be8-415c-a18c-107f8da1559b\") " Jan 26 13:38:10 crc kubenswrapper[4844]: I0126 13:38:10.579106 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/066dc227-5be8-415c-a18c-107f8da1559b-utilities\") pod \"066dc227-5be8-415c-a18c-107f8da1559b\" (UID: \"066dc227-5be8-415c-a18c-107f8da1559b\") " Jan 26 13:38:10 crc kubenswrapper[4844]: I0126 13:38:10.579352 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7tzm4\" (UniqueName: \"kubernetes.io/projected/066dc227-5be8-415c-a18c-107f8da1559b-kube-api-access-7tzm4\") pod \"066dc227-5be8-415c-a18c-107f8da1559b\" (UID: \"066dc227-5be8-415c-a18c-107f8da1559b\") " Jan 26 13:38:10 crc kubenswrapper[4844]: I0126 13:38:10.580841 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/066dc227-5be8-415c-a18c-107f8da1559b-utilities" (OuterVolumeSpecName: "utilities") pod "066dc227-5be8-415c-a18c-107f8da1559b" (UID: "066dc227-5be8-415c-a18c-107f8da1559b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:38:10 crc kubenswrapper[4844]: I0126 13:38:10.589112 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/066dc227-5be8-415c-a18c-107f8da1559b-kube-api-access-7tzm4" (OuterVolumeSpecName: "kube-api-access-7tzm4") pod "066dc227-5be8-415c-a18c-107f8da1559b" (UID: "066dc227-5be8-415c-a18c-107f8da1559b"). InnerVolumeSpecName "kube-api-access-7tzm4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:38:10 crc kubenswrapper[4844]: I0126 13:38:10.663526 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/066dc227-5be8-415c-a18c-107f8da1559b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "066dc227-5be8-415c-a18c-107f8da1559b" (UID: "066dc227-5be8-415c-a18c-107f8da1559b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:38:10 crc kubenswrapper[4844]: I0126 13:38:10.683023 4844 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/066dc227-5be8-415c-a18c-107f8da1559b-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 13:38:10 crc kubenswrapper[4844]: I0126 13:38:10.683085 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7tzm4\" (UniqueName: \"kubernetes.io/projected/066dc227-5be8-415c-a18c-107f8da1559b-kube-api-access-7tzm4\") on node \"crc\" DevicePath \"\"" Jan 26 13:38:10 crc kubenswrapper[4844]: I0126 13:38:10.683099 4844 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/066dc227-5be8-415c-a18c-107f8da1559b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 13:38:11 crc kubenswrapper[4844]: I0126 13:38:11.375877 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hqdmc" Jan 26 13:38:11 crc kubenswrapper[4844]: I0126 13:38:11.406181 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hqdmc"] Jan 26 13:38:11 crc kubenswrapper[4844]: I0126 13:38:11.413796 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-hqdmc"] Jan 26 13:38:13 crc kubenswrapper[4844]: I0126 13:38:13.335229 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="066dc227-5be8-415c-a18c-107f8da1559b" path="/var/lib/kubelet/pods/066dc227-5be8-415c-a18c-107f8da1559b/volumes" Jan 26 13:38:13 crc kubenswrapper[4844]: I0126 13:38:13.837503 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-m6qf4" Jan 26 13:38:13 crc kubenswrapper[4844]: I0126 13:38:13.837579 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-m6qf4" Jan 26 13:38:13 crc kubenswrapper[4844]: I0126 13:38:13.902467 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-m6qf4" Jan 26 13:38:14 crc kubenswrapper[4844]: I0126 13:38:14.475107 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-m6qf4" Jan 26 13:38:14 crc kubenswrapper[4844]: I0126 13:38:14.816768 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-m6qf4"] Jan 26 13:38:16 crc kubenswrapper[4844]: I0126 13:38:16.435752 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-m6qf4" podUID="4f59a487-ecab-4470-992a-738d76779ed6" containerName="registry-server" containerID="cri-o://893d9af96d6e19be675f0fb913a6eaea43466a227ab5847527be0fe10deefa47" gracePeriod=2 Jan 26 13:38:17 crc kubenswrapper[4844]: I0126 13:38:17.449102 4844 generic.go:334] "Generic (PLEG): container finished" podID="4f59a487-ecab-4470-992a-738d76779ed6" containerID="893d9af96d6e19be675f0fb913a6eaea43466a227ab5847527be0fe10deefa47" exitCode=0 Jan 26 13:38:17 crc kubenswrapper[4844]: I0126 13:38:17.449154 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m6qf4" event={"ID":"4f59a487-ecab-4470-992a-738d76779ed6","Type":"ContainerDied","Data":"893d9af96d6e19be675f0fb913a6eaea43466a227ab5847527be0fe10deefa47"} Jan 26 13:38:17 crc kubenswrapper[4844]: I0126 13:38:17.895053 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-m6qf4" Jan 26 13:38:18 crc kubenswrapper[4844]: I0126 13:38:18.048029 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f59a487-ecab-4470-992a-738d76779ed6-utilities\") pod \"4f59a487-ecab-4470-992a-738d76779ed6\" (UID: \"4f59a487-ecab-4470-992a-738d76779ed6\") " Jan 26 13:38:18 crc kubenswrapper[4844]: I0126 13:38:18.048468 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htj4v\" (UniqueName: \"kubernetes.io/projected/4f59a487-ecab-4470-992a-738d76779ed6-kube-api-access-htj4v\") pod \"4f59a487-ecab-4470-992a-738d76779ed6\" (UID: \"4f59a487-ecab-4470-992a-738d76779ed6\") " Jan 26 13:38:18 crc kubenswrapper[4844]: I0126 13:38:18.048577 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f59a487-ecab-4470-992a-738d76779ed6-catalog-content\") pod \"4f59a487-ecab-4470-992a-738d76779ed6\" (UID: \"4f59a487-ecab-4470-992a-738d76779ed6\") " Jan 26 13:38:18 crc kubenswrapper[4844]: I0126 13:38:18.050897 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f59a487-ecab-4470-992a-738d76779ed6-utilities" (OuterVolumeSpecName: "utilities") pod "4f59a487-ecab-4470-992a-738d76779ed6" (UID: "4f59a487-ecab-4470-992a-738d76779ed6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:38:18 crc kubenswrapper[4844]: I0126 13:38:18.056165 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f59a487-ecab-4470-992a-738d76779ed6-kube-api-access-htj4v" (OuterVolumeSpecName: "kube-api-access-htj4v") pod "4f59a487-ecab-4470-992a-738d76779ed6" (UID: "4f59a487-ecab-4470-992a-738d76779ed6"). InnerVolumeSpecName "kube-api-access-htj4v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:38:18 crc kubenswrapper[4844]: I0126 13:38:18.117772 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f59a487-ecab-4470-992a-738d76779ed6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4f59a487-ecab-4470-992a-738d76779ed6" (UID: "4f59a487-ecab-4470-992a-738d76779ed6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:38:18 crc kubenswrapper[4844]: I0126 13:38:18.154753 4844 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f59a487-ecab-4470-992a-738d76779ed6-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 13:38:18 crc kubenswrapper[4844]: I0126 13:38:18.154793 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htj4v\" (UniqueName: \"kubernetes.io/projected/4f59a487-ecab-4470-992a-738d76779ed6-kube-api-access-htj4v\") on node \"crc\" DevicePath \"\"" Jan 26 13:38:18 crc kubenswrapper[4844]: I0126 13:38:18.154806 4844 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f59a487-ecab-4470-992a-738d76779ed6-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 13:38:18 crc kubenswrapper[4844]: I0126 13:38:18.464430 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m6qf4" event={"ID":"4f59a487-ecab-4470-992a-738d76779ed6","Type":"ContainerDied","Data":"f6918892b5540bbf9985ff5207c77c3cc7dc037808c07c990b1a316e25818149"} Jan 26 13:38:18 crc kubenswrapper[4844]: I0126 13:38:18.464545 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-m6qf4" Jan 26 13:38:18 crc kubenswrapper[4844]: I0126 13:38:18.465870 4844 scope.go:117] "RemoveContainer" containerID="893d9af96d6e19be675f0fb913a6eaea43466a227ab5847527be0fe10deefa47" Jan 26 13:38:18 crc kubenswrapper[4844]: I0126 13:38:18.502930 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-m6qf4"] Jan 26 13:38:18 crc kubenswrapper[4844]: I0126 13:38:18.507958 4844 scope.go:117] "RemoveContainer" containerID="9c71a8294dd81274eb83b5ee83659543b3b6d7a8453128b25440e7aad79e1adb" Jan 26 13:38:18 crc kubenswrapper[4844]: I0126 13:38:18.531484 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-m6qf4"] Jan 26 13:38:18 crc kubenswrapper[4844]: I0126 13:38:18.537938 4844 scope.go:117] "RemoveContainer" containerID="83a8b47b48880043bda1c643ef0de0cd202df40560fc95063bbde1cc5ec52def" Jan 26 13:38:19 crc kubenswrapper[4844]: I0126 13:38:19.328474 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f59a487-ecab-4470-992a-738d76779ed6" path="/var/lib/kubelet/pods/4f59a487-ecab-4470-992a-738d76779ed6/volumes" Jan 26 13:40:06 crc kubenswrapper[4844]: I0126 13:40:06.364741 4844 patch_prober.go:28] interesting pod/machine-config-daemon-j7r9j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 13:40:06 crc kubenswrapper[4844]: I0126 13:40:06.365426 4844 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 13:40:36 crc kubenswrapper[4844]: I0126 13:40:36.364880 4844 patch_prober.go:28] interesting pod/machine-config-daemon-j7r9j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": 
dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 13:40:36 crc kubenswrapper[4844]: I0126 13:40:36.366191 4844 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 13:41:06 crc kubenswrapper[4844]: I0126 13:41:06.365373 4844 patch_prober.go:28] interesting pod/machine-config-daemon-j7r9j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 13:41:06 crc kubenswrapper[4844]: I0126 13:41:06.365979 4844 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 13:41:06 crc kubenswrapper[4844]: I0126 13:41:06.366029 4844 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" Jan 26 13:41:06 crc kubenswrapper[4844]: I0126 13:41:06.366888 4844 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7f2e320dd842af2d2fc73841752821ae2e65a052ea9aa96d77b135f1559a71cd"} pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 13:41:06 crc kubenswrapper[4844]: I0126 13:41:06.366949 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" containerName="machine-config-daemon" containerID="cri-o://7f2e320dd842af2d2fc73841752821ae2e65a052ea9aa96d77b135f1559a71cd" gracePeriod=600 Jan 26 13:41:07 crc kubenswrapper[4844]: I0126 13:41:07.325441 4844 generic.go:334] "Generic (PLEG): container finished" podID="e3602fc7-397b-4d73-ab0c-45acc047397b" containerID="7f2e320dd842af2d2fc73841752821ae2e65a052ea9aa96d77b135f1559a71cd" exitCode=0 Jan 26 13:41:07 crc kubenswrapper[4844]: I0126 13:41:07.326947 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" event={"ID":"e3602fc7-397b-4d73-ab0c-45acc047397b","Type":"ContainerDied","Data":"7f2e320dd842af2d2fc73841752821ae2e65a052ea9aa96d77b135f1559a71cd"} Jan 26 13:41:07 crc kubenswrapper[4844]: I0126 13:41:07.327006 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" event={"ID":"e3602fc7-397b-4d73-ab0c-45acc047397b","Type":"ContainerStarted","Data":"0424fae40a4aceb05ea41676053577c321c6723fd5d4fe32f9b0937d2a5632b0"} Jan 26 13:41:07 crc kubenswrapper[4844]: I0126 13:41:07.327034 4844 scope.go:117] "RemoveContainer" containerID="1e5683696cedef0e24e380cd8f5a01d2be8ea7dd57619e21972834c26754b83e" Jan 26 13:42:10 crc kubenswrapper[4844]: I0126 13:42:10.045451 4844 generic.go:334] "Generic (PLEG): container finished" podID="2d88214a-d4b9-4885-ac32-cae7c7dcd3ba" 
containerID="2fc60d06c7909c26958e2509ce3b00908af31761320ad113715778d88207da11" exitCode=0 Jan 26 13:42:10 crc kubenswrapper[4844]: I0126 13:42:10.046058 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sttdt" event={"ID":"2d88214a-d4b9-4885-ac32-cae7c7dcd3ba","Type":"ContainerDied","Data":"2fc60d06c7909c26958e2509ce3b00908af31761320ad113715778d88207da11"} Jan 26 13:42:11 crc kubenswrapper[4844]: I0126 13:42:11.544023 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sttdt" Jan 26 13:42:11 crc kubenswrapper[4844]: I0126 13:42:11.635197 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d88214a-d4b9-4885-ac32-cae7c7dcd3ba-libvirt-combined-ca-bundle\") pod \"2d88214a-d4b9-4885-ac32-cae7c7dcd3ba\" (UID: \"2d88214a-d4b9-4885-ac32-cae7c7dcd3ba\") " Jan 26 13:42:11 crc kubenswrapper[4844]: I0126 13:42:11.635284 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2d88214a-d4b9-4885-ac32-cae7c7dcd3ba-ssh-key-openstack-edpm-ipam\") pod \"2d88214a-d4b9-4885-ac32-cae7c7dcd3ba\" (UID: \"2d88214a-d4b9-4885-ac32-cae7c7dcd3ba\") " Jan 26 13:42:11 crc kubenswrapper[4844]: I0126 13:42:11.635327 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2d88214a-d4b9-4885-ac32-cae7c7dcd3ba-inventory\") pod \"2d88214a-d4b9-4885-ac32-cae7c7dcd3ba\" (UID: \"2d88214a-d4b9-4885-ac32-cae7c7dcd3ba\") " Jan 26 13:42:11 crc kubenswrapper[4844]: I0126 13:42:11.635391 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z2m4k\" (UniqueName: \"kubernetes.io/projected/2d88214a-d4b9-4885-ac32-cae7c7dcd3ba-kube-api-access-z2m4k\") pod \"2d88214a-d4b9-4885-ac32-cae7c7dcd3ba\" (UID: \"2d88214a-d4b9-4885-ac32-cae7c7dcd3ba\") " Jan 26 13:42:11 crc kubenswrapper[4844]: I0126 13:42:11.635440 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/2d88214a-d4b9-4885-ac32-cae7c7dcd3ba-libvirt-secret-0\") pod \"2d88214a-d4b9-4885-ac32-cae7c7dcd3ba\" (UID: \"2d88214a-d4b9-4885-ac32-cae7c7dcd3ba\") " Jan 26 13:42:11 crc kubenswrapper[4844]: I0126 13:42:11.642039 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d88214a-d4b9-4885-ac32-cae7c7dcd3ba-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "2d88214a-d4b9-4885-ac32-cae7c7dcd3ba" (UID: "2d88214a-d4b9-4885-ac32-cae7c7dcd3ba"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:42:11 crc kubenswrapper[4844]: I0126 13:42:11.643759 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d88214a-d4b9-4885-ac32-cae7c7dcd3ba-kube-api-access-z2m4k" (OuterVolumeSpecName: "kube-api-access-z2m4k") pod "2d88214a-d4b9-4885-ac32-cae7c7dcd3ba" (UID: "2d88214a-d4b9-4885-ac32-cae7c7dcd3ba"). InnerVolumeSpecName "kube-api-access-z2m4k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:42:11 crc kubenswrapper[4844]: I0126 13:42:11.668091 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d88214a-d4b9-4885-ac32-cae7c7dcd3ba-inventory" (OuterVolumeSpecName: "inventory") pod "2d88214a-d4b9-4885-ac32-cae7c7dcd3ba" (UID: "2d88214a-d4b9-4885-ac32-cae7c7dcd3ba"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:42:11 crc kubenswrapper[4844]: I0126 13:42:11.670894 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d88214a-d4b9-4885-ac32-cae7c7dcd3ba-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "2d88214a-d4b9-4885-ac32-cae7c7dcd3ba" (UID: "2d88214a-d4b9-4885-ac32-cae7c7dcd3ba"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:42:11 crc kubenswrapper[4844]: I0126 13:42:11.671063 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d88214a-d4b9-4885-ac32-cae7c7dcd3ba-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "2d88214a-d4b9-4885-ac32-cae7c7dcd3ba" (UID: "2d88214a-d4b9-4885-ac32-cae7c7dcd3ba"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:42:11 crc kubenswrapper[4844]: I0126 13:42:11.737805 4844 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2d88214a-d4b9-4885-ac32-cae7c7dcd3ba-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 13:42:11 crc kubenswrapper[4844]: I0126 13:42:11.737840 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z2m4k\" (UniqueName: \"kubernetes.io/projected/2d88214a-d4b9-4885-ac32-cae7c7dcd3ba-kube-api-access-z2m4k\") on node \"crc\" DevicePath \"\"" Jan 26 13:42:11 crc kubenswrapper[4844]: I0126 13:42:11.737852 4844 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/2d88214a-d4b9-4885-ac32-cae7c7dcd3ba-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Jan 26 13:42:11 crc kubenswrapper[4844]: I0126 13:42:11.737863 4844 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d88214a-d4b9-4885-ac32-cae7c7dcd3ba-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 13:42:11 crc kubenswrapper[4844]: I0126 13:42:11.737872 4844 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2d88214a-d4b9-4885-ac32-cae7c7dcd3ba-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 13:42:12 crc kubenswrapper[4844]: I0126 13:42:12.083556 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sttdt" event={"ID":"2d88214a-d4b9-4885-ac32-cae7c7dcd3ba","Type":"ContainerDied","Data":"150caf3d8cd7227931dd113670c698a2cf65eb2396140022b426116bdb158784"} Jan 26 13:42:12 crc kubenswrapper[4844]: I0126 13:42:12.083623 4844 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="150caf3d8cd7227931dd113670c698a2cf65eb2396140022b426116bdb158784" Jan 26 13:42:12 crc kubenswrapper[4844]: I0126 13:42:12.083668 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sttdt" Jan 26 13:42:12 crc kubenswrapper[4844]: I0126 13:42:12.199555 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-2xrbw"] Jan 26 13:42:12 crc kubenswrapper[4844]: E0126 13:42:12.200018 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="066dc227-5be8-415c-a18c-107f8da1559b" containerName="registry-server" Jan 26 13:42:12 crc kubenswrapper[4844]: I0126 13:42:12.200039 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="066dc227-5be8-415c-a18c-107f8da1559b" containerName="registry-server" Jan 26 13:42:12 crc kubenswrapper[4844]: E0126 13:42:12.200063 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="066dc227-5be8-415c-a18c-107f8da1559b" containerName="extract-content" Jan 26 13:42:12 crc kubenswrapper[4844]: I0126 13:42:12.200071 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="066dc227-5be8-415c-a18c-107f8da1559b" containerName="extract-content" Jan 26 13:42:12 crc kubenswrapper[4844]: E0126 13:42:12.200086 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f59a487-ecab-4470-992a-738d76779ed6" containerName="extract-content" Jan 26 13:42:12 crc kubenswrapper[4844]: I0126 13:42:12.200094 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f59a487-ecab-4470-992a-738d76779ed6" containerName="extract-content" Jan 26 13:42:12 crc kubenswrapper[4844]: E0126 13:42:12.200108 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f59a487-ecab-4470-992a-738d76779ed6" containerName="registry-server" Jan 26 13:42:12 crc kubenswrapper[4844]: I0126 13:42:12.200116 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f59a487-ecab-4470-992a-738d76779ed6" containerName="registry-server" Jan 26 13:42:12 crc kubenswrapper[4844]: E0126 13:42:12.200134 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="066dc227-5be8-415c-a18c-107f8da1559b" containerName="extract-utilities" Jan 26 13:42:12 crc kubenswrapper[4844]: I0126 13:42:12.200143 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="066dc227-5be8-415c-a18c-107f8da1559b" containerName="extract-utilities" Jan 26 13:42:12 crc kubenswrapper[4844]: E0126 13:42:12.200156 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f59a487-ecab-4470-992a-738d76779ed6" containerName="extract-utilities" Jan 26 13:42:12 crc kubenswrapper[4844]: I0126 13:42:12.200165 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f59a487-ecab-4470-992a-738d76779ed6" containerName="extract-utilities" Jan 26 13:42:12 crc kubenswrapper[4844]: E0126 13:42:12.200182 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d88214a-d4b9-4885-ac32-cae7c7dcd3ba" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 26 13:42:12 crc kubenswrapper[4844]: I0126 13:42:12.200191 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d88214a-d4b9-4885-ac32-cae7c7dcd3ba" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 26 13:42:12 crc kubenswrapper[4844]: I0126 13:42:12.200454 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="066dc227-5be8-415c-a18c-107f8da1559b" containerName="registry-server" Jan 26 13:42:12 crc kubenswrapper[4844]: I0126 13:42:12.200469 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f59a487-ecab-4470-992a-738d76779ed6" containerName="registry-server" Jan 26 13:42:12 crc 
kubenswrapper[4844]: I0126 13:42:12.200482 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d88214a-d4b9-4885-ac32-cae7c7dcd3ba" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 26 13:42:12 crc kubenswrapper[4844]: I0126 13:42:12.201265 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2xrbw" Jan 26 13:42:12 crc kubenswrapper[4844]: I0126 13:42:12.204969 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Jan 26 13:42:12 crc kubenswrapper[4844]: I0126 13:42:12.205226 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r4j2z" Jan 26 13:42:12 crc kubenswrapper[4844]: I0126 13:42:12.205749 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 13:42:12 crc kubenswrapper[4844]: I0126 13:42:12.206193 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Jan 26 13:42:12 crc kubenswrapper[4844]: I0126 13:42:12.207091 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 13:42:12 crc kubenswrapper[4844]: I0126 13:42:12.207209 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Jan 26 13:42:12 crc kubenswrapper[4844]: I0126 13:42:12.207813 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 13:42:12 crc kubenswrapper[4844]: I0126 13:42:12.218945 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-2xrbw"] Jan 26 13:42:12 crc kubenswrapper[4844]: I0126 13:42:12.246388 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/421111b7-6358-404a-b57f-b6529eb910f9-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-2xrbw\" (UID: \"421111b7-6358-404a-b57f-b6529eb910f9\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2xrbw" Jan 26 13:42:12 crc kubenswrapper[4844]: I0126 13:42:12.246443 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/421111b7-6358-404a-b57f-b6529eb910f9-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-2xrbw\" (UID: \"421111b7-6358-404a-b57f-b6529eb910f9\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2xrbw" Jan 26 13:42:12 crc kubenswrapper[4844]: I0126 13:42:12.246482 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/421111b7-6358-404a-b57f-b6529eb910f9-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-2xrbw\" (UID: \"421111b7-6358-404a-b57f-b6529eb910f9\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2xrbw" Jan 26 13:42:12 crc kubenswrapper[4844]: I0126 13:42:12.246619 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/421111b7-6358-404a-b57f-b6529eb910f9-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-2xrbw\" (UID: 
\"421111b7-6358-404a-b57f-b6529eb910f9\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2xrbw" Jan 26 13:42:12 crc kubenswrapper[4844]: I0126 13:42:12.246642 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l65dc\" (UniqueName: \"kubernetes.io/projected/421111b7-6358-404a-b57f-b6529eb910f9-kube-api-access-l65dc\") pod \"nova-edpm-deployment-openstack-edpm-ipam-2xrbw\" (UID: \"421111b7-6358-404a-b57f-b6529eb910f9\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2xrbw" Jan 26 13:42:12 crc kubenswrapper[4844]: I0126 13:42:12.246687 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/421111b7-6358-404a-b57f-b6529eb910f9-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-2xrbw\" (UID: \"421111b7-6358-404a-b57f-b6529eb910f9\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2xrbw" Jan 26 13:42:12 crc kubenswrapper[4844]: I0126 13:42:12.246736 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/421111b7-6358-404a-b57f-b6529eb910f9-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-2xrbw\" (UID: \"421111b7-6358-404a-b57f-b6529eb910f9\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2xrbw" Jan 26 13:42:12 crc kubenswrapper[4844]: I0126 13:42:12.246770 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/421111b7-6358-404a-b57f-b6529eb910f9-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-2xrbw\" (UID: \"421111b7-6358-404a-b57f-b6529eb910f9\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2xrbw" Jan 26 13:42:12 crc kubenswrapper[4844]: I0126 13:42:12.246789 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/421111b7-6358-404a-b57f-b6529eb910f9-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-2xrbw\" (UID: \"421111b7-6358-404a-b57f-b6529eb910f9\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2xrbw" Jan 26 13:42:12 crc kubenswrapper[4844]: I0126 13:42:12.348432 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/421111b7-6358-404a-b57f-b6529eb910f9-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-2xrbw\" (UID: \"421111b7-6358-404a-b57f-b6529eb910f9\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2xrbw" Jan 26 13:42:12 crc kubenswrapper[4844]: I0126 13:42:12.348495 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l65dc\" (UniqueName: \"kubernetes.io/projected/421111b7-6358-404a-b57f-b6529eb910f9-kube-api-access-l65dc\") pod \"nova-edpm-deployment-openstack-edpm-ipam-2xrbw\" (UID: \"421111b7-6358-404a-b57f-b6529eb910f9\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2xrbw" Jan 26 13:42:12 crc kubenswrapper[4844]: I0126 13:42:12.349080 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/421111b7-6358-404a-b57f-b6529eb910f9-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-2xrbw\" (UID: \"421111b7-6358-404a-b57f-b6529eb910f9\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2xrbw" Jan 26 13:42:12 crc kubenswrapper[4844]: I0126 13:42:12.349214 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/421111b7-6358-404a-b57f-b6529eb910f9-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-2xrbw\" (UID: \"421111b7-6358-404a-b57f-b6529eb910f9\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2xrbw" Jan 26 13:42:12 crc kubenswrapper[4844]: I0126 13:42:12.349793 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/421111b7-6358-404a-b57f-b6529eb910f9-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-2xrbw\" (UID: \"421111b7-6358-404a-b57f-b6529eb910f9\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2xrbw" Jan 26 13:42:12 crc kubenswrapper[4844]: I0126 13:42:12.349838 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/421111b7-6358-404a-b57f-b6529eb910f9-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-2xrbw\" (UID: \"421111b7-6358-404a-b57f-b6529eb910f9\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2xrbw" Jan 26 13:42:12 crc kubenswrapper[4844]: I0126 13:42:12.349942 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/421111b7-6358-404a-b57f-b6529eb910f9-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-2xrbw\" (UID: \"421111b7-6358-404a-b57f-b6529eb910f9\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2xrbw" Jan 26 13:42:12 crc kubenswrapper[4844]: I0126 13:42:12.350561 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/421111b7-6358-404a-b57f-b6529eb910f9-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-2xrbw\" (UID: \"421111b7-6358-404a-b57f-b6529eb910f9\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2xrbw" Jan 26 13:42:12 crc kubenswrapper[4844]: I0126 13:42:12.350724 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/421111b7-6358-404a-b57f-b6529eb910f9-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-2xrbw\" (UID: \"421111b7-6358-404a-b57f-b6529eb910f9\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2xrbw" Jan 26 13:42:12 crc kubenswrapper[4844]: I0126 13:42:12.350780 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/421111b7-6358-404a-b57f-b6529eb910f9-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-2xrbw\" (UID: \"421111b7-6358-404a-b57f-b6529eb910f9\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2xrbw" Jan 26 13:42:12 crc kubenswrapper[4844]: I0126 13:42:12.353356 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/421111b7-6358-404a-b57f-b6529eb910f9-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-2xrbw\" (UID: \"421111b7-6358-404a-b57f-b6529eb910f9\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2xrbw" Jan 26 13:42:12 crc kubenswrapper[4844]: I0126 13:42:12.354160 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/421111b7-6358-404a-b57f-b6529eb910f9-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-2xrbw\" (UID: \"421111b7-6358-404a-b57f-b6529eb910f9\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2xrbw" Jan 26 13:42:12 crc kubenswrapper[4844]: I0126 13:42:12.354699 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/421111b7-6358-404a-b57f-b6529eb910f9-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-2xrbw\" (UID: \"421111b7-6358-404a-b57f-b6529eb910f9\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2xrbw" Jan 26 13:42:12 crc kubenswrapper[4844]: I0126 13:42:12.354890 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/421111b7-6358-404a-b57f-b6529eb910f9-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-2xrbw\" (UID: \"421111b7-6358-404a-b57f-b6529eb910f9\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2xrbw" Jan 26 13:42:12 crc kubenswrapper[4844]: I0126 13:42:12.355321 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/421111b7-6358-404a-b57f-b6529eb910f9-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-2xrbw\" (UID: \"421111b7-6358-404a-b57f-b6529eb910f9\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2xrbw" Jan 26 13:42:12 crc kubenswrapper[4844]: I0126 13:42:12.356743 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/421111b7-6358-404a-b57f-b6529eb910f9-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-2xrbw\" (UID: \"421111b7-6358-404a-b57f-b6529eb910f9\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2xrbw" Jan 26 13:42:12 crc kubenswrapper[4844]: I0126 13:42:12.356954 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/421111b7-6358-404a-b57f-b6529eb910f9-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-2xrbw\" (UID: \"421111b7-6358-404a-b57f-b6529eb910f9\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2xrbw" Jan 26 13:42:12 crc kubenswrapper[4844]: I0126 13:42:12.372579 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l65dc\" (UniqueName: \"kubernetes.io/projected/421111b7-6358-404a-b57f-b6529eb910f9-kube-api-access-l65dc\") pod \"nova-edpm-deployment-openstack-edpm-ipam-2xrbw\" (UID: \"421111b7-6358-404a-b57f-b6529eb910f9\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2xrbw" Jan 26 13:42:12 crc kubenswrapper[4844]: I0126 13:42:12.541513 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2xrbw" Jan 26 13:42:12 crc kubenswrapper[4844]: I0126 13:42:12.968801 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-2xrbw"] Jan 26 13:42:13 crc kubenswrapper[4844]: I0126 13:42:13.093287 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2xrbw" event={"ID":"421111b7-6358-404a-b57f-b6529eb910f9","Type":"ContainerStarted","Data":"ba68a299af2a26925d82f5647ff972056ee04860104e1ed9d8cbafd2c110499d"} Jan 26 13:42:14 crc kubenswrapper[4844]: I0126 13:42:14.104158 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2xrbw" event={"ID":"421111b7-6358-404a-b57f-b6529eb910f9","Type":"ContainerStarted","Data":"e8c9a6ea50a57ea40fc6569fd5ad7cf3957962addebff6b9cb7f5235df7d8223"} Jan 26 13:42:14 crc kubenswrapper[4844]: I0126 13:42:14.125199 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2xrbw" podStartSLOduration=1.426003796 podStartE2EDuration="2.125175252s" podCreationTimestamp="2026-01-26 13:42:12 +0000 UTC" firstStartedPulling="2026-01-26 13:42:12.97046334 +0000 UTC m=+3509.903830952" lastFinishedPulling="2026-01-26 13:42:13.669634796 +0000 UTC m=+3510.603002408" observedRunningTime="2026-01-26 13:42:14.121313848 +0000 UTC m=+3511.054681480" watchObservedRunningTime="2026-01-26 13:42:14.125175252 +0000 UTC m=+3511.058542884" Jan 26 13:42:55 crc kubenswrapper[4844]: I0126 13:42:55.602766 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-kdgzt"] Jan 26 13:42:55 crc kubenswrapper[4844]: I0126 13:42:55.607732 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kdgzt" Jan 26 13:42:55 crc kubenswrapper[4844]: I0126 13:42:55.644181 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xh5qp\" (UniqueName: \"kubernetes.io/projected/29a5cf2d-375c-4835-bfe2-b64a05d4bec0-kube-api-access-xh5qp\") pod \"redhat-operators-kdgzt\" (UID: \"29a5cf2d-375c-4835-bfe2-b64a05d4bec0\") " pod="openshift-marketplace/redhat-operators-kdgzt" Jan 26 13:42:55 crc kubenswrapper[4844]: I0126 13:42:55.644496 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29a5cf2d-375c-4835-bfe2-b64a05d4bec0-utilities\") pod \"redhat-operators-kdgzt\" (UID: \"29a5cf2d-375c-4835-bfe2-b64a05d4bec0\") " pod="openshift-marketplace/redhat-operators-kdgzt" Jan 26 13:42:55 crc kubenswrapper[4844]: I0126 13:42:55.644562 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29a5cf2d-375c-4835-bfe2-b64a05d4bec0-catalog-content\") pod \"redhat-operators-kdgzt\" (UID: \"29a5cf2d-375c-4835-bfe2-b64a05d4bec0\") " pod="openshift-marketplace/redhat-operators-kdgzt" Jan 26 13:42:55 crc kubenswrapper[4844]: I0126 13:42:55.644728 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kdgzt"] Jan 26 13:42:55 crc kubenswrapper[4844]: I0126 13:42:55.745911 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29a5cf2d-375c-4835-bfe2-b64a05d4bec0-utilities\") pod \"redhat-operators-kdgzt\" (UID: \"29a5cf2d-375c-4835-bfe2-b64a05d4bec0\") " pod="openshift-marketplace/redhat-operators-kdgzt" Jan 26 13:42:55 crc kubenswrapper[4844]: I0126 13:42:55.746008 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29a5cf2d-375c-4835-bfe2-b64a05d4bec0-catalog-content\") pod \"redhat-operators-kdgzt\" (UID: \"29a5cf2d-375c-4835-bfe2-b64a05d4bec0\") " pod="openshift-marketplace/redhat-operators-kdgzt" Jan 26 13:42:55 crc kubenswrapper[4844]: I0126 13:42:55.746164 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xh5qp\" (UniqueName: \"kubernetes.io/projected/29a5cf2d-375c-4835-bfe2-b64a05d4bec0-kube-api-access-xh5qp\") pod \"redhat-operators-kdgzt\" (UID: \"29a5cf2d-375c-4835-bfe2-b64a05d4bec0\") " pod="openshift-marketplace/redhat-operators-kdgzt" Jan 26 13:42:55 crc kubenswrapper[4844]: I0126 13:42:55.746474 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29a5cf2d-375c-4835-bfe2-b64a05d4bec0-utilities\") pod \"redhat-operators-kdgzt\" (UID: \"29a5cf2d-375c-4835-bfe2-b64a05d4bec0\") " pod="openshift-marketplace/redhat-operators-kdgzt" Jan 26 13:42:55 crc kubenswrapper[4844]: I0126 13:42:55.746589 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29a5cf2d-375c-4835-bfe2-b64a05d4bec0-catalog-content\") pod \"redhat-operators-kdgzt\" (UID: \"29a5cf2d-375c-4835-bfe2-b64a05d4bec0\") " pod="openshift-marketplace/redhat-operators-kdgzt" Jan 26 13:42:55 crc kubenswrapper[4844]: I0126 13:42:55.767824 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-xh5qp\" (UniqueName: \"kubernetes.io/projected/29a5cf2d-375c-4835-bfe2-b64a05d4bec0-kube-api-access-xh5qp\") pod \"redhat-operators-kdgzt\" (UID: \"29a5cf2d-375c-4835-bfe2-b64a05d4bec0\") " pod="openshift-marketplace/redhat-operators-kdgzt" Jan 26 13:42:55 crc kubenswrapper[4844]: I0126 13:42:55.955408 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kdgzt" Jan 26 13:42:56 crc kubenswrapper[4844]: I0126 13:42:56.187969 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-8sgl5"] Jan 26 13:42:56 crc kubenswrapper[4844]: I0126 13:42:56.203259 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8sgl5"] Jan 26 13:42:56 crc kubenswrapper[4844]: I0126 13:42:56.203369 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8sgl5" Jan 26 13:42:56 crc kubenswrapper[4844]: I0126 13:42:56.254992 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5cnk\" (UniqueName: \"kubernetes.io/projected/4a77f101-3818-4f36-a8e9-8922afe4219f-kube-api-access-f5cnk\") pod \"redhat-marketplace-8sgl5\" (UID: \"4a77f101-3818-4f36-a8e9-8922afe4219f\") " pod="openshift-marketplace/redhat-marketplace-8sgl5" Jan 26 13:42:56 crc kubenswrapper[4844]: I0126 13:42:56.255126 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a77f101-3818-4f36-a8e9-8922afe4219f-catalog-content\") pod \"redhat-marketplace-8sgl5\" (UID: \"4a77f101-3818-4f36-a8e9-8922afe4219f\") " pod="openshift-marketplace/redhat-marketplace-8sgl5" Jan 26 13:42:56 crc kubenswrapper[4844]: I0126 13:42:56.255197 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a77f101-3818-4f36-a8e9-8922afe4219f-utilities\") pod \"redhat-marketplace-8sgl5\" (UID: \"4a77f101-3818-4f36-a8e9-8922afe4219f\") " pod="openshift-marketplace/redhat-marketplace-8sgl5" Jan 26 13:42:56 crc kubenswrapper[4844]: I0126 13:42:56.299339 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kdgzt"] Jan 26 13:42:56 crc kubenswrapper[4844]: I0126 13:42:56.357677 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a77f101-3818-4f36-a8e9-8922afe4219f-utilities\") pod \"redhat-marketplace-8sgl5\" (UID: \"4a77f101-3818-4f36-a8e9-8922afe4219f\") " pod="openshift-marketplace/redhat-marketplace-8sgl5" Jan 26 13:42:56 crc kubenswrapper[4844]: I0126 13:42:56.357848 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5cnk\" (UniqueName: \"kubernetes.io/projected/4a77f101-3818-4f36-a8e9-8922afe4219f-kube-api-access-f5cnk\") pod \"redhat-marketplace-8sgl5\" (UID: \"4a77f101-3818-4f36-a8e9-8922afe4219f\") " pod="openshift-marketplace/redhat-marketplace-8sgl5" Jan 26 13:42:56 crc kubenswrapper[4844]: I0126 13:42:56.358018 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a77f101-3818-4f36-a8e9-8922afe4219f-catalog-content\") pod \"redhat-marketplace-8sgl5\" (UID: \"4a77f101-3818-4f36-a8e9-8922afe4219f\") " 
pod="openshift-marketplace/redhat-marketplace-8sgl5" Jan 26 13:42:56 crc kubenswrapper[4844]: I0126 13:42:56.358483 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a77f101-3818-4f36-a8e9-8922afe4219f-catalog-content\") pod \"redhat-marketplace-8sgl5\" (UID: \"4a77f101-3818-4f36-a8e9-8922afe4219f\") " pod="openshift-marketplace/redhat-marketplace-8sgl5" Jan 26 13:42:56 crc kubenswrapper[4844]: I0126 13:42:56.358769 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a77f101-3818-4f36-a8e9-8922afe4219f-utilities\") pod \"redhat-marketplace-8sgl5\" (UID: \"4a77f101-3818-4f36-a8e9-8922afe4219f\") " pod="openshift-marketplace/redhat-marketplace-8sgl5" Jan 26 13:42:56 crc kubenswrapper[4844]: I0126 13:42:56.378093 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5cnk\" (UniqueName: \"kubernetes.io/projected/4a77f101-3818-4f36-a8e9-8922afe4219f-kube-api-access-f5cnk\") pod \"redhat-marketplace-8sgl5\" (UID: \"4a77f101-3818-4f36-a8e9-8922afe4219f\") " pod="openshift-marketplace/redhat-marketplace-8sgl5" Jan 26 13:42:56 crc kubenswrapper[4844]: I0126 13:42:56.536839 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8sgl5" Jan 26 13:42:56 crc kubenswrapper[4844]: I0126 13:42:56.649727 4844 generic.go:334] "Generic (PLEG): container finished" podID="29a5cf2d-375c-4835-bfe2-b64a05d4bec0" containerID="9d95431f9ee264953320ce911312608835ab39e7e951ffbc17aea873a5b1edec" exitCode=0 Jan 26 13:42:56 crc kubenswrapper[4844]: I0126 13:42:56.650117 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kdgzt" event={"ID":"29a5cf2d-375c-4835-bfe2-b64a05d4bec0","Type":"ContainerDied","Data":"9d95431f9ee264953320ce911312608835ab39e7e951ffbc17aea873a5b1edec"} Jan 26 13:42:56 crc kubenswrapper[4844]: I0126 13:42:56.650144 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kdgzt" event={"ID":"29a5cf2d-375c-4835-bfe2-b64a05d4bec0","Type":"ContainerStarted","Data":"5977f42c7fbcd5de9d69a727e6adf2afd28c20a2cf7284ab53a00e740d1be937"} Jan 26 13:42:56 crc kubenswrapper[4844]: I0126 13:42:56.981662 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8sgl5"] Jan 26 13:42:57 crc kubenswrapper[4844]: I0126 13:42:57.666277 4844 generic.go:334] "Generic (PLEG): container finished" podID="4a77f101-3818-4f36-a8e9-8922afe4219f" containerID="5bf9cfe17ed7f02528289a7a5154ac5039fda74855732c5203ea99cc59373556" exitCode=0 Jan 26 13:42:57 crc kubenswrapper[4844]: I0126 13:42:57.666888 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8sgl5" event={"ID":"4a77f101-3818-4f36-a8e9-8922afe4219f","Type":"ContainerDied","Data":"5bf9cfe17ed7f02528289a7a5154ac5039fda74855732c5203ea99cc59373556"} Jan 26 13:42:57 crc kubenswrapper[4844]: I0126 13:42:57.666935 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8sgl5" event={"ID":"4a77f101-3818-4f36-a8e9-8922afe4219f","Type":"ContainerStarted","Data":"0896e56f0fe6ead52165f63bca2ef2792e5b67ac3843e9a35aae1729cda53add"} Jan 26 13:42:57 crc kubenswrapper[4844]: I0126 13:42:57.685996 4844 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 13:42:58 
crc kubenswrapper[4844]: I0126 13:42:58.679949 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kdgzt" event={"ID":"29a5cf2d-375c-4835-bfe2-b64a05d4bec0","Type":"ContainerStarted","Data":"162ca06f7ef2051fdc2665285c92be7cd1bda6d93d3e07edc49d0b31de12eb48"} Jan 26 13:42:58 crc kubenswrapper[4844]: I0126 13:42:58.683933 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8sgl5" event={"ID":"4a77f101-3818-4f36-a8e9-8922afe4219f","Type":"ContainerStarted","Data":"546aca88cb7908ad16d42bb4241b21e93960d9528e80fd24ef3e86db5fa74022"} Jan 26 13:42:59 crc kubenswrapper[4844]: I0126 13:42:59.697408 4844 generic.go:334] "Generic (PLEG): container finished" podID="29a5cf2d-375c-4835-bfe2-b64a05d4bec0" containerID="162ca06f7ef2051fdc2665285c92be7cd1bda6d93d3e07edc49d0b31de12eb48" exitCode=0 Jan 26 13:42:59 crc kubenswrapper[4844]: I0126 13:42:59.697492 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kdgzt" event={"ID":"29a5cf2d-375c-4835-bfe2-b64a05d4bec0","Type":"ContainerDied","Data":"162ca06f7ef2051fdc2665285c92be7cd1bda6d93d3e07edc49d0b31de12eb48"} Jan 26 13:43:00 crc kubenswrapper[4844]: I0126 13:43:00.707349 4844 generic.go:334] "Generic (PLEG): container finished" podID="4a77f101-3818-4f36-a8e9-8922afe4219f" containerID="546aca88cb7908ad16d42bb4241b21e93960d9528e80fd24ef3e86db5fa74022" exitCode=0 Jan 26 13:43:00 crc kubenswrapper[4844]: I0126 13:43:00.707428 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8sgl5" event={"ID":"4a77f101-3818-4f36-a8e9-8922afe4219f","Type":"ContainerDied","Data":"546aca88cb7908ad16d42bb4241b21e93960d9528e80fd24ef3e86db5fa74022"} Jan 26 13:43:00 crc kubenswrapper[4844]: I0126 13:43:00.710051 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kdgzt" event={"ID":"29a5cf2d-375c-4835-bfe2-b64a05d4bec0","Type":"ContainerStarted","Data":"a197a056b4c24a792dc14ee4bbc23100ddd0b28cf8c79662f7786444c74b9b5e"} Jan 26 13:43:00 crc kubenswrapper[4844]: I0126 13:43:00.759708 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-kdgzt" podStartSLOduration=2.080877362 podStartE2EDuration="5.759690001s" podCreationTimestamp="2026-01-26 13:42:55 +0000 UTC" firstStartedPulling="2026-01-26 13:42:56.653812328 +0000 UTC m=+3553.587179940" lastFinishedPulling="2026-01-26 13:43:00.332624937 +0000 UTC m=+3557.265992579" observedRunningTime="2026-01-26 13:43:00.754477664 +0000 UTC m=+3557.687845296" watchObservedRunningTime="2026-01-26 13:43:00.759690001 +0000 UTC m=+3557.693057623" Jan 26 13:43:03 crc kubenswrapper[4844]: I0126 13:43:03.745705 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8sgl5" event={"ID":"4a77f101-3818-4f36-a8e9-8922afe4219f","Type":"ContainerStarted","Data":"613247f1cd3d1b7c10be9d0e8181089cee6d866f61b1bddcfb510209230c9a7b"} Jan 26 13:43:03 crc kubenswrapper[4844]: I0126 13:43:03.769949 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-8sgl5" podStartSLOduration=2.423482419 podStartE2EDuration="7.769931129s" podCreationTimestamp="2026-01-26 13:42:56 +0000 UTC" firstStartedPulling="2026-01-26 13:42:57.685504161 +0000 UTC m=+3554.618871803" lastFinishedPulling="2026-01-26 13:43:03.031952891 +0000 UTC m=+3559.965320513" observedRunningTime="2026-01-26 
13:43:03.768605957 +0000 UTC m=+3560.701973569" watchObservedRunningTime="2026-01-26 13:43:03.769931129 +0000 UTC m=+3560.703298741" Jan 26 13:43:05 crc kubenswrapper[4844]: I0126 13:43:05.955526 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-kdgzt" Jan 26 13:43:05 crc kubenswrapper[4844]: I0126 13:43:05.956170 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-kdgzt" Jan 26 13:43:06 crc kubenswrapper[4844]: I0126 13:43:06.364842 4844 patch_prober.go:28] interesting pod/machine-config-daemon-j7r9j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 13:43:06 crc kubenswrapper[4844]: I0126 13:43:06.365292 4844 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 13:43:06 crc kubenswrapper[4844]: I0126 13:43:06.537477 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8sgl5" Jan 26 13:43:06 crc kubenswrapper[4844]: I0126 13:43:06.537548 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-8sgl5" Jan 26 13:43:06 crc kubenswrapper[4844]: I0126 13:43:06.622509 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-8sgl5" Jan 26 13:43:07 crc kubenswrapper[4844]: I0126 13:43:07.027348 4844 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-kdgzt" podUID="29a5cf2d-375c-4835-bfe2-b64a05d4bec0" containerName="registry-server" probeResult="failure" output=< Jan 26 13:43:07 crc kubenswrapper[4844]: timeout: failed to connect service ":50051" within 1s Jan 26 13:43:07 crc kubenswrapper[4844]: > Jan 26 13:43:16 crc kubenswrapper[4844]: I0126 13:43:16.027579 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-kdgzt" Jan 26 13:43:16 crc kubenswrapper[4844]: I0126 13:43:16.107778 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-kdgzt" Jan 26 13:43:16 crc kubenswrapper[4844]: I0126 13:43:16.625475 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8sgl5" Jan 26 13:43:19 crc kubenswrapper[4844]: I0126 13:43:19.995538 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kdgzt"] Jan 26 13:43:19 crc kubenswrapper[4844]: I0126 13:43:19.996865 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-kdgzt" podUID="29a5cf2d-375c-4835-bfe2-b64a05d4bec0" containerName="registry-server" containerID="cri-o://a197a056b4c24a792dc14ee4bbc23100ddd0b28cf8c79662f7786444c74b9b5e" gracePeriod=2 Jan 26 13:43:20 crc kubenswrapper[4844]: I0126 13:43:20.459307 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kdgzt" Jan 26 13:43:20 crc kubenswrapper[4844]: I0126 13:43:20.601764 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xh5qp\" (UniqueName: \"kubernetes.io/projected/29a5cf2d-375c-4835-bfe2-b64a05d4bec0-kube-api-access-xh5qp\") pod \"29a5cf2d-375c-4835-bfe2-b64a05d4bec0\" (UID: \"29a5cf2d-375c-4835-bfe2-b64a05d4bec0\") " Jan 26 13:43:20 crc kubenswrapper[4844]: I0126 13:43:20.601873 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29a5cf2d-375c-4835-bfe2-b64a05d4bec0-utilities\") pod \"29a5cf2d-375c-4835-bfe2-b64a05d4bec0\" (UID: \"29a5cf2d-375c-4835-bfe2-b64a05d4bec0\") " Jan 26 13:43:20 crc kubenswrapper[4844]: I0126 13:43:20.602212 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29a5cf2d-375c-4835-bfe2-b64a05d4bec0-catalog-content\") pod \"29a5cf2d-375c-4835-bfe2-b64a05d4bec0\" (UID: \"29a5cf2d-375c-4835-bfe2-b64a05d4bec0\") " Jan 26 13:43:20 crc kubenswrapper[4844]: I0126 13:43:20.603242 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29a5cf2d-375c-4835-bfe2-b64a05d4bec0-utilities" (OuterVolumeSpecName: "utilities") pod "29a5cf2d-375c-4835-bfe2-b64a05d4bec0" (UID: "29a5cf2d-375c-4835-bfe2-b64a05d4bec0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:43:20 crc kubenswrapper[4844]: I0126 13:43:20.612468 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29a5cf2d-375c-4835-bfe2-b64a05d4bec0-kube-api-access-xh5qp" (OuterVolumeSpecName: "kube-api-access-xh5qp") pod "29a5cf2d-375c-4835-bfe2-b64a05d4bec0" (UID: "29a5cf2d-375c-4835-bfe2-b64a05d4bec0"). InnerVolumeSpecName "kube-api-access-xh5qp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:43:20 crc kubenswrapper[4844]: I0126 13:43:20.705581 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xh5qp\" (UniqueName: \"kubernetes.io/projected/29a5cf2d-375c-4835-bfe2-b64a05d4bec0-kube-api-access-xh5qp\") on node \"crc\" DevicePath \"\"" Jan 26 13:43:20 crc kubenswrapper[4844]: I0126 13:43:20.705653 4844 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29a5cf2d-375c-4835-bfe2-b64a05d4bec0-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 13:43:20 crc kubenswrapper[4844]: I0126 13:43:20.803716 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29a5cf2d-375c-4835-bfe2-b64a05d4bec0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "29a5cf2d-375c-4835-bfe2-b64a05d4bec0" (UID: "29a5cf2d-375c-4835-bfe2-b64a05d4bec0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:43:20 crc kubenswrapper[4844]: I0126 13:43:20.807737 4844 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29a5cf2d-375c-4835-bfe2-b64a05d4bec0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 13:43:20 crc kubenswrapper[4844]: I0126 13:43:20.939831 4844 generic.go:334] "Generic (PLEG): container finished" podID="29a5cf2d-375c-4835-bfe2-b64a05d4bec0" containerID="a197a056b4c24a792dc14ee4bbc23100ddd0b28cf8c79662f7786444c74b9b5e" exitCode=0 Jan 26 13:43:20 crc kubenswrapper[4844]: I0126 13:43:20.939949 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kdgzt" event={"ID":"29a5cf2d-375c-4835-bfe2-b64a05d4bec0","Type":"ContainerDied","Data":"a197a056b4c24a792dc14ee4bbc23100ddd0b28cf8c79662f7786444c74b9b5e"} Jan 26 13:43:20 crc kubenswrapper[4844]: I0126 13:43:20.940056 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kdgzt" event={"ID":"29a5cf2d-375c-4835-bfe2-b64a05d4bec0","Type":"ContainerDied","Data":"5977f42c7fbcd5de9d69a727e6adf2afd28c20a2cf7284ab53a00e740d1be937"} Jan 26 13:43:20 crc kubenswrapper[4844]: I0126 13:43:20.940007 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kdgzt" Jan 26 13:43:20 crc kubenswrapper[4844]: I0126 13:43:20.940090 4844 scope.go:117] "RemoveContainer" containerID="a197a056b4c24a792dc14ee4bbc23100ddd0b28cf8c79662f7786444c74b9b5e" Jan 26 13:43:20 crc kubenswrapper[4844]: I0126 13:43:20.986527 4844 scope.go:117] "RemoveContainer" containerID="162ca06f7ef2051fdc2665285c92be7cd1bda6d93d3e07edc49d0b31de12eb48" Jan 26 13:43:21 crc kubenswrapper[4844]: I0126 13:43:21.001517 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8sgl5"] Jan 26 13:43:21 crc kubenswrapper[4844]: I0126 13:43:21.002480 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-8sgl5" podUID="4a77f101-3818-4f36-a8e9-8922afe4219f" containerName="registry-server" containerID="cri-o://613247f1cd3d1b7c10be9d0e8181089cee6d866f61b1bddcfb510209230c9a7b" gracePeriod=2 Jan 26 13:43:21 crc kubenswrapper[4844]: I0126 13:43:21.022676 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kdgzt"] Jan 26 13:43:21 crc kubenswrapper[4844]: I0126 13:43:21.029082 4844 scope.go:117] "RemoveContainer" containerID="9d95431f9ee264953320ce911312608835ab39e7e951ffbc17aea873a5b1edec" Jan 26 13:43:21 crc kubenswrapper[4844]: I0126 13:43:21.033403 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-kdgzt"] Jan 26 13:43:21 crc kubenswrapper[4844]: I0126 13:43:21.159701 4844 scope.go:117] "RemoveContainer" containerID="a197a056b4c24a792dc14ee4bbc23100ddd0b28cf8c79662f7786444c74b9b5e" Jan 26 13:43:21 crc kubenswrapper[4844]: E0126 13:43:21.160389 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a197a056b4c24a792dc14ee4bbc23100ddd0b28cf8c79662f7786444c74b9b5e\": container with ID starting with a197a056b4c24a792dc14ee4bbc23100ddd0b28cf8c79662f7786444c74b9b5e not found: ID does not exist" containerID="a197a056b4c24a792dc14ee4bbc23100ddd0b28cf8c79662f7786444c74b9b5e" Jan 26 13:43:21 crc kubenswrapper[4844]: I0126 13:43:21.160424 4844 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a197a056b4c24a792dc14ee4bbc23100ddd0b28cf8c79662f7786444c74b9b5e"} err="failed to get container status \"a197a056b4c24a792dc14ee4bbc23100ddd0b28cf8c79662f7786444c74b9b5e\": rpc error: code = NotFound desc = could not find container \"a197a056b4c24a792dc14ee4bbc23100ddd0b28cf8c79662f7786444c74b9b5e\": container with ID starting with a197a056b4c24a792dc14ee4bbc23100ddd0b28cf8c79662f7786444c74b9b5e not found: ID does not exist" Jan 26 13:43:21 crc kubenswrapper[4844]: I0126 13:43:21.160450 4844 scope.go:117] "RemoveContainer" containerID="162ca06f7ef2051fdc2665285c92be7cd1bda6d93d3e07edc49d0b31de12eb48" Jan 26 13:43:21 crc kubenswrapper[4844]: E0126 13:43:21.160770 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"162ca06f7ef2051fdc2665285c92be7cd1bda6d93d3e07edc49d0b31de12eb48\": container with ID starting with 162ca06f7ef2051fdc2665285c92be7cd1bda6d93d3e07edc49d0b31de12eb48 not found: ID does not exist" containerID="162ca06f7ef2051fdc2665285c92be7cd1bda6d93d3e07edc49d0b31de12eb48" Jan 26 13:43:21 crc kubenswrapper[4844]: I0126 13:43:21.160801 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"162ca06f7ef2051fdc2665285c92be7cd1bda6d93d3e07edc49d0b31de12eb48"} err="failed to get container status \"162ca06f7ef2051fdc2665285c92be7cd1bda6d93d3e07edc49d0b31de12eb48\": rpc error: code = NotFound desc = could not find container \"162ca06f7ef2051fdc2665285c92be7cd1bda6d93d3e07edc49d0b31de12eb48\": container with ID starting with 162ca06f7ef2051fdc2665285c92be7cd1bda6d93d3e07edc49d0b31de12eb48 not found: ID does not exist" Jan 26 13:43:21 crc kubenswrapper[4844]: I0126 13:43:21.160817 4844 scope.go:117] "RemoveContainer" containerID="9d95431f9ee264953320ce911312608835ab39e7e951ffbc17aea873a5b1edec" Jan 26 13:43:21 crc kubenswrapper[4844]: E0126 13:43:21.161222 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d95431f9ee264953320ce911312608835ab39e7e951ffbc17aea873a5b1edec\": container with ID starting with 9d95431f9ee264953320ce911312608835ab39e7e951ffbc17aea873a5b1edec not found: ID does not exist" containerID="9d95431f9ee264953320ce911312608835ab39e7e951ffbc17aea873a5b1edec" Jan 26 13:43:21 crc kubenswrapper[4844]: I0126 13:43:21.161261 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d95431f9ee264953320ce911312608835ab39e7e951ffbc17aea873a5b1edec"} err="failed to get container status \"9d95431f9ee264953320ce911312608835ab39e7e951ffbc17aea873a5b1edec\": rpc error: code = NotFound desc = could not find container \"9d95431f9ee264953320ce911312608835ab39e7e951ffbc17aea873a5b1edec\": container with ID starting with 9d95431f9ee264953320ce911312608835ab39e7e951ffbc17aea873a5b1edec not found: ID does not exist" Jan 26 13:43:21 crc kubenswrapper[4844]: I0126 13:43:21.330100 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29a5cf2d-375c-4835-bfe2-b64a05d4bec0" path="/var/lib/kubelet/pods/29a5cf2d-375c-4835-bfe2-b64a05d4bec0/volumes" Jan 26 13:43:21 crc kubenswrapper[4844]: I0126 13:43:21.499153 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8sgl5" Jan 26 13:43:21 crc kubenswrapper[4844]: I0126 13:43:21.623468 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f5cnk\" (UniqueName: \"kubernetes.io/projected/4a77f101-3818-4f36-a8e9-8922afe4219f-kube-api-access-f5cnk\") pod \"4a77f101-3818-4f36-a8e9-8922afe4219f\" (UID: \"4a77f101-3818-4f36-a8e9-8922afe4219f\") " Jan 26 13:43:21 crc kubenswrapper[4844]: I0126 13:43:21.623658 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a77f101-3818-4f36-a8e9-8922afe4219f-utilities\") pod \"4a77f101-3818-4f36-a8e9-8922afe4219f\" (UID: \"4a77f101-3818-4f36-a8e9-8922afe4219f\") " Jan 26 13:43:21 crc kubenswrapper[4844]: I0126 13:43:21.623687 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a77f101-3818-4f36-a8e9-8922afe4219f-catalog-content\") pod \"4a77f101-3818-4f36-a8e9-8922afe4219f\" (UID: \"4a77f101-3818-4f36-a8e9-8922afe4219f\") " Jan 26 13:43:21 crc kubenswrapper[4844]: I0126 13:43:21.624879 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4a77f101-3818-4f36-a8e9-8922afe4219f-utilities" (OuterVolumeSpecName: "utilities") pod "4a77f101-3818-4f36-a8e9-8922afe4219f" (UID: "4a77f101-3818-4f36-a8e9-8922afe4219f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:43:21 crc kubenswrapper[4844]: I0126 13:43:21.629512 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a77f101-3818-4f36-a8e9-8922afe4219f-kube-api-access-f5cnk" (OuterVolumeSpecName: "kube-api-access-f5cnk") pod "4a77f101-3818-4f36-a8e9-8922afe4219f" (UID: "4a77f101-3818-4f36-a8e9-8922afe4219f"). InnerVolumeSpecName "kube-api-access-f5cnk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:43:21 crc kubenswrapper[4844]: I0126 13:43:21.644718 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4a77f101-3818-4f36-a8e9-8922afe4219f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4a77f101-3818-4f36-a8e9-8922afe4219f" (UID: "4a77f101-3818-4f36-a8e9-8922afe4219f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:43:21 crc kubenswrapper[4844]: I0126 13:43:21.725992 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f5cnk\" (UniqueName: \"kubernetes.io/projected/4a77f101-3818-4f36-a8e9-8922afe4219f-kube-api-access-f5cnk\") on node \"crc\" DevicePath \"\"" Jan 26 13:43:21 crc kubenswrapper[4844]: I0126 13:43:21.726027 4844 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a77f101-3818-4f36-a8e9-8922afe4219f-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 13:43:21 crc kubenswrapper[4844]: I0126 13:43:21.726036 4844 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a77f101-3818-4f36-a8e9-8922afe4219f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 13:43:21 crc kubenswrapper[4844]: I0126 13:43:21.966477 4844 generic.go:334] "Generic (PLEG): container finished" podID="4a77f101-3818-4f36-a8e9-8922afe4219f" containerID="613247f1cd3d1b7c10be9d0e8181089cee6d866f61b1bddcfb510209230c9a7b" exitCode=0 Jan 26 13:43:21 crc kubenswrapper[4844]: I0126 13:43:21.966522 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8sgl5" event={"ID":"4a77f101-3818-4f36-a8e9-8922afe4219f","Type":"ContainerDied","Data":"613247f1cd3d1b7c10be9d0e8181089cee6d866f61b1bddcfb510209230c9a7b"} Jan 26 13:43:21 crc kubenswrapper[4844]: I0126 13:43:21.966577 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8sgl5" event={"ID":"4a77f101-3818-4f36-a8e9-8922afe4219f","Type":"ContainerDied","Data":"0896e56f0fe6ead52165f63bca2ef2792e5b67ac3843e9a35aae1729cda53add"} Jan 26 13:43:21 crc kubenswrapper[4844]: I0126 13:43:21.966581 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8sgl5" Jan 26 13:43:21 crc kubenswrapper[4844]: I0126 13:43:21.966692 4844 scope.go:117] "RemoveContainer" containerID="613247f1cd3d1b7c10be9d0e8181089cee6d866f61b1bddcfb510209230c9a7b" Jan 26 13:43:21 crc kubenswrapper[4844]: I0126 13:43:21.991507 4844 scope.go:117] "RemoveContainer" containerID="546aca88cb7908ad16d42bb4241b21e93960d9528e80fd24ef3e86db5fa74022" Jan 26 13:43:22 crc kubenswrapper[4844]: I0126 13:43:22.007715 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8sgl5"] Jan 26 13:43:22 crc kubenswrapper[4844]: I0126 13:43:22.016352 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-8sgl5"] Jan 26 13:43:22 crc kubenswrapper[4844]: I0126 13:43:22.038548 4844 scope.go:117] "RemoveContainer" containerID="5bf9cfe17ed7f02528289a7a5154ac5039fda74855732c5203ea99cc59373556" Jan 26 13:43:22 crc kubenswrapper[4844]: I0126 13:43:22.086787 4844 scope.go:117] "RemoveContainer" containerID="613247f1cd3d1b7c10be9d0e8181089cee6d866f61b1bddcfb510209230c9a7b" Jan 26 13:43:22 crc kubenswrapper[4844]: E0126 13:43:22.087682 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"613247f1cd3d1b7c10be9d0e8181089cee6d866f61b1bddcfb510209230c9a7b\": container with ID starting with 613247f1cd3d1b7c10be9d0e8181089cee6d866f61b1bddcfb510209230c9a7b not found: ID does not exist" containerID="613247f1cd3d1b7c10be9d0e8181089cee6d866f61b1bddcfb510209230c9a7b" Jan 26 13:43:22 crc kubenswrapper[4844]: I0126 13:43:22.087722 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"613247f1cd3d1b7c10be9d0e8181089cee6d866f61b1bddcfb510209230c9a7b"} err="failed to get container status \"613247f1cd3d1b7c10be9d0e8181089cee6d866f61b1bddcfb510209230c9a7b\": rpc error: code = NotFound desc = could not find container \"613247f1cd3d1b7c10be9d0e8181089cee6d866f61b1bddcfb510209230c9a7b\": container with ID starting with 613247f1cd3d1b7c10be9d0e8181089cee6d866f61b1bddcfb510209230c9a7b not found: ID does not exist" Jan 26 13:43:22 crc kubenswrapper[4844]: I0126 13:43:22.087756 4844 scope.go:117] "RemoveContainer" containerID="546aca88cb7908ad16d42bb4241b21e93960d9528e80fd24ef3e86db5fa74022" Jan 26 13:43:22 crc kubenswrapper[4844]: E0126 13:43:22.088279 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"546aca88cb7908ad16d42bb4241b21e93960d9528e80fd24ef3e86db5fa74022\": container with ID starting with 546aca88cb7908ad16d42bb4241b21e93960d9528e80fd24ef3e86db5fa74022 not found: ID does not exist" containerID="546aca88cb7908ad16d42bb4241b21e93960d9528e80fd24ef3e86db5fa74022" Jan 26 13:43:22 crc kubenswrapper[4844]: I0126 13:43:22.088302 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"546aca88cb7908ad16d42bb4241b21e93960d9528e80fd24ef3e86db5fa74022"} err="failed to get container status \"546aca88cb7908ad16d42bb4241b21e93960d9528e80fd24ef3e86db5fa74022\": rpc error: code = NotFound desc = could not find container \"546aca88cb7908ad16d42bb4241b21e93960d9528e80fd24ef3e86db5fa74022\": container with ID starting with 546aca88cb7908ad16d42bb4241b21e93960d9528e80fd24ef3e86db5fa74022 not found: ID does not exist" Jan 26 13:43:22 crc kubenswrapper[4844]: I0126 13:43:22.088321 4844 scope.go:117] "RemoveContainer" 
containerID="5bf9cfe17ed7f02528289a7a5154ac5039fda74855732c5203ea99cc59373556" Jan 26 13:43:22 crc kubenswrapper[4844]: E0126 13:43:22.088948 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5bf9cfe17ed7f02528289a7a5154ac5039fda74855732c5203ea99cc59373556\": container with ID starting with 5bf9cfe17ed7f02528289a7a5154ac5039fda74855732c5203ea99cc59373556 not found: ID does not exist" containerID="5bf9cfe17ed7f02528289a7a5154ac5039fda74855732c5203ea99cc59373556" Jan 26 13:43:22 crc kubenswrapper[4844]: I0126 13:43:22.088991 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5bf9cfe17ed7f02528289a7a5154ac5039fda74855732c5203ea99cc59373556"} err="failed to get container status \"5bf9cfe17ed7f02528289a7a5154ac5039fda74855732c5203ea99cc59373556\": rpc error: code = NotFound desc = could not find container \"5bf9cfe17ed7f02528289a7a5154ac5039fda74855732c5203ea99cc59373556\": container with ID starting with 5bf9cfe17ed7f02528289a7a5154ac5039fda74855732c5203ea99cc59373556 not found: ID does not exist" Jan 26 13:43:23 crc kubenswrapper[4844]: I0126 13:43:23.327306 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a77f101-3818-4f36-a8e9-8922afe4219f" path="/var/lib/kubelet/pods/4a77f101-3818-4f36-a8e9-8922afe4219f/volumes" Jan 26 13:43:36 crc kubenswrapper[4844]: I0126 13:43:36.364405 4844 patch_prober.go:28] interesting pod/machine-config-daemon-j7r9j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 13:43:36 crc kubenswrapper[4844]: I0126 13:43:36.365178 4844 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 13:44:06 crc kubenswrapper[4844]: I0126 13:44:06.364867 4844 patch_prober.go:28] interesting pod/machine-config-daemon-j7r9j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 13:44:06 crc kubenswrapper[4844]: I0126 13:44:06.365486 4844 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 13:44:06 crc kubenswrapper[4844]: I0126 13:44:06.365550 4844 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" Jan 26 13:44:06 crc kubenswrapper[4844]: I0126 13:44:06.366798 4844 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0424fae40a4aceb05ea41676053577c321c6723fd5d4fe32f9b0937d2a5632b0"} pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 13:44:06 crc 
kubenswrapper[4844]: I0126 13:44:06.366903 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" containerName="machine-config-daemon" containerID="cri-o://0424fae40a4aceb05ea41676053577c321c6723fd5d4fe32f9b0937d2a5632b0" gracePeriod=600 Jan 26 13:44:06 crc kubenswrapper[4844]: E0126 13:44:06.494987 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 13:44:06 crc kubenswrapper[4844]: I0126 13:44:06.535228 4844 generic.go:334] "Generic (PLEG): container finished" podID="e3602fc7-397b-4d73-ab0c-45acc047397b" containerID="0424fae40a4aceb05ea41676053577c321c6723fd5d4fe32f9b0937d2a5632b0" exitCode=0 Jan 26 13:44:06 crc kubenswrapper[4844]: I0126 13:44:06.535333 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" event={"ID":"e3602fc7-397b-4d73-ab0c-45acc047397b","Type":"ContainerDied","Data":"0424fae40a4aceb05ea41676053577c321c6723fd5d4fe32f9b0937d2a5632b0"} Jan 26 13:44:06 crc kubenswrapper[4844]: I0126 13:44:06.535406 4844 scope.go:117] "RemoveContainer" containerID="7f2e320dd842af2d2fc73841752821ae2e65a052ea9aa96d77b135f1559a71cd" Jan 26 13:44:06 crc kubenswrapper[4844]: I0126 13:44:06.536709 4844 scope.go:117] "RemoveContainer" containerID="0424fae40a4aceb05ea41676053577c321c6723fd5d4fe32f9b0937d2a5632b0" Jan 26 13:44:06 crc kubenswrapper[4844]: E0126 13:44:06.537100 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 13:44:20 crc kubenswrapper[4844]: I0126 13:44:20.313544 4844 scope.go:117] "RemoveContainer" containerID="0424fae40a4aceb05ea41676053577c321c6723fd5d4fe32f9b0937d2a5632b0" Jan 26 13:44:20 crc kubenswrapper[4844]: E0126 13:44:20.314533 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 13:44:31 crc kubenswrapper[4844]: I0126 13:44:31.313726 4844 scope.go:117] "RemoveContainer" containerID="0424fae40a4aceb05ea41676053577c321c6723fd5d4fe32f9b0937d2a5632b0" Jan 26 13:44:31 crc kubenswrapper[4844]: E0126 13:44:31.314504 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 13:44:38 crc kubenswrapper[4844]: I0126 13:44:38.186725 4844 scope.go:117] "RemoveContainer" containerID="c90d139d708527958fd389f98ea017323f3f477e395eaebd3baefd4ad9ad6156" Jan 26 13:44:38 crc kubenswrapper[4844]: I0126 13:44:38.232812 4844 scope.go:117] "RemoveContainer" containerID="bf1c5d48a2a33b5b04f501e375f95797c316b1c8a0b604d586e79b1434423a75" Jan 26 13:44:38 crc kubenswrapper[4844]: I0126 13:44:38.289992 4844 scope.go:117] "RemoveContainer" containerID="06fe033a6238c7291524d3b52aa2ce20ce3dc01356c218042f83db21ff48bca4" Jan 26 13:44:44 crc kubenswrapper[4844]: I0126 13:44:44.314777 4844 scope.go:117] "RemoveContainer" containerID="0424fae40a4aceb05ea41676053577c321c6723fd5d4fe32f9b0937d2a5632b0" Jan 26 13:44:44 crc kubenswrapper[4844]: E0126 13:44:44.316077 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 13:44:58 crc kubenswrapper[4844]: I0126 13:44:58.313525 4844 scope.go:117] "RemoveContainer" containerID="0424fae40a4aceb05ea41676053577c321c6723fd5d4fe32f9b0937d2a5632b0" Jan 26 13:44:58 crc kubenswrapper[4844]: E0126 13:44:58.314548 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 13:45:00 crc kubenswrapper[4844]: I0126 13:45:00.161433 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490585-c9xnz"] Jan 26 13:45:00 crc kubenswrapper[4844]: E0126 13:45:00.162209 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29a5cf2d-375c-4835-bfe2-b64a05d4bec0" containerName="registry-server" Jan 26 13:45:00 crc kubenswrapper[4844]: I0126 13:45:00.162223 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="29a5cf2d-375c-4835-bfe2-b64a05d4bec0" containerName="registry-server" Jan 26 13:45:00 crc kubenswrapper[4844]: E0126 13:45:00.162282 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29a5cf2d-375c-4835-bfe2-b64a05d4bec0" containerName="extract-utilities" Jan 26 13:45:00 crc kubenswrapper[4844]: I0126 13:45:00.162291 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="29a5cf2d-375c-4835-bfe2-b64a05d4bec0" containerName="extract-utilities" Jan 26 13:45:00 crc kubenswrapper[4844]: E0126 13:45:00.162311 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a77f101-3818-4f36-a8e9-8922afe4219f" containerName="registry-server" Jan 26 13:45:00 crc kubenswrapper[4844]: I0126 13:45:00.162319 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a77f101-3818-4f36-a8e9-8922afe4219f" containerName="registry-server" Jan 26 13:45:00 crc kubenswrapper[4844]: E0126 13:45:00.162330 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a77f101-3818-4f36-a8e9-8922afe4219f" 
containerName="extract-utilities" Jan 26 13:45:00 crc kubenswrapper[4844]: I0126 13:45:00.162337 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a77f101-3818-4f36-a8e9-8922afe4219f" containerName="extract-utilities" Jan 26 13:45:00 crc kubenswrapper[4844]: E0126 13:45:00.162351 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29a5cf2d-375c-4835-bfe2-b64a05d4bec0" containerName="extract-content" Jan 26 13:45:00 crc kubenswrapper[4844]: I0126 13:45:00.162359 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="29a5cf2d-375c-4835-bfe2-b64a05d4bec0" containerName="extract-content" Jan 26 13:45:00 crc kubenswrapper[4844]: E0126 13:45:00.162369 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a77f101-3818-4f36-a8e9-8922afe4219f" containerName="extract-content" Jan 26 13:45:00 crc kubenswrapper[4844]: I0126 13:45:00.162377 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a77f101-3818-4f36-a8e9-8922afe4219f" containerName="extract-content" Jan 26 13:45:00 crc kubenswrapper[4844]: I0126 13:45:00.162674 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="29a5cf2d-375c-4835-bfe2-b64a05d4bec0" containerName="registry-server" Jan 26 13:45:00 crc kubenswrapper[4844]: I0126 13:45:00.162700 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a77f101-3818-4f36-a8e9-8922afe4219f" containerName="registry-server" Jan 26 13:45:00 crc kubenswrapper[4844]: I0126 13:45:00.163538 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490585-c9xnz" Jan 26 13:45:00 crc kubenswrapper[4844]: I0126 13:45:00.165805 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 26 13:45:00 crc kubenswrapper[4844]: I0126 13:45:00.166156 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 26 13:45:00 crc kubenswrapper[4844]: I0126 13:45:00.176868 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490585-c9xnz"] Jan 26 13:45:00 crc kubenswrapper[4844]: I0126 13:45:00.332856 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/adf537bf-b6e3-434a-9974-0bdb96ad52ca-secret-volume\") pod \"collect-profiles-29490585-c9xnz\" (UID: \"adf537bf-b6e3-434a-9974-0bdb96ad52ca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490585-c9xnz" Jan 26 13:45:00 crc kubenswrapper[4844]: I0126 13:45:00.333194 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9p2j\" (UniqueName: \"kubernetes.io/projected/adf537bf-b6e3-434a-9974-0bdb96ad52ca-kube-api-access-w9p2j\") pod \"collect-profiles-29490585-c9xnz\" (UID: \"adf537bf-b6e3-434a-9974-0bdb96ad52ca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490585-c9xnz" Jan 26 13:45:00 crc kubenswrapper[4844]: I0126 13:45:00.333235 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/adf537bf-b6e3-434a-9974-0bdb96ad52ca-config-volume\") pod \"collect-profiles-29490585-c9xnz\" (UID: \"adf537bf-b6e3-434a-9974-0bdb96ad52ca\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29490585-c9xnz" Jan 26 13:45:00 crc kubenswrapper[4844]: I0126 13:45:00.434806 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/adf537bf-b6e3-434a-9974-0bdb96ad52ca-secret-volume\") pod \"collect-profiles-29490585-c9xnz\" (UID: \"adf537bf-b6e3-434a-9974-0bdb96ad52ca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490585-c9xnz" Jan 26 13:45:00 crc kubenswrapper[4844]: I0126 13:45:00.434924 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9p2j\" (UniqueName: \"kubernetes.io/projected/adf537bf-b6e3-434a-9974-0bdb96ad52ca-kube-api-access-w9p2j\") pod \"collect-profiles-29490585-c9xnz\" (UID: \"adf537bf-b6e3-434a-9974-0bdb96ad52ca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490585-c9xnz" Jan 26 13:45:00 crc kubenswrapper[4844]: I0126 13:45:00.434942 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/adf537bf-b6e3-434a-9974-0bdb96ad52ca-config-volume\") pod \"collect-profiles-29490585-c9xnz\" (UID: \"adf537bf-b6e3-434a-9974-0bdb96ad52ca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490585-c9xnz" Jan 26 13:45:00 crc kubenswrapper[4844]: I0126 13:45:00.436158 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/adf537bf-b6e3-434a-9974-0bdb96ad52ca-config-volume\") pod \"collect-profiles-29490585-c9xnz\" (UID: \"adf537bf-b6e3-434a-9974-0bdb96ad52ca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490585-c9xnz" Jan 26 13:45:00 crc kubenswrapper[4844]: I0126 13:45:00.440746 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/adf537bf-b6e3-434a-9974-0bdb96ad52ca-secret-volume\") pod \"collect-profiles-29490585-c9xnz\" (UID: \"adf537bf-b6e3-434a-9974-0bdb96ad52ca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490585-c9xnz" Jan 26 13:45:00 crc kubenswrapper[4844]: I0126 13:45:00.455675 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9p2j\" (UniqueName: \"kubernetes.io/projected/adf537bf-b6e3-434a-9974-0bdb96ad52ca-kube-api-access-w9p2j\") pod \"collect-profiles-29490585-c9xnz\" (UID: \"adf537bf-b6e3-434a-9974-0bdb96ad52ca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490585-c9xnz" Jan 26 13:45:00 crc kubenswrapper[4844]: I0126 13:45:00.489451 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490585-c9xnz" Jan 26 13:45:00 crc kubenswrapper[4844]: I0126 13:45:00.933314 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490585-c9xnz"] Jan 26 13:45:01 crc kubenswrapper[4844]: I0126 13:45:01.094647 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490585-c9xnz" event={"ID":"adf537bf-b6e3-434a-9974-0bdb96ad52ca","Type":"ContainerStarted","Data":"f6a0d362ef8366c9fd842cb8229d46debc7788b95a095c4bc6e942295228e454"} Jan 26 13:45:02 crc kubenswrapper[4844]: I0126 13:45:02.109429 4844 generic.go:334] "Generic (PLEG): container finished" podID="adf537bf-b6e3-434a-9974-0bdb96ad52ca" containerID="6703894b7ec317767939fa078e9e3a23439fc711550592254bb80e1104d38d36" exitCode=0 Jan 26 13:45:02 crc kubenswrapper[4844]: I0126 13:45:02.109516 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490585-c9xnz" event={"ID":"adf537bf-b6e3-434a-9974-0bdb96ad52ca","Type":"ContainerDied","Data":"6703894b7ec317767939fa078e9e3a23439fc711550592254bb80e1104d38d36"} Jan 26 13:45:03 crc kubenswrapper[4844]: I0126 13:45:03.513394 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490585-c9xnz" Jan 26 13:45:03 crc kubenswrapper[4844]: I0126 13:45:03.702669 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/adf537bf-b6e3-434a-9974-0bdb96ad52ca-secret-volume\") pod \"adf537bf-b6e3-434a-9974-0bdb96ad52ca\" (UID: \"adf537bf-b6e3-434a-9974-0bdb96ad52ca\") " Jan 26 13:45:03 crc kubenswrapper[4844]: I0126 13:45:03.702757 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9p2j\" (UniqueName: \"kubernetes.io/projected/adf537bf-b6e3-434a-9974-0bdb96ad52ca-kube-api-access-w9p2j\") pod \"adf537bf-b6e3-434a-9974-0bdb96ad52ca\" (UID: \"adf537bf-b6e3-434a-9974-0bdb96ad52ca\") " Jan 26 13:45:03 crc kubenswrapper[4844]: I0126 13:45:03.702790 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/adf537bf-b6e3-434a-9974-0bdb96ad52ca-config-volume\") pod \"adf537bf-b6e3-434a-9974-0bdb96ad52ca\" (UID: \"adf537bf-b6e3-434a-9974-0bdb96ad52ca\") " Jan 26 13:45:03 crc kubenswrapper[4844]: I0126 13:45:03.703809 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/adf537bf-b6e3-434a-9974-0bdb96ad52ca-config-volume" (OuterVolumeSpecName: "config-volume") pod "adf537bf-b6e3-434a-9974-0bdb96ad52ca" (UID: "adf537bf-b6e3-434a-9974-0bdb96ad52ca"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:45:03 crc kubenswrapper[4844]: I0126 13:45:03.706539 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/adf537bf-b6e3-434a-9974-0bdb96ad52ca-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "adf537bf-b6e3-434a-9974-0bdb96ad52ca" (UID: "adf537bf-b6e3-434a-9974-0bdb96ad52ca"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:45:03 crc kubenswrapper[4844]: I0126 13:45:03.707654 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/adf537bf-b6e3-434a-9974-0bdb96ad52ca-kube-api-access-w9p2j" (OuterVolumeSpecName: "kube-api-access-w9p2j") pod "adf537bf-b6e3-434a-9974-0bdb96ad52ca" (UID: "adf537bf-b6e3-434a-9974-0bdb96ad52ca"). InnerVolumeSpecName "kube-api-access-w9p2j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:45:03 crc kubenswrapper[4844]: I0126 13:45:03.804223 4844 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/adf537bf-b6e3-434a-9974-0bdb96ad52ca-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 13:45:03 crc kubenswrapper[4844]: I0126 13:45:03.804263 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9p2j\" (UniqueName: \"kubernetes.io/projected/adf537bf-b6e3-434a-9974-0bdb96ad52ca-kube-api-access-w9p2j\") on node \"crc\" DevicePath \"\"" Jan 26 13:45:03 crc kubenswrapper[4844]: I0126 13:45:03.804272 4844 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/adf537bf-b6e3-434a-9974-0bdb96ad52ca-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 13:45:04 crc kubenswrapper[4844]: I0126 13:45:04.134099 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490585-c9xnz" event={"ID":"adf537bf-b6e3-434a-9974-0bdb96ad52ca","Type":"ContainerDied","Data":"f6a0d362ef8366c9fd842cb8229d46debc7788b95a095c4bc6e942295228e454"} Jan 26 13:45:04 crc kubenswrapper[4844]: I0126 13:45:04.134132 4844 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f6a0d362ef8366c9fd842cb8229d46debc7788b95a095c4bc6e942295228e454" Jan 26 13:45:04 crc kubenswrapper[4844]: I0126 13:45:04.134280 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490585-c9xnz" Jan 26 13:45:04 crc kubenswrapper[4844]: I0126 13:45:04.591687 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490540-qvh5g"] Jan 26 13:45:04 crc kubenswrapper[4844]: I0126 13:45:04.600372 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490540-qvh5g"] Jan 26 13:45:05 crc kubenswrapper[4844]: I0126 13:45:05.331165 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="632a6099-975b-4832-8c3a-d0dbd49c482f" path="/var/lib/kubelet/pods/632a6099-975b-4832-8c3a-d0dbd49c482f/volumes" Jan 26 13:45:09 crc kubenswrapper[4844]: I0126 13:45:09.182021 4844 generic.go:334] "Generic (PLEG): container finished" podID="421111b7-6358-404a-b57f-b6529eb910f9" containerID="e8c9a6ea50a57ea40fc6569fd5ad7cf3957962addebff6b9cb7f5235df7d8223" exitCode=0 Jan 26 13:45:09 crc kubenswrapper[4844]: I0126 13:45:09.182132 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2xrbw" event={"ID":"421111b7-6358-404a-b57f-b6529eb910f9","Type":"ContainerDied","Data":"e8c9a6ea50a57ea40fc6569fd5ad7cf3957962addebff6b9cb7f5235df7d8223"} Jan 26 13:45:11 crc kubenswrapper[4844]: I0126 13:45:10.647592 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2xrbw" Jan 26 13:45:11 crc kubenswrapper[4844]: I0126 13:45:10.744942 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/421111b7-6358-404a-b57f-b6529eb910f9-nova-cell1-compute-config-1\") pod \"421111b7-6358-404a-b57f-b6529eb910f9\" (UID: \"421111b7-6358-404a-b57f-b6529eb910f9\") " Jan 26 13:45:11 crc kubenswrapper[4844]: I0126 13:45:10.745019 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/421111b7-6358-404a-b57f-b6529eb910f9-nova-cell1-compute-config-0\") pod \"421111b7-6358-404a-b57f-b6529eb910f9\" (UID: \"421111b7-6358-404a-b57f-b6529eb910f9\") " Jan 26 13:45:11 crc kubenswrapper[4844]: I0126 13:45:10.745051 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/421111b7-6358-404a-b57f-b6529eb910f9-inventory\") pod \"421111b7-6358-404a-b57f-b6529eb910f9\" (UID: \"421111b7-6358-404a-b57f-b6529eb910f9\") " Jan 26 13:45:11 crc kubenswrapper[4844]: I0126 13:45:10.745201 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/421111b7-6358-404a-b57f-b6529eb910f9-ssh-key-openstack-edpm-ipam\") pod \"421111b7-6358-404a-b57f-b6529eb910f9\" (UID: \"421111b7-6358-404a-b57f-b6529eb910f9\") " Jan 26 13:45:11 crc kubenswrapper[4844]: I0126 13:45:10.745237 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/421111b7-6358-404a-b57f-b6529eb910f9-nova-migration-ssh-key-1\") pod \"421111b7-6358-404a-b57f-b6529eb910f9\" (UID: \"421111b7-6358-404a-b57f-b6529eb910f9\") " Jan 26 13:45:11 crc kubenswrapper[4844]: I0126 13:45:10.745286 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/421111b7-6358-404a-b57f-b6529eb910f9-nova-combined-ca-bundle\") pod \"421111b7-6358-404a-b57f-b6529eb910f9\" (UID: \"421111b7-6358-404a-b57f-b6529eb910f9\") " Jan 26 13:45:11 crc kubenswrapper[4844]: I0126 13:45:10.745365 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l65dc\" (UniqueName: \"kubernetes.io/projected/421111b7-6358-404a-b57f-b6529eb910f9-kube-api-access-l65dc\") pod \"421111b7-6358-404a-b57f-b6529eb910f9\" (UID: \"421111b7-6358-404a-b57f-b6529eb910f9\") " Jan 26 13:45:11 crc kubenswrapper[4844]: I0126 13:45:10.745489 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/421111b7-6358-404a-b57f-b6529eb910f9-nova-extra-config-0\") pod \"421111b7-6358-404a-b57f-b6529eb910f9\" (UID: \"421111b7-6358-404a-b57f-b6529eb910f9\") " Jan 26 13:45:11 crc kubenswrapper[4844]: I0126 13:45:10.745562 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/421111b7-6358-404a-b57f-b6529eb910f9-nova-migration-ssh-key-0\") pod \"421111b7-6358-404a-b57f-b6529eb910f9\" (UID: \"421111b7-6358-404a-b57f-b6529eb910f9\") " Jan 26 13:45:11 crc kubenswrapper[4844]: I0126 13:45:10.751694 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/421111b7-6358-404a-b57f-b6529eb910f9-kube-api-access-l65dc" (OuterVolumeSpecName: "kube-api-access-l65dc") pod "421111b7-6358-404a-b57f-b6529eb910f9" (UID: "421111b7-6358-404a-b57f-b6529eb910f9"). InnerVolumeSpecName "kube-api-access-l65dc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:45:11 crc kubenswrapper[4844]: I0126 13:45:10.764584 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/421111b7-6358-404a-b57f-b6529eb910f9-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "421111b7-6358-404a-b57f-b6529eb910f9" (UID: "421111b7-6358-404a-b57f-b6529eb910f9"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:45:11 crc kubenswrapper[4844]: I0126 13:45:10.770957 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/421111b7-6358-404a-b57f-b6529eb910f9-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "421111b7-6358-404a-b57f-b6529eb910f9" (UID: "421111b7-6358-404a-b57f-b6529eb910f9"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:45:11 crc kubenswrapper[4844]: I0126 13:45:10.780016 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/421111b7-6358-404a-b57f-b6529eb910f9-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "421111b7-6358-404a-b57f-b6529eb910f9" (UID: "421111b7-6358-404a-b57f-b6529eb910f9"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:45:11 crc kubenswrapper[4844]: I0126 13:45:10.783904 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/421111b7-6358-404a-b57f-b6529eb910f9-inventory" (OuterVolumeSpecName: "inventory") pod "421111b7-6358-404a-b57f-b6529eb910f9" (UID: "421111b7-6358-404a-b57f-b6529eb910f9"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:45:11 crc kubenswrapper[4844]: I0126 13:45:10.784861 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/421111b7-6358-404a-b57f-b6529eb910f9-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "421111b7-6358-404a-b57f-b6529eb910f9" (UID: "421111b7-6358-404a-b57f-b6529eb910f9"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:45:11 crc kubenswrapper[4844]: I0126 13:45:10.787041 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/421111b7-6358-404a-b57f-b6529eb910f9-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "421111b7-6358-404a-b57f-b6529eb910f9" (UID: "421111b7-6358-404a-b57f-b6529eb910f9"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:45:11 crc kubenswrapper[4844]: I0126 13:45:10.788643 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/421111b7-6358-404a-b57f-b6529eb910f9-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "421111b7-6358-404a-b57f-b6529eb910f9" (UID: "421111b7-6358-404a-b57f-b6529eb910f9"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:45:11 crc kubenswrapper[4844]: I0126 13:45:10.800233 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/421111b7-6358-404a-b57f-b6529eb910f9-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "421111b7-6358-404a-b57f-b6529eb910f9" (UID: "421111b7-6358-404a-b57f-b6529eb910f9"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:45:11 crc kubenswrapper[4844]: I0126 13:45:10.848515 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l65dc\" (UniqueName: \"kubernetes.io/projected/421111b7-6358-404a-b57f-b6529eb910f9-kube-api-access-l65dc\") on node \"crc\" DevicePath \"\"" Jan 26 13:45:11 crc kubenswrapper[4844]: I0126 13:45:10.848551 4844 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/421111b7-6358-404a-b57f-b6529eb910f9-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Jan 26 13:45:11 crc kubenswrapper[4844]: I0126 13:45:10.848569 4844 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/421111b7-6358-404a-b57f-b6529eb910f9-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Jan 26 13:45:11 crc kubenswrapper[4844]: I0126 13:45:10.848589 4844 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/421111b7-6358-404a-b57f-b6529eb910f9-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Jan 26 13:45:11 crc kubenswrapper[4844]: I0126 13:45:10.848634 4844 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/421111b7-6358-404a-b57f-b6529eb910f9-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Jan 26 13:45:11 crc kubenswrapper[4844]: I0126 13:45:10.848651 4844 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/421111b7-6358-404a-b57f-b6529eb910f9-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 13:45:11 crc kubenswrapper[4844]: I0126 13:45:10.848664 4844 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/421111b7-6358-404a-b57f-b6529eb910f9-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 13:45:11 crc kubenswrapper[4844]: I0126 13:45:10.848675 4844 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/421111b7-6358-404a-b57f-b6529eb910f9-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Jan 26 13:45:11 crc kubenswrapper[4844]: I0126 13:45:10.848687 4844 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/421111b7-6358-404a-b57f-b6529eb910f9-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 13:45:11 crc kubenswrapper[4844]: I0126 13:45:11.206312 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2xrbw" event={"ID":"421111b7-6358-404a-b57f-b6529eb910f9","Type":"ContainerDied","Data":"ba68a299af2a26925d82f5647ff972056ee04860104e1ed9d8cbafd2c110499d"} Jan 26 13:45:11 crc kubenswrapper[4844]: I0126 13:45:11.206724 4844 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="ba68a299af2a26925d82f5647ff972056ee04860104e1ed9d8cbafd2c110499d" Jan 26 13:45:11 crc kubenswrapper[4844]: I0126 13:45:11.206504 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2xrbw" Jan 26 13:45:11 crc kubenswrapper[4844]: I0126 13:45:11.315201 4844 scope.go:117] "RemoveContainer" containerID="0424fae40a4aceb05ea41676053577c321c6723fd5d4fe32f9b0937d2a5632b0" Jan 26 13:45:11 crc kubenswrapper[4844]: E0126 13:45:11.315746 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 13:45:11 crc kubenswrapper[4844]: I0126 13:45:11.530731 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dc9zd"] Jan 26 13:45:11 crc kubenswrapper[4844]: E0126 13:45:11.531405 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="421111b7-6358-404a-b57f-b6529eb910f9" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 26 13:45:11 crc kubenswrapper[4844]: I0126 13:45:11.531436 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="421111b7-6358-404a-b57f-b6529eb910f9" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 26 13:45:11 crc kubenswrapper[4844]: E0126 13:45:11.531481 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adf537bf-b6e3-434a-9974-0bdb96ad52ca" containerName="collect-profiles" Jan 26 13:45:11 crc kubenswrapper[4844]: I0126 13:45:11.531495 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="adf537bf-b6e3-434a-9974-0bdb96ad52ca" containerName="collect-profiles" Jan 26 13:45:11 crc kubenswrapper[4844]: I0126 13:45:11.531895 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="421111b7-6358-404a-b57f-b6529eb910f9" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 26 13:45:11 crc kubenswrapper[4844]: I0126 13:45:11.531955 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="adf537bf-b6e3-434a-9974-0bdb96ad52ca" containerName="collect-profiles" Jan 26 13:45:11 crc kubenswrapper[4844]: I0126 13:45:11.533161 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dc9zd" Jan 26 13:45:11 crc kubenswrapper[4844]: I0126 13:45:11.535842 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r4j2z" Jan 26 13:45:11 crc kubenswrapper[4844]: I0126 13:45:11.536132 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 13:45:11 crc kubenswrapper[4844]: I0126 13:45:11.536135 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 13:45:11 crc kubenswrapper[4844]: I0126 13:45:11.537109 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Jan 26 13:45:11 crc kubenswrapper[4844]: I0126 13:45:11.542060 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 13:45:11 crc kubenswrapper[4844]: I0126 13:45:11.548492 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dc9zd"] Jan 26 13:45:11 crc kubenswrapper[4844]: I0126 13:45:11.676799 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/28d2f4e7-9d62-41ba-88db-fc0591ec6d43-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-dc9zd\" (UID: \"28d2f4e7-9d62-41ba-88db-fc0591ec6d43\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dc9zd" Jan 26 13:45:11 crc kubenswrapper[4844]: I0126 13:45:11.676978 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/28d2f4e7-9d62-41ba-88db-fc0591ec6d43-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-dc9zd\" (UID: \"28d2f4e7-9d62-41ba-88db-fc0591ec6d43\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dc9zd" Jan 26 13:45:11 crc kubenswrapper[4844]: I0126 13:45:11.677187 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/28d2f4e7-9d62-41ba-88db-fc0591ec6d43-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-dc9zd\" (UID: \"28d2f4e7-9d62-41ba-88db-fc0591ec6d43\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dc9zd" Jan 26 13:45:11 crc kubenswrapper[4844]: I0126 13:45:11.677307 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/28d2f4e7-9d62-41ba-88db-fc0591ec6d43-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-dc9zd\" (UID: \"28d2f4e7-9d62-41ba-88db-fc0591ec6d43\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dc9zd" Jan 26 13:45:11 crc kubenswrapper[4844]: I0126 13:45:11.677444 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28d2f4e7-9d62-41ba-88db-fc0591ec6d43-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-dc9zd\" (UID: \"28d2f4e7-9d62-41ba-88db-fc0591ec6d43\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dc9zd" Jan 26 
13:45:11 crc kubenswrapper[4844]: I0126 13:45:11.677583 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mghdg\" (UniqueName: \"kubernetes.io/projected/28d2f4e7-9d62-41ba-88db-fc0591ec6d43-kube-api-access-mghdg\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-dc9zd\" (UID: \"28d2f4e7-9d62-41ba-88db-fc0591ec6d43\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dc9zd" Jan 26 13:45:11 crc kubenswrapper[4844]: I0126 13:45:11.677876 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/28d2f4e7-9d62-41ba-88db-fc0591ec6d43-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-dc9zd\" (UID: \"28d2f4e7-9d62-41ba-88db-fc0591ec6d43\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dc9zd" Jan 26 13:45:11 crc kubenswrapper[4844]: I0126 13:45:11.779457 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/28d2f4e7-9d62-41ba-88db-fc0591ec6d43-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-dc9zd\" (UID: \"28d2f4e7-9d62-41ba-88db-fc0591ec6d43\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dc9zd" Jan 26 13:45:11 crc kubenswrapper[4844]: I0126 13:45:11.779582 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/28d2f4e7-9d62-41ba-88db-fc0591ec6d43-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-dc9zd\" (UID: \"28d2f4e7-9d62-41ba-88db-fc0591ec6d43\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dc9zd" Jan 26 13:45:11 crc kubenswrapper[4844]: I0126 13:45:11.779675 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/28d2f4e7-9d62-41ba-88db-fc0591ec6d43-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-dc9zd\" (UID: \"28d2f4e7-9d62-41ba-88db-fc0591ec6d43\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dc9zd" Jan 26 13:45:11 crc kubenswrapper[4844]: I0126 13:45:11.779780 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/28d2f4e7-9d62-41ba-88db-fc0591ec6d43-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-dc9zd\" (UID: \"28d2f4e7-9d62-41ba-88db-fc0591ec6d43\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dc9zd" Jan 26 13:45:11 crc kubenswrapper[4844]: I0126 13:45:11.779826 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/28d2f4e7-9d62-41ba-88db-fc0591ec6d43-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-dc9zd\" (UID: \"28d2f4e7-9d62-41ba-88db-fc0591ec6d43\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dc9zd" Jan 26 13:45:11 crc kubenswrapper[4844]: I0126 13:45:11.780060 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28d2f4e7-9d62-41ba-88db-fc0591ec6d43-telemetry-combined-ca-bundle\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-dc9zd\" (UID: \"28d2f4e7-9d62-41ba-88db-fc0591ec6d43\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dc9zd" Jan 26 13:45:11 crc kubenswrapper[4844]: I0126 13:45:11.780132 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mghdg\" (UniqueName: \"kubernetes.io/projected/28d2f4e7-9d62-41ba-88db-fc0591ec6d43-kube-api-access-mghdg\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-dc9zd\" (UID: \"28d2f4e7-9d62-41ba-88db-fc0591ec6d43\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dc9zd" Jan 26 13:45:11 crc kubenswrapper[4844]: I0126 13:45:11.783491 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/28d2f4e7-9d62-41ba-88db-fc0591ec6d43-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-dc9zd\" (UID: \"28d2f4e7-9d62-41ba-88db-fc0591ec6d43\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dc9zd" Jan 26 13:45:11 crc kubenswrapper[4844]: I0126 13:45:11.785214 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/28d2f4e7-9d62-41ba-88db-fc0591ec6d43-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-dc9zd\" (UID: \"28d2f4e7-9d62-41ba-88db-fc0591ec6d43\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dc9zd" Jan 26 13:45:11 crc kubenswrapper[4844]: I0126 13:45:11.785481 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/28d2f4e7-9d62-41ba-88db-fc0591ec6d43-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-dc9zd\" (UID: \"28d2f4e7-9d62-41ba-88db-fc0591ec6d43\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dc9zd" Jan 26 13:45:11 crc kubenswrapper[4844]: I0126 13:45:11.786116 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/28d2f4e7-9d62-41ba-88db-fc0591ec6d43-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-dc9zd\" (UID: \"28d2f4e7-9d62-41ba-88db-fc0591ec6d43\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dc9zd" Jan 26 13:45:11 crc kubenswrapper[4844]: I0126 13:45:11.791477 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28d2f4e7-9d62-41ba-88db-fc0591ec6d43-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-dc9zd\" (UID: \"28d2f4e7-9d62-41ba-88db-fc0591ec6d43\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dc9zd" Jan 26 13:45:11 crc kubenswrapper[4844]: I0126 13:45:11.792194 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/28d2f4e7-9d62-41ba-88db-fc0591ec6d43-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-dc9zd\" (UID: \"28d2f4e7-9d62-41ba-88db-fc0591ec6d43\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dc9zd" Jan 26 13:45:11 crc kubenswrapper[4844]: I0126 13:45:11.810632 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mghdg\" (UniqueName: 
\"kubernetes.io/projected/28d2f4e7-9d62-41ba-88db-fc0591ec6d43-kube-api-access-mghdg\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-dc9zd\" (UID: \"28d2f4e7-9d62-41ba-88db-fc0591ec6d43\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dc9zd" Jan 26 13:45:11 crc kubenswrapper[4844]: I0126 13:45:11.863626 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dc9zd" Jan 26 13:45:12 crc kubenswrapper[4844]: I0126 13:45:12.268251 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dc9zd"] Jan 26 13:45:13 crc kubenswrapper[4844]: I0126 13:45:13.229021 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dc9zd" event={"ID":"28d2f4e7-9d62-41ba-88db-fc0591ec6d43","Type":"ContainerStarted","Data":"06ca27d13255cd038b63d91df75c1292ae27c7aa6d8ac6d45f9bc7381f7910d9"} Jan 26 13:45:13 crc kubenswrapper[4844]: I0126 13:45:13.229335 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dc9zd" event={"ID":"28d2f4e7-9d62-41ba-88db-fc0591ec6d43","Type":"ContainerStarted","Data":"9b5d003c02d5f11e531c8e019b0cfbd0961b1829f9aee5a2cdfdfe45c2bb4bb9"} Jan 26 13:45:13 crc kubenswrapper[4844]: I0126 13:45:13.257693 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dc9zd" podStartSLOduration=1.7294161780000001 podStartE2EDuration="2.257671341s" podCreationTimestamp="2026-01-26 13:45:11 +0000 UTC" firstStartedPulling="2026-01-26 13:45:12.258104248 +0000 UTC m=+3689.191471870" lastFinishedPulling="2026-01-26 13:45:12.786359381 +0000 UTC m=+3689.719727033" observedRunningTime="2026-01-26 13:45:13.247578756 +0000 UTC m=+3690.180946368" watchObservedRunningTime="2026-01-26 13:45:13.257671341 +0000 UTC m=+3690.191038963" Jan 26 13:45:23 crc kubenswrapper[4844]: I0126 13:45:23.318637 4844 scope.go:117] "RemoveContainer" containerID="0424fae40a4aceb05ea41676053577c321c6723fd5d4fe32f9b0937d2a5632b0" Jan 26 13:45:23 crc kubenswrapper[4844]: E0126 13:45:23.320375 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 13:45:35 crc kubenswrapper[4844]: I0126 13:45:35.313932 4844 scope.go:117] "RemoveContainer" containerID="0424fae40a4aceb05ea41676053577c321c6723fd5d4fe32f9b0937d2a5632b0" Jan 26 13:45:35 crc kubenswrapper[4844]: E0126 13:45:35.314963 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 13:45:38 crc kubenswrapper[4844]: I0126 13:45:38.372153 4844 scope.go:117] "RemoveContainer" containerID="8d2ec9a1ea23de88c7bb56a717a32f52d3430ea03c06d1e640422b042f5e7dcb" Jan 26 13:45:50 crc 
kubenswrapper[4844]: I0126 13:45:50.313954 4844 scope.go:117] "RemoveContainer" containerID="0424fae40a4aceb05ea41676053577c321c6723fd5d4fe32f9b0937d2a5632b0" Jan 26 13:45:50 crc kubenswrapper[4844]: E0126 13:45:50.315295 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 13:46:02 crc kubenswrapper[4844]: I0126 13:46:02.313860 4844 scope.go:117] "RemoveContainer" containerID="0424fae40a4aceb05ea41676053577c321c6723fd5d4fe32f9b0937d2a5632b0" Jan 26 13:46:02 crc kubenswrapper[4844]: E0126 13:46:02.314537 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 13:46:17 crc kubenswrapper[4844]: I0126 13:46:17.313292 4844 scope.go:117] "RemoveContainer" containerID="0424fae40a4aceb05ea41676053577c321c6723fd5d4fe32f9b0937d2a5632b0" Jan 26 13:46:17 crc kubenswrapper[4844]: E0126 13:46:17.314125 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 13:46:32 crc kubenswrapper[4844]: I0126 13:46:32.313323 4844 scope.go:117] "RemoveContainer" containerID="0424fae40a4aceb05ea41676053577c321c6723fd5d4fe32f9b0937d2a5632b0" Jan 26 13:46:32 crc kubenswrapper[4844]: E0126 13:46:32.314448 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 13:46:45 crc kubenswrapper[4844]: I0126 13:46:45.318581 4844 scope.go:117] "RemoveContainer" containerID="0424fae40a4aceb05ea41676053577c321c6723fd5d4fe32f9b0937d2a5632b0" Jan 26 13:46:45 crc kubenswrapper[4844]: E0126 13:46:45.319415 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 13:46:59 crc kubenswrapper[4844]: I0126 13:46:59.314102 4844 scope.go:117] "RemoveContainer" containerID="0424fae40a4aceb05ea41676053577c321c6723fd5d4fe32f9b0937d2a5632b0" Jan 26 13:46:59 crc 
kubenswrapper[4844]: E0126 13:46:59.315396 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 13:47:10 crc kubenswrapper[4844]: I0126 13:47:10.313672 4844 scope.go:117] "RemoveContainer" containerID="0424fae40a4aceb05ea41676053577c321c6723fd5d4fe32f9b0937d2a5632b0" Jan 26 13:47:10 crc kubenswrapper[4844]: E0126 13:47:10.315550 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 13:47:23 crc kubenswrapper[4844]: I0126 13:47:23.319012 4844 scope.go:117] "RemoveContainer" containerID="0424fae40a4aceb05ea41676053577c321c6723fd5d4fe32f9b0937d2a5632b0" Jan 26 13:47:23 crc kubenswrapper[4844]: E0126 13:47:23.319613 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 13:47:37 crc kubenswrapper[4844]: I0126 13:47:37.314464 4844 scope.go:117] "RemoveContainer" containerID="0424fae40a4aceb05ea41676053577c321c6723fd5d4fe32f9b0937d2a5632b0" Jan 26 13:47:37 crc kubenswrapper[4844]: E0126 13:47:37.315690 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 13:47:40 crc kubenswrapper[4844]: I0126 13:47:40.897080 4844 generic.go:334] "Generic (PLEG): container finished" podID="28d2f4e7-9d62-41ba-88db-fc0591ec6d43" containerID="06ca27d13255cd038b63d91df75c1292ae27c7aa6d8ac6d45f9bc7381f7910d9" exitCode=0 Jan 26 13:47:40 crc kubenswrapper[4844]: I0126 13:47:40.897189 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dc9zd" event={"ID":"28d2f4e7-9d62-41ba-88db-fc0591ec6d43","Type":"ContainerDied","Data":"06ca27d13255cd038b63d91df75c1292ae27c7aa6d8ac6d45f9bc7381f7910d9"} Jan 26 13:47:42 crc kubenswrapper[4844]: I0126 13:47:42.341773 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dc9zd" Jan 26 13:47:42 crc kubenswrapper[4844]: I0126 13:47:42.468984 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28d2f4e7-9d62-41ba-88db-fc0591ec6d43-telemetry-combined-ca-bundle\") pod \"28d2f4e7-9d62-41ba-88db-fc0591ec6d43\" (UID: \"28d2f4e7-9d62-41ba-88db-fc0591ec6d43\") " Jan 26 13:47:42 crc kubenswrapper[4844]: I0126 13:47:42.469447 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/28d2f4e7-9d62-41ba-88db-fc0591ec6d43-ssh-key-openstack-edpm-ipam\") pod \"28d2f4e7-9d62-41ba-88db-fc0591ec6d43\" (UID: \"28d2f4e7-9d62-41ba-88db-fc0591ec6d43\") " Jan 26 13:47:42 crc kubenswrapper[4844]: I0126 13:47:42.469564 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/28d2f4e7-9d62-41ba-88db-fc0591ec6d43-ceilometer-compute-config-data-0\") pod \"28d2f4e7-9d62-41ba-88db-fc0591ec6d43\" (UID: \"28d2f4e7-9d62-41ba-88db-fc0591ec6d43\") " Jan 26 13:47:42 crc kubenswrapper[4844]: I0126 13:47:42.469654 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/28d2f4e7-9d62-41ba-88db-fc0591ec6d43-ceilometer-compute-config-data-2\") pod \"28d2f4e7-9d62-41ba-88db-fc0591ec6d43\" (UID: \"28d2f4e7-9d62-41ba-88db-fc0591ec6d43\") " Jan 26 13:47:42 crc kubenswrapper[4844]: I0126 13:47:42.469696 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/28d2f4e7-9d62-41ba-88db-fc0591ec6d43-inventory\") pod \"28d2f4e7-9d62-41ba-88db-fc0591ec6d43\" (UID: \"28d2f4e7-9d62-41ba-88db-fc0591ec6d43\") " Jan 26 13:47:42 crc kubenswrapper[4844]: I0126 13:47:42.469761 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mghdg\" (UniqueName: \"kubernetes.io/projected/28d2f4e7-9d62-41ba-88db-fc0591ec6d43-kube-api-access-mghdg\") pod \"28d2f4e7-9d62-41ba-88db-fc0591ec6d43\" (UID: \"28d2f4e7-9d62-41ba-88db-fc0591ec6d43\") " Jan 26 13:47:42 crc kubenswrapper[4844]: I0126 13:47:42.469801 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/28d2f4e7-9d62-41ba-88db-fc0591ec6d43-ceilometer-compute-config-data-1\") pod \"28d2f4e7-9d62-41ba-88db-fc0591ec6d43\" (UID: \"28d2f4e7-9d62-41ba-88db-fc0591ec6d43\") " Jan 26 13:47:42 crc kubenswrapper[4844]: I0126 13:47:42.480996 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28d2f4e7-9d62-41ba-88db-fc0591ec6d43-kube-api-access-mghdg" (OuterVolumeSpecName: "kube-api-access-mghdg") pod "28d2f4e7-9d62-41ba-88db-fc0591ec6d43" (UID: "28d2f4e7-9d62-41ba-88db-fc0591ec6d43"). InnerVolumeSpecName "kube-api-access-mghdg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:47:42 crc kubenswrapper[4844]: I0126 13:47:42.482712 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28d2f4e7-9d62-41ba-88db-fc0591ec6d43-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "28d2f4e7-9d62-41ba-88db-fc0591ec6d43" (UID: "28d2f4e7-9d62-41ba-88db-fc0591ec6d43"). 
InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:47:42 crc kubenswrapper[4844]: I0126 13:47:42.500569 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28d2f4e7-9d62-41ba-88db-fc0591ec6d43-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "28d2f4e7-9d62-41ba-88db-fc0591ec6d43" (UID: "28d2f4e7-9d62-41ba-88db-fc0591ec6d43"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:47:42 crc kubenswrapper[4844]: I0126 13:47:42.515103 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28d2f4e7-9d62-41ba-88db-fc0591ec6d43-inventory" (OuterVolumeSpecName: "inventory") pod "28d2f4e7-9d62-41ba-88db-fc0591ec6d43" (UID: "28d2f4e7-9d62-41ba-88db-fc0591ec6d43"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:47:42 crc kubenswrapper[4844]: I0126 13:47:42.517925 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28d2f4e7-9d62-41ba-88db-fc0591ec6d43-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "28d2f4e7-9d62-41ba-88db-fc0591ec6d43" (UID: "28d2f4e7-9d62-41ba-88db-fc0591ec6d43"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:47:42 crc kubenswrapper[4844]: I0126 13:47:42.523975 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28d2f4e7-9d62-41ba-88db-fc0591ec6d43-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "28d2f4e7-9d62-41ba-88db-fc0591ec6d43" (UID: "28d2f4e7-9d62-41ba-88db-fc0591ec6d43"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:47:42 crc kubenswrapper[4844]: I0126 13:47:42.533068 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28d2f4e7-9d62-41ba-88db-fc0591ec6d43-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "28d2f4e7-9d62-41ba-88db-fc0591ec6d43" (UID: "28d2f4e7-9d62-41ba-88db-fc0591ec6d43"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:47:42 crc kubenswrapper[4844]: I0126 13:47:42.572980 4844 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28d2f4e7-9d62-41ba-88db-fc0591ec6d43-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 13:47:42 crc kubenswrapper[4844]: I0126 13:47:42.573033 4844 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/28d2f4e7-9d62-41ba-88db-fc0591ec6d43-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 13:47:42 crc kubenswrapper[4844]: I0126 13:47:42.573054 4844 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/28d2f4e7-9d62-41ba-88db-fc0591ec6d43-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Jan 26 13:47:42 crc kubenswrapper[4844]: I0126 13:47:42.573074 4844 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/28d2f4e7-9d62-41ba-88db-fc0591ec6d43-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Jan 26 13:47:42 crc kubenswrapper[4844]: I0126 13:47:42.573097 4844 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/28d2f4e7-9d62-41ba-88db-fc0591ec6d43-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 13:47:42 crc kubenswrapper[4844]: I0126 13:47:42.573114 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mghdg\" (UniqueName: \"kubernetes.io/projected/28d2f4e7-9d62-41ba-88db-fc0591ec6d43-kube-api-access-mghdg\") on node \"crc\" DevicePath \"\"" Jan 26 13:47:42 crc kubenswrapper[4844]: I0126 13:47:42.573132 4844 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/28d2f4e7-9d62-41ba-88db-fc0591ec6d43-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Jan 26 13:47:42 crc kubenswrapper[4844]: I0126 13:47:42.920970 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dc9zd" event={"ID":"28d2f4e7-9d62-41ba-88db-fc0591ec6d43","Type":"ContainerDied","Data":"9b5d003c02d5f11e531c8e019b0cfbd0961b1829f9aee5a2cdfdfe45c2bb4bb9"} Jan 26 13:47:42 crc kubenswrapper[4844]: I0126 13:47:42.921052 4844 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b5d003c02d5f11e531c8e019b0cfbd0961b1829f9aee5a2cdfdfe45c2bb4bb9" Jan 26 13:47:42 crc kubenswrapper[4844]: I0126 13:47:42.921067 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dc9zd" Jan 26 13:47:50 crc kubenswrapper[4844]: I0126 13:47:50.312972 4844 scope.go:117] "RemoveContainer" containerID="0424fae40a4aceb05ea41676053577c321c6723fd5d4fe32f9b0937d2a5632b0" Jan 26 13:47:50 crc kubenswrapper[4844]: E0126 13:47:50.314948 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 13:48:01 crc kubenswrapper[4844]: I0126 13:48:01.313179 4844 scope.go:117] "RemoveContainer" containerID="0424fae40a4aceb05ea41676053577c321c6723fd5d4fe32f9b0937d2a5632b0" Jan 26 13:48:01 crc kubenswrapper[4844]: E0126 13:48:01.314084 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 13:48:12 crc kubenswrapper[4844]: I0126 13:48:12.313934 4844 scope.go:117] "RemoveContainer" containerID="0424fae40a4aceb05ea41676053577c321c6723fd5d4fe32f9b0937d2a5632b0" Jan 26 13:48:12 crc kubenswrapper[4844]: E0126 13:48:12.315483 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 13:48:19 crc kubenswrapper[4844]: I0126 13:48:19.987237 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-backup-0"] Jan 26 13:48:19 crc kubenswrapper[4844]: E0126 13:48:19.988468 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28d2f4e7-9d62-41ba-88db-fc0591ec6d43" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 26 13:48:19 crc kubenswrapper[4844]: I0126 13:48:19.988491 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="28d2f4e7-9d62-41ba-88db-fc0591ec6d43" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 26 13:48:19 crc kubenswrapper[4844]: I0126 13:48:19.988855 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="28d2f4e7-9d62-41ba-88db-fc0591ec6d43" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 26 13:48:19 crc kubenswrapper[4844]: I0126 13:48:19.993075 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-backup-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.002843 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-backup-config-data" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.007533 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.064535 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-volume-nfs-0"] Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.066556 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-nfs-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.068570 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-nfs-config-data" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.073587 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-nfs-0"] Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.119770 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40715f48-d3b7-4cca-9f3d-cba20a94ed39-combined-ca-bundle\") pod \"cinder-volume-nfs-0\" (UID: \"40715f48-d3b7-4cca-9f3d-cba20a94ed39\") " pod="openstack/cinder-volume-nfs-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.119817 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/40715f48-d3b7-4cca-9f3d-cba20a94ed39-lib-modules\") pod \"cinder-volume-nfs-0\" (UID: \"40715f48-d3b7-4cca-9f3d-cba20a94ed39\") " pod="openstack/cinder-volume-nfs-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.119847 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/2da46443-17b2-425a-ad97-c2dcae16074b-sys\") pod \"cinder-backup-0\" (UID: \"2da46443-17b2-425a-ad97-c2dcae16074b\") " pod="openstack/cinder-backup-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.119873 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/40715f48-d3b7-4cca-9f3d-cba20a94ed39-run\") pod \"cinder-volume-nfs-0\" (UID: \"40715f48-d3b7-4cca-9f3d-cba20a94ed39\") " pod="openstack/cinder-volume-nfs-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.119893 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/2da46443-17b2-425a-ad97-c2dcae16074b-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"2da46443-17b2-425a-ad97-c2dcae16074b\") " pod="openstack/cinder-backup-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.119919 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2da46443-17b2-425a-ad97-c2dcae16074b-config-data\") pod \"cinder-backup-0\" (UID: \"2da46443-17b2-425a-ad97-c2dcae16074b\") " pod="openstack/cinder-backup-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.119937 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-852bp\" (UniqueName: 
\"kubernetes.io/projected/40715f48-d3b7-4cca-9f3d-cba20a94ed39-kube-api-access-852bp\") pod \"cinder-volume-nfs-0\" (UID: \"40715f48-d3b7-4cca-9f3d-cba20a94ed39\") " pod="openstack/cinder-volume-nfs-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.119954 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/40715f48-d3b7-4cca-9f3d-cba20a94ed39-scripts\") pod \"cinder-volume-nfs-0\" (UID: \"40715f48-d3b7-4cca-9f3d-cba20a94ed39\") " pod="openstack/cinder-volume-nfs-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.119977 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2da46443-17b2-425a-ad97-c2dcae16074b-lib-modules\") pod \"cinder-backup-0\" (UID: \"2da46443-17b2-425a-ad97-c2dcae16074b\") " pod="openstack/cinder-backup-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.119999 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2da46443-17b2-425a-ad97-c2dcae16074b-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"2da46443-17b2-425a-ad97-c2dcae16074b\") " pod="openstack/cinder-backup-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.120021 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40715f48-d3b7-4cca-9f3d-cba20a94ed39-config-data\") pod \"cinder-volume-nfs-0\" (UID: \"40715f48-d3b7-4cca-9f3d-cba20a94ed39\") " pod="openstack/cinder-volume-nfs-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.120044 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/2da46443-17b2-425a-ad97-c2dcae16074b-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"2da46443-17b2-425a-ad97-c2dcae16074b\") " pod="openstack/cinder-backup-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.120061 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/40715f48-d3b7-4cca-9f3d-cba20a94ed39-dev\") pod \"cinder-volume-nfs-0\" (UID: \"40715f48-d3b7-4cca-9f3d-cba20a94ed39\") " pod="openstack/cinder-volume-nfs-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.120079 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/2da46443-17b2-425a-ad97-c2dcae16074b-dev\") pod \"cinder-backup-0\" (UID: \"2da46443-17b2-425a-ad97-c2dcae16074b\") " pod="openstack/cinder-backup-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.120106 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/40715f48-d3b7-4cca-9f3d-cba20a94ed39-etc-iscsi\") pod \"cinder-volume-nfs-0\" (UID: \"40715f48-d3b7-4cca-9f3d-cba20a94ed39\") " pod="openstack/cinder-volume-nfs-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.120122 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/2da46443-17b2-425a-ad97-c2dcae16074b-run\") pod \"cinder-backup-0\" (UID: \"2da46443-17b2-425a-ad97-c2dcae16074b\") " 
pod="openstack/cinder-backup-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.120138 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/40715f48-d3b7-4cca-9f3d-cba20a94ed39-var-locks-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"40715f48-d3b7-4cca-9f3d-cba20a94ed39\") " pod="openstack/cinder-volume-nfs-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.120155 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/40715f48-d3b7-4cca-9f3d-cba20a94ed39-config-data-custom\") pod \"cinder-volume-nfs-0\" (UID: \"40715f48-d3b7-4cca-9f3d-cba20a94ed39\") " pod="openstack/cinder-volume-nfs-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.120181 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/40715f48-d3b7-4cca-9f3d-cba20a94ed39-etc-machine-id\") pod \"cinder-volume-nfs-0\" (UID: \"40715f48-d3b7-4cca-9f3d-cba20a94ed39\") " pod="openstack/cinder-volume-nfs-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.120209 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2da46443-17b2-425a-ad97-c2dcae16074b-scripts\") pod \"cinder-backup-0\" (UID: \"2da46443-17b2-425a-ad97-c2dcae16074b\") " pod="openstack/cinder-backup-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.120224 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/40715f48-d3b7-4cca-9f3d-cba20a94ed39-var-locks-brick\") pod \"cinder-volume-nfs-0\" (UID: \"40715f48-d3b7-4cca-9f3d-cba20a94ed39\") " pod="openstack/cinder-volume-nfs-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.120241 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/2da46443-17b2-425a-ad97-c2dcae16074b-etc-nvme\") pod \"cinder-backup-0\" (UID: \"2da46443-17b2-425a-ad97-c2dcae16074b\") " pod="openstack/cinder-backup-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.120261 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/2da46443-17b2-425a-ad97-c2dcae16074b-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"2da46443-17b2-425a-ad97-c2dcae16074b\") " pod="openstack/cinder-backup-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.120275 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2da46443-17b2-425a-ad97-c2dcae16074b-config-data-custom\") pod \"cinder-backup-0\" (UID: \"2da46443-17b2-425a-ad97-c2dcae16074b\") " pod="openstack/cinder-backup-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.120293 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/40715f48-d3b7-4cca-9f3d-cba20a94ed39-var-lib-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"40715f48-d3b7-4cca-9f3d-cba20a94ed39\") " pod="openstack/cinder-volume-nfs-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 
13:48:20.120313 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/40715f48-d3b7-4cca-9f3d-cba20a94ed39-sys\") pod \"cinder-volume-nfs-0\" (UID: \"40715f48-d3b7-4cca-9f3d-cba20a94ed39\") " pod="openstack/cinder-volume-nfs-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.120333 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2da46443-17b2-425a-ad97-c2dcae16074b-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"2da46443-17b2-425a-ad97-c2dcae16074b\") " pod="openstack/cinder-backup-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.120350 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/2da46443-17b2-425a-ad97-c2dcae16074b-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"2da46443-17b2-425a-ad97-c2dcae16074b\") " pod="openstack/cinder-backup-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.120370 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/40715f48-d3b7-4cca-9f3d-cba20a94ed39-etc-nvme\") pod \"cinder-volume-nfs-0\" (UID: \"40715f48-d3b7-4cca-9f3d-cba20a94ed39\") " pod="openstack/cinder-volume-nfs-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.120391 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pw847\" (UniqueName: \"kubernetes.io/projected/2da46443-17b2-425a-ad97-c2dcae16074b-kube-api-access-pw847\") pod \"cinder-backup-0\" (UID: \"2da46443-17b2-425a-ad97-c2dcae16074b\") " pod="openstack/cinder-backup-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.137971 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-volume-nfs-2-0"] Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.140533 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-volume-nfs-2-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.142256 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-nfs-2-config-data" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.155764 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-nfs-2-0"] Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.222183 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/40715f48-d3b7-4cca-9f3d-cba20a94ed39-etc-iscsi\") pod \"cinder-volume-nfs-0\" (UID: \"40715f48-d3b7-4cca-9f3d-cba20a94ed39\") " pod="openstack/cinder-volume-nfs-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.222233 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/2da46443-17b2-425a-ad97-c2dcae16074b-run\") pod \"cinder-backup-0\" (UID: \"2da46443-17b2-425a-ad97-c2dcae16074b\") " pod="openstack/cinder-backup-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.222260 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eacc0803-a775-4eb4-8f3a-a126716ddbb5-config-data-custom\") pod \"cinder-volume-nfs-2-0\" (UID: \"eacc0803-a775-4eb4-8f3a-a126716ddbb5\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.222276 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/40715f48-d3b7-4cca-9f3d-cba20a94ed39-var-locks-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"40715f48-d3b7-4cca-9f3d-cba20a94ed39\") " pod="openstack/cinder-volume-nfs-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.222287 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/40715f48-d3b7-4cca-9f3d-cba20a94ed39-etc-iscsi\") pod \"cinder-volume-nfs-0\" (UID: \"40715f48-d3b7-4cca-9f3d-cba20a94ed39\") " pod="openstack/cinder-volume-nfs-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.222297 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/40715f48-d3b7-4cca-9f3d-cba20a94ed39-config-data-custom\") pod \"cinder-volume-nfs-0\" (UID: \"40715f48-d3b7-4cca-9f3d-cba20a94ed39\") " pod="openstack/cinder-volume-nfs-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.222341 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/2da46443-17b2-425a-ad97-c2dcae16074b-run\") pod \"cinder-backup-0\" (UID: \"2da46443-17b2-425a-ad97-c2dcae16074b\") " pod="openstack/cinder-backup-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.222563 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/40715f48-d3b7-4cca-9f3d-cba20a94ed39-var-locks-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"40715f48-d3b7-4cca-9f3d-cba20a94ed39\") " pod="openstack/cinder-volume-nfs-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.222634 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/eacc0803-a775-4eb4-8f3a-a126716ddbb5-etc-machine-id\") pod \"cinder-volume-nfs-2-0\" (UID: \"eacc0803-a775-4eb4-8f3a-a126716ddbb5\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.222693 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwr2k\" (UniqueName: \"kubernetes.io/projected/eacc0803-a775-4eb4-8f3a-a126716ddbb5-kube-api-access-nwr2k\") pod \"cinder-volume-nfs-2-0\" (UID: \"eacc0803-a775-4eb4-8f3a-a126716ddbb5\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.222724 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/40715f48-d3b7-4cca-9f3d-cba20a94ed39-etc-machine-id\") pod \"cinder-volume-nfs-0\" (UID: \"40715f48-d3b7-4cca-9f3d-cba20a94ed39\") " pod="openstack/cinder-volume-nfs-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.222741 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/eacc0803-a775-4eb4-8f3a-a126716ddbb5-etc-iscsi\") pod \"cinder-volume-nfs-2-0\" (UID: \"eacc0803-a775-4eb4-8f3a-a126716ddbb5\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.222802 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/40715f48-d3b7-4cca-9f3d-cba20a94ed39-etc-machine-id\") pod \"cinder-volume-nfs-0\" (UID: \"40715f48-d3b7-4cca-9f3d-cba20a94ed39\") " pod="openstack/cinder-volume-nfs-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.222826 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eacc0803-a775-4eb4-8f3a-a126716ddbb5-lib-modules\") pod \"cinder-volume-nfs-2-0\" (UID: \"eacc0803-a775-4eb4-8f3a-a126716ddbb5\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.222861 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2da46443-17b2-425a-ad97-c2dcae16074b-scripts\") pod \"cinder-backup-0\" (UID: \"2da46443-17b2-425a-ad97-c2dcae16074b\") " pod="openstack/cinder-backup-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.222880 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/40715f48-d3b7-4cca-9f3d-cba20a94ed39-var-locks-brick\") pod \"cinder-volume-nfs-0\" (UID: \"40715f48-d3b7-4cca-9f3d-cba20a94ed39\") " pod="openstack/cinder-volume-nfs-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.222910 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/2da46443-17b2-425a-ad97-c2dcae16074b-etc-nvme\") pod \"cinder-backup-0\" (UID: \"2da46443-17b2-425a-ad97-c2dcae16074b\") " pod="openstack/cinder-backup-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.222934 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/2da46443-17b2-425a-ad97-c2dcae16074b-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"2da46443-17b2-425a-ad97-c2dcae16074b\") " pod="openstack/cinder-backup-0" Jan 26 
13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.222950 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2da46443-17b2-425a-ad97-c2dcae16074b-config-data-custom\") pod \"cinder-backup-0\" (UID: \"2da46443-17b2-425a-ad97-c2dcae16074b\") " pod="openstack/cinder-backup-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.222982 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/40715f48-d3b7-4cca-9f3d-cba20a94ed39-var-lib-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"40715f48-d3b7-4cca-9f3d-cba20a94ed39\") " pod="openstack/cinder-volume-nfs-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.222999 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/eacc0803-a775-4eb4-8f3a-a126716ddbb5-var-locks-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"eacc0803-a775-4eb4-8f3a-a126716ddbb5\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.223017 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/eacc0803-a775-4eb4-8f3a-a126716ddbb5-run\") pod \"cinder-volume-nfs-2-0\" (UID: \"eacc0803-a775-4eb4-8f3a-a126716ddbb5\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.223054 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/2da46443-17b2-425a-ad97-c2dcae16074b-etc-nvme\") pod \"cinder-backup-0\" (UID: \"2da46443-17b2-425a-ad97-c2dcae16074b\") " pod="openstack/cinder-backup-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.223161 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/2da46443-17b2-425a-ad97-c2dcae16074b-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"2da46443-17b2-425a-ad97-c2dcae16074b\") " pod="openstack/cinder-backup-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.223169 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/40715f48-d3b7-4cca-9f3d-cba20a94ed39-var-lib-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"40715f48-d3b7-4cca-9f3d-cba20a94ed39\") " pod="openstack/cinder-volume-nfs-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.223197 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/40715f48-d3b7-4cca-9f3d-cba20a94ed39-var-locks-brick\") pod \"cinder-volume-nfs-0\" (UID: \"40715f48-d3b7-4cca-9f3d-cba20a94ed39\") " pod="openstack/cinder-volume-nfs-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.223343 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/40715f48-d3b7-4cca-9f3d-cba20a94ed39-sys\") pod \"cinder-volume-nfs-0\" (UID: \"40715f48-d3b7-4cca-9f3d-cba20a94ed39\") " pod="openstack/cinder-volume-nfs-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.223240 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/40715f48-d3b7-4cca-9f3d-cba20a94ed39-sys\") pod \"cinder-volume-nfs-0\" (UID: 
\"40715f48-d3b7-4cca-9f3d-cba20a94ed39\") " pod="openstack/cinder-volume-nfs-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.223576 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2da46443-17b2-425a-ad97-c2dcae16074b-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"2da46443-17b2-425a-ad97-c2dcae16074b\") " pod="openstack/cinder-backup-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.223622 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/2da46443-17b2-425a-ad97-c2dcae16074b-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"2da46443-17b2-425a-ad97-c2dcae16074b\") " pod="openstack/cinder-backup-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.223658 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/40715f48-d3b7-4cca-9f3d-cba20a94ed39-etc-nvme\") pod \"cinder-volume-nfs-0\" (UID: \"40715f48-d3b7-4cca-9f3d-cba20a94ed39\") " pod="openstack/cinder-volume-nfs-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.223701 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/2da46443-17b2-425a-ad97-c2dcae16074b-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"2da46443-17b2-425a-ad97-c2dcae16074b\") " pod="openstack/cinder-backup-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.223719 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pw847\" (UniqueName: \"kubernetes.io/projected/2da46443-17b2-425a-ad97-c2dcae16074b-kube-api-access-pw847\") pod \"cinder-backup-0\" (UID: \"2da46443-17b2-425a-ad97-c2dcae16074b\") " pod="openstack/cinder-backup-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.223750 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/eacc0803-a775-4eb4-8f3a-a126716ddbb5-var-locks-brick\") pod \"cinder-volume-nfs-2-0\" (UID: \"eacc0803-a775-4eb4-8f3a-a126716ddbb5\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.223701 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/40715f48-d3b7-4cca-9f3d-cba20a94ed39-etc-nvme\") pod \"cinder-volume-nfs-0\" (UID: \"40715f48-d3b7-4cca-9f3d-cba20a94ed39\") " pod="openstack/cinder-volume-nfs-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.223843 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40715f48-d3b7-4cca-9f3d-cba20a94ed39-combined-ca-bundle\") pod \"cinder-volume-nfs-0\" (UID: \"40715f48-d3b7-4cca-9f3d-cba20a94ed39\") " pod="openstack/cinder-volume-nfs-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.223880 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/40715f48-d3b7-4cca-9f3d-cba20a94ed39-lib-modules\") pod \"cinder-volume-nfs-0\" (UID: \"40715f48-d3b7-4cca-9f3d-cba20a94ed39\") " pod="openstack/cinder-volume-nfs-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.223990 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" 
(UniqueName: \"kubernetes.io/host-path/2da46443-17b2-425a-ad97-c2dcae16074b-sys\") pod \"cinder-backup-0\" (UID: \"2da46443-17b2-425a-ad97-c2dcae16074b\") " pod="openstack/cinder-backup-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.224023 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/40715f48-d3b7-4cca-9f3d-cba20a94ed39-run\") pod \"cinder-volume-nfs-0\" (UID: \"40715f48-d3b7-4cca-9f3d-cba20a94ed39\") " pod="openstack/cinder-volume-nfs-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.224040 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eacc0803-a775-4eb4-8f3a-a126716ddbb5-config-data\") pod \"cinder-volume-nfs-2-0\" (UID: \"eacc0803-a775-4eb4-8f3a-a126716ddbb5\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.224052 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/2da46443-17b2-425a-ad97-c2dcae16074b-sys\") pod \"cinder-backup-0\" (UID: \"2da46443-17b2-425a-ad97-c2dcae16074b\") " pod="openstack/cinder-backup-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.224058 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eacc0803-a775-4eb4-8f3a-a126716ddbb5-scripts\") pod \"cinder-volume-nfs-2-0\" (UID: \"eacc0803-a775-4eb4-8f3a-a126716ddbb5\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.223927 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/40715f48-d3b7-4cca-9f3d-cba20a94ed39-lib-modules\") pod \"cinder-volume-nfs-0\" (UID: \"40715f48-d3b7-4cca-9f3d-cba20a94ed39\") " pod="openstack/cinder-volume-nfs-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.224086 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/40715f48-d3b7-4cca-9f3d-cba20a94ed39-run\") pod \"cinder-volume-nfs-0\" (UID: \"40715f48-d3b7-4cca-9f3d-cba20a94ed39\") " pod="openstack/cinder-volume-nfs-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.224086 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/2da46443-17b2-425a-ad97-c2dcae16074b-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"2da46443-17b2-425a-ad97-c2dcae16074b\") " pod="openstack/cinder-backup-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.224119 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/2da46443-17b2-425a-ad97-c2dcae16074b-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"2da46443-17b2-425a-ad97-c2dcae16074b\") " pod="openstack/cinder-backup-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.224185 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/eacc0803-a775-4eb4-8f3a-a126716ddbb5-var-lib-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"eacc0803-a775-4eb4-8f3a-a126716ddbb5\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.224201 4844 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/eacc0803-a775-4eb4-8f3a-a126716ddbb5-dev\") pod \"cinder-volume-nfs-2-0\" (UID: \"eacc0803-a775-4eb4-8f3a-a126716ddbb5\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.224285 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2da46443-17b2-425a-ad97-c2dcae16074b-config-data\") pod \"cinder-backup-0\" (UID: \"2da46443-17b2-425a-ad97-c2dcae16074b\") " pod="openstack/cinder-backup-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.224303 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-852bp\" (UniqueName: \"kubernetes.io/projected/40715f48-d3b7-4cca-9f3d-cba20a94ed39-kube-api-access-852bp\") pod \"cinder-volume-nfs-0\" (UID: \"40715f48-d3b7-4cca-9f3d-cba20a94ed39\") " pod="openstack/cinder-volume-nfs-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.224329 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/40715f48-d3b7-4cca-9f3d-cba20a94ed39-scripts\") pod \"cinder-volume-nfs-0\" (UID: \"40715f48-d3b7-4cca-9f3d-cba20a94ed39\") " pod="openstack/cinder-volume-nfs-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.224342 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/eacc0803-a775-4eb4-8f3a-a126716ddbb5-sys\") pod \"cinder-volume-nfs-2-0\" (UID: \"eacc0803-a775-4eb4-8f3a-a126716ddbb5\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.224368 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2da46443-17b2-425a-ad97-c2dcae16074b-lib-modules\") pod \"cinder-backup-0\" (UID: \"2da46443-17b2-425a-ad97-c2dcae16074b\") " pod="openstack/cinder-backup-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.224396 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eacc0803-a775-4eb4-8f3a-a126716ddbb5-combined-ca-bundle\") pod \"cinder-volume-nfs-2-0\" (UID: \"eacc0803-a775-4eb4-8f3a-a126716ddbb5\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.224425 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/eacc0803-a775-4eb4-8f3a-a126716ddbb5-etc-nvme\") pod \"cinder-volume-nfs-2-0\" (UID: \"eacc0803-a775-4eb4-8f3a-a126716ddbb5\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.224442 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2da46443-17b2-425a-ad97-c2dcae16074b-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"2da46443-17b2-425a-ad97-c2dcae16074b\") " pod="openstack/cinder-backup-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.224482 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40715f48-d3b7-4cca-9f3d-cba20a94ed39-config-data\") pod \"cinder-volume-nfs-0\" (UID: 
\"40715f48-d3b7-4cca-9f3d-cba20a94ed39\") " pod="openstack/cinder-volume-nfs-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.224521 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2da46443-17b2-425a-ad97-c2dcae16074b-lib-modules\") pod \"cinder-backup-0\" (UID: \"2da46443-17b2-425a-ad97-c2dcae16074b\") " pod="openstack/cinder-backup-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.224525 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2da46443-17b2-425a-ad97-c2dcae16074b-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"2da46443-17b2-425a-ad97-c2dcae16074b\") " pod="openstack/cinder-backup-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.224671 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/2da46443-17b2-425a-ad97-c2dcae16074b-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"2da46443-17b2-425a-ad97-c2dcae16074b\") " pod="openstack/cinder-backup-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.224700 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/40715f48-d3b7-4cca-9f3d-cba20a94ed39-dev\") pod \"cinder-volume-nfs-0\" (UID: \"40715f48-d3b7-4cca-9f3d-cba20a94ed39\") " pod="openstack/cinder-volume-nfs-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.224722 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/2da46443-17b2-425a-ad97-c2dcae16074b-dev\") pod \"cinder-backup-0\" (UID: \"2da46443-17b2-425a-ad97-c2dcae16074b\") " pod="openstack/cinder-backup-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.224796 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/2da46443-17b2-425a-ad97-c2dcae16074b-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"2da46443-17b2-425a-ad97-c2dcae16074b\") " pod="openstack/cinder-backup-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.224821 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/2da46443-17b2-425a-ad97-c2dcae16074b-dev\") pod \"cinder-backup-0\" (UID: \"2da46443-17b2-425a-ad97-c2dcae16074b\") " pod="openstack/cinder-backup-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.224891 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/40715f48-d3b7-4cca-9f3d-cba20a94ed39-dev\") pod \"cinder-volume-nfs-0\" (UID: \"40715f48-d3b7-4cca-9f3d-cba20a94ed39\") " pod="openstack/cinder-volume-nfs-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.228286 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40715f48-d3b7-4cca-9f3d-cba20a94ed39-combined-ca-bundle\") pod \"cinder-volume-nfs-0\" (UID: \"40715f48-d3b7-4cca-9f3d-cba20a94ed39\") " pod="openstack/cinder-volume-nfs-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.228814 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2da46443-17b2-425a-ad97-c2dcae16074b-config-data\") pod \"cinder-backup-0\" (UID: \"2da46443-17b2-425a-ad97-c2dcae16074b\") " 
pod="openstack/cinder-backup-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.229490 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2da46443-17b2-425a-ad97-c2dcae16074b-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"2da46443-17b2-425a-ad97-c2dcae16074b\") " pod="openstack/cinder-backup-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.230500 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40715f48-d3b7-4cca-9f3d-cba20a94ed39-config-data\") pod \"cinder-volume-nfs-0\" (UID: \"40715f48-d3b7-4cca-9f3d-cba20a94ed39\") " pod="openstack/cinder-volume-nfs-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.231168 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/40715f48-d3b7-4cca-9f3d-cba20a94ed39-config-data-custom\") pod \"cinder-volume-nfs-0\" (UID: \"40715f48-d3b7-4cca-9f3d-cba20a94ed39\") " pod="openstack/cinder-volume-nfs-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.231233 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2da46443-17b2-425a-ad97-c2dcae16074b-config-data-custom\") pod \"cinder-backup-0\" (UID: \"2da46443-17b2-425a-ad97-c2dcae16074b\") " pod="openstack/cinder-backup-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.232260 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2da46443-17b2-425a-ad97-c2dcae16074b-scripts\") pod \"cinder-backup-0\" (UID: \"2da46443-17b2-425a-ad97-c2dcae16074b\") " pod="openstack/cinder-backup-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.237099 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/40715f48-d3b7-4cca-9f3d-cba20a94ed39-scripts\") pod \"cinder-volume-nfs-0\" (UID: \"40715f48-d3b7-4cca-9f3d-cba20a94ed39\") " pod="openstack/cinder-volume-nfs-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.241243 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pw847\" (UniqueName: \"kubernetes.io/projected/2da46443-17b2-425a-ad97-c2dcae16074b-kube-api-access-pw847\") pod \"cinder-backup-0\" (UID: \"2da46443-17b2-425a-ad97-c2dcae16074b\") " pod="openstack/cinder-backup-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.241386 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-852bp\" (UniqueName: \"kubernetes.io/projected/40715f48-d3b7-4cca-9f3d-cba20a94ed39-kube-api-access-852bp\") pod \"cinder-volume-nfs-0\" (UID: \"40715f48-d3b7-4cca-9f3d-cba20a94ed39\") " pod="openstack/cinder-volume-nfs-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.321099 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-backup-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.326587 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/eacc0803-a775-4eb4-8f3a-a126716ddbb5-etc-iscsi\") pod \"cinder-volume-nfs-2-0\" (UID: \"eacc0803-a775-4eb4-8f3a-a126716ddbb5\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.326673 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eacc0803-a775-4eb4-8f3a-a126716ddbb5-lib-modules\") pod \"cinder-volume-nfs-2-0\" (UID: \"eacc0803-a775-4eb4-8f3a-a126716ddbb5\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.326665 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/eacc0803-a775-4eb4-8f3a-a126716ddbb5-etc-iscsi\") pod \"cinder-volume-nfs-2-0\" (UID: \"eacc0803-a775-4eb4-8f3a-a126716ddbb5\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.326722 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/eacc0803-a775-4eb4-8f3a-a126716ddbb5-run\") pod \"cinder-volume-nfs-2-0\" (UID: \"eacc0803-a775-4eb4-8f3a-a126716ddbb5\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.326745 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/eacc0803-a775-4eb4-8f3a-a126716ddbb5-var-locks-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"eacc0803-a775-4eb4-8f3a-a126716ddbb5\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.326746 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eacc0803-a775-4eb4-8f3a-a126716ddbb5-lib-modules\") pod \"cinder-volume-nfs-2-0\" (UID: \"eacc0803-a775-4eb4-8f3a-a126716ddbb5\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.326856 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/eacc0803-a775-4eb4-8f3a-a126716ddbb5-var-locks-brick\") pod \"cinder-volume-nfs-2-0\" (UID: \"eacc0803-a775-4eb4-8f3a-a126716ddbb5\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.326801 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/eacc0803-a775-4eb4-8f3a-a126716ddbb5-run\") pod \"cinder-volume-nfs-2-0\" (UID: \"eacc0803-a775-4eb4-8f3a-a126716ddbb5\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.326817 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/eacc0803-a775-4eb4-8f3a-a126716ddbb5-var-locks-brick\") pod \"cinder-volume-nfs-2-0\" (UID: \"eacc0803-a775-4eb4-8f3a-a126716ddbb5\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.327145 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/eacc0803-a775-4eb4-8f3a-a126716ddbb5-config-data\") pod \"cinder-volume-nfs-2-0\" (UID: \"eacc0803-a775-4eb4-8f3a-a126716ddbb5\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.327203 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eacc0803-a775-4eb4-8f3a-a126716ddbb5-scripts\") pod \"cinder-volume-nfs-2-0\" (UID: \"eacc0803-a775-4eb4-8f3a-a126716ddbb5\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.327276 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/eacc0803-a775-4eb4-8f3a-a126716ddbb5-dev\") pod \"cinder-volume-nfs-2-0\" (UID: \"eacc0803-a775-4eb4-8f3a-a126716ddbb5\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.327308 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/eacc0803-a775-4eb4-8f3a-a126716ddbb5-var-lib-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"eacc0803-a775-4eb4-8f3a-a126716ddbb5\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.327400 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/eacc0803-a775-4eb4-8f3a-a126716ddbb5-sys\") pod \"cinder-volume-nfs-2-0\" (UID: \"eacc0803-a775-4eb4-8f3a-a126716ddbb5\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.327457 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eacc0803-a775-4eb4-8f3a-a126716ddbb5-combined-ca-bundle\") pod \"cinder-volume-nfs-2-0\" (UID: \"eacc0803-a775-4eb4-8f3a-a126716ddbb5\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.327512 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/eacc0803-a775-4eb4-8f3a-a126716ddbb5-etc-nvme\") pod \"cinder-volume-nfs-2-0\" (UID: \"eacc0803-a775-4eb4-8f3a-a126716ddbb5\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.327751 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eacc0803-a775-4eb4-8f3a-a126716ddbb5-config-data-custom\") pod \"cinder-volume-nfs-2-0\" (UID: \"eacc0803-a775-4eb4-8f3a-a126716ddbb5\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.327839 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/eacc0803-a775-4eb4-8f3a-a126716ddbb5-etc-machine-id\") pod \"cinder-volume-nfs-2-0\" (UID: \"eacc0803-a775-4eb4-8f3a-a126716ddbb5\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.327891 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwr2k\" (UniqueName: \"kubernetes.io/projected/eacc0803-a775-4eb4-8f3a-a126716ddbb5-kube-api-access-nwr2k\") pod \"cinder-volume-nfs-2-0\" (UID: \"eacc0803-a775-4eb4-8f3a-a126716ddbb5\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 13:48:20 crc 
kubenswrapper[4844]: I0126 13:48:20.326875 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/eacc0803-a775-4eb4-8f3a-a126716ddbb5-var-locks-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"eacc0803-a775-4eb4-8f3a-a126716ddbb5\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.329136 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/eacc0803-a775-4eb4-8f3a-a126716ddbb5-sys\") pod \"cinder-volume-nfs-2-0\" (UID: \"eacc0803-a775-4eb4-8f3a-a126716ddbb5\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.329230 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/eacc0803-a775-4eb4-8f3a-a126716ddbb5-etc-nvme\") pod \"cinder-volume-nfs-2-0\" (UID: \"eacc0803-a775-4eb4-8f3a-a126716ddbb5\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.329248 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/eacc0803-a775-4eb4-8f3a-a126716ddbb5-etc-machine-id\") pod \"cinder-volume-nfs-2-0\" (UID: \"eacc0803-a775-4eb4-8f3a-a126716ddbb5\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.329332 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/eacc0803-a775-4eb4-8f3a-a126716ddbb5-var-lib-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"eacc0803-a775-4eb4-8f3a-a126716ddbb5\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.329348 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/eacc0803-a775-4eb4-8f3a-a126716ddbb5-dev\") pod \"cinder-volume-nfs-2-0\" (UID: \"eacc0803-a775-4eb4-8f3a-a126716ddbb5\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.332794 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eacc0803-a775-4eb4-8f3a-a126716ddbb5-scripts\") pod \"cinder-volume-nfs-2-0\" (UID: \"eacc0803-a775-4eb4-8f3a-a126716ddbb5\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.333034 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eacc0803-a775-4eb4-8f3a-a126716ddbb5-combined-ca-bundle\") pod \"cinder-volume-nfs-2-0\" (UID: \"eacc0803-a775-4eb4-8f3a-a126716ddbb5\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.336114 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eacc0803-a775-4eb4-8f3a-a126716ddbb5-config-data\") pod \"cinder-volume-nfs-2-0\" (UID: \"eacc0803-a775-4eb4-8f3a-a126716ddbb5\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.338292 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eacc0803-a775-4eb4-8f3a-a126716ddbb5-config-data-custom\") pod \"cinder-volume-nfs-2-0\" (UID: \"eacc0803-a775-4eb4-8f3a-a126716ddbb5\") " 
pod="openstack/cinder-volume-nfs-2-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.360851 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwr2k\" (UniqueName: \"kubernetes.io/projected/eacc0803-a775-4eb4-8f3a-a126716ddbb5-kube-api-access-nwr2k\") pod \"cinder-volume-nfs-2-0\" (UID: \"eacc0803-a775-4eb4-8f3a-a126716ddbb5\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.393190 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-nfs-0" Jan 26 13:48:20 crc kubenswrapper[4844]: I0126 13:48:20.463961 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-nfs-2-0" Jan 26 13:48:21 crc kubenswrapper[4844]: I0126 13:48:21.133362 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Jan 26 13:48:21 crc kubenswrapper[4844]: I0126 13:48:21.141508 4844 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 13:48:21 crc kubenswrapper[4844]: I0126 13:48:21.251439 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-nfs-0"] Jan 26 13:48:21 crc kubenswrapper[4844]: W0126 13:48:21.342084 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod40715f48_d3b7_4cca_9f3d_cba20a94ed39.slice/crio-cda516b18ccb828f760aeff7c762c3717ada2625bb97a30bb0bfb0c90c4647f0 WatchSource:0}: Error finding container cda516b18ccb828f760aeff7c762c3717ada2625bb97a30bb0bfb0c90c4647f0: Status 404 returned error can't find the container with id cda516b18ccb828f760aeff7c762c3717ada2625bb97a30bb0bfb0c90c4647f0 Jan 26 13:48:21 crc kubenswrapper[4844]: I0126 13:48:21.357871 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-nfs-2-0"] Jan 26 13:48:21 crc kubenswrapper[4844]: W0126 13:48:21.394694 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeacc0803_a775_4eb4_8f3a_a126716ddbb5.slice/crio-c89e8c8832d9ab627e76da804ea2843aad7f70f21352b13368db8c6711d58d9f WatchSource:0}: Error finding container c89e8c8832d9ab627e76da804ea2843aad7f70f21352b13368db8c6711d58d9f: Status 404 returned error can't find the container with id c89e8c8832d9ab627e76da804ea2843aad7f70f21352b13368db8c6711d58d9f Jan 26 13:48:21 crc kubenswrapper[4844]: I0126 13:48:21.415479 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"2da46443-17b2-425a-ad97-c2dcae16074b","Type":"ContainerStarted","Data":"0804a9a6b59046b432344b4dc63d4b25942ff1677ed2fa3d3c10b34c1d115036"} Jan 26 13:48:21 crc kubenswrapper[4844]: I0126 13:48:21.417391 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-0" event={"ID":"40715f48-d3b7-4cca-9f3d-cba20a94ed39","Type":"ContainerStarted","Data":"cda516b18ccb828f760aeff7c762c3717ada2625bb97a30bb0bfb0c90c4647f0"} Jan 26 13:48:21 crc kubenswrapper[4844]: I0126 13:48:21.418883 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-2-0" event={"ID":"eacc0803-a775-4eb4-8f3a-a126716ddbb5","Type":"ContainerStarted","Data":"c89e8c8832d9ab627e76da804ea2843aad7f70f21352b13368db8c6711d58d9f"} Jan 26 13:48:22 crc kubenswrapper[4844]: I0126 13:48:22.433757 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" 
event={"ID":"2da46443-17b2-425a-ad97-c2dcae16074b","Type":"ContainerStarted","Data":"6a2ef5053d214e821b04f97a2b91b542bc3b74559212cbb943d5eea567e310ec"} Jan 26 13:48:22 crc kubenswrapper[4844]: I0126 13:48:22.434019 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"2da46443-17b2-425a-ad97-c2dcae16074b","Type":"ContainerStarted","Data":"a6ebe855486f6c44efd6a6a8e844e85994b957bce692b6c3b83d7c851fc08eac"} Jan 26 13:48:22 crc kubenswrapper[4844]: I0126 13:48:22.436191 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-0" event={"ID":"40715f48-d3b7-4cca-9f3d-cba20a94ed39","Type":"ContainerStarted","Data":"1e31cae185d11fe055eb97d14f77a1b4840ecc1e8ff5d13f33f59ef6776884ac"} Jan 26 13:48:22 crc kubenswrapper[4844]: I0126 13:48:22.436280 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-0" event={"ID":"40715f48-d3b7-4cca-9f3d-cba20a94ed39","Type":"ContainerStarted","Data":"fe1875e9ca81ee5c35031ea6349937aaaf053cdb4aefe4e2e6981ecaece6a114"} Jan 26 13:48:22 crc kubenswrapper[4844]: I0126 13:48:22.441774 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-2-0" event={"ID":"eacc0803-a775-4eb4-8f3a-a126716ddbb5","Type":"ContainerStarted","Data":"5842fd1999628f1504a7557ebbc4d4922e6948f7b48bef7f3790e422a9531ee3"} Jan 26 13:48:22 crc kubenswrapper[4844]: I0126 13:48:22.441814 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-2-0" event={"ID":"eacc0803-a775-4eb4-8f3a-a126716ddbb5","Type":"ContainerStarted","Data":"e59403d4fbd3c0210e92d2cb51cad9c1f7fd187b29e59f107613ffc6359336ca"} Jan 26 13:48:22 crc kubenswrapper[4844]: I0126 13:48:22.467411 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-backup-0" podStartSLOduration=3.209625776 podStartE2EDuration="3.467389798s" podCreationTimestamp="2026-01-26 13:48:19 +0000 UTC" firstStartedPulling="2026-01-26 13:48:21.141292373 +0000 UTC m=+3878.074659985" lastFinishedPulling="2026-01-26 13:48:21.399056395 +0000 UTC m=+3878.332424007" observedRunningTime="2026-01-26 13:48:22.45797883 +0000 UTC m=+3879.391346462" watchObservedRunningTime="2026-01-26 13:48:22.467389798 +0000 UTC m=+3879.400757420" Jan 26 13:48:22 crc kubenswrapper[4844]: I0126 13:48:22.497250 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-volume-nfs-2-0" podStartSLOduration=2.281959033 podStartE2EDuration="2.497230733s" podCreationTimestamp="2026-01-26 13:48:20 +0000 UTC" firstStartedPulling="2026-01-26 13:48:21.397356284 +0000 UTC m=+3878.330723896" lastFinishedPulling="2026-01-26 13:48:21.612627984 +0000 UTC m=+3878.545995596" observedRunningTime="2026-01-26 13:48:22.497196313 +0000 UTC m=+3879.430563935" watchObservedRunningTime="2026-01-26 13:48:22.497230733 +0000 UTC m=+3879.430598355" Jan 26 13:48:22 crc kubenswrapper[4844]: I0126 13:48:22.525455 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-volume-nfs-0" podStartSLOduration=2.296698201 podStartE2EDuration="2.525430498s" podCreationTimestamp="2026-01-26 13:48:20 +0000 UTC" firstStartedPulling="2026-01-26 13:48:21.3889843 +0000 UTC m=+3878.322351932" lastFinishedPulling="2026-01-26 13:48:21.617716617 +0000 UTC m=+3878.551084229" observedRunningTime="2026-01-26 13:48:22.5164744 +0000 UTC m=+3879.449842032" watchObservedRunningTime="2026-01-26 13:48:22.525430498 +0000 UTC m=+3879.458798120" Jan 26 13:48:25 crc 
kubenswrapper[4844]: I0126 13:48:25.336407 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-backup-0" Jan 26 13:48:25 crc kubenswrapper[4844]: I0126 13:48:25.394125 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-volume-nfs-0" Jan 26 13:48:25 crc kubenswrapper[4844]: I0126 13:48:25.464518 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-volume-nfs-2-0" Jan 26 13:48:26 crc kubenswrapper[4844]: I0126 13:48:26.313971 4844 scope.go:117] "RemoveContainer" containerID="0424fae40a4aceb05ea41676053577c321c6723fd5d4fe32f9b0937d2a5632b0" Jan 26 13:48:26 crc kubenswrapper[4844]: E0126 13:48:26.314624 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 13:48:30 crc kubenswrapper[4844]: I0126 13:48:30.471334 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-backup-0" Jan 26 13:48:30 crc kubenswrapper[4844]: I0126 13:48:30.691258 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-volume-nfs-0" Jan 26 13:48:30 crc kubenswrapper[4844]: I0126 13:48:30.767217 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-volume-nfs-2-0" Jan 26 13:48:37 crc kubenswrapper[4844]: I0126 13:48:37.313752 4844 scope.go:117] "RemoveContainer" containerID="0424fae40a4aceb05ea41676053577c321c6723fd5d4fe32f9b0937d2a5632b0" Jan 26 13:48:37 crc kubenswrapper[4844]: E0126 13:48:37.314803 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 13:48:52 crc kubenswrapper[4844]: I0126 13:48:52.313929 4844 scope.go:117] "RemoveContainer" containerID="0424fae40a4aceb05ea41676053577c321c6723fd5d4fe32f9b0937d2a5632b0" Jan 26 13:48:52 crc kubenswrapper[4844]: E0126 13:48:52.314684 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 13:49:04 crc kubenswrapper[4844]: I0126 13:49:04.313530 4844 scope.go:117] "RemoveContainer" containerID="0424fae40a4aceb05ea41676053577c321c6723fd5d4fe32f9b0937d2a5632b0" Jan 26 13:49:04 crc kubenswrapper[4844]: E0126 13:49:04.314319 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 13:49:04 crc kubenswrapper[4844]: I0126 13:49:04.967806 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-tq96c"] Jan 26 13:49:04 crc kubenswrapper[4844]: I0126 13:49:04.970124 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tq96c" Jan 26 13:49:04 crc kubenswrapper[4844]: I0126 13:49:04.999609 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tq96c"] Jan 26 13:49:05 crc kubenswrapper[4844]: I0126 13:49:05.041463 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ba00614-5b71-4d81-be95-6adf72b5e992-catalog-content\") pod \"certified-operators-tq96c\" (UID: \"9ba00614-5b71-4d81-be95-6adf72b5e992\") " pod="openshift-marketplace/certified-operators-tq96c" Jan 26 13:49:05 crc kubenswrapper[4844]: I0126 13:49:05.042414 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ba00614-5b71-4d81-be95-6adf72b5e992-utilities\") pod \"certified-operators-tq96c\" (UID: \"9ba00614-5b71-4d81-be95-6adf72b5e992\") " pod="openshift-marketplace/certified-operators-tq96c" Jan 26 13:49:05 crc kubenswrapper[4844]: I0126 13:49:05.042524 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkhz4\" (UniqueName: \"kubernetes.io/projected/9ba00614-5b71-4d81-be95-6adf72b5e992-kube-api-access-nkhz4\") pod \"certified-operators-tq96c\" (UID: \"9ba00614-5b71-4d81-be95-6adf72b5e992\") " pod="openshift-marketplace/certified-operators-tq96c" Jan 26 13:49:05 crc kubenswrapper[4844]: I0126 13:49:05.144090 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ba00614-5b71-4d81-be95-6adf72b5e992-catalog-content\") pod \"certified-operators-tq96c\" (UID: \"9ba00614-5b71-4d81-be95-6adf72b5e992\") " pod="openshift-marketplace/certified-operators-tq96c" Jan 26 13:49:05 crc kubenswrapper[4844]: I0126 13:49:05.144231 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ba00614-5b71-4d81-be95-6adf72b5e992-utilities\") pod \"certified-operators-tq96c\" (UID: \"9ba00614-5b71-4d81-be95-6adf72b5e992\") " pod="openshift-marketplace/certified-operators-tq96c" Jan 26 13:49:05 crc kubenswrapper[4844]: I0126 13:49:05.144279 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nkhz4\" (UniqueName: \"kubernetes.io/projected/9ba00614-5b71-4d81-be95-6adf72b5e992-kube-api-access-nkhz4\") pod \"certified-operators-tq96c\" (UID: \"9ba00614-5b71-4d81-be95-6adf72b5e992\") " pod="openshift-marketplace/certified-operators-tq96c" Jan 26 13:49:05 crc kubenswrapper[4844]: I0126 13:49:05.144906 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ba00614-5b71-4d81-be95-6adf72b5e992-catalog-content\") pod \"certified-operators-tq96c\" (UID: \"9ba00614-5b71-4d81-be95-6adf72b5e992\") " 
pod="openshift-marketplace/certified-operators-tq96c" Jan 26 13:49:05 crc kubenswrapper[4844]: I0126 13:49:05.145050 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ba00614-5b71-4d81-be95-6adf72b5e992-utilities\") pod \"certified-operators-tq96c\" (UID: \"9ba00614-5b71-4d81-be95-6adf72b5e992\") " pod="openshift-marketplace/certified-operators-tq96c" Jan 26 13:49:05 crc kubenswrapper[4844]: I0126 13:49:05.166577 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nkhz4\" (UniqueName: \"kubernetes.io/projected/9ba00614-5b71-4d81-be95-6adf72b5e992-kube-api-access-nkhz4\") pod \"certified-operators-tq96c\" (UID: \"9ba00614-5b71-4d81-be95-6adf72b5e992\") " pod="openshift-marketplace/certified-operators-tq96c" Jan 26 13:49:05 crc kubenswrapper[4844]: I0126 13:49:05.299126 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tq96c" Jan 26 13:49:05 crc kubenswrapper[4844]: I0126 13:49:05.879668 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tq96c"] Jan 26 13:49:05 crc kubenswrapper[4844]: I0126 13:49:05.933273 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tq96c" event={"ID":"9ba00614-5b71-4d81-be95-6adf72b5e992","Type":"ContainerStarted","Data":"8ee337a150f2e0fbd9fb9638c127b95e8367a4603e77bdf3dbf181bb823f9208"} Jan 26 13:49:06 crc kubenswrapper[4844]: I0126 13:49:06.950086 4844 generic.go:334] "Generic (PLEG): container finished" podID="9ba00614-5b71-4d81-be95-6adf72b5e992" containerID="7ca7dbb56f354869a6d313811141b26c05a2e655529f7fbca1035f60863be87c" exitCode=0 Jan 26 13:49:06 crc kubenswrapper[4844]: I0126 13:49:06.950218 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tq96c" event={"ID":"9ba00614-5b71-4d81-be95-6adf72b5e992","Type":"ContainerDied","Data":"7ca7dbb56f354869a6d313811141b26c05a2e655529f7fbca1035f60863be87c"} Jan 26 13:49:07 crc kubenswrapper[4844]: I0126 13:49:07.965836 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tq96c" event={"ID":"9ba00614-5b71-4d81-be95-6adf72b5e992","Type":"ContainerStarted","Data":"de65059b22371b4a67589a49032d7c8f51642c618025db2c47f981b72bba3e0c"} Jan 26 13:49:08 crc kubenswrapper[4844]: I0126 13:49:08.976303 4844 generic.go:334] "Generic (PLEG): container finished" podID="9ba00614-5b71-4d81-be95-6adf72b5e992" containerID="de65059b22371b4a67589a49032d7c8f51642c618025db2c47f981b72bba3e0c" exitCode=0 Jan 26 13:49:08 crc kubenswrapper[4844]: I0126 13:49:08.976384 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tq96c" event={"ID":"9ba00614-5b71-4d81-be95-6adf72b5e992","Type":"ContainerDied","Data":"de65059b22371b4a67589a49032d7c8f51642c618025db2c47f981b72bba3e0c"} Jan 26 13:49:09 crc kubenswrapper[4844]: I0126 13:49:09.988918 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tq96c" event={"ID":"9ba00614-5b71-4d81-be95-6adf72b5e992","Type":"ContainerStarted","Data":"4d48b859e0b89b5580deb8ffbb4085b892581f436fdfc6e3e1896ae81df55721"} Jan 26 13:49:10 crc kubenswrapper[4844]: I0126 13:49:10.013795 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-tq96c" podStartSLOduration=3.460186312 
podStartE2EDuration="6.013775014s" podCreationTimestamp="2026-01-26 13:49:04 +0000 UTC" firstStartedPulling="2026-01-26 13:49:06.952753599 +0000 UTC m=+3923.886121221" lastFinishedPulling="2026-01-26 13:49:09.506342311 +0000 UTC m=+3926.439709923" observedRunningTime="2026-01-26 13:49:10.006613749 +0000 UTC m=+3926.939981381" watchObservedRunningTime="2026-01-26 13:49:10.013775014 +0000 UTC m=+3926.947142636" Jan 26 13:49:15 crc kubenswrapper[4844]: I0126 13:49:15.299868 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-tq96c" Jan 26 13:49:15 crc kubenswrapper[4844]: I0126 13:49:15.300399 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-tq96c" Jan 26 13:49:15 crc kubenswrapper[4844]: I0126 13:49:15.393484 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-tq96c" Jan 26 13:49:16 crc kubenswrapper[4844]: I0126 13:49:16.099472 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-tq96c" Jan 26 13:49:16 crc kubenswrapper[4844]: I0126 13:49:16.162383 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tq96c"] Jan 26 13:49:18 crc kubenswrapper[4844]: I0126 13:49:18.074607 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-tq96c" podUID="9ba00614-5b71-4d81-be95-6adf72b5e992" containerName="registry-server" containerID="cri-o://4d48b859e0b89b5580deb8ffbb4085b892581f436fdfc6e3e1896ae81df55721" gracePeriod=2 Jan 26 13:49:18 crc kubenswrapper[4844]: I0126 13:49:18.313558 4844 scope.go:117] "RemoveContainer" containerID="0424fae40a4aceb05ea41676053577c321c6723fd5d4fe32f9b0937d2a5632b0" Jan 26 13:49:18 crc kubenswrapper[4844]: I0126 13:49:18.959995 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tq96c" Jan 26 13:49:19 crc kubenswrapper[4844]: I0126 13:49:19.084324 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" event={"ID":"e3602fc7-397b-4d73-ab0c-45acc047397b","Type":"ContainerStarted","Data":"7e6f2d77087958f205aeeab162bc40d9fca5be66573603444ac53e2983274b58"} Jan 26 13:49:19 crc kubenswrapper[4844]: I0126 13:49:19.087767 4844 generic.go:334] "Generic (PLEG): container finished" podID="9ba00614-5b71-4d81-be95-6adf72b5e992" containerID="4d48b859e0b89b5580deb8ffbb4085b892581f436fdfc6e3e1896ae81df55721" exitCode=0 Jan 26 13:49:19 crc kubenswrapper[4844]: I0126 13:49:19.087800 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tq96c" event={"ID":"9ba00614-5b71-4d81-be95-6adf72b5e992","Type":"ContainerDied","Data":"4d48b859e0b89b5580deb8ffbb4085b892581f436fdfc6e3e1896ae81df55721"} Jan 26 13:49:19 crc kubenswrapper[4844]: I0126 13:49:19.087823 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tq96c" event={"ID":"9ba00614-5b71-4d81-be95-6adf72b5e992","Type":"ContainerDied","Data":"8ee337a150f2e0fbd9fb9638c127b95e8367a4603e77bdf3dbf181bb823f9208"} Jan 26 13:49:19 crc kubenswrapper[4844]: I0126 13:49:19.087837 4844 scope.go:117] "RemoveContainer" containerID="4d48b859e0b89b5580deb8ffbb4085b892581f436fdfc6e3e1896ae81df55721" Jan 26 13:49:19 crc kubenswrapper[4844]: I0126 13:49:19.087942 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tq96c" Jan 26 13:49:19 crc kubenswrapper[4844]: I0126 13:49:19.099002 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ba00614-5b71-4d81-be95-6adf72b5e992-utilities\") pod \"9ba00614-5b71-4d81-be95-6adf72b5e992\" (UID: \"9ba00614-5b71-4d81-be95-6adf72b5e992\") " Jan 26 13:49:19 crc kubenswrapper[4844]: I0126 13:49:19.099090 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ba00614-5b71-4d81-be95-6adf72b5e992-catalog-content\") pod \"9ba00614-5b71-4d81-be95-6adf72b5e992\" (UID: \"9ba00614-5b71-4d81-be95-6adf72b5e992\") " Jan 26 13:49:19 crc kubenswrapper[4844]: I0126 13:49:19.099400 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nkhz4\" (UniqueName: \"kubernetes.io/projected/9ba00614-5b71-4d81-be95-6adf72b5e992-kube-api-access-nkhz4\") pod \"9ba00614-5b71-4d81-be95-6adf72b5e992\" (UID: \"9ba00614-5b71-4d81-be95-6adf72b5e992\") " Jan 26 13:49:19 crc kubenswrapper[4844]: I0126 13:49:19.100740 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ba00614-5b71-4d81-be95-6adf72b5e992-utilities" (OuterVolumeSpecName: "utilities") pod "9ba00614-5b71-4d81-be95-6adf72b5e992" (UID: "9ba00614-5b71-4d81-be95-6adf72b5e992"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:49:19 crc kubenswrapper[4844]: I0126 13:49:19.101587 4844 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ba00614-5b71-4d81-be95-6adf72b5e992-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 13:49:19 crc kubenswrapper[4844]: I0126 13:49:19.120551 4844 scope.go:117] "RemoveContainer" containerID="de65059b22371b4a67589a49032d7c8f51642c618025db2c47f981b72bba3e0c" Jan 26 13:49:19 crc kubenswrapper[4844]: I0126 13:49:19.155424 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ba00614-5b71-4d81-be95-6adf72b5e992-kube-api-access-nkhz4" (OuterVolumeSpecName: "kube-api-access-nkhz4") pod "9ba00614-5b71-4d81-be95-6adf72b5e992" (UID: "9ba00614-5b71-4d81-be95-6adf72b5e992"). InnerVolumeSpecName "kube-api-access-nkhz4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:49:19 crc kubenswrapper[4844]: I0126 13:49:19.160304 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ba00614-5b71-4d81-be95-6adf72b5e992-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9ba00614-5b71-4d81-be95-6adf72b5e992" (UID: "9ba00614-5b71-4d81-be95-6adf72b5e992"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:49:19 crc kubenswrapper[4844]: I0126 13:49:19.180828 4844 scope.go:117] "RemoveContainer" containerID="7ca7dbb56f354869a6d313811141b26c05a2e655529f7fbca1035f60863be87c" Jan 26 13:49:19 crc kubenswrapper[4844]: I0126 13:49:19.212364 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nkhz4\" (UniqueName: \"kubernetes.io/projected/9ba00614-5b71-4d81-be95-6adf72b5e992-kube-api-access-nkhz4\") on node \"crc\" DevicePath \"\"" Jan 26 13:49:19 crc kubenswrapper[4844]: I0126 13:49:19.212615 4844 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ba00614-5b71-4d81-be95-6adf72b5e992-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 13:49:19 crc kubenswrapper[4844]: I0126 13:49:19.225225 4844 scope.go:117] "RemoveContainer" containerID="4d48b859e0b89b5580deb8ffbb4085b892581f436fdfc6e3e1896ae81df55721" Jan 26 13:49:19 crc kubenswrapper[4844]: I0126 13:49:19.226787 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6n5gp"] Jan 26 13:49:19 crc kubenswrapper[4844]: E0126 13:49:19.227502 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ba00614-5b71-4d81-be95-6adf72b5e992" containerName="extract-content" Jan 26 13:49:19 crc kubenswrapper[4844]: I0126 13:49:19.227538 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ba00614-5b71-4d81-be95-6adf72b5e992" containerName="extract-content" Jan 26 13:49:19 crc kubenswrapper[4844]: E0126 13:49:19.227578 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ba00614-5b71-4d81-be95-6adf72b5e992" containerName="extract-utilities" Jan 26 13:49:19 crc kubenswrapper[4844]: I0126 13:49:19.227588 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ba00614-5b71-4d81-be95-6adf72b5e992" containerName="extract-utilities" Jan 26 13:49:19 crc kubenswrapper[4844]: E0126 13:49:19.227624 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ba00614-5b71-4d81-be95-6adf72b5e992" containerName="registry-server" Jan 26 13:49:19 crc kubenswrapper[4844]: I0126 13:49:19.227631 4844 
state_mem.go:107] "Deleted CPUSet assignment" podUID="9ba00614-5b71-4d81-be95-6adf72b5e992" containerName="registry-server" Jan 26 13:49:19 crc kubenswrapper[4844]: I0126 13:49:19.228002 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ba00614-5b71-4d81-be95-6adf72b5e992" containerName="registry-server" Jan 26 13:49:19 crc kubenswrapper[4844]: I0126 13:49:19.229483 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6n5gp" Jan 26 13:49:19 crc kubenswrapper[4844]: E0126 13:49:19.236501 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d48b859e0b89b5580deb8ffbb4085b892581f436fdfc6e3e1896ae81df55721\": container with ID starting with 4d48b859e0b89b5580deb8ffbb4085b892581f436fdfc6e3e1896ae81df55721 not found: ID does not exist" containerID="4d48b859e0b89b5580deb8ffbb4085b892581f436fdfc6e3e1896ae81df55721" Jan 26 13:49:19 crc kubenswrapper[4844]: I0126 13:49:19.236555 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d48b859e0b89b5580deb8ffbb4085b892581f436fdfc6e3e1896ae81df55721"} err="failed to get container status \"4d48b859e0b89b5580deb8ffbb4085b892581f436fdfc6e3e1896ae81df55721\": rpc error: code = NotFound desc = could not find container \"4d48b859e0b89b5580deb8ffbb4085b892581f436fdfc6e3e1896ae81df55721\": container with ID starting with 4d48b859e0b89b5580deb8ffbb4085b892581f436fdfc6e3e1896ae81df55721 not found: ID does not exist" Jan 26 13:49:19 crc kubenswrapper[4844]: I0126 13:49:19.236590 4844 scope.go:117] "RemoveContainer" containerID="de65059b22371b4a67589a49032d7c8f51642c618025db2c47f981b72bba3e0c" Jan 26 13:49:19 crc kubenswrapper[4844]: E0126 13:49:19.237128 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de65059b22371b4a67589a49032d7c8f51642c618025db2c47f981b72bba3e0c\": container with ID starting with de65059b22371b4a67589a49032d7c8f51642c618025db2c47f981b72bba3e0c not found: ID does not exist" containerID="de65059b22371b4a67589a49032d7c8f51642c618025db2c47f981b72bba3e0c" Jan 26 13:49:19 crc kubenswrapper[4844]: I0126 13:49:19.237175 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de65059b22371b4a67589a49032d7c8f51642c618025db2c47f981b72bba3e0c"} err="failed to get container status \"de65059b22371b4a67589a49032d7c8f51642c618025db2c47f981b72bba3e0c\": rpc error: code = NotFound desc = could not find container \"de65059b22371b4a67589a49032d7c8f51642c618025db2c47f981b72bba3e0c\": container with ID starting with de65059b22371b4a67589a49032d7c8f51642c618025db2c47f981b72bba3e0c not found: ID does not exist" Jan 26 13:49:19 crc kubenswrapper[4844]: I0126 13:49:19.237210 4844 scope.go:117] "RemoveContainer" containerID="7ca7dbb56f354869a6d313811141b26c05a2e655529f7fbca1035f60863be87c" Jan 26 13:49:19 crc kubenswrapper[4844]: E0126 13:49:19.237572 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ca7dbb56f354869a6d313811141b26c05a2e655529f7fbca1035f60863be87c\": container with ID starting with 7ca7dbb56f354869a6d313811141b26c05a2e655529f7fbca1035f60863be87c not found: ID does not exist" containerID="7ca7dbb56f354869a6d313811141b26c05a2e655529f7fbca1035f60863be87c" Jan 26 13:49:19 crc kubenswrapper[4844]: I0126 13:49:19.237622 4844 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ca7dbb56f354869a6d313811141b26c05a2e655529f7fbca1035f60863be87c"} err="failed to get container status \"7ca7dbb56f354869a6d313811141b26c05a2e655529f7fbca1035f60863be87c\": rpc error: code = NotFound desc = could not find container \"7ca7dbb56f354869a6d313811141b26c05a2e655529f7fbca1035f60863be87c\": container with ID starting with 7ca7dbb56f354869a6d313811141b26c05a2e655529f7fbca1035f60863be87c not found: ID does not exist" Jan 26 13:49:19 crc kubenswrapper[4844]: I0126 13:49:19.255667 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6n5gp"] Jan 26 13:49:19 crc kubenswrapper[4844]: I0126 13:49:19.317757 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crzt5\" (UniqueName: \"kubernetes.io/projected/d5ffefc7-c4df-42f9-81e0-94c6dd85837c-kube-api-access-crzt5\") pod \"community-operators-6n5gp\" (UID: \"d5ffefc7-c4df-42f9-81e0-94c6dd85837c\") " pod="openshift-marketplace/community-operators-6n5gp" Jan 26 13:49:19 crc kubenswrapper[4844]: I0126 13:49:19.317932 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5ffefc7-c4df-42f9-81e0-94c6dd85837c-catalog-content\") pod \"community-operators-6n5gp\" (UID: \"d5ffefc7-c4df-42f9-81e0-94c6dd85837c\") " pod="openshift-marketplace/community-operators-6n5gp" Jan 26 13:49:19 crc kubenswrapper[4844]: I0126 13:49:19.317970 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5ffefc7-c4df-42f9-81e0-94c6dd85837c-utilities\") pod \"community-operators-6n5gp\" (UID: \"d5ffefc7-c4df-42f9-81e0-94c6dd85837c\") " pod="openshift-marketplace/community-operators-6n5gp" Jan 26 13:49:19 crc kubenswrapper[4844]: I0126 13:49:19.420314 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tq96c"] Jan 26 13:49:19 crc kubenswrapper[4844]: I0126 13:49:19.421170 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crzt5\" (UniqueName: \"kubernetes.io/projected/d5ffefc7-c4df-42f9-81e0-94c6dd85837c-kube-api-access-crzt5\") pod \"community-operators-6n5gp\" (UID: \"d5ffefc7-c4df-42f9-81e0-94c6dd85837c\") " pod="openshift-marketplace/community-operators-6n5gp" Jan 26 13:49:19 crc kubenswrapper[4844]: I0126 13:49:19.421341 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5ffefc7-c4df-42f9-81e0-94c6dd85837c-catalog-content\") pod \"community-operators-6n5gp\" (UID: \"d5ffefc7-c4df-42f9-81e0-94c6dd85837c\") " pod="openshift-marketplace/community-operators-6n5gp" Jan 26 13:49:19 crc kubenswrapper[4844]: I0126 13:49:19.421377 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5ffefc7-c4df-42f9-81e0-94c6dd85837c-utilities\") pod \"community-operators-6n5gp\" (UID: \"d5ffefc7-c4df-42f9-81e0-94c6dd85837c\") " pod="openshift-marketplace/community-operators-6n5gp" Jan 26 13:49:19 crc kubenswrapper[4844]: I0126 13:49:19.421802 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5ffefc7-c4df-42f9-81e0-94c6dd85837c-utilities\") pod 
\"community-operators-6n5gp\" (UID: \"d5ffefc7-c4df-42f9-81e0-94c6dd85837c\") " pod="openshift-marketplace/community-operators-6n5gp" Jan 26 13:49:19 crc kubenswrapper[4844]: I0126 13:49:19.422319 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5ffefc7-c4df-42f9-81e0-94c6dd85837c-catalog-content\") pod \"community-operators-6n5gp\" (UID: \"d5ffefc7-c4df-42f9-81e0-94c6dd85837c\") " pod="openshift-marketplace/community-operators-6n5gp" Jan 26 13:49:19 crc kubenswrapper[4844]: I0126 13:49:19.430713 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-tq96c"] Jan 26 13:49:19 crc kubenswrapper[4844]: I0126 13:49:19.448868 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crzt5\" (UniqueName: \"kubernetes.io/projected/d5ffefc7-c4df-42f9-81e0-94c6dd85837c-kube-api-access-crzt5\") pod \"community-operators-6n5gp\" (UID: \"d5ffefc7-c4df-42f9-81e0-94c6dd85837c\") " pod="openshift-marketplace/community-operators-6n5gp" Jan 26 13:49:19 crc kubenswrapper[4844]: I0126 13:49:19.560319 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6n5gp" Jan 26 13:49:20 crc kubenswrapper[4844]: I0126 13:49:20.184502 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6n5gp"] Jan 26 13:49:21 crc kubenswrapper[4844]: I0126 13:49:21.108731 4844 generic.go:334] "Generic (PLEG): container finished" podID="d5ffefc7-c4df-42f9-81e0-94c6dd85837c" containerID="53f9dd58ef567ccc190725f2d878f6222cd142ee0aac933892eef5c2f936f4a4" exitCode=0 Jan 26 13:49:21 crc kubenswrapper[4844]: I0126 13:49:21.108827 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6n5gp" event={"ID":"d5ffefc7-c4df-42f9-81e0-94c6dd85837c","Type":"ContainerDied","Data":"53f9dd58ef567ccc190725f2d878f6222cd142ee0aac933892eef5c2f936f4a4"} Jan 26 13:49:21 crc kubenswrapper[4844]: I0126 13:49:21.109093 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6n5gp" event={"ID":"d5ffefc7-c4df-42f9-81e0-94c6dd85837c","Type":"ContainerStarted","Data":"e3418bb990f330a3c4f247b4f5a93fb1afc963760056a85e69f3b3f44cf793d0"} Jan 26 13:49:21 crc kubenswrapper[4844]: I0126 13:49:21.351892 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ba00614-5b71-4d81-be95-6adf72b5e992" path="/var/lib/kubelet/pods/9ba00614-5b71-4d81-be95-6adf72b5e992/volumes" Jan 26 13:49:22 crc kubenswrapper[4844]: I0126 13:49:22.120274 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6n5gp" event={"ID":"d5ffefc7-c4df-42f9-81e0-94c6dd85837c","Type":"ContainerStarted","Data":"f38f86f8b632b682720321562d0a87204f803c9eb9ee780092159714825a78c7"} Jan 26 13:49:23 crc kubenswrapper[4844]: I0126 13:49:23.133399 4844 generic.go:334] "Generic (PLEG): container finished" podID="d5ffefc7-c4df-42f9-81e0-94c6dd85837c" containerID="f38f86f8b632b682720321562d0a87204f803c9eb9ee780092159714825a78c7" exitCode=0 Jan 26 13:49:23 crc kubenswrapper[4844]: I0126 13:49:23.133440 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6n5gp" event={"ID":"d5ffefc7-c4df-42f9-81e0-94c6dd85837c","Type":"ContainerDied","Data":"f38f86f8b632b682720321562d0a87204f803c9eb9ee780092159714825a78c7"} Jan 26 13:49:24 crc 
kubenswrapper[4844]: I0126 13:49:24.144977 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6n5gp" event={"ID":"d5ffefc7-c4df-42f9-81e0-94c6dd85837c","Type":"ContainerStarted","Data":"1034307c6759212b81b2db1a2e71c8ca34997940b413fee0f402925cfe9cc236"} Jan 26 13:49:24 crc kubenswrapper[4844]: I0126 13:49:24.177475 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6n5gp" podStartSLOduration=2.7367489579999997 podStartE2EDuration="5.177456376s" podCreationTimestamp="2026-01-26 13:49:19 +0000 UTC" firstStartedPulling="2026-01-26 13:49:21.110731143 +0000 UTC m=+3938.044098755" lastFinishedPulling="2026-01-26 13:49:23.551438521 +0000 UTC m=+3940.484806173" observedRunningTime="2026-01-26 13:49:24.169308079 +0000 UTC m=+3941.102675691" watchObservedRunningTime="2026-01-26 13:49:24.177456376 +0000 UTC m=+3941.110823988" Jan 26 13:49:24 crc kubenswrapper[4844]: I0126 13:49:24.740873 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 26 13:49:24 crc kubenswrapper[4844]: I0126 13:49:24.741222 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="aefdcbbc-2ac1-43d5-b70c-26e89000ab98" containerName="prometheus" containerID="cri-o://5706603a7bdd9fb5cd16976e4ca7aca5c36f785505f27a1d5f949b08e7241b62" gracePeriod=600 Jan 26 13:49:24 crc kubenswrapper[4844]: I0126 13:49:24.741399 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="aefdcbbc-2ac1-43d5-b70c-26e89000ab98" containerName="config-reloader" containerID="cri-o://757e7ac121a5ac2d5117a9e4d706d94a8c98cc3ab12ddb64dec2c8c4d9e729fb" gracePeriod=600 Jan 26 13:49:24 crc kubenswrapper[4844]: I0126 13:49:24.741294 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="aefdcbbc-2ac1-43d5-b70c-26e89000ab98" containerName="thanos-sidecar" containerID="cri-o://16de62b26afafaaee1f6a069b2507522c4143d9fed128422110635385872593f" gracePeriod=600 Jan 26 13:49:25 crc kubenswrapper[4844]: I0126 13:49:25.155497 4844 generic.go:334] "Generic (PLEG): container finished" podID="aefdcbbc-2ac1-43d5-b70c-26e89000ab98" containerID="16de62b26afafaaee1f6a069b2507522c4143d9fed128422110635385872593f" exitCode=0 Jan 26 13:49:25 crc kubenswrapper[4844]: I0126 13:49:25.155794 4844 generic.go:334] "Generic (PLEG): container finished" podID="aefdcbbc-2ac1-43d5-b70c-26e89000ab98" containerID="757e7ac121a5ac2d5117a9e4d706d94a8c98cc3ab12ddb64dec2c8c4d9e729fb" exitCode=0 Jan 26 13:49:25 crc kubenswrapper[4844]: I0126 13:49:25.155802 4844 generic.go:334] "Generic (PLEG): container finished" podID="aefdcbbc-2ac1-43d5-b70c-26e89000ab98" containerID="5706603a7bdd9fb5cd16976e4ca7aca5c36f785505f27a1d5f949b08e7241b62" exitCode=0 Jan 26 13:49:25 crc kubenswrapper[4844]: I0126 13:49:25.156701 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"aefdcbbc-2ac1-43d5-b70c-26e89000ab98","Type":"ContainerDied","Data":"16de62b26afafaaee1f6a069b2507522c4143d9fed128422110635385872593f"} Jan 26 13:49:25 crc kubenswrapper[4844]: I0126 13:49:25.156826 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" 
event={"ID":"aefdcbbc-2ac1-43d5-b70c-26e89000ab98","Type":"ContainerDied","Data":"757e7ac121a5ac2d5117a9e4d706d94a8c98cc3ab12ddb64dec2c8c4d9e729fb"} Jan 26 13:49:25 crc kubenswrapper[4844]: I0126 13:49:25.156890 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"aefdcbbc-2ac1-43d5-b70c-26e89000ab98","Type":"ContainerDied","Data":"5706603a7bdd9fb5cd16976e4ca7aca5c36f785505f27a1d5f949b08e7241b62"} Jan 26 13:49:25 crc kubenswrapper[4844]: I0126 13:49:25.725888 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 26 13:49:25 crc kubenswrapper[4844]: I0126 13:49:25.801742 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/aefdcbbc-2ac1-43d5-b70c-26e89000ab98-prometheus-metric-storage-rulefiles-1\") pod \"aefdcbbc-2ac1-43d5-b70c-26e89000ab98\" (UID: \"aefdcbbc-2ac1-43d5-b70c-26e89000ab98\") " Jan 26 13:49:25 crc kubenswrapper[4844]: I0126 13:49:25.801825 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/aefdcbbc-2ac1-43d5-b70c-26e89000ab98-tls-assets\") pod \"aefdcbbc-2ac1-43d5-b70c-26e89000ab98\" (UID: \"aefdcbbc-2ac1-43d5-b70c-26e89000ab98\") " Jan 26 13:49:25 crc kubenswrapper[4844]: I0126 13:49:25.801857 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/aefdcbbc-2ac1-43d5-b70c-26e89000ab98-prometheus-metric-storage-rulefiles-2\") pod \"aefdcbbc-2ac1-43d5-b70c-26e89000ab98\" (UID: \"aefdcbbc-2ac1-43d5-b70c-26e89000ab98\") " Jan 26 13:49:25 crc kubenswrapper[4844]: I0126 13:49:25.801919 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/aefdcbbc-2ac1-43d5-b70c-26e89000ab98-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"aefdcbbc-2ac1-43d5-b70c-26e89000ab98\" (UID: \"aefdcbbc-2ac1-43d5-b70c-26e89000ab98\") " Jan 26 13:49:25 crc kubenswrapper[4844]: I0126 13:49:25.801959 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mpfhz\" (UniqueName: \"kubernetes.io/projected/aefdcbbc-2ac1-43d5-b70c-26e89000ab98-kube-api-access-mpfhz\") pod \"aefdcbbc-2ac1-43d5-b70c-26e89000ab98\" (UID: \"aefdcbbc-2ac1-43d5-b70c-26e89000ab98\") " Jan 26 13:49:25 crc kubenswrapper[4844]: I0126 13:49:25.802019 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/aefdcbbc-2ac1-43d5-b70c-26e89000ab98-prometheus-metric-storage-rulefiles-0\") pod \"aefdcbbc-2ac1-43d5-b70c-26e89000ab98\" (UID: \"aefdcbbc-2ac1-43d5-b70c-26e89000ab98\") " Jan 26 13:49:25 crc kubenswrapper[4844]: I0126 13:49:25.802069 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/aefdcbbc-2ac1-43d5-b70c-26e89000ab98-config-out\") pod \"aefdcbbc-2ac1-43d5-b70c-26e89000ab98\" (UID: \"aefdcbbc-2ac1-43d5-b70c-26e89000ab98\") " Jan 26 13:49:25 crc kubenswrapper[4844]: I0126 13:49:25.802090 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: 
\"kubernetes.io/secret/aefdcbbc-2ac1-43d5-b70c-26e89000ab98-web-config\") pod \"aefdcbbc-2ac1-43d5-b70c-26e89000ab98\" (UID: \"aefdcbbc-2ac1-43d5-b70c-26e89000ab98\") " Jan 26 13:49:25 crc kubenswrapper[4844]: I0126 13:49:25.802285 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-19181cb9-d02a-4fb9-9922-1c51ae1db65f\") pod \"aefdcbbc-2ac1-43d5-b70c-26e89000ab98\" (UID: \"aefdcbbc-2ac1-43d5-b70c-26e89000ab98\") " Jan 26 13:49:25 crc kubenswrapper[4844]: I0126 13:49:25.802342 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/aefdcbbc-2ac1-43d5-b70c-26e89000ab98-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"aefdcbbc-2ac1-43d5-b70c-26e89000ab98\" (UID: \"aefdcbbc-2ac1-43d5-b70c-26e89000ab98\") " Jan 26 13:49:25 crc kubenswrapper[4844]: I0126 13:49:25.802416 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aefdcbbc-2ac1-43d5-b70c-26e89000ab98-secret-combined-ca-bundle\") pod \"aefdcbbc-2ac1-43d5-b70c-26e89000ab98\" (UID: \"aefdcbbc-2ac1-43d5-b70c-26e89000ab98\") " Jan 26 13:49:25 crc kubenswrapper[4844]: I0126 13:49:25.802459 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/aefdcbbc-2ac1-43d5-b70c-26e89000ab98-thanos-prometheus-http-client-file\") pod \"aefdcbbc-2ac1-43d5-b70c-26e89000ab98\" (UID: \"aefdcbbc-2ac1-43d5-b70c-26e89000ab98\") " Jan 26 13:49:25 crc kubenswrapper[4844]: I0126 13:49:25.802486 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/aefdcbbc-2ac1-43d5-b70c-26e89000ab98-config\") pod \"aefdcbbc-2ac1-43d5-b70c-26e89000ab98\" (UID: \"aefdcbbc-2ac1-43d5-b70c-26e89000ab98\") " Jan 26 13:49:25 crc kubenswrapper[4844]: I0126 13:49:25.802844 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aefdcbbc-2ac1-43d5-b70c-26e89000ab98-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "aefdcbbc-2ac1-43d5-b70c-26e89000ab98" (UID: "aefdcbbc-2ac1-43d5-b70c-26e89000ab98"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:49:25 crc kubenswrapper[4844]: I0126 13:49:25.803553 4844 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/aefdcbbc-2ac1-43d5-b70c-26e89000ab98-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Jan 26 13:49:25 crc kubenswrapper[4844]: I0126 13:49:25.807004 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aefdcbbc-2ac1-43d5-b70c-26e89000ab98-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "aefdcbbc-2ac1-43d5-b70c-26e89000ab98" (UID: "aefdcbbc-2ac1-43d5-b70c-26e89000ab98"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:49:25 crc kubenswrapper[4844]: I0126 13:49:25.807486 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aefdcbbc-2ac1-43d5-b70c-26e89000ab98-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "aefdcbbc-2ac1-43d5-b70c-26e89000ab98" (UID: "aefdcbbc-2ac1-43d5-b70c-26e89000ab98"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 13:49:25 crc kubenswrapper[4844]: I0126 13:49:25.809001 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aefdcbbc-2ac1-43d5-b70c-26e89000ab98-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d" (OuterVolumeSpecName: "web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d") pod "aefdcbbc-2ac1-43d5-b70c-26e89000ab98" (UID: "aefdcbbc-2ac1-43d5-b70c-26e89000ab98"). InnerVolumeSpecName "web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:49:25 crc kubenswrapper[4844]: I0126 13:49:25.809814 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aefdcbbc-2ac1-43d5-b70c-26e89000ab98-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "aefdcbbc-2ac1-43d5-b70c-26e89000ab98" (UID: "aefdcbbc-2ac1-43d5-b70c-26e89000ab98"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:49:25 crc kubenswrapper[4844]: I0126 13:49:25.810307 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aefdcbbc-2ac1-43d5-b70c-26e89000ab98-config-out" (OuterVolumeSpecName: "config-out") pod "aefdcbbc-2ac1-43d5-b70c-26e89000ab98" (UID: "aefdcbbc-2ac1-43d5-b70c-26e89000ab98"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:49:25 crc kubenswrapper[4844]: I0126 13:49:25.812026 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aefdcbbc-2ac1-43d5-b70c-26e89000ab98-kube-api-access-mpfhz" (OuterVolumeSpecName: "kube-api-access-mpfhz") pod "aefdcbbc-2ac1-43d5-b70c-26e89000ab98" (UID: "aefdcbbc-2ac1-43d5-b70c-26e89000ab98"). InnerVolumeSpecName "kube-api-access-mpfhz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:49:25 crc kubenswrapper[4844]: I0126 13:49:25.813705 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aefdcbbc-2ac1-43d5-b70c-26e89000ab98-config" (OuterVolumeSpecName: "config") pod "aefdcbbc-2ac1-43d5-b70c-26e89000ab98" (UID: "aefdcbbc-2ac1-43d5-b70c-26e89000ab98"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:49:25 crc kubenswrapper[4844]: I0126 13:49:25.819913 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aefdcbbc-2ac1-43d5-b70c-26e89000ab98-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d" (OuterVolumeSpecName: "web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d") pod "aefdcbbc-2ac1-43d5-b70c-26e89000ab98" (UID: "aefdcbbc-2ac1-43d5-b70c-26e89000ab98"). InnerVolumeSpecName "web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:49:25 crc kubenswrapper[4844]: I0126 13:49:25.820518 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aefdcbbc-2ac1-43d5-b70c-26e89000ab98-secret-combined-ca-bundle" (OuterVolumeSpecName: "secret-combined-ca-bundle") pod "aefdcbbc-2ac1-43d5-b70c-26e89000ab98" (UID: "aefdcbbc-2ac1-43d5-b70c-26e89000ab98"). InnerVolumeSpecName "secret-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:49:25 crc kubenswrapper[4844]: I0126 13:49:25.831806 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aefdcbbc-2ac1-43d5-b70c-26e89000ab98-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "aefdcbbc-2ac1-43d5-b70c-26e89000ab98" (UID: "aefdcbbc-2ac1-43d5-b70c-26e89000ab98"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:49:25 crc kubenswrapper[4844]: I0126 13:49:25.837093 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-19181cb9-d02a-4fb9-9922-1c51ae1db65f" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "aefdcbbc-2ac1-43d5-b70c-26e89000ab98" (UID: "aefdcbbc-2ac1-43d5-b70c-26e89000ab98"). InnerVolumeSpecName "pvc-19181cb9-d02a-4fb9-9922-1c51ae1db65f". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 26 13:49:25 crc kubenswrapper[4844]: I0126 13:49:25.905938 4844 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/aefdcbbc-2ac1-43d5-b70c-26e89000ab98-tls-assets\") on node \"crc\" DevicePath \"\"" Jan 26 13:49:25 crc kubenswrapper[4844]: I0126 13:49:25.905977 4844 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/aefdcbbc-2ac1-43d5-b70c-26e89000ab98-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Jan 26 13:49:25 crc kubenswrapper[4844]: I0126 13:49:25.905989 4844 reconciler_common.go:293] "Volume detached for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/aefdcbbc-2ac1-43d5-b70c-26e89000ab98-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") on node \"crc\" DevicePath \"\"" Jan 26 13:49:25 crc kubenswrapper[4844]: I0126 13:49:25.906002 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mpfhz\" (UniqueName: \"kubernetes.io/projected/aefdcbbc-2ac1-43d5-b70c-26e89000ab98-kube-api-access-mpfhz\") on node \"crc\" DevicePath \"\"" Jan 26 13:49:25 crc kubenswrapper[4844]: I0126 13:49:25.906012 4844 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/aefdcbbc-2ac1-43d5-b70c-26e89000ab98-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Jan 26 13:49:25 crc kubenswrapper[4844]: I0126 13:49:25.906021 4844 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/aefdcbbc-2ac1-43d5-b70c-26e89000ab98-config-out\") on node \"crc\" DevicePath \"\"" Jan 26 13:49:25 crc kubenswrapper[4844]: I0126 13:49:25.906057 4844 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-19181cb9-d02a-4fb9-9922-1c51ae1db65f\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-19181cb9-d02a-4fb9-9922-1c51ae1db65f\") on node \"crc\" " Jan 26 13:49:25 crc kubenswrapper[4844]: I0126 13:49:25.906067 4844 reconciler_common.go:293] "Volume detached for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/aefdcbbc-2ac1-43d5-b70c-26e89000ab98-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") on node \"crc\" DevicePath \"\"" Jan 26 13:49:25 crc kubenswrapper[4844]: I0126 13:49:25.906077 4844 reconciler_common.go:293] "Volume detached for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aefdcbbc-2ac1-43d5-b70c-26e89000ab98-secret-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 13:49:25 crc kubenswrapper[4844]: I0126 13:49:25.906085 4844 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/aefdcbbc-2ac1-43d5-b70c-26e89000ab98-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Jan 26 13:49:25 crc kubenswrapper[4844]: I0126 13:49:25.906093 4844 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/aefdcbbc-2ac1-43d5-b70c-26e89000ab98-config\") on node \"crc\" DevicePath \"\"" Jan 26 13:49:25 crc kubenswrapper[4844]: I0126 13:49:25.914697 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aefdcbbc-2ac1-43d5-b70c-26e89000ab98-web-config" (OuterVolumeSpecName: "web-config") pod "aefdcbbc-2ac1-43d5-b70c-26e89000ab98" (UID: "aefdcbbc-2ac1-43d5-b70c-26e89000ab98"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 13:49:25 crc kubenswrapper[4844]: I0126 13:49:25.930988 4844 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Jan 26 13:49:25 crc kubenswrapper[4844]: I0126 13:49:25.931425 4844 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-19181cb9-d02a-4fb9-9922-1c51ae1db65f" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-19181cb9-d02a-4fb9-9922-1c51ae1db65f") on node "crc" Jan 26 13:49:26 crc kubenswrapper[4844]: I0126 13:49:26.007679 4844 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/aefdcbbc-2ac1-43d5-b70c-26e89000ab98-web-config\") on node \"crc\" DevicePath \"\"" Jan 26 13:49:26 crc kubenswrapper[4844]: I0126 13:49:26.007713 4844 reconciler_common.go:293] "Volume detached for volume \"pvc-19181cb9-d02a-4fb9-9922-1c51ae1db65f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-19181cb9-d02a-4fb9-9922-1c51ae1db65f\") on node \"crc\" DevicePath \"\"" Jan 26 13:49:26 crc kubenswrapper[4844]: I0126 13:49:26.173275 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"aefdcbbc-2ac1-43d5-b70c-26e89000ab98","Type":"ContainerDied","Data":"66304c26ce77d93a8a1899a9f7eac51156441026be0ebb6f0d41ce1bc8e22f5a"} Jan 26 13:49:26 crc kubenswrapper[4844]: I0126 13:49:26.173335 4844 scope.go:117] "RemoveContainer" containerID="16de62b26afafaaee1f6a069b2507522c4143d9fed128422110635385872593f" Jan 26 13:49:26 crc kubenswrapper[4844]: I0126 13:49:26.173358 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 26 13:49:26 crc kubenswrapper[4844]: I0126 13:49:26.209019 4844 scope.go:117] "RemoveContainer" containerID="757e7ac121a5ac2d5117a9e4d706d94a8c98cc3ab12ddb64dec2c8c4d9e729fb" Jan 26 13:49:26 crc kubenswrapper[4844]: I0126 13:49:26.219660 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 26 13:49:26 crc kubenswrapper[4844]: I0126 13:49:26.227524 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 26 13:49:26 crc kubenswrapper[4844]: I0126 13:49:26.238328 4844 scope.go:117] "RemoveContainer" containerID="5706603a7bdd9fb5cd16976e4ca7aca5c36f785505f27a1d5f949b08e7241b62" Jan 26 13:49:26 crc kubenswrapper[4844]: I0126 13:49:26.251810 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 26 13:49:26 crc kubenswrapper[4844]: E0126 13:49:26.252258 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aefdcbbc-2ac1-43d5-b70c-26e89000ab98" containerName="init-config-reloader" Jan 26 13:49:26 crc kubenswrapper[4844]: I0126 13:49:26.253658 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="aefdcbbc-2ac1-43d5-b70c-26e89000ab98" containerName="init-config-reloader" Jan 26 13:49:26 crc kubenswrapper[4844]: E0126 13:49:26.253692 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aefdcbbc-2ac1-43d5-b70c-26e89000ab98" containerName="thanos-sidecar" Jan 26 13:49:26 crc kubenswrapper[4844]: I0126 13:49:26.253700 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="aefdcbbc-2ac1-43d5-b70c-26e89000ab98" containerName="thanos-sidecar" Jan 26 13:49:26 crc kubenswrapper[4844]: E0126 13:49:26.253713 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aefdcbbc-2ac1-43d5-b70c-26e89000ab98" containerName="config-reloader" Jan 26 13:49:26 crc kubenswrapper[4844]: I0126 13:49:26.253720 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="aefdcbbc-2ac1-43d5-b70c-26e89000ab98" containerName="config-reloader" Jan 26 13:49:26 crc kubenswrapper[4844]: E0126 13:49:26.253751 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aefdcbbc-2ac1-43d5-b70c-26e89000ab98" containerName="prometheus" Jan 26 13:49:26 crc kubenswrapper[4844]: I0126 13:49:26.253759 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="aefdcbbc-2ac1-43d5-b70c-26e89000ab98" containerName="prometheus" Jan 26 13:49:26 crc kubenswrapper[4844]: I0126 13:49:26.254034 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="aefdcbbc-2ac1-43d5-b70c-26e89000ab98" containerName="thanos-sidecar" Jan 26 13:49:26 crc kubenswrapper[4844]: I0126 13:49:26.254072 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="aefdcbbc-2ac1-43d5-b70c-26e89000ab98" containerName="config-reloader" Jan 26 13:49:26 crc kubenswrapper[4844]: I0126 13:49:26.254081 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="aefdcbbc-2ac1-43d5-b70c-26e89000ab98" containerName="prometheus" Jan 26 13:49:26 crc kubenswrapper[4844]: I0126 13:49:26.256177 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 26 13:49:26 crc kubenswrapper[4844]: I0126 13:49:26.263952 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Jan 26 13:49:26 crc kubenswrapper[4844]: I0126 13:49:26.264096 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Jan 26 13:49:26 crc kubenswrapper[4844]: I0126 13:49:26.264203 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Jan 26 13:49:26 crc kubenswrapper[4844]: I0126 13:49:26.264331 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Jan 26 13:49:26 crc kubenswrapper[4844]: I0126 13:49:26.264390 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Jan 26 13:49:26 crc kubenswrapper[4844]: I0126 13:49:26.265833 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-lh4xm" Jan 26 13:49:26 crc kubenswrapper[4844]: I0126 13:49:26.266145 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Jan 26 13:49:26 crc kubenswrapper[4844]: I0126 13:49:26.270782 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Jan 26 13:49:26 crc kubenswrapper[4844]: I0126 13:49:26.272334 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 26 13:49:26 crc kubenswrapper[4844]: I0126 13:49:26.276897 4844 scope.go:117] "RemoveContainer" containerID="ad4d7ee909f9a18453c4656d4bd6f78bf7e01fcb4dd6c1d698354d192e0704b2" Jan 26 13:49:26 crc kubenswrapper[4844]: I0126 13:49:26.316627 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-19181cb9-d02a-4fb9-9922-1c51ae1db65f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-19181cb9-d02a-4fb9-9922-1c51ae1db65f\") pod \"prometheus-metric-storage-0\" (UID: \"fcca7d88-f1d4-463b-a412-ecfee5f8724d\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:49:26 crc kubenswrapper[4844]: I0126 13:49:26.316883 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/fcca7d88-f1d4-463b-a412-ecfee5f8724d-config\") pod \"prometheus-metric-storage-0\" (UID: \"fcca7d88-f1d4-463b-a412-ecfee5f8724d\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:49:26 crc kubenswrapper[4844]: I0126 13:49:26.316926 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcca7d88-f1d4-463b-a412-ecfee5f8724d-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"fcca7d88-f1d4-463b-a412-ecfee5f8724d\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:49:26 crc kubenswrapper[4844]: I0126 13:49:26.316993 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/fcca7d88-f1d4-463b-a412-ecfee5f8724d-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: 
\"fcca7d88-f1d4-463b-a412-ecfee5f8724d\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:49:26 crc kubenswrapper[4844]: I0126 13:49:26.317405 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/fcca7d88-f1d4-463b-a412-ecfee5f8724d-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"fcca7d88-f1d4-463b-a412-ecfee5f8724d\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:49:26 crc kubenswrapper[4844]: I0126 13:49:26.317581 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8zst\" (UniqueName: \"kubernetes.io/projected/fcca7d88-f1d4-463b-a412-ecfee5f8724d-kube-api-access-r8zst\") pod \"prometheus-metric-storage-0\" (UID: \"fcca7d88-f1d4-463b-a412-ecfee5f8724d\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:49:26 crc kubenswrapper[4844]: I0126 13:49:26.317654 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/fcca7d88-f1d4-463b-a412-ecfee5f8724d-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"fcca7d88-f1d4-463b-a412-ecfee5f8724d\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:49:26 crc kubenswrapper[4844]: I0126 13:49:26.317695 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/fcca7d88-f1d4-463b-a412-ecfee5f8724d-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"fcca7d88-f1d4-463b-a412-ecfee5f8724d\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:49:26 crc kubenswrapper[4844]: I0126 13:49:26.317769 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/fcca7d88-f1d4-463b-a412-ecfee5f8724d-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"fcca7d88-f1d4-463b-a412-ecfee5f8724d\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:49:26 crc kubenswrapper[4844]: I0126 13:49:26.317807 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/fcca7d88-f1d4-463b-a412-ecfee5f8724d-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"fcca7d88-f1d4-463b-a412-ecfee5f8724d\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:49:26 crc kubenswrapper[4844]: I0126 13:49:26.317864 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/fcca7d88-f1d4-463b-a412-ecfee5f8724d-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"fcca7d88-f1d4-463b-a412-ecfee5f8724d\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:49:26 crc kubenswrapper[4844]: I0126 13:49:26.317894 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/fcca7d88-f1d4-463b-a412-ecfee5f8724d-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"fcca7d88-f1d4-463b-a412-ecfee5f8724d\") " 
pod="openstack/prometheus-metric-storage-0" Jan 26 13:49:26 crc kubenswrapper[4844]: I0126 13:49:26.317963 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/fcca7d88-f1d4-463b-a412-ecfee5f8724d-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"fcca7d88-f1d4-463b-a412-ecfee5f8724d\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:49:26 crc kubenswrapper[4844]: I0126 13:49:26.420095 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/fcca7d88-f1d4-463b-a412-ecfee5f8724d-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"fcca7d88-f1d4-463b-a412-ecfee5f8724d\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:49:26 crc kubenswrapper[4844]: I0126 13:49:26.420177 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/fcca7d88-f1d4-463b-a412-ecfee5f8724d-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"fcca7d88-f1d4-463b-a412-ecfee5f8724d\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:49:26 crc kubenswrapper[4844]: I0126 13:49:26.420211 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8zst\" (UniqueName: \"kubernetes.io/projected/fcca7d88-f1d4-463b-a412-ecfee5f8724d-kube-api-access-r8zst\") pod \"prometheus-metric-storage-0\" (UID: \"fcca7d88-f1d4-463b-a412-ecfee5f8724d\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:49:26 crc kubenswrapper[4844]: I0126 13:49:26.420239 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/fcca7d88-f1d4-463b-a412-ecfee5f8724d-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"fcca7d88-f1d4-463b-a412-ecfee5f8724d\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:49:26 crc kubenswrapper[4844]: I0126 13:49:26.420276 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/fcca7d88-f1d4-463b-a412-ecfee5f8724d-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"fcca7d88-f1d4-463b-a412-ecfee5f8724d\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:49:26 crc kubenswrapper[4844]: I0126 13:49:26.420316 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/fcca7d88-f1d4-463b-a412-ecfee5f8724d-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"fcca7d88-f1d4-463b-a412-ecfee5f8724d\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:49:26 crc kubenswrapper[4844]: I0126 13:49:26.420338 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/fcca7d88-f1d4-463b-a412-ecfee5f8724d-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"fcca7d88-f1d4-463b-a412-ecfee5f8724d\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:49:26 crc kubenswrapper[4844]: I0126 13:49:26.420381 4844 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/fcca7d88-f1d4-463b-a412-ecfee5f8724d-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"fcca7d88-f1d4-463b-a412-ecfee5f8724d\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:49:26 crc kubenswrapper[4844]: I0126 13:49:26.420402 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/fcca7d88-f1d4-463b-a412-ecfee5f8724d-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"fcca7d88-f1d4-463b-a412-ecfee5f8724d\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:49:26 crc kubenswrapper[4844]: I0126 13:49:26.420458 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/fcca7d88-f1d4-463b-a412-ecfee5f8724d-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"fcca7d88-f1d4-463b-a412-ecfee5f8724d\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:49:26 crc kubenswrapper[4844]: I0126 13:49:26.420523 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-19181cb9-d02a-4fb9-9922-1c51ae1db65f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-19181cb9-d02a-4fb9-9922-1c51ae1db65f\") pod \"prometheus-metric-storage-0\" (UID: \"fcca7d88-f1d4-463b-a412-ecfee5f8724d\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:49:26 crc kubenswrapper[4844]: I0126 13:49:26.420590 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/fcca7d88-f1d4-463b-a412-ecfee5f8724d-config\") pod \"prometheus-metric-storage-0\" (UID: \"fcca7d88-f1d4-463b-a412-ecfee5f8724d\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:49:26 crc kubenswrapper[4844]: I0126 13:49:26.420649 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcca7d88-f1d4-463b-a412-ecfee5f8724d-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"fcca7d88-f1d4-463b-a412-ecfee5f8724d\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:49:26 crc kubenswrapper[4844]: I0126 13:49:26.421319 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/fcca7d88-f1d4-463b-a412-ecfee5f8724d-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"fcca7d88-f1d4-463b-a412-ecfee5f8724d\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:49:26 crc kubenswrapper[4844]: I0126 13:49:26.421994 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/fcca7d88-f1d4-463b-a412-ecfee5f8724d-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"fcca7d88-f1d4-463b-a412-ecfee5f8724d\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:49:26 crc kubenswrapper[4844]: I0126 13:49:26.422000 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/fcca7d88-f1d4-463b-a412-ecfee5f8724d-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" 
(UID: \"fcca7d88-f1d4-463b-a412-ecfee5f8724d\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:49:26 crc kubenswrapper[4844]: I0126 13:49:26.423401 4844 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 26 13:49:26 crc kubenswrapper[4844]: I0126 13:49:26.423437 4844 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-19181cb9-d02a-4fb9-9922-1c51ae1db65f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-19181cb9-d02a-4fb9-9922-1c51ae1db65f\") pod \"prometheus-metric-storage-0\" (UID: \"fcca7d88-f1d4-463b-a412-ecfee5f8724d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/60456fde86fe7a040b59fc70316475c6486458b501f0e0cd47e77b114ad32f41/globalmount\"" pod="openstack/prometheus-metric-storage-0" Jan 26 13:49:26 crc kubenswrapper[4844]: I0126 13:49:26.426090 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/fcca7d88-f1d4-463b-a412-ecfee5f8724d-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"fcca7d88-f1d4-463b-a412-ecfee5f8724d\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:49:26 crc kubenswrapper[4844]: I0126 13:49:26.426149 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/fcca7d88-f1d4-463b-a412-ecfee5f8724d-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"fcca7d88-f1d4-463b-a412-ecfee5f8724d\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:49:26 crc kubenswrapper[4844]: I0126 13:49:26.427060 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/fcca7d88-f1d4-463b-a412-ecfee5f8724d-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"fcca7d88-f1d4-463b-a412-ecfee5f8724d\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:49:26 crc kubenswrapper[4844]: I0126 13:49:26.428274 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/fcca7d88-f1d4-463b-a412-ecfee5f8724d-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"fcca7d88-f1d4-463b-a412-ecfee5f8724d\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:49:26 crc kubenswrapper[4844]: I0126 13:49:26.430203 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/fcca7d88-f1d4-463b-a412-ecfee5f8724d-config\") pod \"prometheus-metric-storage-0\" (UID: \"fcca7d88-f1d4-463b-a412-ecfee5f8724d\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:49:26 crc kubenswrapper[4844]: I0126 13:49:26.431706 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/fcca7d88-f1d4-463b-a412-ecfee5f8724d-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"fcca7d88-f1d4-463b-a412-ecfee5f8724d\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:49:26 crc kubenswrapper[4844]: I0126 13:49:26.434280 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/fcca7d88-f1d4-463b-a412-ecfee5f8724d-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"fcca7d88-f1d4-463b-a412-ecfee5f8724d\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:49:26 crc kubenswrapper[4844]: I0126 13:49:26.437326 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8zst\" (UniqueName: \"kubernetes.io/projected/fcca7d88-f1d4-463b-a412-ecfee5f8724d-kube-api-access-r8zst\") pod \"prometheus-metric-storage-0\" (UID: \"fcca7d88-f1d4-463b-a412-ecfee5f8724d\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:49:26 crc kubenswrapper[4844]: I0126 13:49:26.470191 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/fcca7d88-f1d4-463b-a412-ecfee5f8724d-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"fcca7d88-f1d4-463b-a412-ecfee5f8724d\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:49:26 crc kubenswrapper[4844]: I0126 13:49:26.495376 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-19181cb9-d02a-4fb9-9922-1c51ae1db65f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-19181cb9-d02a-4fb9-9922-1c51ae1db65f\") pod \"prometheus-metric-storage-0\" (UID: \"fcca7d88-f1d4-463b-a412-ecfee5f8724d\") " pod="openstack/prometheus-metric-storage-0" Jan 26 13:49:26 crc kubenswrapper[4844]: I0126 13:49:26.576021 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 26 13:49:26 crc kubenswrapper[4844]: I0126 13:49:26.912974 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 26 13:49:26 crc kubenswrapper[4844]: W0126 13:49:26.922211 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfcca7d88_f1d4_463b_a412_ecfee5f8724d.slice/crio-022c4ccc1af00ab71335a53228466a3666f75e7827c94d4ce86d9f7b6db822df WatchSource:0}: Error finding container 022c4ccc1af00ab71335a53228466a3666f75e7827c94d4ce86d9f7b6db822df: Status 404 returned error can't find the container with id 022c4ccc1af00ab71335a53228466a3666f75e7827c94d4ce86d9f7b6db822df Jan 26 13:49:27 crc kubenswrapper[4844]: I0126 13:49:27.183270 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"fcca7d88-f1d4-463b-a412-ecfee5f8724d","Type":"ContainerStarted","Data":"022c4ccc1af00ab71335a53228466a3666f75e7827c94d4ce86d9f7b6db822df"} Jan 26 13:49:27 crc kubenswrapper[4844]: I0126 13:49:27.326073 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aefdcbbc-2ac1-43d5-b70c-26e89000ab98" path="/var/lib/kubelet/pods/aefdcbbc-2ac1-43d5-b70c-26e89000ab98/volumes" Jan 26 13:49:29 crc kubenswrapper[4844]: I0126 13:49:29.561386 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6n5gp" Jan 26 13:49:29 crc kubenswrapper[4844]: I0126 13:49:29.561898 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6n5gp" Jan 26 13:49:29 crc kubenswrapper[4844]: I0126 13:49:29.625714 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6n5gp" Jan 26 13:49:30 crc kubenswrapper[4844]: I0126 13:49:30.272498 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-marketplace/community-operators-6n5gp" Jan 26 13:49:30 crc kubenswrapper[4844]: I0126 13:49:30.348270 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6n5gp"] Jan 26 13:49:32 crc kubenswrapper[4844]: I0126 13:49:32.246070 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-6n5gp" podUID="d5ffefc7-c4df-42f9-81e0-94c6dd85837c" containerName="registry-server" containerID="cri-o://1034307c6759212b81b2db1a2e71c8ca34997940b413fee0f402925cfe9cc236" gracePeriod=2 Jan 26 13:49:32 crc kubenswrapper[4844]: I0126 13:49:32.789669 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6n5gp" Jan 26 13:49:32 crc kubenswrapper[4844]: I0126 13:49:32.849819 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-crzt5\" (UniqueName: \"kubernetes.io/projected/d5ffefc7-c4df-42f9-81e0-94c6dd85837c-kube-api-access-crzt5\") pod \"d5ffefc7-c4df-42f9-81e0-94c6dd85837c\" (UID: \"d5ffefc7-c4df-42f9-81e0-94c6dd85837c\") " Jan 26 13:49:32 crc kubenswrapper[4844]: I0126 13:49:32.850093 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5ffefc7-c4df-42f9-81e0-94c6dd85837c-utilities\") pod \"d5ffefc7-c4df-42f9-81e0-94c6dd85837c\" (UID: \"d5ffefc7-c4df-42f9-81e0-94c6dd85837c\") " Jan 26 13:49:32 crc kubenswrapper[4844]: I0126 13:49:32.850141 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5ffefc7-c4df-42f9-81e0-94c6dd85837c-catalog-content\") pod \"d5ffefc7-c4df-42f9-81e0-94c6dd85837c\" (UID: \"d5ffefc7-c4df-42f9-81e0-94c6dd85837c\") " Jan 26 13:49:32 crc kubenswrapper[4844]: I0126 13:49:32.850811 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5ffefc7-c4df-42f9-81e0-94c6dd85837c-utilities" (OuterVolumeSpecName: "utilities") pod "d5ffefc7-c4df-42f9-81e0-94c6dd85837c" (UID: "d5ffefc7-c4df-42f9-81e0-94c6dd85837c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:49:32 crc kubenswrapper[4844]: I0126 13:49:32.856111 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5ffefc7-c4df-42f9-81e0-94c6dd85837c-kube-api-access-crzt5" (OuterVolumeSpecName: "kube-api-access-crzt5") pod "d5ffefc7-c4df-42f9-81e0-94c6dd85837c" (UID: "d5ffefc7-c4df-42f9-81e0-94c6dd85837c"). InnerVolumeSpecName "kube-api-access-crzt5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:49:32 crc kubenswrapper[4844]: I0126 13:49:32.926122 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5ffefc7-c4df-42f9-81e0-94c6dd85837c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d5ffefc7-c4df-42f9-81e0-94c6dd85837c" (UID: "d5ffefc7-c4df-42f9-81e0-94c6dd85837c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:49:32 crc kubenswrapper[4844]: I0126 13:49:32.952275 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-crzt5\" (UniqueName: \"kubernetes.io/projected/d5ffefc7-c4df-42f9-81e0-94c6dd85837c-kube-api-access-crzt5\") on node \"crc\" DevicePath \"\"" Jan 26 13:49:32 crc kubenswrapper[4844]: I0126 13:49:32.952311 4844 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5ffefc7-c4df-42f9-81e0-94c6dd85837c-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 13:49:32 crc kubenswrapper[4844]: I0126 13:49:32.952324 4844 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5ffefc7-c4df-42f9-81e0-94c6dd85837c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 13:49:33 crc kubenswrapper[4844]: I0126 13:49:33.261308 4844 generic.go:334] "Generic (PLEG): container finished" podID="d5ffefc7-c4df-42f9-81e0-94c6dd85837c" containerID="1034307c6759212b81b2db1a2e71c8ca34997940b413fee0f402925cfe9cc236" exitCode=0 Jan 26 13:49:33 crc kubenswrapper[4844]: I0126 13:49:33.261376 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6n5gp" event={"ID":"d5ffefc7-c4df-42f9-81e0-94c6dd85837c","Type":"ContainerDied","Data":"1034307c6759212b81b2db1a2e71c8ca34997940b413fee0f402925cfe9cc236"} Jan 26 13:49:33 crc kubenswrapper[4844]: I0126 13:49:33.261695 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6n5gp" event={"ID":"d5ffefc7-c4df-42f9-81e0-94c6dd85837c","Type":"ContainerDied","Data":"e3418bb990f330a3c4f247b4f5a93fb1afc963760056a85e69f3b3f44cf793d0"} Jan 26 13:49:33 crc kubenswrapper[4844]: I0126 13:49:33.261718 4844 scope.go:117] "RemoveContainer" containerID="1034307c6759212b81b2db1a2e71c8ca34997940b413fee0f402925cfe9cc236" Jan 26 13:49:33 crc kubenswrapper[4844]: I0126 13:49:33.261453 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6n5gp" Jan 26 13:49:33 crc kubenswrapper[4844]: I0126 13:49:33.264560 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"fcca7d88-f1d4-463b-a412-ecfee5f8724d","Type":"ContainerStarted","Data":"a18c881be74a4f7041eaedce9d4eb010413399c97739c96fcea2fd2088426197"} Jan 26 13:49:33 crc kubenswrapper[4844]: I0126 13:49:33.284130 4844 scope.go:117] "RemoveContainer" containerID="f38f86f8b632b682720321562d0a87204f803c9eb9ee780092159714825a78c7" Jan 26 13:49:33 crc kubenswrapper[4844]: I0126 13:49:33.305467 4844 scope.go:117] "RemoveContainer" containerID="53f9dd58ef567ccc190725f2d878f6222cd142ee0aac933892eef5c2f936f4a4" Jan 26 13:49:33 crc kubenswrapper[4844]: I0126 13:49:33.354199 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6n5gp"] Jan 26 13:49:33 crc kubenswrapper[4844]: I0126 13:49:33.366872 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-6n5gp"] Jan 26 13:49:33 crc kubenswrapper[4844]: I0126 13:49:33.389048 4844 scope.go:117] "RemoveContainer" containerID="1034307c6759212b81b2db1a2e71c8ca34997940b413fee0f402925cfe9cc236" Jan 26 13:49:33 crc kubenswrapper[4844]: E0126 13:49:33.389514 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1034307c6759212b81b2db1a2e71c8ca34997940b413fee0f402925cfe9cc236\": container with ID starting with 1034307c6759212b81b2db1a2e71c8ca34997940b413fee0f402925cfe9cc236 not found: ID does not exist" containerID="1034307c6759212b81b2db1a2e71c8ca34997940b413fee0f402925cfe9cc236" Jan 26 13:49:33 crc kubenswrapper[4844]: I0126 13:49:33.389547 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1034307c6759212b81b2db1a2e71c8ca34997940b413fee0f402925cfe9cc236"} err="failed to get container status \"1034307c6759212b81b2db1a2e71c8ca34997940b413fee0f402925cfe9cc236\": rpc error: code = NotFound desc = could not find container \"1034307c6759212b81b2db1a2e71c8ca34997940b413fee0f402925cfe9cc236\": container with ID starting with 1034307c6759212b81b2db1a2e71c8ca34997940b413fee0f402925cfe9cc236 not found: ID does not exist" Jan 26 13:49:33 crc kubenswrapper[4844]: I0126 13:49:33.389575 4844 scope.go:117] "RemoveContainer" containerID="f38f86f8b632b682720321562d0a87204f803c9eb9ee780092159714825a78c7" Jan 26 13:49:33 crc kubenswrapper[4844]: E0126 13:49:33.389950 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f38f86f8b632b682720321562d0a87204f803c9eb9ee780092159714825a78c7\": container with ID starting with f38f86f8b632b682720321562d0a87204f803c9eb9ee780092159714825a78c7 not found: ID does not exist" containerID="f38f86f8b632b682720321562d0a87204f803c9eb9ee780092159714825a78c7" Jan 26 13:49:33 crc kubenswrapper[4844]: I0126 13:49:33.390023 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f38f86f8b632b682720321562d0a87204f803c9eb9ee780092159714825a78c7"} err="failed to get container status \"f38f86f8b632b682720321562d0a87204f803c9eb9ee780092159714825a78c7\": rpc error: code = NotFound desc = could not find container \"f38f86f8b632b682720321562d0a87204f803c9eb9ee780092159714825a78c7\": container with ID starting with f38f86f8b632b682720321562d0a87204f803c9eb9ee780092159714825a78c7 not found: ID 
does not exist" Jan 26 13:49:33 crc kubenswrapper[4844]: I0126 13:49:33.390058 4844 scope.go:117] "RemoveContainer" containerID="53f9dd58ef567ccc190725f2d878f6222cd142ee0aac933892eef5c2f936f4a4" Jan 26 13:49:33 crc kubenswrapper[4844]: E0126 13:49:33.390472 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53f9dd58ef567ccc190725f2d878f6222cd142ee0aac933892eef5c2f936f4a4\": container with ID starting with 53f9dd58ef567ccc190725f2d878f6222cd142ee0aac933892eef5c2f936f4a4 not found: ID does not exist" containerID="53f9dd58ef567ccc190725f2d878f6222cd142ee0aac933892eef5c2f936f4a4" Jan 26 13:49:33 crc kubenswrapper[4844]: I0126 13:49:33.390498 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53f9dd58ef567ccc190725f2d878f6222cd142ee0aac933892eef5c2f936f4a4"} err="failed to get container status \"53f9dd58ef567ccc190725f2d878f6222cd142ee0aac933892eef5c2f936f4a4\": rpc error: code = NotFound desc = could not find container \"53f9dd58ef567ccc190725f2d878f6222cd142ee0aac933892eef5c2f936f4a4\": container with ID starting with 53f9dd58ef567ccc190725f2d878f6222cd142ee0aac933892eef5c2f936f4a4 not found: ID does not exist" Jan 26 13:49:35 crc kubenswrapper[4844]: I0126 13:49:35.326295 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5ffefc7-c4df-42f9-81e0-94c6dd85837c" path="/var/lib/kubelet/pods/d5ffefc7-c4df-42f9-81e0-94c6dd85837c/volumes" Jan 26 13:49:39 crc kubenswrapper[4844]: I0126 13:49:39.334064 4844 generic.go:334] "Generic (PLEG): container finished" podID="fcca7d88-f1d4-463b-a412-ecfee5f8724d" containerID="a18c881be74a4f7041eaedce9d4eb010413399c97739c96fcea2fd2088426197" exitCode=0 Jan 26 13:49:39 crc kubenswrapper[4844]: I0126 13:49:39.334158 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"fcca7d88-f1d4-463b-a412-ecfee5f8724d","Type":"ContainerDied","Data":"a18c881be74a4f7041eaedce9d4eb010413399c97739c96fcea2fd2088426197"} Jan 26 13:49:40 crc kubenswrapper[4844]: I0126 13:49:40.360053 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"fcca7d88-f1d4-463b-a412-ecfee5f8724d","Type":"ContainerStarted","Data":"2d246993255af503491e7457f4e57934870ed18843f2889614fdabfa3b3cd645"} Jan 26 13:49:44 crc kubenswrapper[4844]: I0126 13:49:44.422323 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"fcca7d88-f1d4-463b-a412-ecfee5f8724d","Type":"ContainerStarted","Data":"b1d9ee273d69f6872037bf203adca4e490e41cbaf8a457112bf3199c0651910a"} Jan 26 13:49:44 crc kubenswrapper[4844]: I0126 13:49:44.423087 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"fcca7d88-f1d4-463b-a412-ecfee5f8724d","Type":"ContainerStarted","Data":"c20b6dcefad050c37bf0419da76f763b28109d268fcbe6e3050e5fb03314afbf"} Jan 26 13:49:44 crc kubenswrapper[4844]: I0126 13:49:44.462966 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=18.462941723 podStartE2EDuration="18.462941723s" podCreationTimestamp="2026-01-26 13:49:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 13:49:44.46119988 +0000 UTC m=+3961.394567542" watchObservedRunningTime="2026-01-26 13:49:44.462941723 
+0000 UTC m=+3961.396309345" Jan 26 13:49:46 crc kubenswrapper[4844]: I0126 13:49:46.576919 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Jan 26 13:49:56 crc kubenswrapper[4844]: I0126 13:49:56.576910 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Jan 26 13:49:56 crc kubenswrapper[4844]: I0126 13:49:56.583310 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Jan 26 13:49:57 crc kubenswrapper[4844]: I0126 13:49:57.563462 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Jan 26 13:50:03 crc kubenswrapper[4844]: I0126 13:50:03.757011 4844 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="f80a52fc-df6a-4218-913e-2ee03174e341" containerName="galera" probeResult="failure" output="command timed out" Jan 26 13:50:03 crc kubenswrapper[4844]: I0126 13:50:03.758003 4844 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="f80a52fc-df6a-4218-913e-2ee03174e341" containerName="galera" probeResult="failure" output="command timed out" Jan 26 13:50:19 crc kubenswrapper[4844]: I0126 13:50:19.974393 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Jan 26 13:50:19 crc kubenswrapper[4844]: E0126 13:50:19.976014 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5ffefc7-c4df-42f9-81e0-94c6dd85837c" containerName="extract-utilities" Jan 26 13:50:19 crc kubenswrapper[4844]: I0126 13:50:19.976050 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5ffefc7-c4df-42f9-81e0-94c6dd85837c" containerName="extract-utilities" Jan 26 13:50:19 crc kubenswrapper[4844]: E0126 13:50:19.976118 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5ffefc7-c4df-42f9-81e0-94c6dd85837c" containerName="registry-server" Jan 26 13:50:19 crc kubenswrapper[4844]: I0126 13:50:19.976139 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5ffefc7-c4df-42f9-81e0-94c6dd85837c" containerName="registry-server" Jan 26 13:50:19 crc kubenswrapper[4844]: E0126 13:50:19.976194 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5ffefc7-c4df-42f9-81e0-94c6dd85837c" containerName="extract-content" Jan 26 13:50:19 crc kubenswrapper[4844]: I0126 13:50:19.976214 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5ffefc7-c4df-42f9-81e0-94c6dd85837c" containerName="extract-content" Jan 26 13:50:19 crc kubenswrapper[4844]: I0126 13:50:19.976708 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5ffefc7-c4df-42f9-81e0-94c6dd85837c" containerName="registry-server" Jan 26 13:50:19 crc kubenswrapper[4844]: I0126 13:50:19.978364 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 26 13:50:19 crc kubenswrapper[4844]: I0126 13:50:19.982767 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Jan 26 13:50:19 crc kubenswrapper[4844]: I0126 13:50:19.982958 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Jan 26 13:50:19 crc kubenswrapper[4844]: I0126 13:50:19.982988 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-j2592" Jan 26 13:50:19 crc kubenswrapper[4844]: I0126 13:50:19.983170 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 26 13:50:19 crc kubenswrapper[4844]: I0126 13:50:19.990709 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 26 13:50:20 crc kubenswrapper[4844]: I0126 13:50:20.073865 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/f617457c-8f1e-4508-926e-bb6b77ea7444-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"f617457c-8f1e-4508-926e-bb6b77ea7444\") " pod="openstack/tempest-tests-tempest" Jan 26 13:50:20 crc kubenswrapper[4844]: I0126 13:50:20.073966 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f617457c-8f1e-4508-926e-bb6b77ea7444-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"f617457c-8f1e-4508-926e-bb6b77ea7444\") " pod="openstack/tempest-tests-tempest" Jan 26 13:50:20 crc kubenswrapper[4844]: I0126 13:50:20.074022 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f617457c-8f1e-4508-926e-bb6b77ea7444-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"f617457c-8f1e-4508-926e-bb6b77ea7444\") " pod="openstack/tempest-tests-tempest" Jan 26 13:50:20 crc kubenswrapper[4844]: I0126 13:50:20.074155 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/f617457c-8f1e-4508-926e-bb6b77ea7444-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"f617457c-8f1e-4508-926e-bb6b77ea7444\") " pod="openstack/tempest-tests-tempest" Jan 26 13:50:20 crc kubenswrapper[4844]: I0126 13:50:20.074220 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trrvz\" (UniqueName: \"kubernetes.io/projected/f617457c-8f1e-4508-926e-bb6b77ea7444-kube-api-access-trrvz\") pod \"tempest-tests-tempest\" (UID: \"f617457c-8f1e-4508-926e-bb6b77ea7444\") " pod="openstack/tempest-tests-tempest" Jan 26 13:50:20 crc kubenswrapper[4844]: I0126 13:50:20.074330 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"tempest-tests-tempest\" (UID: \"f617457c-8f1e-4508-926e-bb6b77ea7444\") " pod="openstack/tempest-tests-tempest" Jan 26 13:50:20 crc kubenswrapper[4844]: I0126 13:50:20.074374 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/f617457c-8f1e-4508-926e-bb6b77ea7444-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"f617457c-8f1e-4508-926e-bb6b77ea7444\") " pod="openstack/tempest-tests-tempest" Jan 26 13:50:20 crc kubenswrapper[4844]: I0126 13:50:20.074690 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/f617457c-8f1e-4508-926e-bb6b77ea7444-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"f617457c-8f1e-4508-926e-bb6b77ea7444\") " pod="openstack/tempest-tests-tempest" Jan 26 13:50:20 crc kubenswrapper[4844]: I0126 13:50:20.074765 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f617457c-8f1e-4508-926e-bb6b77ea7444-config-data\") pod \"tempest-tests-tempest\" (UID: \"f617457c-8f1e-4508-926e-bb6b77ea7444\") " pod="openstack/tempest-tests-tempest" Jan 26 13:50:20 crc kubenswrapper[4844]: I0126 13:50:20.177151 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/f617457c-8f1e-4508-926e-bb6b77ea7444-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"f617457c-8f1e-4508-926e-bb6b77ea7444\") " pod="openstack/tempest-tests-tempest" Jan 26 13:50:20 crc kubenswrapper[4844]: I0126 13:50:20.177219 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trrvz\" (UniqueName: \"kubernetes.io/projected/f617457c-8f1e-4508-926e-bb6b77ea7444-kube-api-access-trrvz\") pod \"tempest-tests-tempest\" (UID: \"f617457c-8f1e-4508-926e-bb6b77ea7444\") " pod="openstack/tempest-tests-tempest" Jan 26 13:50:20 crc kubenswrapper[4844]: I0126 13:50:20.177291 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"tempest-tests-tempest\" (UID: \"f617457c-8f1e-4508-926e-bb6b77ea7444\") " pod="openstack/tempest-tests-tempest" Jan 26 13:50:20 crc kubenswrapper[4844]: I0126 13:50:20.177317 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/f617457c-8f1e-4508-926e-bb6b77ea7444-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"f617457c-8f1e-4508-926e-bb6b77ea7444\") " pod="openstack/tempest-tests-tempest" Jan 26 13:50:20 crc kubenswrapper[4844]: I0126 13:50:20.177396 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/f617457c-8f1e-4508-926e-bb6b77ea7444-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"f617457c-8f1e-4508-926e-bb6b77ea7444\") " pod="openstack/tempest-tests-tempest" Jan 26 13:50:20 crc kubenswrapper[4844]: I0126 13:50:20.177432 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f617457c-8f1e-4508-926e-bb6b77ea7444-config-data\") pod \"tempest-tests-tempest\" (UID: \"f617457c-8f1e-4508-926e-bb6b77ea7444\") " pod="openstack/tempest-tests-tempest" Jan 26 13:50:20 crc kubenswrapper[4844]: I0126 13:50:20.177492 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: 
\"kubernetes.io/empty-dir/f617457c-8f1e-4508-926e-bb6b77ea7444-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"f617457c-8f1e-4508-926e-bb6b77ea7444\") " pod="openstack/tempest-tests-tempest" Jan 26 13:50:20 crc kubenswrapper[4844]: I0126 13:50:20.177528 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f617457c-8f1e-4508-926e-bb6b77ea7444-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"f617457c-8f1e-4508-926e-bb6b77ea7444\") " pod="openstack/tempest-tests-tempest" Jan 26 13:50:20 crc kubenswrapper[4844]: I0126 13:50:20.177556 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f617457c-8f1e-4508-926e-bb6b77ea7444-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"f617457c-8f1e-4508-926e-bb6b77ea7444\") " pod="openstack/tempest-tests-tempest" Jan 26 13:50:20 crc kubenswrapper[4844]: I0126 13:50:20.177972 4844 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"tempest-tests-tempest\" (UID: \"f617457c-8f1e-4508-926e-bb6b77ea7444\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/tempest-tests-tempest" Jan 26 13:50:20 crc kubenswrapper[4844]: I0126 13:50:20.178685 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/f617457c-8f1e-4508-926e-bb6b77ea7444-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"f617457c-8f1e-4508-926e-bb6b77ea7444\") " pod="openstack/tempest-tests-tempest" Jan 26 13:50:20 crc kubenswrapper[4844]: I0126 13:50:20.179434 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/f617457c-8f1e-4508-926e-bb6b77ea7444-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"f617457c-8f1e-4508-926e-bb6b77ea7444\") " pod="openstack/tempest-tests-tempest" Jan 26 13:50:20 crc kubenswrapper[4844]: I0126 13:50:20.179803 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/f617457c-8f1e-4508-926e-bb6b77ea7444-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"f617457c-8f1e-4508-926e-bb6b77ea7444\") " pod="openstack/tempest-tests-tempest" Jan 26 13:50:20 crc kubenswrapper[4844]: I0126 13:50:20.180055 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f617457c-8f1e-4508-926e-bb6b77ea7444-config-data\") pod \"tempest-tests-tempest\" (UID: \"f617457c-8f1e-4508-926e-bb6b77ea7444\") " pod="openstack/tempest-tests-tempest" Jan 26 13:50:20 crc kubenswrapper[4844]: I0126 13:50:20.187299 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f617457c-8f1e-4508-926e-bb6b77ea7444-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"f617457c-8f1e-4508-926e-bb6b77ea7444\") " pod="openstack/tempest-tests-tempest" Jan 26 13:50:20 crc kubenswrapper[4844]: I0126 13:50:20.188043 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/f617457c-8f1e-4508-926e-bb6b77ea7444-ca-certs\") pod \"tempest-tests-tempest\" (UID: 
\"f617457c-8f1e-4508-926e-bb6b77ea7444\") " pod="openstack/tempest-tests-tempest" Jan 26 13:50:20 crc kubenswrapper[4844]: I0126 13:50:20.190520 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f617457c-8f1e-4508-926e-bb6b77ea7444-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"f617457c-8f1e-4508-926e-bb6b77ea7444\") " pod="openstack/tempest-tests-tempest" Jan 26 13:50:20 crc kubenswrapper[4844]: I0126 13:50:20.203242 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trrvz\" (UniqueName: \"kubernetes.io/projected/f617457c-8f1e-4508-926e-bb6b77ea7444-kube-api-access-trrvz\") pod \"tempest-tests-tempest\" (UID: \"f617457c-8f1e-4508-926e-bb6b77ea7444\") " pod="openstack/tempest-tests-tempest" Jan 26 13:50:20 crc kubenswrapper[4844]: I0126 13:50:20.240943 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"tempest-tests-tempest\" (UID: \"f617457c-8f1e-4508-926e-bb6b77ea7444\") " pod="openstack/tempest-tests-tempest" Jan 26 13:50:20 crc kubenswrapper[4844]: I0126 13:50:20.309687 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 26 13:50:20 crc kubenswrapper[4844]: I0126 13:50:20.875542 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 26 13:50:20 crc kubenswrapper[4844]: W0126 13:50:20.881631 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf617457c_8f1e_4508_926e_bb6b77ea7444.slice/crio-4cbc8cbd3237ba23738eb4e3e827c47fd792e471d4c4100dceada17ef6fcdb90 WatchSource:0}: Error finding container 4cbc8cbd3237ba23738eb4e3e827c47fd792e471d4c4100dceada17ef6fcdb90: Status 404 returned error can't find the container with id 4cbc8cbd3237ba23738eb4e3e827c47fd792e471d4c4100dceada17ef6fcdb90 Jan 26 13:50:21 crc kubenswrapper[4844]: I0126 13:50:21.812449 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"f617457c-8f1e-4508-926e-bb6b77ea7444","Type":"ContainerStarted","Data":"4cbc8cbd3237ba23738eb4e3e827c47fd792e471d4c4100dceada17ef6fcdb90"} Jan 26 13:50:32 crc kubenswrapper[4844]: I0126 13:50:32.761961 4844 patch_prober.go:28] interesting pod/router-default-5444994796-9pkgp container/router namespace/openshift-ingress: Liveness probe status=failure output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 13:50:32 crc kubenswrapper[4844]: I0126 13:50:32.761972 4844 patch_prober.go:28] interesting pod/router-default-5444994796-9pkgp container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 13:50:32 crc kubenswrapper[4844]: I0126 13:50:32.763796 4844 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-ingress/router-default-5444994796-9pkgp" podUID="46a01ba7-7357-471a-ae59-95361f2ce7ba" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 13:50:32 crc kubenswrapper[4844]: I0126 13:50:32.763873 4844 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-ingress/router-default-5444994796-9pkgp" podUID="46a01ba7-7357-471a-ae59-95361f2ce7ba" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 13:50:35 crc kubenswrapper[4844]: I0126 13:50:35.970476 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"f617457c-8f1e-4508-926e-bb6b77ea7444","Type":"ContainerStarted","Data":"16c2280421c445b588fa8215f65a400cc022d8f73da61eb52339462ea12392b6"} Jan 26 13:50:36 crc kubenswrapper[4844]: I0126 13:50:36.019773 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=4.808301214 podStartE2EDuration="18.019742184s" podCreationTimestamp="2026-01-26 13:50:18 +0000 UTC" firstStartedPulling="2026-01-26 13:50:20.883815712 +0000 UTC m=+3997.817183324" lastFinishedPulling="2026-01-26 13:50:34.095256682 +0000 UTC m=+4011.028624294" observedRunningTime="2026-01-26 13:50:35.996639373 +0000 UTC m=+4012.930007015" watchObservedRunningTime="2026-01-26 13:50:36.019742184 +0000 UTC m=+4012.953109836" Jan 26 13:51:36 crc kubenswrapper[4844]: I0126 13:51:36.364425 4844 patch_prober.go:28] interesting pod/machine-config-daemon-j7r9j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 13:51:36 crc kubenswrapper[4844]: I0126 13:51:36.364988 4844 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 13:52:06 crc kubenswrapper[4844]: I0126 13:52:06.364715 4844 patch_prober.go:28] interesting pod/machine-config-daemon-j7r9j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 13:52:06 crc kubenswrapper[4844]: I0126 13:52:06.365316 4844 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 13:52:36 crc kubenswrapper[4844]: I0126 13:52:36.365180 4844 patch_prober.go:28] interesting pod/machine-config-daemon-j7r9j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 13:52:36 crc kubenswrapper[4844]: I0126 13:52:36.365825 4844 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 13:52:36 crc kubenswrapper[4844]: I0126 13:52:36.365885 4844 kubelet.go:2542] "SyncLoop (probe)" 
probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" Jan 26 13:52:36 crc kubenswrapper[4844]: I0126 13:52:36.366918 4844 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7e6f2d77087958f205aeeab162bc40d9fca5be66573603444ac53e2983274b58"} pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 13:52:36 crc kubenswrapper[4844]: I0126 13:52:36.366993 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" containerName="machine-config-daemon" containerID="cri-o://7e6f2d77087958f205aeeab162bc40d9fca5be66573603444ac53e2983274b58" gracePeriod=600 Jan 26 13:52:36 crc kubenswrapper[4844]: I0126 13:52:36.664193 4844 generic.go:334] "Generic (PLEG): container finished" podID="e3602fc7-397b-4d73-ab0c-45acc047397b" containerID="7e6f2d77087958f205aeeab162bc40d9fca5be66573603444ac53e2983274b58" exitCode=0 Jan 26 13:52:36 crc kubenswrapper[4844]: I0126 13:52:36.664453 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" event={"ID":"e3602fc7-397b-4d73-ab0c-45acc047397b","Type":"ContainerDied","Data":"7e6f2d77087958f205aeeab162bc40d9fca5be66573603444ac53e2983274b58"} Jan 26 13:52:36 crc kubenswrapper[4844]: I0126 13:52:36.664746 4844 scope.go:117] "RemoveContainer" containerID="0424fae40a4aceb05ea41676053577c321c6723fd5d4fe32f9b0937d2a5632b0" Jan 26 13:52:37 crc kubenswrapper[4844]: I0126 13:52:37.684133 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" event={"ID":"e3602fc7-397b-4d73-ab0c-45acc047397b","Type":"ContainerStarted","Data":"184dc4077a3fed0dffa1c3bc6a50ea72ba5630ebfbd99d1aa847592dcc5aeb7b"} Jan 26 13:53:34 crc kubenswrapper[4844]: I0126 13:53:34.540581 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-g7kcg"] Jan 26 13:53:34 crc kubenswrapper[4844]: I0126 13:53:34.548797 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g7kcg" Jan 26 13:53:34 crc kubenswrapper[4844]: I0126 13:53:34.572290 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-g7kcg"] Jan 26 13:53:34 crc kubenswrapper[4844]: I0126 13:53:34.665031 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-686jc\" (UniqueName: \"kubernetes.io/projected/2b5bff50-8d05-494b-9fd7-fce2978f5c98-kube-api-access-686jc\") pod \"redhat-marketplace-g7kcg\" (UID: \"2b5bff50-8d05-494b-9fd7-fce2978f5c98\") " pod="openshift-marketplace/redhat-marketplace-g7kcg" Jan 26 13:53:34 crc kubenswrapper[4844]: I0126 13:53:34.665188 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b5bff50-8d05-494b-9fd7-fce2978f5c98-catalog-content\") pod \"redhat-marketplace-g7kcg\" (UID: \"2b5bff50-8d05-494b-9fd7-fce2978f5c98\") " pod="openshift-marketplace/redhat-marketplace-g7kcg" Jan 26 13:53:34 crc kubenswrapper[4844]: I0126 13:53:34.665234 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b5bff50-8d05-494b-9fd7-fce2978f5c98-utilities\") pod \"redhat-marketplace-g7kcg\" (UID: \"2b5bff50-8d05-494b-9fd7-fce2978f5c98\") " pod="openshift-marketplace/redhat-marketplace-g7kcg" Jan 26 13:53:34 crc kubenswrapper[4844]: I0126 13:53:34.766887 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-686jc\" (UniqueName: \"kubernetes.io/projected/2b5bff50-8d05-494b-9fd7-fce2978f5c98-kube-api-access-686jc\") pod \"redhat-marketplace-g7kcg\" (UID: \"2b5bff50-8d05-494b-9fd7-fce2978f5c98\") " pod="openshift-marketplace/redhat-marketplace-g7kcg" Jan 26 13:53:34 crc kubenswrapper[4844]: I0126 13:53:34.767050 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b5bff50-8d05-494b-9fd7-fce2978f5c98-catalog-content\") pod \"redhat-marketplace-g7kcg\" (UID: \"2b5bff50-8d05-494b-9fd7-fce2978f5c98\") " pod="openshift-marketplace/redhat-marketplace-g7kcg" Jan 26 13:53:34 crc kubenswrapper[4844]: I0126 13:53:34.767083 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b5bff50-8d05-494b-9fd7-fce2978f5c98-utilities\") pod \"redhat-marketplace-g7kcg\" (UID: \"2b5bff50-8d05-494b-9fd7-fce2978f5c98\") " pod="openshift-marketplace/redhat-marketplace-g7kcg" Jan 26 13:53:34 crc kubenswrapper[4844]: I0126 13:53:34.767611 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b5bff50-8d05-494b-9fd7-fce2978f5c98-utilities\") pod \"redhat-marketplace-g7kcg\" (UID: \"2b5bff50-8d05-494b-9fd7-fce2978f5c98\") " pod="openshift-marketplace/redhat-marketplace-g7kcg" Jan 26 13:53:34 crc kubenswrapper[4844]: I0126 13:53:34.768138 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b5bff50-8d05-494b-9fd7-fce2978f5c98-catalog-content\") pod \"redhat-marketplace-g7kcg\" (UID: \"2b5bff50-8d05-494b-9fd7-fce2978f5c98\") " pod="openshift-marketplace/redhat-marketplace-g7kcg" Jan 26 13:53:34 crc kubenswrapper[4844]: I0126 13:53:34.788014 4844 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-686jc\" (UniqueName: \"kubernetes.io/projected/2b5bff50-8d05-494b-9fd7-fce2978f5c98-kube-api-access-686jc\") pod \"redhat-marketplace-g7kcg\" (UID: \"2b5bff50-8d05-494b-9fd7-fce2978f5c98\") " pod="openshift-marketplace/redhat-marketplace-g7kcg" Jan 26 13:53:34 crc kubenswrapper[4844]: I0126 13:53:34.896715 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g7kcg" Jan 26 13:53:35 crc kubenswrapper[4844]: I0126 13:53:35.425824 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-g7kcg"] Jan 26 13:53:36 crc kubenswrapper[4844]: I0126 13:53:36.360366 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g7kcg" event={"ID":"2b5bff50-8d05-494b-9fd7-fce2978f5c98","Type":"ContainerStarted","Data":"8b54056b1a5a087a982d7342d54e000e5e981270ae95dfe7d315c7cd41d906c6"} Jan 26 13:53:36 crc kubenswrapper[4844]: I0126 13:53:36.360858 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g7kcg" event={"ID":"2b5bff50-8d05-494b-9fd7-fce2978f5c98","Type":"ContainerStarted","Data":"cb1c4baaff0d706f8312aa291013b700fcb9f4321db63ebe539200f49733ce33"} Jan 26 13:53:37 crc kubenswrapper[4844]: I0126 13:53:37.380732 4844 generic.go:334] "Generic (PLEG): container finished" podID="2b5bff50-8d05-494b-9fd7-fce2978f5c98" containerID="8b54056b1a5a087a982d7342d54e000e5e981270ae95dfe7d315c7cd41d906c6" exitCode=0 Jan 26 13:53:37 crc kubenswrapper[4844]: I0126 13:53:37.381020 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g7kcg" event={"ID":"2b5bff50-8d05-494b-9fd7-fce2978f5c98","Type":"ContainerDied","Data":"8b54056b1a5a087a982d7342d54e000e5e981270ae95dfe7d315c7cd41d906c6"} Jan 26 13:53:37 crc kubenswrapper[4844]: I0126 13:53:37.386841 4844 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 13:53:38 crc kubenswrapper[4844]: I0126 13:53:38.395615 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g7kcg" event={"ID":"2b5bff50-8d05-494b-9fd7-fce2978f5c98","Type":"ContainerStarted","Data":"36c25f32cdbf65b7fdac7c7784925e3b182137bd84fda610430fa68b417c4ad7"} Jan 26 13:53:39 crc kubenswrapper[4844]: I0126 13:53:39.407972 4844 generic.go:334] "Generic (PLEG): container finished" podID="2b5bff50-8d05-494b-9fd7-fce2978f5c98" containerID="36c25f32cdbf65b7fdac7c7784925e3b182137bd84fda610430fa68b417c4ad7" exitCode=0 Jan 26 13:53:39 crc kubenswrapper[4844]: I0126 13:53:39.408032 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g7kcg" event={"ID":"2b5bff50-8d05-494b-9fd7-fce2978f5c98","Type":"ContainerDied","Data":"36c25f32cdbf65b7fdac7c7784925e3b182137bd84fda610430fa68b417c4ad7"} Jan 26 13:53:40 crc kubenswrapper[4844]: I0126 13:53:40.417633 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g7kcg" event={"ID":"2b5bff50-8d05-494b-9fd7-fce2978f5c98","Type":"ContainerStarted","Data":"3b1d1e0c3df36dc3e0c4c2f2545cfa8a082d826b1686258c22dffe97ed929c5a"} Jan 26 13:53:40 crc kubenswrapper[4844]: I0126 13:53:40.446116 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-g7kcg" podStartSLOduration=3.770300217 podStartE2EDuration="6.44609668s" podCreationTimestamp="2026-01-26 13:53:34 
+0000 UTC" firstStartedPulling="2026-01-26 13:53:37.386211733 +0000 UTC m=+4194.319579385" lastFinishedPulling="2026-01-26 13:53:40.062008236 +0000 UTC m=+4196.995375848" observedRunningTime="2026-01-26 13:53:40.440686989 +0000 UTC m=+4197.374054601" watchObservedRunningTime="2026-01-26 13:53:40.44609668 +0000 UTC m=+4197.379464292" Jan 26 13:53:44 crc kubenswrapper[4844]: I0126 13:53:44.896916 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-g7kcg" Jan 26 13:53:44 crc kubenswrapper[4844]: I0126 13:53:44.897681 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-g7kcg" Jan 26 13:53:44 crc kubenswrapper[4844]: I0126 13:53:44.994395 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-g7kcg" Jan 26 13:53:45 crc kubenswrapper[4844]: I0126 13:53:45.997055 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-g7kcg" Jan 26 13:53:46 crc kubenswrapper[4844]: I0126 13:53:46.055487 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-g7kcg"] Jan 26 13:53:47 crc kubenswrapper[4844]: I0126 13:53:47.494502 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-g7kcg" podUID="2b5bff50-8d05-494b-9fd7-fce2978f5c98" containerName="registry-server" containerID="cri-o://3b1d1e0c3df36dc3e0c4c2f2545cfa8a082d826b1686258c22dffe97ed929c5a" gracePeriod=2 Jan 26 13:53:48 crc kubenswrapper[4844]: I0126 13:53:48.509797 4844 generic.go:334] "Generic (PLEG): container finished" podID="2b5bff50-8d05-494b-9fd7-fce2978f5c98" containerID="3b1d1e0c3df36dc3e0c4c2f2545cfa8a082d826b1686258c22dffe97ed929c5a" exitCode=0 Jan 26 13:53:48 crc kubenswrapper[4844]: I0126 13:53:48.509863 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g7kcg" event={"ID":"2b5bff50-8d05-494b-9fd7-fce2978f5c98","Type":"ContainerDied","Data":"3b1d1e0c3df36dc3e0c4c2f2545cfa8a082d826b1686258c22dffe97ed929c5a"} Jan 26 13:53:49 crc kubenswrapper[4844]: I0126 13:53:49.529373 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g7kcg" event={"ID":"2b5bff50-8d05-494b-9fd7-fce2978f5c98","Type":"ContainerDied","Data":"cb1c4baaff0d706f8312aa291013b700fcb9f4321db63ebe539200f49733ce33"} Jan 26 13:53:49 crc kubenswrapper[4844]: I0126 13:53:49.530823 4844 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb1c4baaff0d706f8312aa291013b700fcb9f4321db63ebe539200f49733ce33" Jan 26 13:53:49 crc kubenswrapper[4844]: I0126 13:53:49.532488 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g7kcg" Jan 26 13:53:49 crc kubenswrapper[4844]: I0126 13:53:49.619091 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b5bff50-8d05-494b-9fd7-fce2978f5c98-utilities\") pod \"2b5bff50-8d05-494b-9fd7-fce2978f5c98\" (UID: \"2b5bff50-8d05-494b-9fd7-fce2978f5c98\") " Jan 26 13:53:49 crc kubenswrapper[4844]: I0126 13:53:49.619221 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-686jc\" (UniqueName: \"kubernetes.io/projected/2b5bff50-8d05-494b-9fd7-fce2978f5c98-kube-api-access-686jc\") pod \"2b5bff50-8d05-494b-9fd7-fce2978f5c98\" (UID: \"2b5bff50-8d05-494b-9fd7-fce2978f5c98\") " Jan 26 13:53:49 crc kubenswrapper[4844]: I0126 13:53:49.619322 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b5bff50-8d05-494b-9fd7-fce2978f5c98-catalog-content\") pod \"2b5bff50-8d05-494b-9fd7-fce2978f5c98\" (UID: \"2b5bff50-8d05-494b-9fd7-fce2978f5c98\") " Jan 26 13:53:49 crc kubenswrapper[4844]: I0126 13:53:49.620499 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b5bff50-8d05-494b-9fd7-fce2978f5c98-utilities" (OuterVolumeSpecName: "utilities") pod "2b5bff50-8d05-494b-9fd7-fce2978f5c98" (UID: "2b5bff50-8d05-494b-9fd7-fce2978f5c98"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:53:49 crc kubenswrapper[4844]: I0126 13:53:49.626011 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b5bff50-8d05-494b-9fd7-fce2978f5c98-kube-api-access-686jc" (OuterVolumeSpecName: "kube-api-access-686jc") pod "2b5bff50-8d05-494b-9fd7-fce2978f5c98" (UID: "2b5bff50-8d05-494b-9fd7-fce2978f5c98"). InnerVolumeSpecName "kube-api-access-686jc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:53:49 crc kubenswrapper[4844]: I0126 13:53:49.645566 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b5bff50-8d05-494b-9fd7-fce2978f5c98-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2b5bff50-8d05-494b-9fd7-fce2978f5c98" (UID: "2b5bff50-8d05-494b-9fd7-fce2978f5c98"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:53:49 crc kubenswrapper[4844]: I0126 13:53:49.739633 4844 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b5bff50-8d05-494b-9fd7-fce2978f5c98-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 13:53:49 crc kubenswrapper[4844]: I0126 13:53:49.739674 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-686jc\" (UniqueName: \"kubernetes.io/projected/2b5bff50-8d05-494b-9fd7-fce2978f5c98-kube-api-access-686jc\") on node \"crc\" DevicePath \"\"" Jan 26 13:53:49 crc kubenswrapper[4844]: I0126 13:53:49.739684 4844 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b5bff50-8d05-494b-9fd7-fce2978f5c98-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 13:53:50 crc kubenswrapper[4844]: I0126 13:53:50.537895 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g7kcg" Jan 26 13:53:50 crc kubenswrapper[4844]: I0126 13:53:50.574529 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-g7kcg"] Jan 26 13:53:50 crc kubenswrapper[4844]: I0126 13:53:50.586384 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-g7kcg"] Jan 26 13:53:51 crc kubenswrapper[4844]: I0126 13:53:51.334868 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b5bff50-8d05-494b-9fd7-fce2978f5c98" path="/var/lib/kubelet/pods/2b5bff50-8d05-494b-9fd7-fce2978f5c98/volumes" Jan 26 13:54:36 crc kubenswrapper[4844]: I0126 13:54:36.364395 4844 patch_prober.go:28] interesting pod/machine-config-daemon-j7r9j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 13:54:36 crc kubenswrapper[4844]: I0126 13:54:36.365001 4844 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 13:54:48 crc kubenswrapper[4844]: I0126 13:54:48.020825 4844 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-controller-manager-59ccf49fff-tmmnh" podUID="03a2059f-ed6b-49f5-9476-bf21d424567f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.52:8080/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 13:55:06 crc kubenswrapper[4844]: I0126 13:55:06.364364 4844 patch_prober.go:28] interesting pod/machine-config-daemon-j7r9j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 13:55:06 crc kubenswrapper[4844]: I0126 13:55:06.365051 4844 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 13:55:36 crc kubenswrapper[4844]: I0126 13:55:36.364782 4844 patch_prober.go:28] interesting pod/machine-config-daemon-j7r9j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 13:55:36 crc kubenswrapper[4844]: I0126 13:55:36.365319 4844 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 13:55:36 crc kubenswrapper[4844]: I0126 13:55:36.365361 4844 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" Jan 26 13:55:36 crc 
kubenswrapper[4844]: I0126 13:55:36.366165 4844 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"184dc4077a3fed0dffa1c3bc6a50ea72ba5630ebfbd99d1aa847592dcc5aeb7b"} pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 13:55:36 crc kubenswrapper[4844]: I0126 13:55:36.366219 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" containerName="machine-config-daemon" containerID="cri-o://184dc4077a3fed0dffa1c3bc6a50ea72ba5630ebfbd99d1aa847592dcc5aeb7b" gracePeriod=600 Jan 26 13:55:36 crc kubenswrapper[4844]: E0126 13:55:36.491904 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 13:55:36 crc kubenswrapper[4844]: I0126 13:55:36.701475 4844 generic.go:334] "Generic (PLEG): container finished" podID="e3602fc7-397b-4d73-ab0c-45acc047397b" containerID="184dc4077a3fed0dffa1c3bc6a50ea72ba5630ebfbd99d1aa847592dcc5aeb7b" exitCode=0 Jan 26 13:55:36 crc kubenswrapper[4844]: I0126 13:55:36.701529 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" event={"ID":"e3602fc7-397b-4d73-ab0c-45acc047397b","Type":"ContainerDied","Data":"184dc4077a3fed0dffa1c3bc6a50ea72ba5630ebfbd99d1aa847592dcc5aeb7b"} Jan 26 13:55:36 crc kubenswrapper[4844]: I0126 13:55:36.701570 4844 scope.go:117] "RemoveContainer" containerID="7e6f2d77087958f205aeeab162bc40d9fca5be66573603444ac53e2983274b58" Jan 26 13:55:36 crc kubenswrapper[4844]: I0126 13:55:36.702452 4844 scope.go:117] "RemoveContainer" containerID="184dc4077a3fed0dffa1c3bc6a50ea72ba5630ebfbd99d1aa847592dcc5aeb7b" Jan 26 13:55:36 crc kubenswrapper[4844]: E0126 13:55:36.702786 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 13:55:51 crc kubenswrapper[4844]: I0126 13:55:51.313347 4844 scope.go:117] "RemoveContainer" containerID="184dc4077a3fed0dffa1c3bc6a50ea72ba5630ebfbd99d1aa847592dcc5aeb7b" Jan 26 13:55:51 crc kubenswrapper[4844]: E0126 13:55:51.314247 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 13:56:04 crc kubenswrapper[4844]: I0126 13:56:04.315843 4844 scope.go:117] "RemoveContainer" 
containerID="184dc4077a3fed0dffa1c3bc6a50ea72ba5630ebfbd99d1aa847592dcc5aeb7b" Jan 26 13:56:04 crc kubenswrapper[4844]: E0126 13:56:04.317146 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 13:56:14 crc kubenswrapper[4844]: I0126 13:56:14.925823 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-65lql"] Jan 26 13:56:14 crc kubenswrapper[4844]: E0126 13:56:14.927655 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b5bff50-8d05-494b-9fd7-fce2978f5c98" containerName="extract-content" Jan 26 13:56:14 crc kubenswrapper[4844]: I0126 13:56:14.927738 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b5bff50-8d05-494b-9fd7-fce2978f5c98" containerName="extract-content" Jan 26 13:56:14 crc kubenswrapper[4844]: E0126 13:56:14.927814 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b5bff50-8d05-494b-9fd7-fce2978f5c98" containerName="extract-utilities" Jan 26 13:56:14 crc kubenswrapper[4844]: I0126 13:56:14.927867 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b5bff50-8d05-494b-9fd7-fce2978f5c98" containerName="extract-utilities" Jan 26 13:56:14 crc kubenswrapper[4844]: E0126 13:56:14.927922 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b5bff50-8d05-494b-9fd7-fce2978f5c98" containerName="registry-server" Jan 26 13:56:14 crc kubenswrapper[4844]: I0126 13:56:14.928418 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b5bff50-8d05-494b-9fd7-fce2978f5c98" containerName="registry-server" Jan 26 13:56:14 crc kubenswrapper[4844]: I0126 13:56:14.928731 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b5bff50-8d05-494b-9fd7-fce2978f5c98" containerName="registry-server" Jan 26 13:56:14 crc kubenswrapper[4844]: I0126 13:56:14.930928 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-65lql" Jan 26 13:56:14 crc kubenswrapper[4844]: I0126 13:56:14.962689 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-65lql"] Jan 26 13:56:15 crc kubenswrapper[4844]: I0126 13:56:15.045451 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f32e298-5b32-4eda-9772-6466ef5f3596-utilities\") pod \"redhat-operators-65lql\" (UID: \"0f32e298-5b32-4eda-9772-6466ef5f3596\") " pod="openshift-marketplace/redhat-operators-65lql" Jan 26 13:56:15 crc kubenswrapper[4844]: I0126 13:56:15.045588 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8sstl\" (UniqueName: \"kubernetes.io/projected/0f32e298-5b32-4eda-9772-6466ef5f3596-kube-api-access-8sstl\") pod \"redhat-operators-65lql\" (UID: \"0f32e298-5b32-4eda-9772-6466ef5f3596\") " pod="openshift-marketplace/redhat-operators-65lql" Jan 26 13:56:15 crc kubenswrapper[4844]: I0126 13:56:15.045789 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f32e298-5b32-4eda-9772-6466ef5f3596-catalog-content\") pod \"redhat-operators-65lql\" (UID: \"0f32e298-5b32-4eda-9772-6466ef5f3596\") " pod="openshift-marketplace/redhat-operators-65lql" Jan 26 13:56:15 crc kubenswrapper[4844]: I0126 13:56:15.147674 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f32e298-5b32-4eda-9772-6466ef5f3596-utilities\") pod \"redhat-operators-65lql\" (UID: \"0f32e298-5b32-4eda-9772-6466ef5f3596\") " pod="openshift-marketplace/redhat-operators-65lql" Jan 26 13:56:15 crc kubenswrapper[4844]: I0126 13:56:15.147757 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8sstl\" (UniqueName: \"kubernetes.io/projected/0f32e298-5b32-4eda-9772-6466ef5f3596-kube-api-access-8sstl\") pod \"redhat-operators-65lql\" (UID: \"0f32e298-5b32-4eda-9772-6466ef5f3596\") " pod="openshift-marketplace/redhat-operators-65lql" Jan 26 13:56:15 crc kubenswrapper[4844]: I0126 13:56:15.147866 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f32e298-5b32-4eda-9772-6466ef5f3596-catalog-content\") pod \"redhat-operators-65lql\" (UID: \"0f32e298-5b32-4eda-9772-6466ef5f3596\") " pod="openshift-marketplace/redhat-operators-65lql" Jan 26 13:56:15 crc kubenswrapper[4844]: I0126 13:56:15.148492 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f32e298-5b32-4eda-9772-6466ef5f3596-utilities\") pod \"redhat-operators-65lql\" (UID: \"0f32e298-5b32-4eda-9772-6466ef5f3596\") " pod="openshift-marketplace/redhat-operators-65lql" Jan 26 13:56:15 crc kubenswrapper[4844]: I0126 13:56:15.148511 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f32e298-5b32-4eda-9772-6466ef5f3596-catalog-content\") pod \"redhat-operators-65lql\" (UID: \"0f32e298-5b32-4eda-9772-6466ef5f3596\") " pod="openshift-marketplace/redhat-operators-65lql" Jan 26 13:56:15 crc kubenswrapper[4844]: I0126 13:56:15.185122 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-8sstl\" (UniqueName: \"kubernetes.io/projected/0f32e298-5b32-4eda-9772-6466ef5f3596-kube-api-access-8sstl\") pod \"redhat-operators-65lql\" (UID: \"0f32e298-5b32-4eda-9772-6466ef5f3596\") " pod="openshift-marketplace/redhat-operators-65lql" Jan 26 13:56:15 crc kubenswrapper[4844]: I0126 13:56:15.272420 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-65lql" Jan 26 13:56:15 crc kubenswrapper[4844]: I0126 13:56:15.753512 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-65lql"] Jan 26 13:56:16 crc kubenswrapper[4844]: I0126 13:56:16.146483 4844 generic.go:334] "Generic (PLEG): container finished" podID="0f32e298-5b32-4eda-9772-6466ef5f3596" containerID="b38fabc56cb1d6525ee25c65ee1377abd544737231f74ea4c04ac52f45352277" exitCode=0 Jan 26 13:56:16 crc kubenswrapper[4844]: I0126 13:56:16.146536 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-65lql" event={"ID":"0f32e298-5b32-4eda-9772-6466ef5f3596","Type":"ContainerDied","Data":"b38fabc56cb1d6525ee25c65ee1377abd544737231f74ea4c04ac52f45352277"} Jan 26 13:56:16 crc kubenswrapper[4844]: I0126 13:56:16.146766 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-65lql" event={"ID":"0f32e298-5b32-4eda-9772-6466ef5f3596","Type":"ContainerStarted","Data":"3b256cdec617106a21d80899e8a59c7ce7a691227c6bea99892e00355e4ad847"} Jan 26 13:56:17 crc kubenswrapper[4844]: I0126 13:56:17.316551 4844 scope.go:117] "RemoveContainer" containerID="184dc4077a3fed0dffa1c3bc6a50ea72ba5630ebfbd99d1aa847592dcc5aeb7b" Jan 26 13:56:17 crc kubenswrapper[4844]: E0126 13:56:17.317484 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 13:56:18 crc kubenswrapper[4844]: I0126 13:56:18.183074 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-65lql" event={"ID":"0f32e298-5b32-4eda-9772-6466ef5f3596","Type":"ContainerStarted","Data":"388c4bbb8c5ac3bffe4a9d48faa2e81902207475bea1293c2ff3ebe4d19714fb"} Jan 26 13:56:22 crc kubenswrapper[4844]: I0126 13:56:22.224347 4844 generic.go:334] "Generic (PLEG): container finished" podID="0f32e298-5b32-4eda-9772-6466ef5f3596" containerID="388c4bbb8c5ac3bffe4a9d48faa2e81902207475bea1293c2ff3ebe4d19714fb" exitCode=0 Jan 26 13:56:22 crc kubenswrapper[4844]: I0126 13:56:22.224431 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-65lql" event={"ID":"0f32e298-5b32-4eda-9772-6466ef5f3596","Type":"ContainerDied","Data":"388c4bbb8c5ac3bffe4a9d48faa2e81902207475bea1293c2ff3ebe4d19714fb"} Jan 26 13:56:23 crc kubenswrapper[4844]: I0126 13:56:23.239974 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-65lql" event={"ID":"0f32e298-5b32-4eda-9772-6466ef5f3596","Type":"ContainerStarted","Data":"b24f70199f0c88c52865823f5dcf243b380c0cff10f227c0609020df0cb9099a"} Jan 26 13:56:23 crc kubenswrapper[4844]: I0126 13:56:23.272930 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-operators-65lql" podStartSLOduration=2.784027068 podStartE2EDuration="9.272912254s" podCreationTimestamp="2026-01-26 13:56:14 +0000 UTC" firstStartedPulling="2026-01-26 13:56:16.164101572 +0000 UTC m=+4353.097469184" lastFinishedPulling="2026-01-26 13:56:22.652986758 +0000 UTC m=+4359.586354370" observedRunningTime="2026-01-26 13:56:23.268264912 +0000 UTC m=+4360.201632534" watchObservedRunningTime="2026-01-26 13:56:23.272912254 +0000 UTC m=+4360.206279866" Jan 26 13:56:25 crc kubenswrapper[4844]: I0126 13:56:25.272692 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-65lql" Jan 26 13:56:25 crc kubenswrapper[4844]: I0126 13:56:25.273151 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-65lql" Jan 26 13:56:26 crc kubenswrapper[4844]: I0126 13:56:26.332430 4844 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-65lql" podUID="0f32e298-5b32-4eda-9772-6466ef5f3596" containerName="registry-server" probeResult="failure" output=< Jan 26 13:56:26 crc kubenswrapper[4844]: timeout: failed to connect service ":50051" within 1s Jan 26 13:56:26 crc kubenswrapper[4844]: > Jan 26 13:56:30 crc kubenswrapper[4844]: I0126 13:56:30.317413 4844 scope.go:117] "RemoveContainer" containerID="184dc4077a3fed0dffa1c3bc6a50ea72ba5630ebfbd99d1aa847592dcc5aeb7b" Jan 26 13:56:30 crc kubenswrapper[4844]: E0126 13:56:30.318494 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 13:56:35 crc kubenswrapper[4844]: I0126 13:56:35.364082 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-65lql" Jan 26 13:56:35 crc kubenswrapper[4844]: I0126 13:56:35.420455 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-65lql" Jan 26 13:56:37 crc kubenswrapper[4844]: I0126 13:56:37.513820 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-65lql"] Jan 26 13:56:37 crc kubenswrapper[4844]: I0126 13:56:37.514312 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-65lql" podUID="0f32e298-5b32-4eda-9772-6466ef5f3596" containerName="registry-server" containerID="cri-o://b24f70199f0c88c52865823f5dcf243b380c0cff10f227c0609020df0cb9099a" gracePeriod=2 Jan 26 13:56:38 crc kubenswrapper[4844]: I0126 13:56:38.014080 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-65lql" Jan 26 13:56:38 crc kubenswrapper[4844]: I0126 13:56:38.030330 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f32e298-5b32-4eda-9772-6466ef5f3596-catalog-content\") pod \"0f32e298-5b32-4eda-9772-6466ef5f3596\" (UID: \"0f32e298-5b32-4eda-9772-6466ef5f3596\") " Jan 26 13:56:38 crc kubenswrapper[4844]: I0126 13:56:38.030459 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8sstl\" (UniqueName: \"kubernetes.io/projected/0f32e298-5b32-4eda-9772-6466ef5f3596-kube-api-access-8sstl\") pod \"0f32e298-5b32-4eda-9772-6466ef5f3596\" (UID: \"0f32e298-5b32-4eda-9772-6466ef5f3596\") " Jan 26 13:56:38 crc kubenswrapper[4844]: I0126 13:56:38.030528 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f32e298-5b32-4eda-9772-6466ef5f3596-utilities\") pod \"0f32e298-5b32-4eda-9772-6466ef5f3596\" (UID: \"0f32e298-5b32-4eda-9772-6466ef5f3596\") " Jan 26 13:56:38 crc kubenswrapper[4844]: I0126 13:56:38.033838 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f32e298-5b32-4eda-9772-6466ef5f3596-utilities" (OuterVolumeSpecName: "utilities") pod "0f32e298-5b32-4eda-9772-6466ef5f3596" (UID: "0f32e298-5b32-4eda-9772-6466ef5f3596"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:56:38 crc kubenswrapper[4844]: I0126 13:56:38.047887 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f32e298-5b32-4eda-9772-6466ef5f3596-kube-api-access-8sstl" (OuterVolumeSpecName: "kube-api-access-8sstl") pod "0f32e298-5b32-4eda-9772-6466ef5f3596" (UID: "0f32e298-5b32-4eda-9772-6466ef5f3596"). InnerVolumeSpecName "kube-api-access-8sstl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 13:56:38 crc kubenswrapper[4844]: I0126 13:56:38.134989 4844 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f32e298-5b32-4eda-9772-6466ef5f3596-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 13:56:38 crc kubenswrapper[4844]: I0126 13:56:38.135041 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8sstl\" (UniqueName: \"kubernetes.io/projected/0f32e298-5b32-4eda-9772-6466ef5f3596-kube-api-access-8sstl\") on node \"crc\" DevicePath \"\"" Jan 26 13:56:38 crc kubenswrapper[4844]: I0126 13:56:38.170548 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f32e298-5b32-4eda-9772-6466ef5f3596-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0f32e298-5b32-4eda-9772-6466ef5f3596" (UID: "0f32e298-5b32-4eda-9772-6466ef5f3596"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 13:56:38 crc kubenswrapper[4844]: I0126 13:56:38.237006 4844 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f32e298-5b32-4eda-9772-6466ef5f3596-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 13:56:38 crc kubenswrapper[4844]: I0126 13:56:38.418822 4844 generic.go:334] "Generic (PLEG): container finished" podID="0f32e298-5b32-4eda-9772-6466ef5f3596" containerID="b24f70199f0c88c52865823f5dcf243b380c0cff10f227c0609020df0cb9099a" exitCode=0 Jan 26 13:56:38 crc kubenswrapper[4844]: I0126 13:56:38.418860 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-65lql" event={"ID":"0f32e298-5b32-4eda-9772-6466ef5f3596","Type":"ContainerDied","Data":"b24f70199f0c88c52865823f5dcf243b380c0cff10f227c0609020df0cb9099a"} Jan 26 13:56:38 crc kubenswrapper[4844]: I0126 13:56:38.418884 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-65lql" event={"ID":"0f32e298-5b32-4eda-9772-6466ef5f3596","Type":"ContainerDied","Data":"3b256cdec617106a21d80899e8a59c7ce7a691227c6bea99892e00355e4ad847"} Jan 26 13:56:38 crc kubenswrapper[4844]: I0126 13:56:38.418895 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-65lql" Jan 26 13:56:38 crc kubenswrapper[4844]: I0126 13:56:38.418900 4844 scope.go:117] "RemoveContainer" containerID="b24f70199f0c88c52865823f5dcf243b380c0cff10f227c0609020df0cb9099a" Jan 26 13:56:38 crc kubenswrapper[4844]: I0126 13:56:38.463410 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-65lql"] Jan 26 13:56:38 crc kubenswrapper[4844]: I0126 13:56:38.466285 4844 scope.go:117] "RemoveContainer" containerID="388c4bbb8c5ac3bffe4a9d48faa2e81902207475bea1293c2ff3ebe4d19714fb" Jan 26 13:56:38 crc kubenswrapper[4844]: I0126 13:56:38.469043 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-65lql"] Jan 26 13:56:38 crc kubenswrapper[4844]: I0126 13:56:38.498260 4844 scope.go:117] "RemoveContainer" containerID="b38fabc56cb1d6525ee25c65ee1377abd544737231f74ea4c04ac52f45352277" Jan 26 13:56:38 crc kubenswrapper[4844]: I0126 13:56:38.564693 4844 scope.go:117] "RemoveContainer" containerID="b24f70199f0c88c52865823f5dcf243b380c0cff10f227c0609020df0cb9099a" Jan 26 13:56:38 crc kubenswrapper[4844]: E0126 13:56:38.565260 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b24f70199f0c88c52865823f5dcf243b380c0cff10f227c0609020df0cb9099a\": container with ID starting with b24f70199f0c88c52865823f5dcf243b380c0cff10f227c0609020df0cb9099a not found: ID does not exist" containerID="b24f70199f0c88c52865823f5dcf243b380c0cff10f227c0609020df0cb9099a" Jan 26 13:56:38 crc kubenswrapper[4844]: I0126 13:56:38.565296 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b24f70199f0c88c52865823f5dcf243b380c0cff10f227c0609020df0cb9099a"} err="failed to get container status \"b24f70199f0c88c52865823f5dcf243b380c0cff10f227c0609020df0cb9099a\": rpc error: code = NotFound desc = could not find container \"b24f70199f0c88c52865823f5dcf243b380c0cff10f227c0609020df0cb9099a\": container with ID starting with b24f70199f0c88c52865823f5dcf243b380c0cff10f227c0609020df0cb9099a not found: ID does not exist" Jan 26 13:56:38 crc 
kubenswrapper[4844]: I0126 13:56:38.565321 4844 scope.go:117] "RemoveContainer" containerID="388c4bbb8c5ac3bffe4a9d48faa2e81902207475bea1293c2ff3ebe4d19714fb" Jan 26 13:56:38 crc kubenswrapper[4844]: E0126 13:56:38.565753 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"388c4bbb8c5ac3bffe4a9d48faa2e81902207475bea1293c2ff3ebe4d19714fb\": container with ID starting with 388c4bbb8c5ac3bffe4a9d48faa2e81902207475bea1293c2ff3ebe4d19714fb not found: ID does not exist" containerID="388c4bbb8c5ac3bffe4a9d48faa2e81902207475bea1293c2ff3ebe4d19714fb" Jan 26 13:56:38 crc kubenswrapper[4844]: I0126 13:56:38.565790 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"388c4bbb8c5ac3bffe4a9d48faa2e81902207475bea1293c2ff3ebe4d19714fb"} err="failed to get container status \"388c4bbb8c5ac3bffe4a9d48faa2e81902207475bea1293c2ff3ebe4d19714fb\": rpc error: code = NotFound desc = could not find container \"388c4bbb8c5ac3bffe4a9d48faa2e81902207475bea1293c2ff3ebe4d19714fb\": container with ID starting with 388c4bbb8c5ac3bffe4a9d48faa2e81902207475bea1293c2ff3ebe4d19714fb not found: ID does not exist" Jan 26 13:56:38 crc kubenswrapper[4844]: I0126 13:56:38.565818 4844 scope.go:117] "RemoveContainer" containerID="b38fabc56cb1d6525ee25c65ee1377abd544737231f74ea4c04ac52f45352277" Jan 26 13:56:38 crc kubenswrapper[4844]: E0126 13:56:38.566249 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b38fabc56cb1d6525ee25c65ee1377abd544737231f74ea4c04ac52f45352277\": container with ID starting with b38fabc56cb1d6525ee25c65ee1377abd544737231f74ea4c04ac52f45352277 not found: ID does not exist" containerID="b38fabc56cb1d6525ee25c65ee1377abd544737231f74ea4c04ac52f45352277" Jan 26 13:56:38 crc kubenswrapper[4844]: I0126 13:56:38.566276 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b38fabc56cb1d6525ee25c65ee1377abd544737231f74ea4c04ac52f45352277"} err="failed to get container status \"b38fabc56cb1d6525ee25c65ee1377abd544737231f74ea4c04ac52f45352277\": rpc error: code = NotFound desc = could not find container \"b38fabc56cb1d6525ee25c65ee1377abd544737231f74ea4c04ac52f45352277\": container with ID starting with b38fabc56cb1d6525ee25c65ee1377abd544737231f74ea4c04ac52f45352277 not found: ID does not exist" Jan 26 13:56:39 crc kubenswrapper[4844]: I0126 13:56:39.323459 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f32e298-5b32-4eda-9772-6466ef5f3596" path="/var/lib/kubelet/pods/0f32e298-5b32-4eda-9772-6466ef5f3596/volumes" Jan 26 13:56:45 crc kubenswrapper[4844]: I0126 13:56:45.313711 4844 scope.go:117] "RemoveContainer" containerID="184dc4077a3fed0dffa1c3bc6a50ea72ba5630ebfbd99d1aa847592dcc5aeb7b" Jan 26 13:56:45 crc kubenswrapper[4844]: E0126 13:56:45.314567 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 13:56:58 crc kubenswrapper[4844]: I0126 13:56:58.314342 4844 scope.go:117] "RemoveContainer" containerID="184dc4077a3fed0dffa1c3bc6a50ea72ba5630ebfbd99d1aa847592dcc5aeb7b" 
Jan 26 13:56:58 crc kubenswrapper[4844]: E0126 13:56:58.315586 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 13:57:12 crc kubenswrapper[4844]: I0126 13:57:12.313887 4844 scope.go:117] "RemoveContainer" containerID="184dc4077a3fed0dffa1c3bc6a50ea72ba5630ebfbd99d1aa847592dcc5aeb7b" Jan 26 13:57:12 crc kubenswrapper[4844]: E0126 13:57:12.315040 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 13:57:24 crc kubenswrapper[4844]: I0126 13:57:24.313708 4844 scope.go:117] "RemoveContainer" containerID="184dc4077a3fed0dffa1c3bc6a50ea72ba5630ebfbd99d1aa847592dcc5aeb7b" Jan 26 13:57:24 crc kubenswrapper[4844]: E0126 13:57:24.314849 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 13:57:37 crc kubenswrapper[4844]: I0126 13:57:37.314048 4844 scope.go:117] "RemoveContainer" containerID="184dc4077a3fed0dffa1c3bc6a50ea72ba5630ebfbd99d1aa847592dcc5aeb7b" Jan 26 13:57:37 crc kubenswrapper[4844]: E0126 13:57:37.315408 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 13:57:48 crc kubenswrapper[4844]: I0126 13:57:48.314498 4844 scope.go:117] "RemoveContainer" containerID="184dc4077a3fed0dffa1c3bc6a50ea72ba5630ebfbd99d1aa847592dcc5aeb7b" Jan 26 13:57:48 crc kubenswrapper[4844]: E0126 13:57:48.315648 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 13:58:00 crc kubenswrapper[4844]: I0126 13:58:00.320387 4844 scope.go:117] "RemoveContainer" containerID="184dc4077a3fed0dffa1c3bc6a50ea72ba5630ebfbd99d1aa847592dcc5aeb7b" Jan 26 13:58:00 crc kubenswrapper[4844]: E0126 13:58:00.322240 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 13:58:14 crc kubenswrapper[4844]: I0126 13:58:14.313800 4844 scope.go:117] "RemoveContainer" containerID="184dc4077a3fed0dffa1c3bc6a50ea72ba5630ebfbd99d1aa847592dcc5aeb7b" Jan 26 13:58:14 crc kubenswrapper[4844]: E0126 13:58:14.315077 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 13:58:29 crc kubenswrapper[4844]: I0126 13:58:29.313837 4844 scope.go:117] "RemoveContainer" containerID="184dc4077a3fed0dffa1c3bc6a50ea72ba5630ebfbd99d1aa847592dcc5aeb7b" Jan 26 13:58:29 crc kubenswrapper[4844]: E0126 13:58:29.314675 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 13:58:41 crc kubenswrapper[4844]: I0126 13:58:41.319545 4844 scope.go:117] "RemoveContainer" containerID="184dc4077a3fed0dffa1c3bc6a50ea72ba5630ebfbd99d1aa847592dcc5aeb7b" Jan 26 13:58:41 crc kubenswrapper[4844]: E0126 13:58:41.320789 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 13:58:53 crc kubenswrapper[4844]: I0126 13:58:53.320840 4844 scope.go:117] "RemoveContainer" containerID="184dc4077a3fed0dffa1c3bc6a50ea72ba5630ebfbd99d1aa847592dcc5aeb7b" Jan 26 13:58:53 crc kubenswrapper[4844]: E0126 13:58:53.321645 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 13:59:08 crc kubenswrapper[4844]: I0126 13:59:08.313922 4844 scope.go:117] "RemoveContainer" containerID="184dc4077a3fed0dffa1c3bc6a50ea72ba5630ebfbd99d1aa847592dcc5aeb7b" Jan 26 13:59:08 crc kubenswrapper[4844]: E0126 13:59:08.315333 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 13:59:23 crc kubenswrapper[4844]: I0126 13:59:23.326265 4844 scope.go:117] "RemoveContainer" containerID="184dc4077a3fed0dffa1c3bc6a50ea72ba5630ebfbd99d1aa847592dcc5aeb7b" Jan 26 13:59:23 crc kubenswrapper[4844]: E0126 13:59:23.327301 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 13:59:34 crc kubenswrapper[4844]: I0126 13:59:34.314074 4844 scope.go:117] "RemoveContainer" containerID="184dc4077a3fed0dffa1c3bc6a50ea72ba5630ebfbd99d1aa847592dcc5aeb7b" Jan 26 13:59:34 crc kubenswrapper[4844]: E0126 13:59:34.315232 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 13:59:38 crc kubenswrapper[4844]: I0126 13:59:38.848452 4844 scope.go:117] "RemoveContainer" containerID="36c25f32cdbf65b7fdac7c7784925e3b182137bd84fda610430fa68b417c4ad7" Jan 26 13:59:38 crc kubenswrapper[4844]: I0126 13:59:38.892053 4844 scope.go:117] "RemoveContainer" containerID="8b54056b1a5a087a982d7342d54e000e5e981270ae95dfe7d315c7cd41d906c6" Jan 26 13:59:46 crc kubenswrapper[4844]: I0126 13:59:46.314399 4844 scope.go:117] "RemoveContainer" containerID="184dc4077a3fed0dffa1c3bc6a50ea72ba5630ebfbd99d1aa847592dcc5aeb7b" Jan 26 13:59:46 crc kubenswrapper[4844]: E0126 13:59:46.315744 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 13:59:58 crc kubenswrapper[4844]: I0126 13:59:58.314580 4844 scope.go:117] "RemoveContainer" containerID="184dc4077a3fed0dffa1c3bc6a50ea72ba5630ebfbd99d1aa847592dcc5aeb7b" Jan 26 13:59:58 crc kubenswrapper[4844]: E0126 13:59:58.316021 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 14:00:00 crc kubenswrapper[4844]: I0126 14:00:00.177678 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490600-b9nq9"] Jan 26 14:00:00 crc kubenswrapper[4844]: E0126 14:00:00.179111 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f32e298-5b32-4eda-9772-6466ef5f3596" 
containerName="extract-content" Jan 26 14:00:00 crc kubenswrapper[4844]: I0126 14:00:00.179152 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f32e298-5b32-4eda-9772-6466ef5f3596" containerName="extract-content" Jan 26 14:00:00 crc kubenswrapper[4844]: E0126 14:00:00.179188 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f32e298-5b32-4eda-9772-6466ef5f3596" containerName="extract-utilities" Jan 26 14:00:00 crc kubenswrapper[4844]: I0126 14:00:00.179197 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f32e298-5b32-4eda-9772-6466ef5f3596" containerName="extract-utilities" Jan 26 14:00:00 crc kubenswrapper[4844]: E0126 14:00:00.179213 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f32e298-5b32-4eda-9772-6466ef5f3596" containerName="registry-server" Jan 26 14:00:00 crc kubenswrapper[4844]: I0126 14:00:00.179221 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f32e298-5b32-4eda-9772-6466ef5f3596" containerName="registry-server" Jan 26 14:00:00 crc kubenswrapper[4844]: I0126 14:00:00.179478 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f32e298-5b32-4eda-9772-6466ef5f3596" containerName="registry-server" Jan 26 14:00:00 crc kubenswrapper[4844]: I0126 14:00:00.180432 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490600-b9nq9" Jan 26 14:00:00 crc kubenswrapper[4844]: I0126 14:00:00.182482 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 26 14:00:00 crc kubenswrapper[4844]: I0126 14:00:00.182671 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 26 14:00:00 crc kubenswrapper[4844]: I0126 14:00:00.190776 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490600-b9nq9"] Jan 26 14:00:00 crc kubenswrapper[4844]: I0126 14:00:00.316746 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rj2vg\" (UniqueName: \"kubernetes.io/projected/60fa6053-be49-467a-9c66-92823955a811-kube-api-access-rj2vg\") pod \"collect-profiles-29490600-b9nq9\" (UID: \"60fa6053-be49-467a-9c66-92823955a811\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490600-b9nq9" Jan 26 14:00:00 crc kubenswrapper[4844]: I0126 14:00:00.317088 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/60fa6053-be49-467a-9c66-92823955a811-secret-volume\") pod \"collect-profiles-29490600-b9nq9\" (UID: \"60fa6053-be49-467a-9c66-92823955a811\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490600-b9nq9" Jan 26 14:00:00 crc kubenswrapper[4844]: I0126 14:00:00.317156 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/60fa6053-be49-467a-9c66-92823955a811-config-volume\") pod \"collect-profiles-29490600-b9nq9\" (UID: \"60fa6053-be49-467a-9c66-92823955a811\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490600-b9nq9" Jan 26 14:00:00 crc kubenswrapper[4844]: I0126 14:00:00.418512 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rj2vg\" (UniqueName: 
\"kubernetes.io/projected/60fa6053-be49-467a-9c66-92823955a811-kube-api-access-rj2vg\") pod \"collect-profiles-29490600-b9nq9\" (UID: \"60fa6053-be49-467a-9c66-92823955a811\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490600-b9nq9" Jan 26 14:00:00 crc kubenswrapper[4844]: I0126 14:00:00.418611 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/60fa6053-be49-467a-9c66-92823955a811-secret-volume\") pod \"collect-profiles-29490600-b9nq9\" (UID: \"60fa6053-be49-467a-9c66-92823955a811\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490600-b9nq9" Jan 26 14:00:00 crc kubenswrapper[4844]: I0126 14:00:00.418706 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/60fa6053-be49-467a-9c66-92823955a811-config-volume\") pod \"collect-profiles-29490600-b9nq9\" (UID: \"60fa6053-be49-467a-9c66-92823955a811\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490600-b9nq9" Jan 26 14:00:00 crc kubenswrapper[4844]: I0126 14:00:00.419947 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/60fa6053-be49-467a-9c66-92823955a811-config-volume\") pod \"collect-profiles-29490600-b9nq9\" (UID: \"60fa6053-be49-467a-9c66-92823955a811\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490600-b9nq9" Jan 26 14:00:00 crc kubenswrapper[4844]: I0126 14:00:00.759377 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/60fa6053-be49-467a-9c66-92823955a811-secret-volume\") pod \"collect-profiles-29490600-b9nq9\" (UID: \"60fa6053-be49-467a-9c66-92823955a811\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490600-b9nq9" Jan 26 14:00:00 crc kubenswrapper[4844]: I0126 14:00:00.759861 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rj2vg\" (UniqueName: \"kubernetes.io/projected/60fa6053-be49-467a-9c66-92823955a811-kube-api-access-rj2vg\") pod \"collect-profiles-29490600-b9nq9\" (UID: \"60fa6053-be49-467a-9c66-92823955a811\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490600-b9nq9" Jan 26 14:00:00 crc kubenswrapper[4844]: I0126 14:00:00.816423 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490600-b9nq9" Jan 26 14:00:01 crc kubenswrapper[4844]: I0126 14:00:01.298866 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490600-b9nq9"] Jan 26 14:00:01 crc kubenswrapper[4844]: I0126 14:00:01.628095 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490600-b9nq9" event={"ID":"60fa6053-be49-467a-9c66-92823955a811","Type":"ContainerStarted","Data":"d03cf587c05f4a93fc2fc7353d6de8c19326bb8bd2866dc91035415f7c551812"} Jan 26 14:00:01 crc kubenswrapper[4844]: I0126 14:00:01.629120 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490600-b9nq9" event={"ID":"60fa6053-be49-467a-9c66-92823955a811","Type":"ContainerStarted","Data":"b31e98abbe14daeca8e789ea163b8e3c529dbe573f441d1096fdc8bb08a63018"} Jan 26 14:00:01 crc kubenswrapper[4844]: I0126 14:00:01.650680 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29490600-b9nq9" podStartSLOduration=1.650660308 podStartE2EDuration="1.650660308s" podCreationTimestamp="2026-01-26 14:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:00:01.645714887 +0000 UTC m=+4578.579082509" watchObservedRunningTime="2026-01-26 14:00:01.650660308 +0000 UTC m=+4578.584027920" Jan 26 14:00:02 crc kubenswrapper[4844]: I0126 14:00:02.644823 4844 generic.go:334] "Generic (PLEG): container finished" podID="60fa6053-be49-467a-9c66-92823955a811" containerID="d03cf587c05f4a93fc2fc7353d6de8c19326bb8bd2866dc91035415f7c551812" exitCode=0 Jan 26 14:00:02 crc kubenswrapper[4844]: I0126 14:00:02.644873 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490600-b9nq9" event={"ID":"60fa6053-be49-467a-9c66-92823955a811","Type":"ContainerDied","Data":"d03cf587c05f4a93fc2fc7353d6de8c19326bb8bd2866dc91035415f7c551812"} Jan 26 14:00:04 crc kubenswrapper[4844]: I0126 14:00:04.179984 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490600-b9nq9" Jan 26 14:00:04 crc kubenswrapper[4844]: I0126 14:00:04.286884 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/60fa6053-be49-467a-9c66-92823955a811-secret-volume\") pod \"60fa6053-be49-467a-9c66-92823955a811\" (UID: \"60fa6053-be49-467a-9c66-92823955a811\") " Jan 26 14:00:04 crc kubenswrapper[4844]: I0126 14:00:04.286944 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/60fa6053-be49-467a-9c66-92823955a811-config-volume\") pod \"60fa6053-be49-467a-9c66-92823955a811\" (UID: \"60fa6053-be49-467a-9c66-92823955a811\") " Jan 26 14:00:04 crc kubenswrapper[4844]: I0126 14:00:04.287043 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rj2vg\" (UniqueName: \"kubernetes.io/projected/60fa6053-be49-467a-9c66-92823955a811-kube-api-access-rj2vg\") pod \"60fa6053-be49-467a-9c66-92823955a811\" (UID: \"60fa6053-be49-467a-9c66-92823955a811\") " Jan 26 14:00:04 crc kubenswrapper[4844]: I0126 14:00:04.298744 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/60fa6053-be49-467a-9c66-92823955a811-config-volume" (OuterVolumeSpecName: "config-volume") pod "60fa6053-be49-467a-9c66-92823955a811" (UID: "60fa6053-be49-467a-9c66-92823955a811"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:00:04 crc kubenswrapper[4844]: I0126 14:00:04.305781 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60fa6053-be49-467a-9c66-92823955a811-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "60fa6053-be49-467a-9c66-92823955a811" (UID: "60fa6053-be49-467a-9c66-92823955a811"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:00:04 crc kubenswrapper[4844]: I0126 14:00:04.329383 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60fa6053-be49-467a-9c66-92823955a811-kube-api-access-rj2vg" (OuterVolumeSpecName: "kube-api-access-rj2vg") pod "60fa6053-be49-467a-9c66-92823955a811" (UID: "60fa6053-be49-467a-9c66-92823955a811"). InnerVolumeSpecName "kube-api-access-rj2vg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:00:04 crc kubenswrapper[4844]: I0126 14:00:04.380855 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490555-7t4js"] Jan 26 14:00:04 crc kubenswrapper[4844]: I0126 14:00:04.389161 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490555-7t4js"] Jan 26 14:00:04 crc kubenswrapper[4844]: I0126 14:00:04.389877 4844 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/60fa6053-be49-467a-9c66-92823955a811-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 14:00:04 crc kubenswrapper[4844]: I0126 14:00:04.389904 4844 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/60fa6053-be49-467a-9c66-92823955a811-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 14:00:04 crc kubenswrapper[4844]: I0126 14:00:04.389914 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rj2vg\" (UniqueName: \"kubernetes.io/projected/60fa6053-be49-467a-9c66-92823955a811-kube-api-access-rj2vg\") on node \"crc\" DevicePath \"\"" Jan 26 14:00:04 crc kubenswrapper[4844]: I0126 14:00:04.665958 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490600-b9nq9" event={"ID":"60fa6053-be49-467a-9c66-92823955a811","Type":"ContainerDied","Data":"b31e98abbe14daeca8e789ea163b8e3c529dbe573f441d1096fdc8bb08a63018"} Jan 26 14:00:04 crc kubenswrapper[4844]: I0126 14:00:04.666354 4844 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b31e98abbe14daeca8e789ea163b8e3c529dbe573f441d1096fdc8bb08a63018" Jan 26 14:00:04 crc kubenswrapper[4844]: I0126 14:00:04.666152 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490600-b9nq9" Jan 26 14:00:05 crc kubenswrapper[4844]: I0126 14:00:05.330311 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90ad5427-9763-4ad8-81c9-557978090fbc" path="/var/lib/kubelet/pods/90ad5427-9763-4ad8-81c9-557978090fbc/volumes" Jan 26 14:00:08 crc kubenswrapper[4844]: I0126 14:00:08.419192 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vgnsq"] Jan 26 14:00:08 crc kubenswrapper[4844]: E0126 14:00:08.420230 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60fa6053-be49-467a-9c66-92823955a811" containerName="collect-profiles" Jan 26 14:00:08 crc kubenswrapper[4844]: I0126 14:00:08.420250 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="60fa6053-be49-467a-9c66-92823955a811" containerName="collect-profiles" Jan 26 14:00:08 crc kubenswrapper[4844]: I0126 14:00:08.420557 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="60fa6053-be49-467a-9c66-92823955a811" containerName="collect-profiles" Jan 26 14:00:08 crc kubenswrapper[4844]: I0126 14:00:08.422513 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vgnsq" Jan 26 14:00:08 crc kubenswrapper[4844]: I0126 14:00:08.455801 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vgnsq"] Jan 26 14:00:08 crc kubenswrapper[4844]: I0126 14:00:08.470632 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b20a5630-8c96-4445-bd95-59fbee04a87d-utilities\") pod \"certified-operators-vgnsq\" (UID: \"b20a5630-8c96-4445-bd95-59fbee04a87d\") " pod="openshift-marketplace/certified-operators-vgnsq" Jan 26 14:00:08 crc kubenswrapper[4844]: I0126 14:00:08.470699 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b20a5630-8c96-4445-bd95-59fbee04a87d-catalog-content\") pod \"certified-operators-vgnsq\" (UID: \"b20a5630-8c96-4445-bd95-59fbee04a87d\") " pod="openshift-marketplace/certified-operators-vgnsq" Jan 26 14:00:08 crc kubenswrapper[4844]: I0126 14:00:08.470819 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbfkl\" (UniqueName: \"kubernetes.io/projected/b20a5630-8c96-4445-bd95-59fbee04a87d-kube-api-access-vbfkl\") pod \"certified-operators-vgnsq\" (UID: \"b20a5630-8c96-4445-bd95-59fbee04a87d\") " pod="openshift-marketplace/certified-operators-vgnsq" Jan 26 14:00:08 crc kubenswrapper[4844]: I0126 14:00:08.572154 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b20a5630-8c96-4445-bd95-59fbee04a87d-catalog-content\") pod \"certified-operators-vgnsq\" (UID: \"b20a5630-8c96-4445-bd95-59fbee04a87d\") " pod="openshift-marketplace/certified-operators-vgnsq" Jan 26 14:00:08 crc kubenswrapper[4844]: I0126 14:00:08.572272 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbfkl\" (UniqueName: \"kubernetes.io/projected/b20a5630-8c96-4445-bd95-59fbee04a87d-kube-api-access-vbfkl\") pod \"certified-operators-vgnsq\" (UID: \"b20a5630-8c96-4445-bd95-59fbee04a87d\") " pod="openshift-marketplace/certified-operators-vgnsq" Jan 26 14:00:08 crc kubenswrapper[4844]: I0126 14:00:08.572394 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b20a5630-8c96-4445-bd95-59fbee04a87d-utilities\") pod \"certified-operators-vgnsq\" (UID: \"b20a5630-8c96-4445-bd95-59fbee04a87d\") " pod="openshift-marketplace/certified-operators-vgnsq" Jan 26 14:00:08 crc kubenswrapper[4844]: I0126 14:00:08.572803 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b20a5630-8c96-4445-bd95-59fbee04a87d-utilities\") pod \"certified-operators-vgnsq\" (UID: \"b20a5630-8c96-4445-bd95-59fbee04a87d\") " pod="openshift-marketplace/certified-operators-vgnsq" Jan 26 14:00:08 crc kubenswrapper[4844]: I0126 14:00:08.572847 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b20a5630-8c96-4445-bd95-59fbee04a87d-catalog-content\") pod \"certified-operators-vgnsq\" (UID: \"b20a5630-8c96-4445-bd95-59fbee04a87d\") " pod="openshift-marketplace/certified-operators-vgnsq" Jan 26 14:00:08 crc kubenswrapper[4844]: I0126 14:00:08.593962 4844 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-vbfkl\" (UniqueName: \"kubernetes.io/projected/b20a5630-8c96-4445-bd95-59fbee04a87d-kube-api-access-vbfkl\") pod \"certified-operators-vgnsq\" (UID: \"b20a5630-8c96-4445-bd95-59fbee04a87d\") " pod="openshift-marketplace/certified-operators-vgnsq" Jan 26 14:00:08 crc kubenswrapper[4844]: I0126 14:00:08.752493 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vgnsq" Jan 26 14:00:09 crc kubenswrapper[4844]: I0126 14:00:09.328184 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vgnsq"] Jan 26 14:00:09 crc kubenswrapper[4844]: I0126 14:00:09.725169 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vgnsq" event={"ID":"b20a5630-8c96-4445-bd95-59fbee04a87d","Type":"ContainerStarted","Data":"241df3e17b8aed660ccf5fefb4fb77e2c5b3794cc41a12b25af0256f2d940237"} Jan 26 14:00:10 crc kubenswrapper[4844]: I0126 14:00:10.313583 4844 scope.go:117] "RemoveContainer" containerID="184dc4077a3fed0dffa1c3bc6a50ea72ba5630ebfbd99d1aa847592dcc5aeb7b" Jan 26 14:00:10 crc kubenswrapper[4844]: E0126 14:00:10.314128 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 14:00:10 crc kubenswrapper[4844]: I0126 14:00:10.739658 4844 generic.go:334] "Generic (PLEG): container finished" podID="b20a5630-8c96-4445-bd95-59fbee04a87d" containerID="14df2aa2948055f4bb2b9ac33827b9f33ed00eb8b07db11de534225a95b79cde" exitCode=0 Jan 26 14:00:10 crc kubenswrapper[4844]: I0126 14:00:10.739785 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vgnsq" event={"ID":"b20a5630-8c96-4445-bd95-59fbee04a87d","Type":"ContainerDied","Data":"14df2aa2948055f4bb2b9ac33827b9f33ed00eb8b07db11de534225a95b79cde"} Jan 26 14:00:10 crc kubenswrapper[4844]: I0126 14:00:10.742635 4844 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 14:00:12 crc kubenswrapper[4844]: I0126 14:00:12.763151 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vgnsq" event={"ID":"b20a5630-8c96-4445-bd95-59fbee04a87d","Type":"ContainerStarted","Data":"c7ba4dbf7f0ecc73f5ce499a201974e4e0aff854452a3b3bcd8f17bf66c7d9b0"} Jan 26 14:00:13 crc kubenswrapper[4844]: I0126 14:00:13.776946 4844 generic.go:334] "Generic (PLEG): container finished" podID="b20a5630-8c96-4445-bd95-59fbee04a87d" containerID="c7ba4dbf7f0ecc73f5ce499a201974e4e0aff854452a3b3bcd8f17bf66c7d9b0" exitCode=0 Jan 26 14:00:13 crc kubenswrapper[4844]: I0126 14:00:13.777053 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vgnsq" event={"ID":"b20a5630-8c96-4445-bd95-59fbee04a87d","Type":"ContainerDied","Data":"c7ba4dbf7f0ecc73f5ce499a201974e4e0aff854452a3b3bcd8f17bf66c7d9b0"} Jan 26 14:00:14 crc kubenswrapper[4844]: I0126 14:00:14.792501 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vgnsq" 
event={"ID":"b20a5630-8c96-4445-bd95-59fbee04a87d","Type":"ContainerStarted","Data":"9da0a3f161eaa228f5ced32e1990fabdb6e39a90a8c9d79df7981e7c204d2baf"} Jan 26 14:00:14 crc kubenswrapper[4844]: I0126 14:00:14.820046 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vgnsq" podStartSLOduration=3.340637595 podStartE2EDuration="6.820028779s" podCreationTimestamp="2026-01-26 14:00:08 +0000 UTC" firstStartedPulling="2026-01-26 14:00:10.74219214 +0000 UTC m=+4587.675559792" lastFinishedPulling="2026-01-26 14:00:14.221583354 +0000 UTC m=+4591.154950976" observedRunningTime="2026-01-26 14:00:14.814730631 +0000 UTC m=+4591.748098243" watchObservedRunningTime="2026-01-26 14:00:14.820028779 +0000 UTC m=+4591.753396401" Jan 26 14:00:18 crc kubenswrapper[4844]: I0126 14:00:18.753364 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vgnsq" Jan 26 14:00:18 crc kubenswrapper[4844]: I0126 14:00:18.754008 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vgnsq" Jan 26 14:00:18 crc kubenswrapper[4844]: I0126 14:00:18.817557 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vgnsq" Jan 26 14:00:21 crc kubenswrapper[4844]: I0126 14:00:21.314352 4844 scope.go:117] "RemoveContainer" containerID="184dc4077a3fed0dffa1c3bc6a50ea72ba5630ebfbd99d1aa847592dcc5aeb7b" Jan 26 14:00:21 crc kubenswrapper[4844]: E0126 14:00:21.315642 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 14:00:28 crc kubenswrapper[4844]: I0126 14:00:28.807815 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vgnsq" Jan 26 14:00:31 crc kubenswrapper[4844]: I0126 14:00:31.798117 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vgnsq"] Jan 26 14:00:31 crc kubenswrapper[4844]: I0126 14:00:31.798977 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vgnsq" podUID="b20a5630-8c96-4445-bd95-59fbee04a87d" containerName="registry-server" containerID="cri-o://9da0a3f161eaa228f5ced32e1990fabdb6e39a90a8c9d79df7981e7c204d2baf" gracePeriod=2 Jan 26 14:00:31 crc kubenswrapper[4844]: I0126 14:00:31.985319 4844 generic.go:334] "Generic (PLEG): container finished" podID="b20a5630-8c96-4445-bd95-59fbee04a87d" containerID="9da0a3f161eaa228f5ced32e1990fabdb6e39a90a8c9d79df7981e7c204d2baf" exitCode=0 Jan 26 14:00:31 crc kubenswrapper[4844]: I0126 14:00:31.985352 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vgnsq" event={"ID":"b20a5630-8c96-4445-bd95-59fbee04a87d","Type":"ContainerDied","Data":"9da0a3f161eaa228f5ced32e1990fabdb6e39a90a8c9d79df7981e7c204d2baf"} Jan 26 14:00:32 crc kubenswrapper[4844]: I0126 14:00:32.381762 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vgnsq" Jan 26 14:00:32 crc kubenswrapper[4844]: I0126 14:00:32.529854 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b20a5630-8c96-4445-bd95-59fbee04a87d-utilities\") pod \"b20a5630-8c96-4445-bd95-59fbee04a87d\" (UID: \"b20a5630-8c96-4445-bd95-59fbee04a87d\") " Jan 26 14:00:32 crc kubenswrapper[4844]: I0126 14:00:32.529962 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b20a5630-8c96-4445-bd95-59fbee04a87d-catalog-content\") pod \"b20a5630-8c96-4445-bd95-59fbee04a87d\" (UID: \"b20a5630-8c96-4445-bd95-59fbee04a87d\") " Jan 26 14:00:32 crc kubenswrapper[4844]: I0126 14:00:32.530071 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vbfkl\" (UniqueName: \"kubernetes.io/projected/b20a5630-8c96-4445-bd95-59fbee04a87d-kube-api-access-vbfkl\") pod \"b20a5630-8c96-4445-bd95-59fbee04a87d\" (UID: \"b20a5630-8c96-4445-bd95-59fbee04a87d\") " Jan 26 14:00:32 crc kubenswrapper[4844]: I0126 14:00:32.531787 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b20a5630-8c96-4445-bd95-59fbee04a87d-utilities" (OuterVolumeSpecName: "utilities") pod "b20a5630-8c96-4445-bd95-59fbee04a87d" (UID: "b20a5630-8c96-4445-bd95-59fbee04a87d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:00:32 crc kubenswrapper[4844]: I0126 14:00:32.532942 4844 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b20a5630-8c96-4445-bd95-59fbee04a87d-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 14:00:32 crc kubenswrapper[4844]: I0126 14:00:32.536796 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b20a5630-8c96-4445-bd95-59fbee04a87d-kube-api-access-vbfkl" (OuterVolumeSpecName: "kube-api-access-vbfkl") pod "b20a5630-8c96-4445-bd95-59fbee04a87d" (UID: "b20a5630-8c96-4445-bd95-59fbee04a87d"). InnerVolumeSpecName "kube-api-access-vbfkl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:00:32 crc kubenswrapper[4844]: I0126 14:00:32.602795 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b20a5630-8c96-4445-bd95-59fbee04a87d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b20a5630-8c96-4445-bd95-59fbee04a87d" (UID: "b20a5630-8c96-4445-bd95-59fbee04a87d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:00:32 crc kubenswrapper[4844]: I0126 14:00:32.635930 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vbfkl\" (UniqueName: \"kubernetes.io/projected/b20a5630-8c96-4445-bd95-59fbee04a87d-kube-api-access-vbfkl\") on node \"crc\" DevicePath \"\"" Jan 26 14:00:32 crc kubenswrapper[4844]: I0126 14:00:32.635988 4844 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b20a5630-8c96-4445-bd95-59fbee04a87d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 14:00:32 crc kubenswrapper[4844]: I0126 14:00:32.998663 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vgnsq" event={"ID":"b20a5630-8c96-4445-bd95-59fbee04a87d","Type":"ContainerDied","Data":"241df3e17b8aed660ccf5fefb4fb77e2c5b3794cc41a12b25af0256f2d940237"} Jan 26 14:00:33 crc kubenswrapper[4844]: I0126 14:00:32.998725 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vgnsq" Jan 26 14:00:33 crc kubenswrapper[4844]: I0126 14:00:32.999011 4844 scope.go:117] "RemoveContainer" containerID="9da0a3f161eaa228f5ced32e1990fabdb6e39a90a8c9d79df7981e7c204d2baf" Jan 26 14:00:33 crc kubenswrapper[4844]: I0126 14:00:33.030860 4844 scope.go:117] "RemoveContainer" containerID="c7ba4dbf7f0ecc73f5ce499a201974e4e0aff854452a3b3bcd8f17bf66c7d9b0" Jan 26 14:00:33 crc kubenswrapper[4844]: I0126 14:00:33.051766 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vgnsq"] Jan 26 14:00:33 crc kubenswrapper[4844]: I0126 14:00:33.064360 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vgnsq"] Jan 26 14:00:33 crc kubenswrapper[4844]: I0126 14:00:33.075163 4844 scope.go:117] "RemoveContainer" containerID="14df2aa2948055f4bb2b9ac33827b9f33ed00eb8b07db11de534225a95b79cde" Jan 26 14:00:33 crc kubenswrapper[4844]: I0126 14:00:33.327029 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b20a5630-8c96-4445-bd95-59fbee04a87d" path="/var/lib/kubelet/pods/b20a5630-8c96-4445-bd95-59fbee04a87d/volumes" Jan 26 14:00:34 crc kubenswrapper[4844]: I0126 14:00:34.020007 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wwqbw"] Jan 26 14:00:34 crc kubenswrapper[4844]: E0126 14:00:34.021153 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b20a5630-8c96-4445-bd95-59fbee04a87d" containerName="extract-utilities" Jan 26 14:00:34 crc kubenswrapper[4844]: I0126 14:00:34.021179 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="b20a5630-8c96-4445-bd95-59fbee04a87d" containerName="extract-utilities" Jan 26 14:00:34 crc kubenswrapper[4844]: E0126 14:00:34.021231 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b20a5630-8c96-4445-bd95-59fbee04a87d" containerName="registry-server" Jan 26 14:00:34 crc kubenswrapper[4844]: I0126 14:00:34.021245 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="b20a5630-8c96-4445-bd95-59fbee04a87d" containerName="registry-server" Jan 26 14:00:34 crc kubenswrapper[4844]: E0126 14:00:34.021267 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b20a5630-8c96-4445-bd95-59fbee04a87d" containerName="extract-content" Jan 26 14:00:34 crc kubenswrapper[4844]: I0126 14:00:34.021282 4844 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="b20a5630-8c96-4445-bd95-59fbee04a87d" containerName="extract-content" Jan 26 14:00:34 crc kubenswrapper[4844]: I0126 14:00:34.021689 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="b20a5630-8c96-4445-bd95-59fbee04a87d" containerName="registry-server" Jan 26 14:00:34 crc kubenswrapper[4844]: I0126 14:00:34.024293 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wwqbw" Jan 26 14:00:34 crc kubenswrapper[4844]: I0126 14:00:34.045441 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wwqbw"] Jan 26 14:00:34 crc kubenswrapper[4844]: I0126 14:00:34.178525 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d82cb0e3-c408-45fc-b05a-136c604cfe89-utilities\") pod \"community-operators-wwqbw\" (UID: \"d82cb0e3-c408-45fc-b05a-136c604cfe89\") " pod="openshift-marketplace/community-operators-wwqbw" Jan 26 14:00:34 crc kubenswrapper[4844]: I0126 14:00:34.178620 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5hrl\" (UniqueName: \"kubernetes.io/projected/d82cb0e3-c408-45fc-b05a-136c604cfe89-kube-api-access-k5hrl\") pod \"community-operators-wwqbw\" (UID: \"d82cb0e3-c408-45fc-b05a-136c604cfe89\") " pod="openshift-marketplace/community-operators-wwqbw" Jan 26 14:00:34 crc kubenswrapper[4844]: I0126 14:00:34.178897 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d82cb0e3-c408-45fc-b05a-136c604cfe89-catalog-content\") pod \"community-operators-wwqbw\" (UID: \"d82cb0e3-c408-45fc-b05a-136c604cfe89\") " pod="openshift-marketplace/community-operators-wwqbw" Jan 26 14:00:34 crc kubenswrapper[4844]: I0126 14:00:34.280844 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d82cb0e3-c408-45fc-b05a-136c604cfe89-catalog-content\") pod \"community-operators-wwqbw\" (UID: \"d82cb0e3-c408-45fc-b05a-136c604cfe89\") " pod="openshift-marketplace/community-operators-wwqbw" Jan 26 14:00:34 crc kubenswrapper[4844]: I0126 14:00:34.281137 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d82cb0e3-c408-45fc-b05a-136c604cfe89-utilities\") pod \"community-operators-wwqbw\" (UID: \"d82cb0e3-c408-45fc-b05a-136c604cfe89\") " pod="openshift-marketplace/community-operators-wwqbw" Jan 26 14:00:34 crc kubenswrapper[4844]: I0126 14:00:34.281187 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5hrl\" (UniqueName: \"kubernetes.io/projected/d82cb0e3-c408-45fc-b05a-136c604cfe89-kube-api-access-k5hrl\") pod \"community-operators-wwqbw\" (UID: \"d82cb0e3-c408-45fc-b05a-136c604cfe89\") " pod="openshift-marketplace/community-operators-wwqbw" Jan 26 14:00:34 crc kubenswrapper[4844]: I0126 14:00:34.281887 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d82cb0e3-c408-45fc-b05a-136c604cfe89-catalog-content\") pod \"community-operators-wwqbw\" (UID: \"d82cb0e3-c408-45fc-b05a-136c604cfe89\") " pod="openshift-marketplace/community-operators-wwqbw" Jan 26 14:00:34 crc kubenswrapper[4844]: I0126 14:00:34.282020 4844 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d82cb0e3-c408-45fc-b05a-136c604cfe89-utilities\") pod \"community-operators-wwqbw\" (UID: \"d82cb0e3-c408-45fc-b05a-136c604cfe89\") " pod="openshift-marketplace/community-operators-wwqbw" Jan 26 14:00:34 crc kubenswrapper[4844]: I0126 14:00:34.303646 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5hrl\" (UniqueName: \"kubernetes.io/projected/d82cb0e3-c408-45fc-b05a-136c604cfe89-kube-api-access-k5hrl\") pod \"community-operators-wwqbw\" (UID: \"d82cb0e3-c408-45fc-b05a-136c604cfe89\") " pod="openshift-marketplace/community-operators-wwqbw" Jan 26 14:00:34 crc kubenswrapper[4844]: I0126 14:00:34.368328 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wwqbw" Jan 26 14:00:34 crc kubenswrapper[4844]: I0126 14:00:34.948330 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wwqbw"] Jan 26 14:00:35 crc kubenswrapper[4844]: I0126 14:00:35.046372 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wwqbw" event={"ID":"d82cb0e3-c408-45fc-b05a-136c604cfe89","Type":"ContainerStarted","Data":"2c9a777279c76a0531277981874e86abe0446d735530f9d783c28808a0e37a3e"} Jan 26 14:00:36 crc kubenswrapper[4844]: I0126 14:00:36.056970 4844 generic.go:334] "Generic (PLEG): container finished" podID="d82cb0e3-c408-45fc-b05a-136c604cfe89" containerID="d7bd8797b1ffe3f4a568b811d08dd9c7456d40f6a2131b90e27b95f8079b586b" exitCode=0 Jan 26 14:00:36 crc kubenswrapper[4844]: I0126 14:00:36.057206 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wwqbw" event={"ID":"d82cb0e3-c408-45fc-b05a-136c604cfe89","Type":"ContainerDied","Data":"d7bd8797b1ffe3f4a568b811d08dd9c7456d40f6a2131b90e27b95f8079b586b"} Jan 26 14:00:36 crc kubenswrapper[4844]: I0126 14:00:36.313999 4844 scope.go:117] "RemoveContainer" containerID="184dc4077a3fed0dffa1c3bc6a50ea72ba5630ebfbd99d1aa847592dcc5aeb7b" Jan 26 14:00:36 crc kubenswrapper[4844]: E0126 14:00:36.314672 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 14:00:38 crc kubenswrapper[4844]: I0126 14:00:38.080658 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wwqbw" event={"ID":"d82cb0e3-c408-45fc-b05a-136c604cfe89","Type":"ContainerStarted","Data":"811fe489b7078dced46d1951b581011867225a38623e6d84d1ae5bab627afeed"} Jan 26 14:00:38 crc kubenswrapper[4844]: I0126 14:00:38.956006 4844 scope.go:117] "RemoveContainer" containerID="45390072dfb01c4be7a1919aa93b0635d4251eab817368449799b3b552c48972" Jan 26 14:00:38 crc kubenswrapper[4844]: I0126 14:00:38.991885 4844 scope.go:117] "RemoveContainer" containerID="3b1d1e0c3df36dc3e0c4c2f2545cfa8a082d826b1686258c22dffe97ed929c5a" Jan 26 14:00:39 crc kubenswrapper[4844]: I0126 14:00:39.093382 4844 generic.go:334] "Generic (PLEG): container finished" podID="d82cb0e3-c408-45fc-b05a-136c604cfe89" 
containerID="811fe489b7078dced46d1951b581011867225a38623e6d84d1ae5bab627afeed" exitCode=0 Jan 26 14:00:39 crc kubenswrapper[4844]: I0126 14:00:39.093476 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wwqbw" event={"ID":"d82cb0e3-c408-45fc-b05a-136c604cfe89","Type":"ContainerDied","Data":"811fe489b7078dced46d1951b581011867225a38623e6d84d1ae5bab627afeed"} Jan 26 14:00:40 crc kubenswrapper[4844]: I0126 14:00:40.109095 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wwqbw" event={"ID":"d82cb0e3-c408-45fc-b05a-136c604cfe89","Type":"ContainerStarted","Data":"cb3ce7cce6e951c7ab5be2df9aae273ca15e7fbc11fd7f1f42e635acbc2e5bbd"} Jan 26 14:00:40 crc kubenswrapper[4844]: I0126 14:00:40.135043 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wwqbw" podStartSLOduration=3.631418418 podStartE2EDuration="7.135020839s" podCreationTimestamp="2026-01-26 14:00:33 +0000 UTC" firstStartedPulling="2026-01-26 14:00:36.059191638 +0000 UTC m=+4612.992559250" lastFinishedPulling="2026-01-26 14:00:39.562794059 +0000 UTC m=+4616.496161671" observedRunningTime="2026-01-26 14:00:40.127762733 +0000 UTC m=+4617.061130375" watchObservedRunningTime="2026-01-26 14:00:40.135020839 +0000 UTC m=+4617.068388451" Jan 26 14:00:44 crc kubenswrapper[4844]: I0126 14:00:44.368653 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wwqbw" Jan 26 14:00:44 crc kubenswrapper[4844]: I0126 14:00:44.369239 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wwqbw" Jan 26 14:00:44 crc kubenswrapper[4844]: I0126 14:00:44.487871 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wwqbw" Jan 26 14:00:46 crc kubenswrapper[4844]: I0126 14:00:46.014148 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wwqbw" Jan 26 14:00:48 crc kubenswrapper[4844]: I0126 14:00:48.598793 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wwqbw"] Jan 26 14:00:48 crc kubenswrapper[4844]: I0126 14:00:48.599516 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wwqbw" podUID="d82cb0e3-c408-45fc-b05a-136c604cfe89" containerName="registry-server" containerID="cri-o://cb3ce7cce6e951c7ab5be2df9aae273ca15e7fbc11fd7f1f42e635acbc2e5bbd" gracePeriod=2 Jan 26 14:00:49 crc kubenswrapper[4844]: I0126 14:00:49.121379 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wwqbw" Jan 26 14:00:49 crc kubenswrapper[4844]: I0126 14:00:49.159386 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d82cb0e3-c408-45fc-b05a-136c604cfe89-catalog-content\") pod \"d82cb0e3-c408-45fc-b05a-136c604cfe89\" (UID: \"d82cb0e3-c408-45fc-b05a-136c604cfe89\") " Jan 26 14:00:49 crc kubenswrapper[4844]: I0126 14:00:49.159472 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d82cb0e3-c408-45fc-b05a-136c604cfe89-utilities\") pod \"d82cb0e3-c408-45fc-b05a-136c604cfe89\" (UID: \"d82cb0e3-c408-45fc-b05a-136c604cfe89\") " Jan 26 14:00:49 crc kubenswrapper[4844]: I0126 14:00:49.159609 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k5hrl\" (UniqueName: \"kubernetes.io/projected/d82cb0e3-c408-45fc-b05a-136c604cfe89-kube-api-access-k5hrl\") pod \"d82cb0e3-c408-45fc-b05a-136c604cfe89\" (UID: \"d82cb0e3-c408-45fc-b05a-136c604cfe89\") " Jan 26 14:00:49 crc kubenswrapper[4844]: I0126 14:00:49.165008 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d82cb0e3-c408-45fc-b05a-136c604cfe89-utilities" (OuterVolumeSpecName: "utilities") pod "d82cb0e3-c408-45fc-b05a-136c604cfe89" (UID: "d82cb0e3-c408-45fc-b05a-136c604cfe89"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:00:49 crc kubenswrapper[4844]: I0126 14:00:49.167011 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d82cb0e3-c408-45fc-b05a-136c604cfe89-kube-api-access-k5hrl" (OuterVolumeSpecName: "kube-api-access-k5hrl") pod "d82cb0e3-c408-45fc-b05a-136c604cfe89" (UID: "d82cb0e3-c408-45fc-b05a-136c604cfe89"). InnerVolumeSpecName "kube-api-access-k5hrl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:00:49 crc kubenswrapper[4844]: I0126 14:00:49.199739 4844 generic.go:334] "Generic (PLEG): container finished" podID="d82cb0e3-c408-45fc-b05a-136c604cfe89" containerID="cb3ce7cce6e951c7ab5be2df9aae273ca15e7fbc11fd7f1f42e635acbc2e5bbd" exitCode=0 Jan 26 14:00:49 crc kubenswrapper[4844]: I0126 14:00:49.199782 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wwqbw" event={"ID":"d82cb0e3-c408-45fc-b05a-136c604cfe89","Type":"ContainerDied","Data":"cb3ce7cce6e951c7ab5be2df9aae273ca15e7fbc11fd7f1f42e635acbc2e5bbd"} Jan 26 14:00:49 crc kubenswrapper[4844]: I0126 14:00:49.199810 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wwqbw" event={"ID":"d82cb0e3-c408-45fc-b05a-136c604cfe89","Type":"ContainerDied","Data":"2c9a777279c76a0531277981874e86abe0446d735530f9d783c28808a0e37a3e"} Jan 26 14:00:49 crc kubenswrapper[4844]: I0126 14:00:49.199833 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wwqbw" Jan 26 14:00:49 crc kubenswrapper[4844]: I0126 14:00:49.199851 4844 scope.go:117] "RemoveContainer" containerID="cb3ce7cce6e951c7ab5be2df9aae273ca15e7fbc11fd7f1f42e635acbc2e5bbd" Jan 26 14:00:49 crc kubenswrapper[4844]: I0126 14:00:49.234376 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d82cb0e3-c408-45fc-b05a-136c604cfe89-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d82cb0e3-c408-45fc-b05a-136c604cfe89" (UID: "d82cb0e3-c408-45fc-b05a-136c604cfe89"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:00:49 crc kubenswrapper[4844]: I0126 14:00:49.262303 4844 scope.go:117] "RemoveContainer" containerID="811fe489b7078dced46d1951b581011867225a38623e6d84d1ae5bab627afeed" Jan 26 14:00:49 crc kubenswrapper[4844]: I0126 14:00:49.264272 4844 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d82cb0e3-c408-45fc-b05a-136c604cfe89-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 14:00:49 crc kubenswrapper[4844]: I0126 14:00:49.264327 4844 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d82cb0e3-c408-45fc-b05a-136c604cfe89-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 14:00:49 crc kubenswrapper[4844]: I0126 14:00:49.264347 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k5hrl\" (UniqueName: \"kubernetes.io/projected/d82cb0e3-c408-45fc-b05a-136c604cfe89-kube-api-access-k5hrl\") on node \"crc\" DevicePath \"\"" Jan 26 14:00:49 crc kubenswrapper[4844]: I0126 14:00:49.288371 4844 scope.go:117] "RemoveContainer" containerID="d7bd8797b1ffe3f4a568b811d08dd9c7456d40f6a2131b90e27b95f8079b586b" Jan 26 14:00:49 crc kubenswrapper[4844]: I0126 14:00:49.353466 4844 scope.go:117] "RemoveContainer" containerID="cb3ce7cce6e951c7ab5be2df9aae273ca15e7fbc11fd7f1f42e635acbc2e5bbd" Jan 26 14:00:49 crc kubenswrapper[4844]: E0126 14:00:49.353980 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb3ce7cce6e951c7ab5be2df9aae273ca15e7fbc11fd7f1f42e635acbc2e5bbd\": container with ID starting with cb3ce7cce6e951c7ab5be2df9aae273ca15e7fbc11fd7f1f42e635acbc2e5bbd not found: ID does not exist" containerID="cb3ce7cce6e951c7ab5be2df9aae273ca15e7fbc11fd7f1f42e635acbc2e5bbd" Jan 26 14:00:49 crc kubenswrapper[4844]: I0126 14:00:49.354010 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb3ce7cce6e951c7ab5be2df9aae273ca15e7fbc11fd7f1f42e635acbc2e5bbd"} err="failed to get container status \"cb3ce7cce6e951c7ab5be2df9aae273ca15e7fbc11fd7f1f42e635acbc2e5bbd\": rpc error: code = NotFound desc = could not find container \"cb3ce7cce6e951c7ab5be2df9aae273ca15e7fbc11fd7f1f42e635acbc2e5bbd\": container with ID starting with cb3ce7cce6e951c7ab5be2df9aae273ca15e7fbc11fd7f1f42e635acbc2e5bbd not found: ID does not exist" Jan 26 14:00:49 crc kubenswrapper[4844]: I0126 14:00:49.354031 4844 scope.go:117] "RemoveContainer" containerID="811fe489b7078dced46d1951b581011867225a38623e6d84d1ae5bab627afeed" Jan 26 14:00:49 crc kubenswrapper[4844]: E0126 14:00:49.354468 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"811fe489b7078dced46d1951b581011867225a38623e6d84d1ae5bab627afeed\": 
container with ID starting with 811fe489b7078dced46d1951b581011867225a38623e6d84d1ae5bab627afeed not found: ID does not exist" containerID="811fe489b7078dced46d1951b581011867225a38623e6d84d1ae5bab627afeed" Jan 26 14:00:49 crc kubenswrapper[4844]: I0126 14:00:49.354494 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"811fe489b7078dced46d1951b581011867225a38623e6d84d1ae5bab627afeed"} err="failed to get container status \"811fe489b7078dced46d1951b581011867225a38623e6d84d1ae5bab627afeed\": rpc error: code = NotFound desc = could not find container \"811fe489b7078dced46d1951b581011867225a38623e6d84d1ae5bab627afeed\": container with ID starting with 811fe489b7078dced46d1951b581011867225a38623e6d84d1ae5bab627afeed not found: ID does not exist" Jan 26 14:00:49 crc kubenswrapper[4844]: I0126 14:00:49.354511 4844 scope.go:117] "RemoveContainer" containerID="d7bd8797b1ffe3f4a568b811d08dd9c7456d40f6a2131b90e27b95f8079b586b" Jan 26 14:00:49 crc kubenswrapper[4844]: E0126 14:00:49.354807 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d7bd8797b1ffe3f4a568b811d08dd9c7456d40f6a2131b90e27b95f8079b586b\": container with ID starting with d7bd8797b1ffe3f4a568b811d08dd9c7456d40f6a2131b90e27b95f8079b586b not found: ID does not exist" containerID="d7bd8797b1ffe3f4a568b811d08dd9c7456d40f6a2131b90e27b95f8079b586b" Jan 26 14:00:49 crc kubenswrapper[4844]: I0126 14:00:49.354837 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7bd8797b1ffe3f4a568b811d08dd9c7456d40f6a2131b90e27b95f8079b586b"} err="failed to get container status \"d7bd8797b1ffe3f4a568b811d08dd9c7456d40f6a2131b90e27b95f8079b586b\": rpc error: code = NotFound desc = could not find container \"d7bd8797b1ffe3f4a568b811d08dd9c7456d40f6a2131b90e27b95f8079b586b\": container with ID starting with d7bd8797b1ffe3f4a568b811d08dd9c7456d40f6a2131b90e27b95f8079b586b not found: ID does not exist" Jan 26 14:00:49 crc kubenswrapper[4844]: I0126 14:00:49.526369 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wwqbw"] Jan 26 14:00:49 crc kubenswrapper[4844]: I0126 14:00:49.538281 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wwqbw"] Jan 26 14:00:50 crc kubenswrapper[4844]: I0126 14:00:50.314281 4844 scope.go:117] "RemoveContainer" containerID="184dc4077a3fed0dffa1c3bc6a50ea72ba5630ebfbd99d1aa847592dcc5aeb7b" Jan 26 14:00:51 crc kubenswrapper[4844]: I0126 14:00:51.218760 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" event={"ID":"e3602fc7-397b-4d73-ab0c-45acc047397b","Type":"ContainerStarted","Data":"2f4a911f1f761e1468d052d2cb5b37a36b219a2592a90d5515a67fef38286501"} Jan 26 14:00:51 crc kubenswrapper[4844]: I0126 14:00:51.331810 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d82cb0e3-c408-45fc-b05a-136c604cfe89" path="/var/lib/kubelet/pods/d82cb0e3-c408-45fc-b05a-136c604cfe89/volumes" Jan 26 14:01:00 crc kubenswrapper[4844]: I0126 14:01:00.170375 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29490601-dfzsv"] Jan 26 14:01:00 crc kubenswrapper[4844]: E0126 14:01:00.172426 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d82cb0e3-c408-45fc-b05a-136c604cfe89" containerName="extract-utilities" Jan 26 14:01:00 crc 
kubenswrapper[4844]: I0126 14:01:00.172442 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="d82cb0e3-c408-45fc-b05a-136c604cfe89" containerName="extract-utilities" Jan 26 14:01:00 crc kubenswrapper[4844]: E0126 14:01:00.172468 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d82cb0e3-c408-45fc-b05a-136c604cfe89" containerName="extract-content" Jan 26 14:01:00 crc kubenswrapper[4844]: I0126 14:01:00.172480 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="d82cb0e3-c408-45fc-b05a-136c604cfe89" containerName="extract-content" Jan 26 14:01:00 crc kubenswrapper[4844]: E0126 14:01:00.172533 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d82cb0e3-c408-45fc-b05a-136c604cfe89" containerName="registry-server" Jan 26 14:01:00 crc kubenswrapper[4844]: I0126 14:01:00.172545 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="d82cb0e3-c408-45fc-b05a-136c604cfe89" containerName="registry-server" Jan 26 14:01:00 crc kubenswrapper[4844]: I0126 14:01:00.172810 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="d82cb0e3-c408-45fc-b05a-136c604cfe89" containerName="registry-server" Jan 26 14:01:00 crc kubenswrapper[4844]: I0126 14:01:00.173751 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29490601-dfzsv" Jan 26 14:01:00 crc kubenswrapper[4844]: I0126 14:01:00.183453 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29490601-dfzsv"] Jan 26 14:01:00 crc kubenswrapper[4844]: I0126 14:01:00.236190 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9884c612-5868-41be-9d56-ad8f55bc68d6-config-data\") pod \"keystone-cron-29490601-dfzsv\" (UID: \"9884c612-5868-41be-9d56-ad8f55bc68d6\") " pod="openstack/keystone-cron-29490601-dfzsv" Jan 26 14:01:00 crc kubenswrapper[4844]: I0126 14:01:00.236247 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smmtq\" (UniqueName: \"kubernetes.io/projected/9884c612-5868-41be-9d56-ad8f55bc68d6-kube-api-access-smmtq\") pod \"keystone-cron-29490601-dfzsv\" (UID: \"9884c612-5868-41be-9d56-ad8f55bc68d6\") " pod="openstack/keystone-cron-29490601-dfzsv" Jan 26 14:01:00 crc kubenswrapper[4844]: I0126 14:01:00.236272 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9884c612-5868-41be-9d56-ad8f55bc68d6-combined-ca-bundle\") pod \"keystone-cron-29490601-dfzsv\" (UID: \"9884c612-5868-41be-9d56-ad8f55bc68d6\") " pod="openstack/keystone-cron-29490601-dfzsv" Jan 26 14:01:00 crc kubenswrapper[4844]: I0126 14:01:00.236321 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9884c612-5868-41be-9d56-ad8f55bc68d6-fernet-keys\") pod \"keystone-cron-29490601-dfzsv\" (UID: \"9884c612-5868-41be-9d56-ad8f55bc68d6\") " pod="openstack/keystone-cron-29490601-dfzsv" Jan 26 14:01:00 crc kubenswrapper[4844]: I0126 14:01:00.338233 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9884c612-5868-41be-9d56-ad8f55bc68d6-config-data\") pod \"keystone-cron-29490601-dfzsv\" (UID: \"9884c612-5868-41be-9d56-ad8f55bc68d6\") " pod="openstack/keystone-cron-29490601-dfzsv" Jan 26 14:01:00 crc 
kubenswrapper[4844]: I0126 14:01:00.338301 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smmtq\" (UniqueName: \"kubernetes.io/projected/9884c612-5868-41be-9d56-ad8f55bc68d6-kube-api-access-smmtq\") pod \"keystone-cron-29490601-dfzsv\" (UID: \"9884c612-5868-41be-9d56-ad8f55bc68d6\") " pod="openstack/keystone-cron-29490601-dfzsv" Jan 26 14:01:00 crc kubenswrapper[4844]: I0126 14:01:00.338331 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9884c612-5868-41be-9d56-ad8f55bc68d6-combined-ca-bundle\") pod \"keystone-cron-29490601-dfzsv\" (UID: \"9884c612-5868-41be-9d56-ad8f55bc68d6\") " pod="openstack/keystone-cron-29490601-dfzsv" Jan 26 14:01:00 crc kubenswrapper[4844]: I0126 14:01:00.338399 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9884c612-5868-41be-9d56-ad8f55bc68d6-fernet-keys\") pod \"keystone-cron-29490601-dfzsv\" (UID: \"9884c612-5868-41be-9d56-ad8f55bc68d6\") " pod="openstack/keystone-cron-29490601-dfzsv" Jan 26 14:01:00 crc kubenswrapper[4844]: I0126 14:01:00.346279 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9884c612-5868-41be-9d56-ad8f55bc68d6-combined-ca-bundle\") pod \"keystone-cron-29490601-dfzsv\" (UID: \"9884c612-5868-41be-9d56-ad8f55bc68d6\") " pod="openstack/keystone-cron-29490601-dfzsv" Jan 26 14:01:00 crc kubenswrapper[4844]: I0126 14:01:00.348452 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9884c612-5868-41be-9d56-ad8f55bc68d6-fernet-keys\") pod \"keystone-cron-29490601-dfzsv\" (UID: \"9884c612-5868-41be-9d56-ad8f55bc68d6\") " pod="openstack/keystone-cron-29490601-dfzsv" Jan 26 14:01:00 crc kubenswrapper[4844]: I0126 14:01:00.349238 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9884c612-5868-41be-9d56-ad8f55bc68d6-config-data\") pod \"keystone-cron-29490601-dfzsv\" (UID: \"9884c612-5868-41be-9d56-ad8f55bc68d6\") " pod="openstack/keystone-cron-29490601-dfzsv" Jan 26 14:01:00 crc kubenswrapper[4844]: I0126 14:01:00.364393 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smmtq\" (UniqueName: \"kubernetes.io/projected/9884c612-5868-41be-9d56-ad8f55bc68d6-kube-api-access-smmtq\") pod \"keystone-cron-29490601-dfzsv\" (UID: \"9884c612-5868-41be-9d56-ad8f55bc68d6\") " pod="openstack/keystone-cron-29490601-dfzsv" Jan 26 14:01:00 crc kubenswrapper[4844]: I0126 14:01:00.496454 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29490601-dfzsv" Jan 26 14:01:01 crc kubenswrapper[4844]: I0126 14:01:01.001931 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29490601-dfzsv"] Jan 26 14:01:01 crc kubenswrapper[4844]: I0126 14:01:01.325975 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29490601-dfzsv" event={"ID":"9884c612-5868-41be-9d56-ad8f55bc68d6","Type":"ContainerStarted","Data":"eeea31ca45738d3fe9d2bab0d5c0777fc4fb3cc9d98219ad6352508b36888ce2"} Jan 26 14:01:02 crc kubenswrapper[4844]: I0126 14:01:02.341010 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29490601-dfzsv" event={"ID":"9884c612-5868-41be-9d56-ad8f55bc68d6","Type":"ContainerStarted","Data":"b8e4ddee73dec8faa1b07e813c5f94629627036e99b123c6019a239c3e7f043e"} Jan 26 14:01:02 crc kubenswrapper[4844]: I0126 14:01:02.365579 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29490601-dfzsv" podStartSLOduration=2.365565056 podStartE2EDuration="2.365565056s" podCreationTimestamp="2026-01-26 14:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:01:02.363488086 +0000 UTC m=+4639.296855698" watchObservedRunningTime="2026-01-26 14:01:02.365565056 +0000 UTC m=+4639.298932668" Jan 26 14:01:05 crc kubenswrapper[4844]: I0126 14:01:05.386116 4844 generic.go:334] "Generic (PLEG): container finished" podID="9884c612-5868-41be-9d56-ad8f55bc68d6" containerID="b8e4ddee73dec8faa1b07e813c5f94629627036e99b123c6019a239c3e7f043e" exitCode=0 Jan 26 14:01:05 crc kubenswrapper[4844]: I0126 14:01:05.386209 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29490601-dfzsv" event={"ID":"9884c612-5868-41be-9d56-ad8f55bc68d6","Type":"ContainerDied","Data":"b8e4ddee73dec8faa1b07e813c5f94629627036e99b123c6019a239c3e7f043e"} Jan 26 14:01:06 crc kubenswrapper[4844]: I0126 14:01:06.797320 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29490601-dfzsv" Jan 26 14:01:06 crc kubenswrapper[4844]: I0126 14:01:06.893048 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9884c612-5868-41be-9d56-ad8f55bc68d6-fernet-keys\") pod \"9884c612-5868-41be-9d56-ad8f55bc68d6\" (UID: \"9884c612-5868-41be-9d56-ad8f55bc68d6\") " Jan 26 14:01:06 crc kubenswrapper[4844]: I0126 14:01:06.893141 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9884c612-5868-41be-9d56-ad8f55bc68d6-combined-ca-bundle\") pod \"9884c612-5868-41be-9d56-ad8f55bc68d6\" (UID: \"9884c612-5868-41be-9d56-ad8f55bc68d6\") " Jan 26 14:01:06 crc kubenswrapper[4844]: I0126 14:01:06.893164 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9884c612-5868-41be-9d56-ad8f55bc68d6-config-data\") pod \"9884c612-5868-41be-9d56-ad8f55bc68d6\" (UID: \"9884c612-5868-41be-9d56-ad8f55bc68d6\") " Jan 26 14:01:06 crc kubenswrapper[4844]: I0126 14:01:06.893280 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-smmtq\" (UniqueName: \"kubernetes.io/projected/9884c612-5868-41be-9d56-ad8f55bc68d6-kube-api-access-smmtq\") pod \"9884c612-5868-41be-9d56-ad8f55bc68d6\" (UID: \"9884c612-5868-41be-9d56-ad8f55bc68d6\") " Jan 26 14:01:06 crc kubenswrapper[4844]: I0126 14:01:06.899297 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9884c612-5868-41be-9d56-ad8f55bc68d6-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "9884c612-5868-41be-9d56-ad8f55bc68d6" (UID: "9884c612-5868-41be-9d56-ad8f55bc68d6"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:01:06 crc kubenswrapper[4844]: I0126 14:01:06.903890 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9884c612-5868-41be-9d56-ad8f55bc68d6-kube-api-access-smmtq" (OuterVolumeSpecName: "kube-api-access-smmtq") pod "9884c612-5868-41be-9d56-ad8f55bc68d6" (UID: "9884c612-5868-41be-9d56-ad8f55bc68d6"). InnerVolumeSpecName "kube-api-access-smmtq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:01:06 crc kubenswrapper[4844]: I0126 14:01:06.932587 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9884c612-5868-41be-9d56-ad8f55bc68d6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9884c612-5868-41be-9d56-ad8f55bc68d6" (UID: "9884c612-5868-41be-9d56-ad8f55bc68d6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:01:06 crc kubenswrapper[4844]: I0126 14:01:06.962009 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9884c612-5868-41be-9d56-ad8f55bc68d6-config-data" (OuterVolumeSpecName: "config-data") pod "9884c612-5868-41be-9d56-ad8f55bc68d6" (UID: "9884c612-5868-41be-9d56-ad8f55bc68d6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:01:06 crc kubenswrapper[4844]: I0126 14:01:06.995790 4844 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9884c612-5868-41be-9d56-ad8f55bc68d6-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 26 14:01:06 crc kubenswrapper[4844]: I0126 14:01:06.995841 4844 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9884c612-5868-41be-9d56-ad8f55bc68d6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 14:01:06 crc kubenswrapper[4844]: I0126 14:01:06.995857 4844 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9884c612-5868-41be-9d56-ad8f55bc68d6-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 14:01:06 crc kubenswrapper[4844]: I0126 14:01:06.995869 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-smmtq\" (UniqueName: \"kubernetes.io/projected/9884c612-5868-41be-9d56-ad8f55bc68d6-kube-api-access-smmtq\") on node \"crc\" DevicePath \"\"" Jan 26 14:01:07 crc kubenswrapper[4844]: I0126 14:01:07.414178 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29490601-dfzsv" event={"ID":"9884c612-5868-41be-9d56-ad8f55bc68d6","Type":"ContainerDied","Data":"eeea31ca45738d3fe9d2bab0d5c0777fc4fb3cc9d98219ad6352508b36888ce2"} Jan 26 14:01:07 crc kubenswrapper[4844]: I0126 14:01:07.414494 4844 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eeea31ca45738d3fe9d2bab0d5c0777fc4fb3cc9d98219ad6352508b36888ce2" Jan 26 14:01:07 crc kubenswrapper[4844]: I0126 14:01:07.414304 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29490601-dfzsv" Jan 26 14:01:47 crc kubenswrapper[4844]: I0126 14:01:47.705259 4844 patch_prober.go:28] interesting pod/oauth-openshift-846967c997-7njvr container/oauth-openshift namespace/openshift-authentication: Liveness probe status=failure output="Get \"https://10.217.0.62:6443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 14:01:47 crc kubenswrapper[4844]: I0126 14:01:47.706057 4844 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication/oauth-openshift-846967c997-7njvr" podUID="f41db1b2-c62c-40a0-b86c-a6284a9351fa" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.62:6443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 14:03:06 crc kubenswrapper[4844]: I0126 14:03:06.365298 4844 patch_prober.go:28] interesting pod/machine-config-daemon-j7r9j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 14:03:06 crc kubenswrapper[4844]: I0126 14:03:06.366047 4844 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 14:03:36 crc kubenswrapper[4844]: I0126 14:03:36.364663 4844 patch_prober.go:28] interesting pod/machine-config-daemon-j7r9j container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 14:03:36 crc kubenswrapper[4844]: I0126 14:03:36.365399 4844 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 14:04:06 crc kubenswrapper[4844]: I0126 14:04:06.365873 4844 patch_prober.go:28] interesting pod/machine-config-daemon-j7r9j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 14:04:06 crc kubenswrapper[4844]: I0126 14:04:06.366624 4844 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 14:04:06 crc kubenswrapper[4844]: I0126 14:04:06.366705 4844 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" Jan 26 14:04:06 crc kubenswrapper[4844]: I0126 14:04:06.367919 4844 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2f4a911f1f761e1468d052d2cb5b37a36b219a2592a90d5515a67fef38286501"} pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 14:04:06 crc kubenswrapper[4844]: I0126 14:04:06.368024 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" containerName="machine-config-daemon" containerID="cri-o://2f4a911f1f761e1468d052d2cb5b37a36b219a2592a90d5515a67fef38286501" gracePeriod=600 Jan 26 14:04:06 crc kubenswrapper[4844]: E0126 14:04:06.580179 4844 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode3602fc7_397b_4d73_ab0c_45acc047397b.slice/crio-conmon-2f4a911f1f761e1468d052d2cb5b37a36b219a2592a90d5515a67fef38286501.scope\": RecentStats: unable to find data in memory cache]" Jan 26 14:04:07 crc kubenswrapper[4844]: I0126 14:04:07.483846 4844 generic.go:334] "Generic (PLEG): container finished" podID="e3602fc7-397b-4d73-ab0c-45acc047397b" containerID="2f4a911f1f761e1468d052d2cb5b37a36b219a2592a90d5515a67fef38286501" exitCode=0 Jan 26 14:04:07 crc kubenswrapper[4844]: I0126 14:04:07.483920 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" event={"ID":"e3602fc7-397b-4d73-ab0c-45acc047397b","Type":"ContainerDied","Data":"2f4a911f1f761e1468d052d2cb5b37a36b219a2592a90d5515a67fef38286501"} Jan 26 14:04:07 crc kubenswrapper[4844]: I0126 14:04:07.484364 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" event={"ID":"e3602fc7-397b-4d73-ab0c-45acc047397b","Type":"ContainerStarted","Data":"86b7cf326f578bbdf65f96cb187aab18331f8535398d4d1c7e9c9dae05da383b"} Jan 26 14:04:07 crc kubenswrapper[4844]: I0126 14:04:07.484390 4844 scope.go:117] "RemoveContainer" containerID="184dc4077a3fed0dffa1c3bc6a50ea72ba5630ebfbd99d1aa847592dcc5aeb7b" Jan 26 14:05:25 crc kubenswrapper[4844]: I0126 14:05:25.757920 4844 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-handler-2d462" podUID="9baf25b3-6096-4215-9455-b9126c02ffcf" containerName="nmstate-handler" probeResult="failure" output="command timed out" Jan 26 14:06:06 crc kubenswrapper[4844]: I0126 14:06:06.365303 4844 patch_prober.go:28] interesting pod/machine-config-daemon-j7r9j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 14:06:06 crc kubenswrapper[4844]: I0126 14:06:06.365860 4844 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 14:06:24 crc kubenswrapper[4844]: I0126 14:06:24.256218 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-85jjx"] Jan 26 14:06:24 crc kubenswrapper[4844]: E0126 14:06:24.256977 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9884c612-5868-41be-9d56-ad8f55bc68d6" containerName="keystone-cron" Jan 26 14:06:24 crc kubenswrapper[4844]: I0126 14:06:24.256989 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="9884c612-5868-41be-9d56-ad8f55bc68d6" containerName="keystone-cron" Jan 26 14:06:24 crc kubenswrapper[4844]: I0126 14:06:24.257195 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="9884c612-5868-41be-9d56-ad8f55bc68d6" containerName="keystone-cron" Jan 26 14:06:24 crc kubenswrapper[4844]: I0126 14:06:24.258491 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-85jjx" Jan 26 14:06:24 crc kubenswrapper[4844]: I0126 14:06:24.274793 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-85jjx"] Jan 26 14:06:24 crc kubenswrapper[4844]: I0126 14:06:24.315014 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jd68w\" (UniqueName: \"kubernetes.io/projected/8672b685-f50f-4bd4-a837-114adf224892-kube-api-access-jd68w\") pod \"redhat-operators-85jjx\" (UID: \"8672b685-f50f-4bd4-a837-114adf224892\") " pod="openshift-marketplace/redhat-operators-85jjx" Jan 26 14:06:24 crc kubenswrapper[4844]: I0126 14:06:24.315068 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8672b685-f50f-4bd4-a837-114adf224892-utilities\") pod \"redhat-operators-85jjx\" (UID: \"8672b685-f50f-4bd4-a837-114adf224892\") " pod="openshift-marketplace/redhat-operators-85jjx" Jan 26 14:06:24 crc kubenswrapper[4844]: I0126 14:06:24.315239 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8672b685-f50f-4bd4-a837-114adf224892-catalog-content\") pod \"redhat-operators-85jjx\" (UID: \"8672b685-f50f-4bd4-a837-114adf224892\") " pod="openshift-marketplace/redhat-operators-85jjx" Jan 26 14:06:24 crc kubenswrapper[4844]: I0126 14:06:24.417475 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8672b685-f50f-4bd4-a837-114adf224892-catalog-content\") pod \"redhat-operators-85jjx\" (UID: \"8672b685-f50f-4bd4-a837-114adf224892\") " pod="openshift-marketplace/redhat-operators-85jjx" Jan 26 14:06:24 crc kubenswrapper[4844]: I0126 14:06:24.417609 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jd68w\" (UniqueName: \"kubernetes.io/projected/8672b685-f50f-4bd4-a837-114adf224892-kube-api-access-jd68w\") pod \"redhat-operators-85jjx\" (UID: \"8672b685-f50f-4bd4-a837-114adf224892\") " pod="openshift-marketplace/redhat-operators-85jjx" Jan 26 14:06:24 crc kubenswrapper[4844]: I0126 14:06:24.417644 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8672b685-f50f-4bd4-a837-114adf224892-utilities\") pod \"redhat-operators-85jjx\" (UID: \"8672b685-f50f-4bd4-a837-114adf224892\") " pod="openshift-marketplace/redhat-operators-85jjx" Jan 26 14:06:24 crc kubenswrapper[4844]: I0126 14:06:24.418159 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8672b685-f50f-4bd4-a837-114adf224892-utilities\") pod \"redhat-operators-85jjx\" (UID: \"8672b685-f50f-4bd4-a837-114adf224892\") " pod="openshift-marketplace/redhat-operators-85jjx" Jan 26 14:06:24 crc kubenswrapper[4844]: I0126 14:06:24.418204 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8672b685-f50f-4bd4-a837-114adf224892-catalog-content\") pod \"redhat-operators-85jjx\" (UID: \"8672b685-f50f-4bd4-a837-114adf224892\") " pod="openshift-marketplace/redhat-operators-85jjx" Jan 26 14:06:24 crc kubenswrapper[4844]: I0126 14:06:24.444541 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-jd68w\" (UniqueName: \"kubernetes.io/projected/8672b685-f50f-4bd4-a837-114adf224892-kube-api-access-jd68w\") pod \"redhat-operators-85jjx\" (UID: \"8672b685-f50f-4bd4-a837-114adf224892\") " pod="openshift-marketplace/redhat-operators-85jjx" Jan 26 14:06:24 crc kubenswrapper[4844]: I0126 14:06:24.583035 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-85jjx" Jan 26 14:06:25 crc kubenswrapper[4844]: I0126 14:06:25.067656 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-85jjx"] Jan 26 14:06:26 crc kubenswrapper[4844]: I0126 14:06:26.011760 4844 generic.go:334] "Generic (PLEG): container finished" podID="8672b685-f50f-4bd4-a837-114adf224892" containerID="bb5054d88dc743518c5bd6af56072b48214dba9d5b3ece7e5e8731210f414822" exitCode=0 Jan 26 14:06:26 crc kubenswrapper[4844]: I0126 14:06:26.011829 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-85jjx" event={"ID":"8672b685-f50f-4bd4-a837-114adf224892","Type":"ContainerDied","Data":"bb5054d88dc743518c5bd6af56072b48214dba9d5b3ece7e5e8731210f414822"} Jan 26 14:06:26 crc kubenswrapper[4844]: I0126 14:06:26.011896 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-85jjx" event={"ID":"8672b685-f50f-4bd4-a837-114adf224892","Type":"ContainerStarted","Data":"1cd9b1aa2b366144850bde6173456e43fefaebd530da6fcd9518ad44bd36dbdf"} Jan 26 14:06:26 crc kubenswrapper[4844]: I0126 14:06:26.014620 4844 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 14:06:28 crc kubenswrapper[4844]: I0126 14:06:28.038842 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-85jjx" event={"ID":"8672b685-f50f-4bd4-a837-114adf224892","Type":"ContainerStarted","Data":"429d323003d9f22376bd0a17e82bb5b2e3ab82d383f693cfef79319e42bb83d3"} Jan 26 14:06:30 crc kubenswrapper[4844]: I0126 14:06:30.066708 4844 generic.go:334] "Generic (PLEG): container finished" podID="8672b685-f50f-4bd4-a837-114adf224892" containerID="429d323003d9f22376bd0a17e82bb5b2e3ab82d383f693cfef79319e42bb83d3" exitCode=0 Jan 26 14:06:30 crc kubenswrapper[4844]: I0126 14:06:30.066835 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-85jjx" event={"ID":"8672b685-f50f-4bd4-a837-114adf224892","Type":"ContainerDied","Data":"429d323003d9f22376bd0a17e82bb5b2e3ab82d383f693cfef79319e42bb83d3"} Jan 26 14:06:32 crc kubenswrapper[4844]: I0126 14:06:32.090616 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-85jjx" event={"ID":"8672b685-f50f-4bd4-a837-114adf224892","Type":"ContainerStarted","Data":"47c8263a54fb20886950cec02e948bb0b3ecb14b7e053af0cabdda47c7be0ce2"} Jan 26 14:06:32 crc kubenswrapper[4844]: I0126 14:06:32.119351 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-85jjx" podStartSLOduration=3.563265505 podStartE2EDuration="8.119330855s" podCreationTimestamp="2026-01-26 14:06:24 +0000 UTC" firstStartedPulling="2026-01-26 14:06:26.014348522 +0000 UTC m=+4962.947716144" lastFinishedPulling="2026-01-26 14:06:30.570413872 +0000 UTC m=+4967.503781494" observedRunningTime="2026-01-26 14:06:32.112677854 +0000 UTC m=+4969.046045496" watchObservedRunningTime="2026-01-26 14:06:32.119330855 +0000 UTC m=+4969.052698467" Jan 26 14:06:34 crc 
kubenswrapper[4844]: I0126 14:06:34.584301 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-85jjx" Jan 26 14:06:34 crc kubenswrapper[4844]: I0126 14:06:34.585071 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-85jjx" Jan 26 14:06:35 crc kubenswrapper[4844]: I0126 14:06:35.642776 4844 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-85jjx" podUID="8672b685-f50f-4bd4-a837-114adf224892" containerName="registry-server" probeResult="failure" output=< Jan 26 14:06:35 crc kubenswrapper[4844]: timeout: failed to connect service ":50051" within 1s Jan 26 14:06:35 crc kubenswrapper[4844]: > Jan 26 14:06:36 crc kubenswrapper[4844]: I0126 14:06:36.368082 4844 patch_prober.go:28] interesting pod/machine-config-daemon-j7r9j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 14:06:36 crc kubenswrapper[4844]: I0126 14:06:36.368457 4844 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 14:06:44 crc kubenswrapper[4844]: I0126 14:06:44.736874 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-85jjx" Jan 26 14:06:44 crc kubenswrapper[4844]: I0126 14:06:44.817165 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-85jjx" Jan 26 14:06:45 crc kubenswrapper[4844]: I0126 14:06:45.007739 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-85jjx"] Jan 26 14:06:46 crc kubenswrapper[4844]: I0126 14:06:46.238120 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-85jjx" podUID="8672b685-f50f-4bd4-a837-114adf224892" containerName="registry-server" containerID="cri-o://47c8263a54fb20886950cec02e948bb0b3ecb14b7e053af0cabdda47c7be0ce2" gracePeriod=2 Jan 26 14:06:47 crc kubenswrapper[4844]: I0126 14:06:47.251573 4844 generic.go:334] "Generic (PLEG): container finished" podID="8672b685-f50f-4bd4-a837-114adf224892" containerID="47c8263a54fb20886950cec02e948bb0b3ecb14b7e053af0cabdda47c7be0ce2" exitCode=0 Jan 26 14:06:47 crc kubenswrapper[4844]: I0126 14:06:47.251767 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-85jjx" event={"ID":"8672b685-f50f-4bd4-a837-114adf224892","Type":"ContainerDied","Data":"47c8263a54fb20886950cec02e948bb0b3ecb14b7e053af0cabdda47c7be0ce2"} Jan 26 14:06:48 crc kubenswrapper[4844]: I0126 14:06:48.083396 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-85jjx" Jan 26 14:06:48 crc kubenswrapper[4844]: I0126 14:06:48.244509 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8672b685-f50f-4bd4-a837-114adf224892-utilities\") pod \"8672b685-f50f-4bd4-a837-114adf224892\" (UID: \"8672b685-f50f-4bd4-a837-114adf224892\") " Jan 26 14:06:48 crc kubenswrapper[4844]: I0126 14:06:48.244640 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8672b685-f50f-4bd4-a837-114adf224892-catalog-content\") pod \"8672b685-f50f-4bd4-a837-114adf224892\" (UID: \"8672b685-f50f-4bd4-a837-114adf224892\") " Jan 26 14:06:48 crc kubenswrapper[4844]: I0126 14:06:48.244691 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jd68w\" (UniqueName: \"kubernetes.io/projected/8672b685-f50f-4bd4-a837-114adf224892-kube-api-access-jd68w\") pod \"8672b685-f50f-4bd4-a837-114adf224892\" (UID: \"8672b685-f50f-4bd4-a837-114adf224892\") " Jan 26 14:06:48 crc kubenswrapper[4844]: I0126 14:06:48.245358 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8672b685-f50f-4bd4-a837-114adf224892-utilities" (OuterVolumeSpecName: "utilities") pod "8672b685-f50f-4bd4-a837-114adf224892" (UID: "8672b685-f50f-4bd4-a837-114adf224892"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:06:48 crc kubenswrapper[4844]: I0126 14:06:48.267068 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-85jjx" event={"ID":"8672b685-f50f-4bd4-a837-114adf224892","Type":"ContainerDied","Data":"1cd9b1aa2b366144850bde6173456e43fefaebd530da6fcd9518ad44bd36dbdf"} Jan 26 14:06:48 crc kubenswrapper[4844]: I0126 14:06:48.267137 4844 scope.go:117] "RemoveContainer" containerID="47c8263a54fb20886950cec02e948bb0b3ecb14b7e053af0cabdda47c7be0ce2" Jan 26 14:06:48 crc kubenswrapper[4844]: I0126 14:06:48.267171 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-85jjx" Jan 26 14:06:48 crc kubenswrapper[4844]: I0126 14:06:48.268894 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8672b685-f50f-4bd4-a837-114adf224892-kube-api-access-jd68w" (OuterVolumeSpecName: "kube-api-access-jd68w") pod "8672b685-f50f-4bd4-a837-114adf224892" (UID: "8672b685-f50f-4bd4-a837-114adf224892"). InnerVolumeSpecName "kube-api-access-jd68w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:06:48 crc kubenswrapper[4844]: I0126 14:06:48.348460 4844 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8672b685-f50f-4bd4-a837-114adf224892-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 14:06:48 crc kubenswrapper[4844]: I0126 14:06:48.348498 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jd68w\" (UniqueName: \"kubernetes.io/projected/8672b685-f50f-4bd4-a837-114adf224892-kube-api-access-jd68w\") on node \"crc\" DevicePath \"\"" Jan 26 14:06:48 crc kubenswrapper[4844]: I0126 14:06:48.350692 4844 scope.go:117] "RemoveContainer" containerID="429d323003d9f22376bd0a17e82bb5b2e3ab82d383f693cfef79319e42bb83d3" Jan 26 14:06:48 crc kubenswrapper[4844]: I0126 14:06:48.381778 4844 scope.go:117] "RemoveContainer" containerID="bb5054d88dc743518c5bd6af56072b48214dba9d5b3ece7e5e8731210f414822" Jan 26 14:06:48 crc kubenswrapper[4844]: I0126 14:06:48.387479 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8672b685-f50f-4bd4-a837-114adf224892-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8672b685-f50f-4bd4-a837-114adf224892" (UID: "8672b685-f50f-4bd4-a837-114adf224892"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:06:48 crc kubenswrapper[4844]: I0126 14:06:48.450975 4844 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8672b685-f50f-4bd4-a837-114adf224892-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 14:06:48 crc kubenswrapper[4844]: I0126 14:06:48.611052 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-85jjx"] Jan 26 14:06:48 crc kubenswrapper[4844]: I0126 14:06:48.621169 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-85jjx"] Jan 26 14:06:49 crc kubenswrapper[4844]: I0126 14:06:49.331690 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8672b685-f50f-4bd4-a837-114adf224892" path="/var/lib/kubelet/pods/8672b685-f50f-4bd4-a837-114adf224892/volumes" Jan 26 14:07:06 crc kubenswrapper[4844]: I0126 14:07:06.365722 4844 patch_prober.go:28] interesting pod/machine-config-daemon-j7r9j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 14:07:06 crc kubenswrapper[4844]: I0126 14:07:06.366513 4844 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 14:07:06 crc kubenswrapper[4844]: I0126 14:07:06.366589 4844 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" Jan 26 14:07:06 crc kubenswrapper[4844]: I0126 14:07:06.368221 4844 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"86b7cf326f578bbdf65f96cb187aab18331f8535398d4d1c7e9c9dae05da383b"} pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" 
containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 14:07:06 crc kubenswrapper[4844]: I0126 14:07:06.368349 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" containerName="machine-config-daemon" containerID="cri-o://86b7cf326f578bbdf65f96cb187aab18331f8535398d4d1c7e9c9dae05da383b" gracePeriod=600 Jan 26 14:07:06 crc kubenswrapper[4844]: E0126 14:07:06.499358 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 14:07:07 crc kubenswrapper[4844]: I0126 14:07:07.486950 4844 generic.go:334] "Generic (PLEG): container finished" podID="e3602fc7-397b-4d73-ab0c-45acc047397b" containerID="86b7cf326f578bbdf65f96cb187aab18331f8535398d4d1c7e9c9dae05da383b" exitCode=0 Jan 26 14:07:07 crc kubenswrapper[4844]: I0126 14:07:07.487012 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" event={"ID":"e3602fc7-397b-4d73-ab0c-45acc047397b","Type":"ContainerDied","Data":"86b7cf326f578bbdf65f96cb187aab18331f8535398d4d1c7e9c9dae05da383b"} Jan 26 14:07:07 crc kubenswrapper[4844]: I0126 14:07:07.487784 4844 scope.go:117] "RemoveContainer" containerID="2f4a911f1f761e1468d052d2cb5b37a36b219a2592a90d5515a67fef38286501" Jan 26 14:07:07 crc kubenswrapper[4844]: I0126 14:07:07.488402 4844 scope.go:117] "RemoveContainer" containerID="86b7cf326f578bbdf65f96cb187aab18331f8535398d4d1c7e9c9dae05da383b" Jan 26 14:07:07 crc kubenswrapper[4844]: E0126 14:07:07.488733 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 14:07:13 crc kubenswrapper[4844]: I0126 14:07:13.757585 4844 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="f80a52fc-df6a-4218-913e-2ee03174e341" containerName="galera" probeResult="failure" output="command timed out" Jan 26 14:07:13 crc kubenswrapper[4844]: I0126 14:07:13.757585 4844 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="f80a52fc-df6a-4218-913e-2ee03174e341" containerName="galera" probeResult="failure" output="command timed out" Jan 26 14:07:21 crc kubenswrapper[4844]: I0126 14:07:21.313816 4844 scope.go:117] "RemoveContainer" containerID="86b7cf326f578bbdf65f96cb187aab18331f8535398d4d1c7e9c9dae05da383b" Jan 26 14:07:21 crc kubenswrapper[4844]: E0126 14:07:21.314405 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 14:07:33 crc kubenswrapper[4844]: I0126 14:07:33.332495 4844 scope.go:117] "RemoveContainer" containerID="86b7cf326f578bbdf65f96cb187aab18331f8535398d4d1c7e9c9dae05da383b" Jan 26 14:07:33 crc kubenswrapper[4844]: E0126 14:07:33.333587 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 14:07:46 crc kubenswrapper[4844]: I0126 14:07:46.313358 4844 scope.go:117] "RemoveContainer" containerID="86b7cf326f578bbdf65f96cb187aab18331f8535398d4d1c7e9c9dae05da383b" Jan 26 14:07:46 crc kubenswrapper[4844]: E0126 14:07:46.314280 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 14:07:59 crc kubenswrapper[4844]: I0126 14:07:59.313513 4844 scope.go:117] "RemoveContainer" containerID="86b7cf326f578bbdf65f96cb187aab18331f8535398d4d1c7e9c9dae05da383b" Jan 26 14:07:59 crc kubenswrapper[4844]: E0126 14:07:59.314483 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 14:08:11 crc kubenswrapper[4844]: I0126 14:08:11.314183 4844 scope.go:117] "RemoveContainer" containerID="86b7cf326f578bbdf65f96cb187aab18331f8535398d4d1c7e9c9dae05da383b" Jan 26 14:08:11 crc kubenswrapper[4844]: E0126 14:08:11.315190 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 14:08:22 crc kubenswrapper[4844]: I0126 14:08:22.313974 4844 scope.go:117] "RemoveContainer" containerID="86b7cf326f578bbdf65f96cb187aab18331f8535398d4d1c7e9c9dae05da383b" Jan 26 14:08:22 crc kubenswrapper[4844]: E0126 14:08:22.314941 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 14:08:34 crc kubenswrapper[4844]: I0126 14:08:34.313992 4844 
scope.go:117] "RemoveContainer" containerID="86b7cf326f578bbdf65f96cb187aab18331f8535398d4d1c7e9c9dae05da383b" Jan 26 14:08:34 crc kubenswrapper[4844]: E0126 14:08:34.315118 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 14:08:48 crc kubenswrapper[4844]: I0126 14:08:48.313247 4844 scope.go:117] "RemoveContainer" containerID="86b7cf326f578bbdf65f96cb187aab18331f8535398d4d1c7e9c9dae05da383b" Jan 26 14:08:48 crc kubenswrapper[4844]: E0126 14:08:48.313982 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 14:09:02 crc kubenswrapper[4844]: I0126 14:09:02.313885 4844 scope.go:117] "RemoveContainer" containerID="86b7cf326f578bbdf65f96cb187aab18331f8535398d4d1c7e9c9dae05da383b" Jan 26 14:09:02 crc kubenswrapper[4844]: E0126 14:09:02.314817 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 14:09:15 crc kubenswrapper[4844]: I0126 14:09:15.315132 4844 scope.go:117] "RemoveContainer" containerID="86b7cf326f578bbdf65f96cb187aab18331f8535398d4d1c7e9c9dae05da383b" Jan 26 14:09:15 crc kubenswrapper[4844]: E0126 14:09:15.316870 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 14:09:27 crc kubenswrapper[4844]: I0126 14:09:27.313315 4844 scope.go:117] "RemoveContainer" containerID="86b7cf326f578bbdf65f96cb187aab18331f8535398d4d1c7e9c9dae05da383b" Jan 26 14:09:27 crc kubenswrapper[4844]: E0126 14:09:27.314272 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 14:09:34 crc kubenswrapper[4844]: I0126 14:09:34.059823 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-2plmb"] Jan 26 14:09:34 crc kubenswrapper[4844]: E0126 14:09:34.060970 4844 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="8672b685-f50f-4bd4-a837-114adf224892" containerName="extract-content" Jan 26 14:09:34 crc kubenswrapper[4844]: I0126 14:09:34.060990 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="8672b685-f50f-4bd4-a837-114adf224892" containerName="extract-content" Jan 26 14:09:34 crc kubenswrapper[4844]: E0126 14:09:34.061018 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8672b685-f50f-4bd4-a837-114adf224892" containerName="extract-utilities" Jan 26 14:09:34 crc kubenswrapper[4844]: I0126 14:09:34.061029 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="8672b685-f50f-4bd4-a837-114adf224892" containerName="extract-utilities" Jan 26 14:09:34 crc kubenswrapper[4844]: E0126 14:09:34.061069 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8672b685-f50f-4bd4-a837-114adf224892" containerName="registry-server" Jan 26 14:09:34 crc kubenswrapper[4844]: I0126 14:09:34.061078 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="8672b685-f50f-4bd4-a837-114adf224892" containerName="registry-server" Jan 26 14:09:34 crc kubenswrapper[4844]: I0126 14:09:34.061361 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="8672b685-f50f-4bd4-a837-114adf224892" containerName="registry-server" Jan 26 14:09:34 crc kubenswrapper[4844]: I0126 14:09:34.063216 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2plmb" Jan 26 14:09:34 crc kubenswrapper[4844]: I0126 14:09:34.079313 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2plmb"] Jan 26 14:09:34 crc kubenswrapper[4844]: I0126 14:09:34.195286 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fed2ad98-767c-4c8d-9317-a771d72617cf-catalog-content\") pod \"redhat-marketplace-2plmb\" (UID: \"fed2ad98-767c-4c8d-9317-a771d72617cf\") " pod="openshift-marketplace/redhat-marketplace-2plmb" Jan 26 14:09:34 crc kubenswrapper[4844]: I0126 14:09:34.195385 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fed2ad98-767c-4c8d-9317-a771d72617cf-utilities\") pod \"redhat-marketplace-2plmb\" (UID: \"fed2ad98-767c-4c8d-9317-a771d72617cf\") " pod="openshift-marketplace/redhat-marketplace-2plmb" Jan 26 14:09:34 crc kubenswrapper[4844]: I0126 14:09:34.195416 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2n4wk\" (UniqueName: \"kubernetes.io/projected/fed2ad98-767c-4c8d-9317-a771d72617cf-kube-api-access-2n4wk\") pod \"redhat-marketplace-2plmb\" (UID: \"fed2ad98-767c-4c8d-9317-a771d72617cf\") " pod="openshift-marketplace/redhat-marketplace-2plmb" Jan 26 14:09:34 crc kubenswrapper[4844]: I0126 14:09:34.297457 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fed2ad98-767c-4c8d-9317-a771d72617cf-utilities\") pod \"redhat-marketplace-2plmb\" (UID: \"fed2ad98-767c-4c8d-9317-a771d72617cf\") " pod="openshift-marketplace/redhat-marketplace-2plmb" Jan 26 14:09:34 crc kubenswrapper[4844]: I0126 14:09:34.297518 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2n4wk\" (UniqueName: 
\"kubernetes.io/projected/fed2ad98-767c-4c8d-9317-a771d72617cf-kube-api-access-2n4wk\") pod \"redhat-marketplace-2plmb\" (UID: \"fed2ad98-767c-4c8d-9317-a771d72617cf\") " pod="openshift-marketplace/redhat-marketplace-2plmb" Jan 26 14:09:34 crc kubenswrapper[4844]: I0126 14:09:34.297681 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fed2ad98-767c-4c8d-9317-a771d72617cf-catalog-content\") pod \"redhat-marketplace-2plmb\" (UID: \"fed2ad98-767c-4c8d-9317-a771d72617cf\") " pod="openshift-marketplace/redhat-marketplace-2plmb" Jan 26 14:09:34 crc kubenswrapper[4844]: I0126 14:09:34.298082 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fed2ad98-767c-4c8d-9317-a771d72617cf-utilities\") pod \"redhat-marketplace-2plmb\" (UID: \"fed2ad98-767c-4c8d-9317-a771d72617cf\") " pod="openshift-marketplace/redhat-marketplace-2plmb" Jan 26 14:09:34 crc kubenswrapper[4844]: I0126 14:09:34.298277 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fed2ad98-767c-4c8d-9317-a771d72617cf-catalog-content\") pod \"redhat-marketplace-2plmb\" (UID: \"fed2ad98-767c-4c8d-9317-a771d72617cf\") " pod="openshift-marketplace/redhat-marketplace-2plmb" Jan 26 14:09:34 crc kubenswrapper[4844]: I0126 14:09:34.319025 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2n4wk\" (UniqueName: \"kubernetes.io/projected/fed2ad98-767c-4c8d-9317-a771d72617cf-kube-api-access-2n4wk\") pod \"redhat-marketplace-2plmb\" (UID: \"fed2ad98-767c-4c8d-9317-a771d72617cf\") " pod="openshift-marketplace/redhat-marketplace-2plmb" Jan 26 14:09:34 crc kubenswrapper[4844]: I0126 14:09:34.390716 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2plmb" Jan 26 14:09:34 crc kubenswrapper[4844]: I0126 14:09:34.927761 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2plmb"] Jan 26 14:09:35 crc kubenswrapper[4844]: I0126 14:09:35.115476 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2plmb" event={"ID":"fed2ad98-767c-4c8d-9317-a771d72617cf","Type":"ContainerStarted","Data":"8b9b4da0ecf64a6120acb8b2ddf4e0b7448fb3e432ea1010097115e38cc3897d"} Jan 26 14:09:36 crc kubenswrapper[4844]: I0126 14:09:36.128115 4844 generic.go:334] "Generic (PLEG): container finished" podID="fed2ad98-767c-4c8d-9317-a771d72617cf" containerID="4046026fcb95fe2518442f8321c395f21bd4aebb2f64ae3184ad2be5ffc45ddf" exitCode=0 Jan 26 14:09:36 crc kubenswrapper[4844]: I0126 14:09:36.128199 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2plmb" event={"ID":"fed2ad98-767c-4c8d-9317-a771d72617cf","Type":"ContainerDied","Data":"4046026fcb95fe2518442f8321c395f21bd4aebb2f64ae3184ad2be5ffc45ddf"} Jan 26 14:09:37 crc kubenswrapper[4844]: I0126 14:09:37.140273 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2plmb" event={"ID":"fed2ad98-767c-4c8d-9317-a771d72617cf","Type":"ContainerStarted","Data":"7c25e359f3746cff8eaca3d5ea7551db67266a2509361eb338e5b67d0675c92d"} Jan 26 14:09:38 crc kubenswrapper[4844]: I0126 14:09:38.152978 4844 generic.go:334] "Generic (PLEG): container finished" podID="fed2ad98-767c-4c8d-9317-a771d72617cf" containerID="7c25e359f3746cff8eaca3d5ea7551db67266a2509361eb338e5b67d0675c92d" exitCode=0 Jan 26 14:09:38 crc kubenswrapper[4844]: I0126 14:09:38.153013 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2plmb" event={"ID":"fed2ad98-767c-4c8d-9317-a771d72617cf","Type":"ContainerDied","Data":"7c25e359f3746cff8eaca3d5ea7551db67266a2509361eb338e5b67d0675c92d"} Jan 26 14:09:39 crc kubenswrapper[4844]: I0126 14:09:39.313465 4844 scope.go:117] "RemoveContainer" containerID="86b7cf326f578bbdf65f96cb187aab18331f8535398d4d1c7e9c9dae05da383b" Jan 26 14:09:39 crc kubenswrapper[4844]: E0126 14:09:39.314018 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 14:09:40 crc kubenswrapper[4844]: I0126 14:09:40.172526 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2plmb" event={"ID":"fed2ad98-767c-4c8d-9317-a771d72617cf","Type":"ContainerStarted","Data":"b9b4e32dc9bf8449e00580c6ac36c2a8f0da9e3968758db00d8286aa1266012e"} Jan 26 14:09:40 crc kubenswrapper[4844]: I0126 14:09:40.196971 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-2plmb" podStartSLOduration=3.655563171 podStartE2EDuration="6.196951919s" podCreationTimestamp="2026-01-26 14:09:34 +0000 UTC" firstStartedPulling="2026-01-26 14:09:36.131868238 +0000 UTC m=+5153.065235860" lastFinishedPulling="2026-01-26 14:09:38.673256996 +0000 UTC m=+5155.606624608" observedRunningTime="2026-01-26 
14:09:40.189707112 +0000 UTC m=+5157.123074734" watchObservedRunningTime="2026-01-26 14:09:40.196951919 +0000 UTC m=+5157.130319531" Jan 26 14:09:44 crc kubenswrapper[4844]: I0126 14:09:44.390885 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-2plmb" Jan 26 14:09:44 crc kubenswrapper[4844]: I0126 14:09:44.391532 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-2plmb" Jan 26 14:09:44 crc kubenswrapper[4844]: I0126 14:09:44.449101 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-2plmb" Jan 26 14:09:45 crc kubenswrapper[4844]: I0126 14:09:45.284394 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-2plmb" Jan 26 14:09:47 crc kubenswrapper[4844]: I0126 14:09:47.685293 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2plmb"] Jan 26 14:09:47 crc kubenswrapper[4844]: I0126 14:09:47.686225 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-2plmb" podUID="fed2ad98-767c-4c8d-9317-a771d72617cf" containerName="registry-server" containerID="cri-o://b9b4e32dc9bf8449e00580c6ac36c2a8f0da9e3968758db00d8286aa1266012e" gracePeriod=2 Jan 26 14:09:48 crc kubenswrapper[4844]: I0126 14:09:48.278803 4844 generic.go:334] "Generic (PLEG): container finished" podID="fed2ad98-767c-4c8d-9317-a771d72617cf" containerID="b9b4e32dc9bf8449e00580c6ac36c2a8f0da9e3968758db00d8286aa1266012e" exitCode=0 Jan 26 14:09:48 crc kubenswrapper[4844]: I0126 14:09:48.278857 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2plmb" event={"ID":"fed2ad98-767c-4c8d-9317-a771d72617cf","Type":"ContainerDied","Data":"b9b4e32dc9bf8449e00580c6ac36c2a8f0da9e3968758db00d8286aa1266012e"} Jan 26 14:09:48 crc kubenswrapper[4844]: I0126 14:09:48.636479 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2plmb" Jan 26 14:09:48 crc kubenswrapper[4844]: I0126 14:09:48.699156 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2n4wk\" (UniqueName: \"kubernetes.io/projected/fed2ad98-767c-4c8d-9317-a771d72617cf-kube-api-access-2n4wk\") pod \"fed2ad98-767c-4c8d-9317-a771d72617cf\" (UID: \"fed2ad98-767c-4c8d-9317-a771d72617cf\") " Jan 26 14:09:48 crc kubenswrapper[4844]: I0126 14:09:48.699216 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fed2ad98-767c-4c8d-9317-a771d72617cf-catalog-content\") pod \"fed2ad98-767c-4c8d-9317-a771d72617cf\" (UID: \"fed2ad98-767c-4c8d-9317-a771d72617cf\") " Jan 26 14:09:48 crc kubenswrapper[4844]: I0126 14:09:48.699346 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fed2ad98-767c-4c8d-9317-a771d72617cf-utilities\") pod \"fed2ad98-767c-4c8d-9317-a771d72617cf\" (UID: \"fed2ad98-767c-4c8d-9317-a771d72617cf\") " Jan 26 14:09:48 crc kubenswrapper[4844]: I0126 14:09:48.700505 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fed2ad98-767c-4c8d-9317-a771d72617cf-utilities" (OuterVolumeSpecName: "utilities") pod "fed2ad98-767c-4c8d-9317-a771d72617cf" (UID: "fed2ad98-767c-4c8d-9317-a771d72617cf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:09:48 crc kubenswrapper[4844]: I0126 14:09:48.706015 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fed2ad98-767c-4c8d-9317-a771d72617cf-kube-api-access-2n4wk" (OuterVolumeSpecName: "kube-api-access-2n4wk") pod "fed2ad98-767c-4c8d-9317-a771d72617cf" (UID: "fed2ad98-767c-4c8d-9317-a771d72617cf"). InnerVolumeSpecName "kube-api-access-2n4wk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:09:48 crc kubenswrapper[4844]: I0126 14:09:48.728152 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fed2ad98-767c-4c8d-9317-a771d72617cf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fed2ad98-767c-4c8d-9317-a771d72617cf" (UID: "fed2ad98-767c-4c8d-9317-a771d72617cf"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:09:48 crc kubenswrapper[4844]: I0126 14:09:48.803534 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2n4wk\" (UniqueName: \"kubernetes.io/projected/fed2ad98-767c-4c8d-9317-a771d72617cf-kube-api-access-2n4wk\") on node \"crc\" DevicePath \"\"" Jan 26 14:09:48 crc kubenswrapper[4844]: I0126 14:09:48.803564 4844 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fed2ad98-767c-4c8d-9317-a771d72617cf-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 14:09:48 crc kubenswrapper[4844]: I0126 14:09:48.803574 4844 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fed2ad98-767c-4c8d-9317-a771d72617cf-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 14:09:49 crc kubenswrapper[4844]: I0126 14:09:49.341267 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2plmb" Jan 26 14:09:49 crc kubenswrapper[4844]: I0126 14:09:49.342883 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2plmb" event={"ID":"fed2ad98-767c-4c8d-9317-a771d72617cf","Type":"ContainerDied","Data":"8b9b4da0ecf64a6120acb8b2ddf4e0b7448fb3e432ea1010097115e38cc3897d"} Jan 26 14:09:49 crc kubenswrapper[4844]: I0126 14:09:49.342948 4844 scope.go:117] "RemoveContainer" containerID="b9b4e32dc9bf8449e00580c6ac36c2a8f0da9e3968758db00d8286aa1266012e" Jan 26 14:09:49 crc kubenswrapper[4844]: I0126 14:09:49.372881 4844 scope.go:117] "RemoveContainer" containerID="7c25e359f3746cff8eaca3d5ea7551db67266a2509361eb338e5b67d0675c92d" Jan 26 14:09:49 crc kubenswrapper[4844]: I0126 14:09:49.401330 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2plmb"] Jan 26 14:09:49 crc kubenswrapper[4844]: I0126 14:09:49.412745 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-2plmb"] Jan 26 14:09:49 crc kubenswrapper[4844]: I0126 14:09:49.810278 4844 scope.go:117] "RemoveContainer" containerID="4046026fcb95fe2518442f8321c395f21bd4aebb2f64ae3184ad2be5ffc45ddf" Jan 26 14:09:50 crc kubenswrapper[4844]: I0126 14:09:50.313973 4844 scope.go:117] "RemoveContainer" containerID="86b7cf326f578bbdf65f96cb187aab18331f8535398d4d1c7e9c9dae05da383b" Jan 26 14:09:50 crc kubenswrapper[4844]: E0126 14:09:50.314827 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 14:09:51 crc kubenswrapper[4844]: I0126 14:09:51.334375 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fed2ad98-767c-4c8d-9317-a771d72617cf" path="/var/lib/kubelet/pods/fed2ad98-767c-4c8d-9317-a771d72617cf/volumes" Jan 26 14:10:03 crc kubenswrapper[4844]: I0126 14:10:03.321411 4844 scope.go:117] "RemoveContainer" containerID="86b7cf326f578bbdf65f96cb187aab18331f8535398d4d1c7e9c9dae05da383b" Jan 26 14:10:03 crc kubenswrapper[4844]: E0126 14:10:03.322302 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 14:10:15 crc kubenswrapper[4844]: I0126 14:10:15.313638 4844 scope.go:117] "RemoveContainer" containerID="86b7cf326f578bbdf65f96cb187aab18331f8535398d4d1c7e9c9dae05da383b" Jan 26 14:10:15 crc kubenswrapper[4844]: E0126 14:10:15.314813 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 
14:10:26 crc kubenswrapper[4844]: I0126 14:10:26.313795 4844 scope.go:117] "RemoveContainer" containerID="86b7cf326f578bbdf65f96cb187aab18331f8535398d4d1c7e9c9dae05da383b" Jan 26 14:10:26 crc kubenswrapper[4844]: E0126 14:10:26.316181 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 14:10:40 crc kubenswrapper[4844]: I0126 14:10:40.314258 4844 scope.go:117] "RemoveContainer" containerID="86b7cf326f578bbdf65f96cb187aab18331f8535398d4d1c7e9c9dae05da383b" Jan 26 14:10:40 crc kubenswrapper[4844]: E0126 14:10:40.315009 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 14:10:51 crc kubenswrapper[4844]: I0126 14:10:51.313125 4844 scope.go:117] "RemoveContainer" containerID="86b7cf326f578bbdf65f96cb187aab18331f8535398d4d1c7e9c9dae05da383b" Jan 26 14:10:51 crc kubenswrapper[4844]: E0126 14:10:51.313892 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 14:11:03 crc kubenswrapper[4844]: I0126 14:11:03.322349 4844 scope.go:117] "RemoveContainer" containerID="86b7cf326f578bbdf65f96cb187aab18331f8535398d4d1c7e9c9dae05da383b" Jan 26 14:11:03 crc kubenswrapper[4844]: E0126 14:11:03.323105 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 14:11:17 crc kubenswrapper[4844]: I0126 14:11:17.313903 4844 scope.go:117] "RemoveContainer" containerID="86b7cf326f578bbdf65f96cb187aab18331f8535398d4d1c7e9c9dae05da383b" Jan 26 14:11:17 crc kubenswrapper[4844]: E0126 14:11:17.314630 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 14:11:29 crc kubenswrapper[4844]: I0126 14:11:29.314001 4844 scope.go:117] "RemoveContainer" containerID="86b7cf326f578bbdf65f96cb187aab18331f8535398d4d1c7e9c9dae05da383b" Jan 26 14:11:29 crc 
kubenswrapper[4844]: E0126 14:11:29.315003 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 14:11:42 crc kubenswrapper[4844]: I0126 14:11:42.314249 4844 scope.go:117] "RemoveContainer" containerID="86b7cf326f578bbdf65f96cb187aab18331f8535398d4d1c7e9c9dae05da383b" Jan 26 14:11:42 crc kubenswrapper[4844]: E0126 14:11:42.316798 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 14:11:55 crc kubenswrapper[4844]: I0126 14:11:55.314276 4844 scope.go:117] "RemoveContainer" containerID="86b7cf326f578bbdf65f96cb187aab18331f8535398d4d1c7e9c9dae05da383b" Jan 26 14:11:55 crc kubenswrapper[4844]: E0126 14:11:55.315722 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 14:12:08 crc kubenswrapper[4844]: I0126 14:12:08.313926 4844 scope.go:117] "RemoveContainer" containerID="86b7cf326f578bbdf65f96cb187aab18331f8535398d4d1c7e9c9dae05da383b" Jan 26 14:12:08 crc kubenswrapper[4844]: I0126 14:12:08.955489 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" event={"ID":"e3602fc7-397b-4d73-ab0c-45acc047397b","Type":"ContainerStarted","Data":"559718bf083b95e9b7324cd4620495d37586d77e83a89c34fb9c0332383889a7"} Jan 26 14:14:36 crc kubenswrapper[4844]: I0126 14:14:36.364713 4844 patch_prober.go:28] interesting pod/machine-config-daemon-j7r9j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 14:14:36 crc kubenswrapper[4844]: I0126 14:14:36.365383 4844 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 14:14:51 crc kubenswrapper[4844]: I0126 14:14:51.579346 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4rx8c"] Jan 26 14:14:51 crc kubenswrapper[4844]: E0126 14:14:51.581149 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fed2ad98-767c-4c8d-9317-a771d72617cf" containerName="extract-content" Jan 26 14:14:51 crc kubenswrapper[4844]: I0126 14:14:51.581237 4844 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="fed2ad98-767c-4c8d-9317-a771d72617cf" containerName="extract-content" Jan 26 14:14:51 crc kubenswrapper[4844]: E0126 14:14:51.581327 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fed2ad98-767c-4c8d-9317-a771d72617cf" containerName="registry-server" Jan 26 14:14:51 crc kubenswrapper[4844]: I0126 14:14:51.581389 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="fed2ad98-767c-4c8d-9317-a771d72617cf" containerName="registry-server" Jan 26 14:14:51 crc kubenswrapper[4844]: E0126 14:14:51.581477 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fed2ad98-767c-4c8d-9317-a771d72617cf" containerName="extract-utilities" Jan 26 14:14:51 crc kubenswrapper[4844]: I0126 14:14:51.581541 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="fed2ad98-767c-4c8d-9317-a771d72617cf" containerName="extract-utilities" Jan 26 14:14:51 crc kubenswrapper[4844]: I0126 14:14:51.581840 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="fed2ad98-767c-4c8d-9317-a771d72617cf" containerName="registry-server" Jan 26 14:14:51 crc kubenswrapper[4844]: I0126 14:14:51.583523 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4rx8c" Jan 26 14:14:51 crc kubenswrapper[4844]: I0126 14:14:51.593225 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4rx8c"] Jan 26 14:14:51 crc kubenswrapper[4844]: I0126 14:14:51.726430 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b22d228-f7f6-444c-a6ff-a4b22a533906-catalog-content\") pod \"certified-operators-4rx8c\" (UID: \"5b22d228-f7f6-444c-a6ff-a4b22a533906\") " pod="openshift-marketplace/certified-operators-4rx8c" Jan 26 14:14:51 crc kubenswrapper[4844]: I0126 14:14:51.726536 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmpjm\" (UniqueName: \"kubernetes.io/projected/5b22d228-f7f6-444c-a6ff-a4b22a533906-kube-api-access-kmpjm\") pod \"certified-operators-4rx8c\" (UID: \"5b22d228-f7f6-444c-a6ff-a4b22a533906\") " pod="openshift-marketplace/certified-operators-4rx8c" Jan 26 14:14:51 crc kubenswrapper[4844]: I0126 14:14:51.726629 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b22d228-f7f6-444c-a6ff-a4b22a533906-utilities\") pod \"certified-operators-4rx8c\" (UID: \"5b22d228-f7f6-444c-a6ff-a4b22a533906\") " pod="openshift-marketplace/certified-operators-4rx8c" Jan 26 14:14:51 crc kubenswrapper[4844]: I0126 14:14:51.753588 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-x4h6x"] Jan 26 14:14:51 crc kubenswrapper[4844]: I0126 14:14:51.756978 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-x4h6x" Jan 26 14:14:51 crc kubenswrapper[4844]: I0126 14:14:51.769038 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-x4h6x"] Jan 26 14:14:51 crc kubenswrapper[4844]: I0126 14:14:51.828021 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bcb6de9f-9097-4994-a2e8-3f3d442bb95d-utilities\") pod \"community-operators-x4h6x\" (UID: \"bcb6de9f-9097-4994-a2e8-3f3d442bb95d\") " pod="openshift-marketplace/community-operators-x4h6x" Jan 26 14:14:51 crc kubenswrapper[4844]: I0126 14:14:51.828090 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmpjm\" (UniqueName: \"kubernetes.io/projected/5b22d228-f7f6-444c-a6ff-a4b22a533906-kube-api-access-kmpjm\") pod \"certified-operators-4rx8c\" (UID: \"5b22d228-f7f6-444c-a6ff-a4b22a533906\") " pod="openshift-marketplace/certified-operators-4rx8c" Jan 26 14:14:51 crc kubenswrapper[4844]: I0126 14:14:51.828183 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b22d228-f7f6-444c-a6ff-a4b22a533906-utilities\") pod \"certified-operators-4rx8c\" (UID: \"5b22d228-f7f6-444c-a6ff-a4b22a533906\") " pod="openshift-marketplace/certified-operators-4rx8c" Jan 26 14:14:51 crc kubenswrapper[4844]: I0126 14:14:51.828218 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6dgp\" (UniqueName: \"kubernetes.io/projected/bcb6de9f-9097-4994-a2e8-3f3d442bb95d-kube-api-access-l6dgp\") pod \"community-operators-x4h6x\" (UID: \"bcb6de9f-9097-4994-a2e8-3f3d442bb95d\") " pod="openshift-marketplace/community-operators-x4h6x" Jan 26 14:14:51 crc kubenswrapper[4844]: I0126 14:14:51.828240 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bcb6de9f-9097-4994-a2e8-3f3d442bb95d-catalog-content\") pod \"community-operators-x4h6x\" (UID: \"bcb6de9f-9097-4994-a2e8-3f3d442bb95d\") " pod="openshift-marketplace/community-operators-x4h6x" Jan 26 14:14:51 crc kubenswrapper[4844]: I0126 14:14:51.828390 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b22d228-f7f6-444c-a6ff-a4b22a533906-catalog-content\") pod \"certified-operators-4rx8c\" (UID: \"5b22d228-f7f6-444c-a6ff-a4b22a533906\") " pod="openshift-marketplace/certified-operators-4rx8c" Jan 26 14:14:51 crc kubenswrapper[4844]: I0126 14:14:51.828993 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b22d228-f7f6-444c-a6ff-a4b22a533906-catalog-content\") pod \"certified-operators-4rx8c\" (UID: \"5b22d228-f7f6-444c-a6ff-a4b22a533906\") " pod="openshift-marketplace/certified-operators-4rx8c" Jan 26 14:14:51 crc kubenswrapper[4844]: I0126 14:14:51.828993 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b22d228-f7f6-444c-a6ff-a4b22a533906-utilities\") pod \"certified-operators-4rx8c\" (UID: \"5b22d228-f7f6-444c-a6ff-a4b22a533906\") " pod="openshift-marketplace/certified-operators-4rx8c" Jan 26 14:14:51 crc kubenswrapper[4844]: I0126 14:14:51.865420 4844 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-kmpjm\" (UniqueName: \"kubernetes.io/projected/5b22d228-f7f6-444c-a6ff-a4b22a533906-kube-api-access-kmpjm\") pod \"certified-operators-4rx8c\" (UID: \"5b22d228-f7f6-444c-a6ff-a4b22a533906\") " pod="openshift-marketplace/certified-operators-4rx8c" Jan 26 14:14:51 crc kubenswrapper[4844]: I0126 14:14:51.930324 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6dgp\" (UniqueName: \"kubernetes.io/projected/bcb6de9f-9097-4994-a2e8-3f3d442bb95d-kube-api-access-l6dgp\") pod \"community-operators-x4h6x\" (UID: \"bcb6de9f-9097-4994-a2e8-3f3d442bb95d\") " pod="openshift-marketplace/community-operators-x4h6x" Jan 26 14:14:51 crc kubenswrapper[4844]: I0126 14:14:51.930707 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bcb6de9f-9097-4994-a2e8-3f3d442bb95d-catalog-content\") pod \"community-operators-x4h6x\" (UID: \"bcb6de9f-9097-4994-a2e8-3f3d442bb95d\") " pod="openshift-marketplace/community-operators-x4h6x" Jan 26 14:14:51 crc kubenswrapper[4844]: I0126 14:14:51.930900 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bcb6de9f-9097-4994-a2e8-3f3d442bb95d-utilities\") pod \"community-operators-x4h6x\" (UID: \"bcb6de9f-9097-4994-a2e8-3f3d442bb95d\") " pod="openshift-marketplace/community-operators-x4h6x" Jan 26 14:14:51 crc kubenswrapper[4844]: I0126 14:14:51.931470 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bcb6de9f-9097-4994-a2e8-3f3d442bb95d-catalog-content\") pod \"community-operators-x4h6x\" (UID: \"bcb6de9f-9097-4994-a2e8-3f3d442bb95d\") " pod="openshift-marketplace/community-operators-x4h6x" Jan 26 14:14:51 crc kubenswrapper[4844]: I0126 14:14:51.931534 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bcb6de9f-9097-4994-a2e8-3f3d442bb95d-utilities\") pod \"community-operators-x4h6x\" (UID: \"bcb6de9f-9097-4994-a2e8-3f3d442bb95d\") " pod="openshift-marketplace/community-operators-x4h6x" Jan 26 14:14:51 crc kubenswrapper[4844]: I0126 14:14:51.948027 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6dgp\" (UniqueName: \"kubernetes.io/projected/bcb6de9f-9097-4994-a2e8-3f3d442bb95d-kube-api-access-l6dgp\") pod \"community-operators-x4h6x\" (UID: \"bcb6de9f-9097-4994-a2e8-3f3d442bb95d\") " pod="openshift-marketplace/community-operators-x4h6x" Jan 26 14:14:51 crc kubenswrapper[4844]: I0126 14:14:51.951275 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4rx8c" Jan 26 14:14:52 crc kubenswrapper[4844]: I0126 14:14:52.077404 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-x4h6x" Jan 26 14:14:52 crc kubenswrapper[4844]: I0126 14:14:52.442816 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4rx8c"] Jan 26 14:14:52 crc kubenswrapper[4844]: I0126 14:14:52.759966 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-x4h6x"] Jan 26 14:14:52 crc kubenswrapper[4844]: I0126 14:14:52.834747 4844 generic.go:334] "Generic (PLEG): container finished" podID="5b22d228-f7f6-444c-a6ff-a4b22a533906" containerID="82bbecabd86dccfed18590cb76e1abfcc9ccb7aab2cb927f91c4837b6f8f5dc9" exitCode=0 Jan 26 14:14:52 crc kubenswrapper[4844]: I0126 14:14:52.834842 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4rx8c" event={"ID":"5b22d228-f7f6-444c-a6ff-a4b22a533906","Type":"ContainerDied","Data":"82bbecabd86dccfed18590cb76e1abfcc9ccb7aab2cb927f91c4837b6f8f5dc9"} Jan 26 14:14:52 crc kubenswrapper[4844]: I0126 14:14:52.835434 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4rx8c" event={"ID":"5b22d228-f7f6-444c-a6ff-a4b22a533906","Type":"ContainerStarted","Data":"eae81f32bb6918eae5fac23323252f094c43b4057a3dbd820719bf387427ef60"} Jan 26 14:14:52 crc kubenswrapper[4844]: I0126 14:14:52.837376 4844 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 14:14:53 crc kubenswrapper[4844]: I0126 14:14:53.847867 4844 generic.go:334] "Generic (PLEG): container finished" podID="bcb6de9f-9097-4994-a2e8-3f3d442bb95d" containerID="bc54430ccd9b9d554bb09799fa52e1a99d1605d76b3cb0df42c46e42f43ee009" exitCode=0 Jan 26 14:14:53 crc kubenswrapper[4844]: I0126 14:14:53.847952 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x4h6x" event={"ID":"bcb6de9f-9097-4994-a2e8-3f3d442bb95d","Type":"ContainerDied","Data":"bc54430ccd9b9d554bb09799fa52e1a99d1605d76b3cb0df42c46e42f43ee009"} Jan 26 14:14:53 crc kubenswrapper[4844]: I0126 14:14:53.848234 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x4h6x" event={"ID":"bcb6de9f-9097-4994-a2e8-3f3d442bb95d","Type":"ContainerStarted","Data":"7cf5fb89e1acb9d7d5d3d3744945b1bd2636a1fd0c592bf7019cef7aa0752e5b"} Jan 26 14:14:54 crc kubenswrapper[4844]: I0126 14:14:54.862924 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4rx8c" event={"ID":"5b22d228-f7f6-444c-a6ff-a4b22a533906","Type":"ContainerStarted","Data":"ca87fbdc2a912f4d0c99cabab0fc6b87799f1c85cb14b937fa54b9c6417c3cd8"} Jan 26 14:14:55 crc kubenswrapper[4844]: I0126 14:14:55.876387 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x4h6x" event={"ID":"bcb6de9f-9097-4994-a2e8-3f3d442bb95d","Type":"ContainerStarted","Data":"9217012086d5d12bab863cb10da7d38a22f46008a7bcd17fda77e2d7e3fb72a3"} Jan 26 14:14:55 crc kubenswrapper[4844]: I0126 14:14:55.881016 4844 generic.go:334] "Generic (PLEG): container finished" podID="5b22d228-f7f6-444c-a6ff-a4b22a533906" containerID="ca87fbdc2a912f4d0c99cabab0fc6b87799f1c85cb14b937fa54b9c6417c3cd8" exitCode=0 Jan 26 14:14:55 crc kubenswrapper[4844]: I0126 14:14:55.881111 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4rx8c" 
event={"ID":"5b22d228-f7f6-444c-a6ff-a4b22a533906","Type":"ContainerDied","Data":"ca87fbdc2a912f4d0c99cabab0fc6b87799f1c85cb14b937fa54b9c6417c3cd8"} Jan 26 14:14:58 crc kubenswrapper[4844]: I0126 14:14:58.912484 4844 generic.go:334] "Generic (PLEG): container finished" podID="bcb6de9f-9097-4994-a2e8-3f3d442bb95d" containerID="9217012086d5d12bab863cb10da7d38a22f46008a7bcd17fda77e2d7e3fb72a3" exitCode=0 Jan 26 14:14:58 crc kubenswrapper[4844]: I0126 14:14:58.912626 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x4h6x" event={"ID":"bcb6de9f-9097-4994-a2e8-3f3d442bb95d","Type":"ContainerDied","Data":"9217012086d5d12bab863cb10da7d38a22f46008a7bcd17fda77e2d7e3fb72a3"} Jan 26 14:14:58 crc kubenswrapper[4844]: I0126 14:14:58.915873 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4rx8c" event={"ID":"5b22d228-f7f6-444c-a6ff-a4b22a533906","Type":"ContainerStarted","Data":"a264e427b221cfb1222d5cd64d473e96302eb696649a1f6890343a63a66dbc33"} Jan 26 14:14:58 crc kubenswrapper[4844]: I0126 14:14:58.953228 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4rx8c" podStartSLOduration=2.496566805 podStartE2EDuration="7.953208914s" podCreationTimestamp="2026-01-26 14:14:51 +0000 UTC" firstStartedPulling="2026-01-26 14:14:52.836997725 +0000 UTC m=+5469.770365347" lastFinishedPulling="2026-01-26 14:14:58.293639804 +0000 UTC m=+5475.227007456" observedRunningTime="2026-01-26 14:14:58.950107439 +0000 UTC m=+5475.883475081" watchObservedRunningTime="2026-01-26 14:14:58.953208914 +0000 UTC m=+5475.886576526" Jan 26 14:14:59 crc kubenswrapper[4844]: I0126 14:14:59.929240 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x4h6x" event={"ID":"bcb6de9f-9097-4994-a2e8-3f3d442bb95d","Type":"ContainerStarted","Data":"7eaec19eef0cd6dc5542c0ac8732ed7c096f41ad8f9f9b85cfd5bd3b41e70812"} Jan 26 14:14:59 crc kubenswrapper[4844]: I0126 14:14:59.958208 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-x4h6x" podStartSLOduration=3.489903409 podStartE2EDuration="8.958183229s" podCreationTimestamp="2026-01-26 14:14:51 +0000 UTC" firstStartedPulling="2026-01-26 14:14:53.84973951 +0000 UTC m=+5470.783107132" lastFinishedPulling="2026-01-26 14:14:59.31801934 +0000 UTC m=+5476.251386952" observedRunningTime="2026-01-26 14:14:59.948402652 +0000 UTC m=+5476.881770284" watchObservedRunningTime="2026-01-26 14:14:59.958183229 +0000 UTC m=+5476.891550841" Jan 26 14:15:00 crc kubenswrapper[4844]: I0126 14:15:00.151964 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490615-wrnkj"] Jan 26 14:15:00 crc kubenswrapper[4844]: I0126 14:15:00.153816 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490615-wrnkj" Jan 26 14:15:00 crc kubenswrapper[4844]: I0126 14:15:00.159063 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 26 14:15:00 crc kubenswrapper[4844]: I0126 14:15:00.159161 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 26 14:15:00 crc kubenswrapper[4844]: I0126 14:15:00.178545 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490615-wrnkj"] Jan 26 14:15:00 crc kubenswrapper[4844]: I0126 14:15:00.226608 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0c24428f-8915-4e8d-b054-14f7df0caa5b-config-volume\") pod \"collect-profiles-29490615-wrnkj\" (UID: \"0c24428f-8915-4e8d-b054-14f7df0caa5b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490615-wrnkj" Jan 26 14:15:00 crc kubenswrapper[4844]: I0126 14:15:00.226803 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c22fb\" (UniqueName: \"kubernetes.io/projected/0c24428f-8915-4e8d-b054-14f7df0caa5b-kube-api-access-c22fb\") pod \"collect-profiles-29490615-wrnkj\" (UID: \"0c24428f-8915-4e8d-b054-14f7df0caa5b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490615-wrnkj" Jan 26 14:15:00 crc kubenswrapper[4844]: I0126 14:15:00.226913 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0c24428f-8915-4e8d-b054-14f7df0caa5b-secret-volume\") pod \"collect-profiles-29490615-wrnkj\" (UID: \"0c24428f-8915-4e8d-b054-14f7df0caa5b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490615-wrnkj" Jan 26 14:15:00 crc kubenswrapper[4844]: I0126 14:15:00.329086 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0c24428f-8915-4e8d-b054-14f7df0caa5b-secret-volume\") pod \"collect-profiles-29490615-wrnkj\" (UID: \"0c24428f-8915-4e8d-b054-14f7df0caa5b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490615-wrnkj" Jan 26 14:15:00 crc kubenswrapper[4844]: I0126 14:15:00.329181 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0c24428f-8915-4e8d-b054-14f7df0caa5b-config-volume\") pod \"collect-profiles-29490615-wrnkj\" (UID: \"0c24428f-8915-4e8d-b054-14f7df0caa5b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490615-wrnkj" Jan 26 14:15:00 crc kubenswrapper[4844]: I0126 14:15:00.329295 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c22fb\" (UniqueName: \"kubernetes.io/projected/0c24428f-8915-4e8d-b054-14f7df0caa5b-kube-api-access-c22fb\") pod \"collect-profiles-29490615-wrnkj\" (UID: \"0c24428f-8915-4e8d-b054-14f7df0caa5b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490615-wrnkj" Jan 26 14:15:00 crc kubenswrapper[4844]: I0126 14:15:00.331168 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0c24428f-8915-4e8d-b054-14f7df0caa5b-config-volume\") pod 
\"collect-profiles-29490615-wrnkj\" (UID: \"0c24428f-8915-4e8d-b054-14f7df0caa5b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490615-wrnkj" Jan 26 14:15:00 crc kubenswrapper[4844]: I0126 14:15:00.343892 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0c24428f-8915-4e8d-b054-14f7df0caa5b-secret-volume\") pod \"collect-profiles-29490615-wrnkj\" (UID: \"0c24428f-8915-4e8d-b054-14f7df0caa5b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490615-wrnkj" Jan 26 14:15:00 crc kubenswrapper[4844]: I0126 14:15:00.348437 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c22fb\" (UniqueName: \"kubernetes.io/projected/0c24428f-8915-4e8d-b054-14f7df0caa5b-kube-api-access-c22fb\") pod \"collect-profiles-29490615-wrnkj\" (UID: \"0c24428f-8915-4e8d-b054-14f7df0caa5b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490615-wrnkj" Jan 26 14:15:00 crc kubenswrapper[4844]: I0126 14:15:00.484833 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490615-wrnkj" Jan 26 14:15:00 crc kubenswrapper[4844]: I0126 14:15:00.965768 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490615-wrnkj"] Jan 26 14:15:01 crc kubenswrapper[4844]: I0126 14:15:01.947357 4844 generic.go:334] "Generic (PLEG): container finished" podID="0c24428f-8915-4e8d-b054-14f7df0caa5b" containerID="9901ab87665d38878089316bf456d790b44b9234e745133b6096d0af5eacf574" exitCode=0 Jan 26 14:15:01 crc kubenswrapper[4844]: I0126 14:15:01.947777 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490615-wrnkj" event={"ID":"0c24428f-8915-4e8d-b054-14f7df0caa5b","Type":"ContainerDied","Data":"9901ab87665d38878089316bf456d790b44b9234e745133b6096d0af5eacf574"} Jan 26 14:15:01 crc kubenswrapper[4844]: I0126 14:15:01.947805 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490615-wrnkj" event={"ID":"0c24428f-8915-4e8d-b054-14f7df0caa5b","Type":"ContainerStarted","Data":"e18242a9f5f09ef269c84d9c920f599078c1b2e75d7b1c1dc5d19a3f4eebf049"} Jan 26 14:15:01 crc kubenswrapper[4844]: I0126 14:15:01.951953 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4rx8c" Jan 26 14:15:01 crc kubenswrapper[4844]: I0126 14:15:01.952122 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4rx8c" Jan 26 14:15:02 crc kubenswrapper[4844]: I0126 14:15:02.006576 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4rx8c" Jan 26 14:15:02 crc kubenswrapper[4844]: I0126 14:15:02.078117 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-x4h6x" Jan 26 14:15:02 crc kubenswrapper[4844]: I0126 14:15:02.078164 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-x4h6x" Jan 26 14:15:03 crc kubenswrapper[4844]: I0126 14:15:03.125883 4844 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-x4h6x" podUID="bcb6de9f-9097-4994-a2e8-3f3d442bb95d" containerName="registry-server" 
probeResult="failure" output=< Jan 26 14:15:03 crc kubenswrapper[4844]: timeout: failed to connect service ":50051" within 1s Jan 26 14:15:03 crc kubenswrapper[4844]: > Jan 26 14:15:03 crc kubenswrapper[4844]: I0126 14:15:03.803993 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490615-wrnkj" Jan 26 14:15:03 crc kubenswrapper[4844]: I0126 14:15:03.913417 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0c24428f-8915-4e8d-b054-14f7df0caa5b-config-volume\") pod \"0c24428f-8915-4e8d-b054-14f7df0caa5b\" (UID: \"0c24428f-8915-4e8d-b054-14f7df0caa5b\") " Jan 26 14:15:03 crc kubenswrapper[4844]: I0126 14:15:03.913768 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0c24428f-8915-4e8d-b054-14f7df0caa5b-secret-volume\") pod \"0c24428f-8915-4e8d-b054-14f7df0caa5b\" (UID: \"0c24428f-8915-4e8d-b054-14f7df0caa5b\") " Jan 26 14:15:03 crc kubenswrapper[4844]: I0126 14:15:03.913812 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c22fb\" (UniqueName: \"kubernetes.io/projected/0c24428f-8915-4e8d-b054-14f7df0caa5b-kube-api-access-c22fb\") pod \"0c24428f-8915-4e8d-b054-14f7df0caa5b\" (UID: \"0c24428f-8915-4e8d-b054-14f7df0caa5b\") " Jan 26 14:15:03 crc kubenswrapper[4844]: I0126 14:15:03.914480 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c24428f-8915-4e8d-b054-14f7df0caa5b-config-volume" (OuterVolumeSpecName: "config-volume") pod "0c24428f-8915-4e8d-b054-14f7df0caa5b" (UID: "0c24428f-8915-4e8d-b054-14f7df0caa5b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:15:03 crc kubenswrapper[4844]: I0126 14:15:03.922461 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c24428f-8915-4e8d-b054-14f7df0caa5b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "0c24428f-8915-4e8d-b054-14f7df0caa5b" (UID: "0c24428f-8915-4e8d-b054-14f7df0caa5b"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:15:03 crc kubenswrapper[4844]: I0126 14:15:03.928993 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c24428f-8915-4e8d-b054-14f7df0caa5b-kube-api-access-c22fb" (OuterVolumeSpecName: "kube-api-access-c22fb") pod "0c24428f-8915-4e8d-b054-14f7df0caa5b" (UID: "0c24428f-8915-4e8d-b054-14f7df0caa5b"). InnerVolumeSpecName "kube-api-access-c22fb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:15:03 crc kubenswrapper[4844]: I0126 14:15:03.976318 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490615-wrnkj" event={"ID":"0c24428f-8915-4e8d-b054-14f7df0caa5b","Type":"ContainerDied","Data":"e18242a9f5f09ef269c84d9c920f599078c1b2e75d7b1c1dc5d19a3f4eebf049"} Jan 26 14:15:03 crc kubenswrapper[4844]: I0126 14:15:03.976348 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490615-wrnkj" Jan 26 14:15:03 crc kubenswrapper[4844]: I0126 14:15:03.976368 4844 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e18242a9f5f09ef269c84d9c920f599078c1b2e75d7b1c1dc5d19a3f4eebf049" Jan 26 14:15:04 crc kubenswrapper[4844]: I0126 14:15:04.016920 4844 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0c24428f-8915-4e8d-b054-14f7df0caa5b-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 14:15:04 crc kubenswrapper[4844]: I0126 14:15:04.016963 4844 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0c24428f-8915-4e8d-b054-14f7df0caa5b-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 14:15:04 crc kubenswrapper[4844]: I0126 14:15:04.016981 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c22fb\" (UniqueName: \"kubernetes.io/projected/0c24428f-8915-4e8d-b054-14f7df0caa5b-kube-api-access-c22fb\") on node \"crc\" DevicePath \"\"" Jan 26 14:15:04 crc kubenswrapper[4844]: I0126 14:15:04.047435 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4rx8c" Jan 26 14:15:04 crc kubenswrapper[4844]: I0126 14:15:04.900190 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490570-jn8qj"] Jan 26 14:15:04 crc kubenswrapper[4844]: I0126 14:15:04.911783 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490570-jn8qj"] Jan 26 14:15:05 crc kubenswrapper[4844]: I0126 14:15:05.332152 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72d96d87-2177-4714-8ca6-e9e4f4192f3b" path="/var/lib/kubelet/pods/72d96d87-2177-4714-8ca6-e9e4f4192f3b/volumes" Jan 26 14:15:05 crc kubenswrapper[4844]: I0126 14:15:05.545987 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4rx8c"] Jan 26 14:15:05 crc kubenswrapper[4844]: I0126 14:15:05.996267 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4rx8c" podUID="5b22d228-f7f6-444c-a6ff-a4b22a533906" containerName="registry-server" containerID="cri-o://a264e427b221cfb1222d5cd64d473e96302eb696649a1f6890343a63a66dbc33" gracePeriod=2 Jan 26 14:15:06 crc kubenswrapper[4844]: I0126 14:15:06.364335 4844 patch_prober.go:28] interesting pod/machine-config-daemon-j7r9j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 14:15:06 crc kubenswrapper[4844]: I0126 14:15:06.364392 4844 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 14:15:06 crc kubenswrapper[4844]: I0126 14:15:06.492081 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4rx8c" Jan 26 14:15:06 crc kubenswrapper[4844]: I0126 14:15:06.572283 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b22d228-f7f6-444c-a6ff-a4b22a533906-catalog-content\") pod \"5b22d228-f7f6-444c-a6ff-a4b22a533906\" (UID: \"5b22d228-f7f6-444c-a6ff-a4b22a533906\") " Jan 26 14:15:06 crc kubenswrapper[4844]: I0126 14:15:06.576312 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b22d228-f7f6-444c-a6ff-a4b22a533906-utilities\") pod \"5b22d228-f7f6-444c-a6ff-a4b22a533906\" (UID: \"5b22d228-f7f6-444c-a6ff-a4b22a533906\") " Jan 26 14:15:06 crc kubenswrapper[4844]: I0126 14:15:06.576399 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kmpjm\" (UniqueName: \"kubernetes.io/projected/5b22d228-f7f6-444c-a6ff-a4b22a533906-kube-api-access-kmpjm\") pod \"5b22d228-f7f6-444c-a6ff-a4b22a533906\" (UID: \"5b22d228-f7f6-444c-a6ff-a4b22a533906\") " Jan 26 14:15:06 crc kubenswrapper[4844]: I0126 14:15:06.577051 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b22d228-f7f6-444c-a6ff-a4b22a533906-utilities" (OuterVolumeSpecName: "utilities") pod "5b22d228-f7f6-444c-a6ff-a4b22a533906" (UID: "5b22d228-f7f6-444c-a6ff-a4b22a533906"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:15:06 crc kubenswrapper[4844]: I0126 14:15:06.577727 4844 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b22d228-f7f6-444c-a6ff-a4b22a533906-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 14:15:06 crc kubenswrapper[4844]: I0126 14:15:06.586576 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b22d228-f7f6-444c-a6ff-a4b22a533906-kube-api-access-kmpjm" (OuterVolumeSpecName: "kube-api-access-kmpjm") pod "5b22d228-f7f6-444c-a6ff-a4b22a533906" (UID: "5b22d228-f7f6-444c-a6ff-a4b22a533906"). InnerVolumeSpecName "kube-api-access-kmpjm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:15:06 crc kubenswrapper[4844]: I0126 14:15:06.621353 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b22d228-f7f6-444c-a6ff-a4b22a533906-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5b22d228-f7f6-444c-a6ff-a4b22a533906" (UID: "5b22d228-f7f6-444c-a6ff-a4b22a533906"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:15:06 crc kubenswrapper[4844]: I0126 14:15:06.679275 4844 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b22d228-f7f6-444c-a6ff-a4b22a533906-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 14:15:06 crc kubenswrapper[4844]: I0126 14:15:06.679305 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kmpjm\" (UniqueName: \"kubernetes.io/projected/5b22d228-f7f6-444c-a6ff-a4b22a533906-kube-api-access-kmpjm\") on node \"crc\" DevicePath \"\"" Jan 26 14:15:07 crc kubenswrapper[4844]: I0126 14:15:07.008084 4844 generic.go:334] "Generic (PLEG): container finished" podID="5b22d228-f7f6-444c-a6ff-a4b22a533906" containerID="a264e427b221cfb1222d5cd64d473e96302eb696649a1f6890343a63a66dbc33" exitCode=0 Jan 26 14:15:07 crc kubenswrapper[4844]: I0126 14:15:07.008153 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4rx8c" Jan 26 14:15:07 crc kubenswrapper[4844]: I0126 14:15:07.008172 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4rx8c" event={"ID":"5b22d228-f7f6-444c-a6ff-a4b22a533906","Type":"ContainerDied","Data":"a264e427b221cfb1222d5cd64d473e96302eb696649a1f6890343a63a66dbc33"} Jan 26 14:15:07 crc kubenswrapper[4844]: I0126 14:15:07.009808 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4rx8c" event={"ID":"5b22d228-f7f6-444c-a6ff-a4b22a533906","Type":"ContainerDied","Data":"eae81f32bb6918eae5fac23323252f094c43b4057a3dbd820719bf387427ef60"} Jan 26 14:15:07 crc kubenswrapper[4844]: I0126 14:15:07.009842 4844 scope.go:117] "RemoveContainer" containerID="a264e427b221cfb1222d5cd64d473e96302eb696649a1f6890343a63a66dbc33" Jan 26 14:15:07 crc kubenswrapper[4844]: I0126 14:15:07.033942 4844 scope.go:117] "RemoveContainer" containerID="ca87fbdc2a912f4d0c99cabab0fc6b87799f1c85cb14b937fa54b9c6417c3cd8" Jan 26 14:15:07 crc kubenswrapper[4844]: I0126 14:15:07.047641 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4rx8c"] Jan 26 14:15:07 crc kubenswrapper[4844]: I0126 14:15:07.061163 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4rx8c"] Jan 26 14:15:07 crc kubenswrapper[4844]: I0126 14:15:07.067865 4844 scope.go:117] "RemoveContainer" containerID="82bbecabd86dccfed18590cb76e1abfcc9ccb7aab2cb927f91c4837b6f8f5dc9" Jan 26 14:15:07 crc kubenswrapper[4844]: I0126 14:15:07.137259 4844 scope.go:117] "RemoveContainer" containerID="a264e427b221cfb1222d5cd64d473e96302eb696649a1f6890343a63a66dbc33" Jan 26 14:15:07 crc kubenswrapper[4844]: E0126 14:15:07.137643 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a264e427b221cfb1222d5cd64d473e96302eb696649a1f6890343a63a66dbc33\": container with ID starting with a264e427b221cfb1222d5cd64d473e96302eb696649a1f6890343a63a66dbc33 not found: ID does not exist" containerID="a264e427b221cfb1222d5cd64d473e96302eb696649a1f6890343a63a66dbc33" Jan 26 14:15:07 crc kubenswrapper[4844]: I0126 14:15:07.137787 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a264e427b221cfb1222d5cd64d473e96302eb696649a1f6890343a63a66dbc33"} err="failed to get container status 
\"a264e427b221cfb1222d5cd64d473e96302eb696649a1f6890343a63a66dbc33\": rpc error: code = NotFound desc = could not find container \"a264e427b221cfb1222d5cd64d473e96302eb696649a1f6890343a63a66dbc33\": container with ID starting with a264e427b221cfb1222d5cd64d473e96302eb696649a1f6890343a63a66dbc33 not found: ID does not exist" Jan 26 14:15:07 crc kubenswrapper[4844]: I0126 14:15:07.137869 4844 scope.go:117] "RemoveContainer" containerID="ca87fbdc2a912f4d0c99cabab0fc6b87799f1c85cb14b937fa54b9c6417c3cd8" Jan 26 14:15:07 crc kubenswrapper[4844]: E0126 14:15:07.138367 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca87fbdc2a912f4d0c99cabab0fc6b87799f1c85cb14b937fa54b9c6417c3cd8\": container with ID starting with ca87fbdc2a912f4d0c99cabab0fc6b87799f1c85cb14b937fa54b9c6417c3cd8 not found: ID does not exist" containerID="ca87fbdc2a912f4d0c99cabab0fc6b87799f1c85cb14b937fa54b9c6417c3cd8" Jan 26 14:15:07 crc kubenswrapper[4844]: I0126 14:15:07.138445 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca87fbdc2a912f4d0c99cabab0fc6b87799f1c85cb14b937fa54b9c6417c3cd8"} err="failed to get container status \"ca87fbdc2a912f4d0c99cabab0fc6b87799f1c85cb14b937fa54b9c6417c3cd8\": rpc error: code = NotFound desc = could not find container \"ca87fbdc2a912f4d0c99cabab0fc6b87799f1c85cb14b937fa54b9c6417c3cd8\": container with ID starting with ca87fbdc2a912f4d0c99cabab0fc6b87799f1c85cb14b937fa54b9c6417c3cd8 not found: ID does not exist" Jan 26 14:15:07 crc kubenswrapper[4844]: I0126 14:15:07.138519 4844 scope.go:117] "RemoveContainer" containerID="82bbecabd86dccfed18590cb76e1abfcc9ccb7aab2cb927f91c4837b6f8f5dc9" Jan 26 14:15:07 crc kubenswrapper[4844]: E0126 14:15:07.139047 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82bbecabd86dccfed18590cb76e1abfcc9ccb7aab2cb927f91c4837b6f8f5dc9\": container with ID starting with 82bbecabd86dccfed18590cb76e1abfcc9ccb7aab2cb927f91c4837b6f8f5dc9 not found: ID does not exist" containerID="82bbecabd86dccfed18590cb76e1abfcc9ccb7aab2cb927f91c4837b6f8f5dc9" Jan 26 14:15:07 crc kubenswrapper[4844]: I0126 14:15:07.139129 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82bbecabd86dccfed18590cb76e1abfcc9ccb7aab2cb927f91c4837b6f8f5dc9"} err="failed to get container status \"82bbecabd86dccfed18590cb76e1abfcc9ccb7aab2cb927f91c4837b6f8f5dc9\": rpc error: code = NotFound desc = could not find container \"82bbecabd86dccfed18590cb76e1abfcc9ccb7aab2cb927f91c4837b6f8f5dc9\": container with ID starting with 82bbecabd86dccfed18590cb76e1abfcc9ccb7aab2cb927f91c4837b6f8f5dc9 not found: ID does not exist" Jan 26 14:15:07 crc kubenswrapper[4844]: I0126 14:15:07.340405 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b22d228-f7f6-444c-a6ff-a4b22a533906" path="/var/lib/kubelet/pods/5b22d228-f7f6-444c-a6ff-a4b22a533906/volumes" Jan 26 14:15:12 crc kubenswrapper[4844]: I0126 14:15:12.140378 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-x4h6x" Jan 26 14:15:12 crc kubenswrapper[4844]: I0126 14:15:12.212170 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-x4h6x" Jan 26 14:15:12 crc kubenswrapper[4844]: I0126 14:15:12.947813 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-x4h6x"] Jan 26 14:15:14 crc kubenswrapper[4844]: I0126 14:15:14.094800 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-x4h6x" podUID="bcb6de9f-9097-4994-a2e8-3f3d442bb95d" containerName="registry-server" containerID="cri-o://7eaec19eef0cd6dc5542c0ac8732ed7c096f41ad8f9f9b85cfd5bd3b41e70812" gracePeriod=2 Jan 26 14:15:14 crc kubenswrapper[4844]: I0126 14:15:14.611050 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x4h6x" Jan 26 14:15:14 crc kubenswrapper[4844]: I0126 14:15:14.671964 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bcb6de9f-9097-4994-a2e8-3f3d442bb95d-utilities\") pod \"bcb6de9f-9097-4994-a2e8-3f3d442bb95d\" (UID: \"bcb6de9f-9097-4994-a2e8-3f3d442bb95d\") " Jan 26 14:15:14 crc kubenswrapper[4844]: I0126 14:15:14.672066 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l6dgp\" (UniqueName: \"kubernetes.io/projected/bcb6de9f-9097-4994-a2e8-3f3d442bb95d-kube-api-access-l6dgp\") pod \"bcb6de9f-9097-4994-a2e8-3f3d442bb95d\" (UID: \"bcb6de9f-9097-4994-a2e8-3f3d442bb95d\") " Jan 26 14:15:14 crc kubenswrapper[4844]: I0126 14:15:14.672187 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bcb6de9f-9097-4994-a2e8-3f3d442bb95d-catalog-content\") pod \"bcb6de9f-9097-4994-a2e8-3f3d442bb95d\" (UID: \"bcb6de9f-9097-4994-a2e8-3f3d442bb95d\") " Jan 26 14:15:14 crc kubenswrapper[4844]: I0126 14:15:14.673034 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bcb6de9f-9097-4994-a2e8-3f3d442bb95d-utilities" (OuterVolumeSpecName: "utilities") pod "bcb6de9f-9097-4994-a2e8-3f3d442bb95d" (UID: "bcb6de9f-9097-4994-a2e8-3f3d442bb95d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:15:14 crc kubenswrapper[4844]: I0126 14:15:14.678210 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bcb6de9f-9097-4994-a2e8-3f3d442bb95d-kube-api-access-l6dgp" (OuterVolumeSpecName: "kube-api-access-l6dgp") pod "bcb6de9f-9097-4994-a2e8-3f3d442bb95d" (UID: "bcb6de9f-9097-4994-a2e8-3f3d442bb95d"). InnerVolumeSpecName "kube-api-access-l6dgp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:15:14 crc kubenswrapper[4844]: I0126 14:15:14.748312 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bcb6de9f-9097-4994-a2e8-3f3d442bb95d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bcb6de9f-9097-4994-a2e8-3f3d442bb95d" (UID: "bcb6de9f-9097-4994-a2e8-3f3d442bb95d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:15:14 crc kubenswrapper[4844]: I0126 14:15:14.774361 4844 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bcb6de9f-9097-4994-a2e8-3f3d442bb95d-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 14:15:14 crc kubenswrapper[4844]: I0126 14:15:14.774412 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l6dgp\" (UniqueName: \"kubernetes.io/projected/bcb6de9f-9097-4994-a2e8-3f3d442bb95d-kube-api-access-l6dgp\") on node \"crc\" DevicePath \"\"" Jan 26 14:15:14 crc kubenswrapper[4844]: I0126 14:15:14.774431 4844 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bcb6de9f-9097-4994-a2e8-3f3d442bb95d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 14:15:15 crc kubenswrapper[4844]: I0126 14:15:15.109868 4844 generic.go:334] "Generic (PLEG): container finished" podID="bcb6de9f-9097-4994-a2e8-3f3d442bb95d" containerID="7eaec19eef0cd6dc5542c0ac8732ed7c096f41ad8f9f9b85cfd5bd3b41e70812" exitCode=0 Jan 26 14:15:15 crc kubenswrapper[4844]: I0126 14:15:15.109918 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x4h6x" event={"ID":"bcb6de9f-9097-4994-a2e8-3f3d442bb95d","Type":"ContainerDied","Data":"7eaec19eef0cd6dc5542c0ac8732ed7c096f41ad8f9f9b85cfd5bd3b41e70812"} Jan 26 14:15:15 crc kubenswrapper[4844]: I0126 14:15:15.109959 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x4h6x" event={"ID":"bcb6de9f-9097-4994-a2e8-3f3d442bb95d","Type":"ContainerDied","Data":"7cf5fb89e1acb9d7d5d3d3744945b1bd2636a1fd0c592bf7019cef7aa0752e5b"} Jan 26 14:15:15 crc kubenswrapper[4844]: I0126 14:15:15.109965 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-x4h6x" Jan 26 14:15:15 crc kubenswrapper[4844]: I0126 14:15:15.109977 4844 scope.go:117] "RemoveContainer" containerID="7eaec19eef0cd6dc5542c0ac8732ed7c096f41ad8f9f9b85cfd5bd3b41e70812" Jan 26 14:15:15 crc kubenswrapper[4844]: I0126 14:15:15.155137 4844 scope.go:117] "RemoveContainer" containerID="9217012086d5d12bab863cb10da7d38a22f46008a7bcd17fda77e2d7e3fb72a3" Jan 26 14:15:15 crc kubenswrapper[4844]: I0126 14:15:15.163321 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-x4h6x"] Jan 26 14:15:15 crc kubenswrapper[4844]: I0126 14:15:15.179765 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-x4h6x"] Jan 26 14:15:15 crc kubenswrapper[4844]: I0126 14:15:15.193259 4844 scope.go:117] "RemoveContainer" containerID="bc54430ccd9b9d554bb09799fa52e1a99d1605d76b3cb0df42c46e42f43ee009" Jan 26 14:15:15 crc kubenswrapper[4844]: I0126 14:15:15.236511 4844 scope.go:117] "RemoveContainer" containerID="7eaec19eef0cd6dc5542c0ac8732ed7c096f41ad8f9f9b85cfd5bd3b41e70812" Jan 26 14:15:15 crc kubenswrapper[4844]: E0126 14:15:15.236923 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7eaec19eef0cd6dc5542c0ac8732ed7c096f41ad8f9f9b85cfd5bd3b41e70812\": container with ID starting with 7eaec19eef0cd6dc5542c0ac8732ed7c096f41ad8f9f9b85cfd5bd3b41e70812 not found: ID does not exist" containerID="7eaec19eef0cd6dc5542c0ac8732ed7c096f41ad8f9f9b85cfd5bd3b41e70812" Jan 26 14:15:15 crc kubenswrapper[4844]: I0126 14:15:15.236965 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7eaec19eef0cd6dc5542c0ac8732ed7c096f41ad8f9f9b85cfd5bd3b41e70812"} err="failed to get container status \"7eaec19eef0cd6dc5542c0ac8732ed7c096f41ad8f9f9b85cfd5bd3b41e70812\": rpc error: code = NotFound desc = could not find container \"7eaec19eef0cd6dc5542c0ac8732ed7c096f41ad8f9f9b85cfd5bd3b41e70812\": container with ID starting with 7eaec19eef0cd6dc5542c0ac8732ed7c096f41ad8f9f9b85cfd5bd3b41e70812 not found: ID does not exist" Jan 26 14:15:15 crc kubenswrapper[4844]: I0126 14:15:15.236994 4844 scope.go:117] "RemoveContainer" containerID="9217012086d5d12bab863cb10da7d38a22f46008a7bcd17fda77e2d7e3fb72a3" Jan 26 14:15:15 crc kubenswrapper[4844]: E0126 14:15:15.237212 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9217012086d5d12bab863cb10da7d38a22f46008a7bcd17fda77e2d7e3fb72a3\": container with ID starting with 9217012086d5d12bab863cb10da7d38a22f46008a7bcd17fda77e2d7e3fb72a3 not found: ID does not exist" containerID="9217012086d5d12bab863cb10da7d38a22f46008a7bcd17fda77e2d7e3fb72a3" Jan 26 14:15:15 crc kubenswrapper[4844]: I0126 14:15:15.237266 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9217012086d5d12bab863cb10da7d38a22f46008a7bcd17fda77e2d7e3fb72a3"} err="failed to get container status \"9217012086d5d12bab863cb10da7d38a22f46008a7bcd17fda77e2d7e3fb72a3\": rpc error: code = NotFound desc = could not find container \"9217012086d5d12bab863cb10da7d38a22f46008a7bcd17fda77e2d7e3fb72a3\": container with ID starting with 9217012086d5d12bab863cb10da7d38a22f46008a7bcd17fda77e2d7e3fb72a3 not found: ID does not exist" Jan 26 14:15:15 crc kubenswrapper[4844]: I0126 14:15:15.237286 4844 scope.go:117] "RemoveContainer" 
containerID="bc54430ccd9b9d554bb09799fa52e1a99d1605d76b3cb0df42c46e42f43ee009" Jan 26 14:15:15 crc kubenswrapper[4844]: E0126 14:15:15.237543 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc54430ccd9b9d554bb09799fa52e1a99d1605d76b3cb0df42c46e42f43ee009\": container with ID starting with bc54430ccd9b9d554bb09799fa52e1a99d1605d76b3cb0df42c46e42f43ee009 not found: ID does not exist" containerID="bc54430ccd9b9d554bb09799fa52e1a99d1605d76b3cb0df42c46e42f43ee009" Jan 26 14:15:15 crc kubenswrapper[4844]: I0126 14:15:15.237567 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc54430ccd9b9d554bb09799fa52e1a99d1605d76b3cb0df42c46e42f43ee009"} err="failed to get container status \"bc54430ccd9b9d554bb09799fa52e1a99d1605d76b3cb0df42c46e42f43ee009\": rpc error: code = NotFound desc = could not find container \"bc54430ccd9b9d554bb09799fa52e1a99d1605d76b3cb0df42c46e42f43ee009\": container with ID starting with bc54430ccd9b9d554bb09799fa52e1a99d1605d76b3cb0df42c46e42f43ee009 not found: ID does not exist" Jan 26 14:15:15 crc kubenswrapper[4844]: I0126 14:15:15.326079 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bcb6de9f-9097-4994-a2e8-3f3d442bb95d" path="/var/lib/kubelet/pods/bcb6de9f-9097-4994-a2e8-3f3d442bb95d/volumes" Jan 26 14:15:36 crc kubenswrapper[4844]: I0126 14:15:36.364691 4844 patch_prober.go:28] interesting pod/machine-config-daemon-j7r9j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 14:15:36 crc kubenswrapper[4844]: I0126 14:15:36.365244 4844 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 14:15:36 crc kubenswrapper[4844]: I0126 14:15:36.365298 4844 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" Jan 26 14:15:36 crc kubenswrapper[4844]: I0126 14:15:36.366358 4844 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"559718bf083b95e9b7324cd4620495d37586d77e83a89c34fb9c0332383889a7"} pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 14:15:36 crc kubenswrapper[4844]: I0126 14:15:36.366415 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" containerName="machine-config-daemon" containerID="cri-o://559718bf083b95e9b7324cd4620495d37586d77e83a89c34fb9c0332383889a7" gracePeriod=600 Jan 26 14:15:37 crc kubenswrapper[4844]: I0126 14:15:37.329179 4844 generic.go:334] "Generic (PLEG): container finished" podID="e3602fc7-397b-4d73-ab0c-45acc047397b" containerID="559718bf083b95e9b7324cd4620495d37586d77e83a89c34fb9c0332383889a7" exitCode=0 Jan 26 14:15:37 crc kubenswrapper[4844]: I0126 14:15:37.329262 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" event={"ID":"e3602fc7-397b-4d73-ab0c-45acc047397b","Type":"ContainerDied","Data":"559718bf083b95e9b7324cd4620495d37586d77e83a89c34fb9c0332383889a7"} Jan 26 14:15:37 crc kubenswrapper[4844]: I0126 14:15:37.329738 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" event={"ID":"e3602fc7-397b-4d73-ab0c-45acc047397b","Type":"ContainerStarted","Data":"948482ceaf80a246ed76115843be1b6302beb191713cbddd64571580f4f215d8"} Jan 26 14:15:37 crc kubenswrapper[4844]: I0126 14:15:37.329765 4844 scope.go:117] "RemoveContainer" containerID="86b7cf326f578bbdf65f96cb187aab18331f8535398d4d1c7e9c9dae05da383b" Jan 26 14:15:39 crc kubenswrapper[4844]: I0126 14:15:39.569954 4844 scope.go:117] "RemoveContainer" containerID="01fcb8f1b34b695ddc0e349c4093834025a5fa9a9b9c2aa13f5cbdd436b18671" Jan 26 14:16:46 crc kubenswrapper[4844]: I0126 14:16:46.689208 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ttxlz"] Jan 26 14:16:46 crc kubenswrapper[4844]: E0126 14:16:46.691724 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b22d228-f7f6-444c-a6ff-a4b22a533906" containerName="extract-content" Jan 26 14:16:46 crc kubenswrapper[4844]: I0126 14:16:46.691754 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b22d228-f7f6-444c-a6ff-a4b22a533906" containerName="extract-content" Jan 26 14:16:46 crc kubenswrapper[4844]: E0126 14:16:46.691776 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b22d228-f7f6-444c-a6ff-a4b22a533906" containerName="extract-utilities" Jan 26 14:16:46 crc kubenswrapper[4844]: I0126 14:16:46.691787 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b22d228-f7f6-444c-a6ff-a4b22a533906" containerName="extract-utilities" Jan 26 14:16:46 crc kubenswrapper[4844]: E0126 14:16:46.691830 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcb6de9f-9097-4994-a2e8-3f3d442bb95d" containerName="registry-server" Jan 26 14:16:46 crc kubenswrapper[4844]: I0126 14:16:46.691841 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcb6de9f-9097-4994-a2e8-3f3d442bb95d" containerName="registry-server" Jan 26 14:16:46 crc kubenswrapper[4844]: E0126 14:16:46.691870 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b22d228-f7f6-444c-a6ff-a4b22a533906" containerName="registry-server" Jan 26 14:16:46 crc kubenswrapper[4844]: I0126 14:16:46.691892 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b22d228-f7f6-444c-a6ff-a4b22a533906" containerName="registry-server" Jan 26 14:16:46 crc kubenswrapper[4844]: E0126 14:16:46.691921 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcb6de9f-9097-4994-a2e8-3f3d442bb95d" containerName="extract-content" Jan 26 14:16:46 crc kubenswrapper[4844]: I0126 14:16:46.691932 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcb6de9f-9097-4994-a2e8-3f3d442bb95d" containerName="extract-content" Jan 26 14:16:46 crc kubenswrapper[4844]: E0126 14:16:46.691949 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c24428f-8915-4e8d-b054-14f7df0caa5b" containerName="collect-profiles" Jan 26 14:16:46 crc kubenswrapper[4844]: I0126 14:16:46.691959 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c24428f-8915-4e8d-b054-14f7df0caa5b" containerName="collect-profiles" Jan 26 14:16:46 crc kubenswrapper[4844]: E0126 14:16:46.691983 4844 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcb6de9f-9097-4994-a2e8-3f3d442bb95d" containerName="extract-utilities" Jan 26 14:16:46 crc kubenswrapper[4844]: I0126 14:16:46.691993 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcb6de9f-9097-4994-a2e8-3f3d442bb95d" containerName="extract-utilities" Jan 26 14:16:46 crc kubenswrapper[4844]: I0126 14:16:46.692390 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="bcb6de9f-9097-4994-a2e8-3f3d442bb95d" containerName="registry-server" Jan 26 14:16:46 crc kubenswrapper[4844]: I0126 14:16:46.692420 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c24428f-8915-4e8d-b054-14f7df0caa5b" containerName="collect-profiles" Jan 26 14:16:46 crc kubenswrapper[4844]: I0126 14:16:46.692437 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b22d228-f7f6-444c-a6ff-a4b22a533906" containerName="registry-server" Jan 26 14:16:46 crc kubenswrapper[4844]: I0126 14:16:46.695434 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ttxlz" Jan 26 14:16:46 crc kubenswrapper[4844]: I0126 14:16:46.716285 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ttxlz"] Jan 26 14:16:46 crc kubenswrapper[4844]: I0126 14:16:46.885102 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4afb1850-d34c-470e-94e1-244eb7107cbc-utilities\") pod \"redhat-operators-ttxlz\" (UID: \"4afb1850-d34c-470e-94e1-244eb7107cbc\") " pod="openshift-marketplace/redhat-operators-ttxlz" Jan 26 14:16:46 crc kubenswrapper[4844]: I0126 14:16:46.885199 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4afb1850-d34c-470e-94e1-244eb7107cbc-catalog-content\") pod \"redhat-operators-ttxlz\" (UID: \"4afb1850-d34c-470e-94e1-244eb7107cbc\") " pod="openshift-marketplace/redhat-operators-ttxlz" Jan 26 14:16:46 crc kubenswrapper[4844]: I0126 14:16:46.885322 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvp4c\" (UniqueName: \"kubernetes.io/projected/4afb1850-d34c-470e-94e1-244eb7107cbc-kube-api-access-hvp4c\") pod \"redhat-operators-ttxlz\" (UID: \"4afb1850-d34c-470e-94e1-244eb7107cbc\") " pod="openshift-marketplace/redhat-operators-ttxlz" Jan 26 14:16:46 crc kubenswrapper[4844]: I0126 14:16:46.987448 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4afb1850-d34c-470e-94e1-244eb7107cbc-utilities\") pod \"redhat-operators-ttxlz\" (UID: \"4afb1850-d34c-470e-94e1-244eb7107cbc\") " pod="openshift-marketplace/redhat-operators-ttxlz" Jan 26 14:16:46 crc kubenswrapper[4844]: I0126 14:16:46.987751 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4afb1850-d34c-470e-94e1-244eb7107cbc-catalog-content\") pod \"redhat-operators-ttxlz\" (UID: \"4afb1850-d34c-470e-94e1-244eb7107cbc\") " pod="openshift-marketplace/redhat-operators-ttxlz" Jan 26 14:16:46 crc kubenswrapper[4844]: I0126 14:16:46.987899 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvp4c\" (UniqueName: 
\"kubernetes.io/projected/4afb1850-d34c-470e-94e1-244eb7107cbc-kube-api-access-hvp4c\") pod \"redhat-operators-ttxlz\" (UID: \"4afb1850-d34c-470e-94e1-244eb7107cbc\") " pod="openshift-marketplace/redhat-operators-ttxlz" Jan 26 14:16:46 crc kubenswrapper[4844]: I0126 14:16:46.987956 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4afb1850-d34c-470e-94e1-244eb7107cbc-utilities\") pod \"redhat-operators-ttxlz\" (UID: \"4afb1850-d34c-470e-94e1-244eb7107cbc\") " pod="openshift-marketplace/redhat-operators-ttxlz" Jan 26 14:16:46 crc kubenswrapper[4844]: I0126 14:16:46.988161 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4afb1850-d34c-470e-94e1-244eb7107cbc-catalog-content\") pod \"redhat-operators-ttxlz\" (UID: \"4afb1850-d34c-470e-94e1-244eb7107cbc\") " pod="openshift-marketplace/redhat-operators-ttxlz" Jan 26 14:16:47 crc kubenswrapper[4844]: I0126 14:16:47.009554 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvp4c\" (UniqueName: \"kubernetes.io/projected/4afb1850-d34c-470e-94e1-244eb7107cbc-kube-api-access-hvp4c\") pod \"redhat-operators-ttxlz\" (UID: \"4afb1850-d34c-470e-94e1-244eb7107cbc\") " pod="openshift-marketplace/redhat-operators-ttxlz" Jan 26 14:16:47 crc kubenswrapper[4844]: I0126 14:16:47.016607 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ttxlz" Jan 26 14:16:47 crc kubenswrapper[4844]: I0126 14:16:47.571234 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ttxlz"] Jan 26 14:16:48 crc kubenswrapper[4844]: I0126 14:16:48.144816 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ttxlz" event={"ID":"4afb1850-d34c-470e-94e1-244eb7107cbc","Type":"ContainerStarted","Data":"8f5f8d6c09d4f1d81c369ace57e77857bbef2e0f749511ea2a7d3d515c0450bd"} Jan 26 14:16:49 crc kubenswrapper[4844]: I0126 14:16:49.160286 4844 generic.go:334] "Generic (PLEG): container finished" podID="4afb1850-d34c-470e-94e1-244eb7107cbc" containerID="03b2f27d0e14f0ee2e565f9193f90eb28f9c486ada379a5835f814cdc3e30b5e" exitCode=0 Jan 26 14:16:49 crc kubenswrapper[4844]: I0126 14:16:49.160353 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ttxlz" event={"ID":"4afb1850-d34c-470e-94e1-244eb7107cbc","Type":"ContainerDied","Data":"03b2f27d0e14f0ee2e565f9193f90eb28f9c486ada379a5835f814cdc3e30b5e"} Jan 26 14:16:50 crc kubenswrapper[4844]: I0126 14:16:50.172913 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ttxlz" event={"ID":"4afb1850-d34c-470e-94e1-244eb7107cbc","Type":"ContainerStarted","Data":"cd52758fc1a04b8419cf6a9b13f0279d634dd517b6e717d099e9e197b321ff46"} Jan 26 14:16:55 crc kubenswrapper[4844]: I0126 14:16:55.237197 4844 generic.go:334] "Generic (PLEG): container finished" podID="4afb1850-d34c-470e-94e1-244eb7107cbc" containerID="cd52758fc1a04b8419cf6a9b13f0279d634dd517b6e717d099e9e197b321ff46" exitCode=0 Jan 26 14:16:55 crc kubenswrapper[4844]: I0126 14:16:55.237258 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ttxlz" event={"ID":"4afb1850-d34c-470e-94e1-244eb7107cbc","Type":"ContainerDied","Data":"cd52758fc1a04b8419cf6a9b13f0279d634dd517b6e717d099e9e197b321ff46"} Jan 26 14:16:57 crc 
kubenswrapper[4844]: I0126 14:16:57.267345 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ttxlz" event={"ID":"4afb1850-d34c-470e-94e1-244eb7107cbc","Type":"ContainerStarted","Data":"382fcce9f2617a93d49ea0048ff8fb5c9a80387285a4d304c00b96c9ba349810"} Jan 26 14:17:07 crc kubenswrapper[4844]: I0126 14:17:07.018018 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ttxlz" Jan 26 14:17:07 crc kubenswrapper[4844]: I0126 14:17:07.018712 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ttxlz" Jan 26 14:17:07 crc kubenswrapper[4844]: I0126 14:17:07.104451 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ttxlz" Jan 26 14:17:07 crc kubenswrapper[4844]: I0126 14:17:07.120959 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ttxlz" podStartSLOduration=14.540079033 podStartE2EDuration="21.120914703s" podCreationTimestamp="2026-01-26 14:16:46 +0000 UTC" firstStartedPulling="2026-01-26 14:16:49.166441028 +0000 UTC m=+5586.099808660" lastFinishedPulling="2026-01-26 14:16:55.747276678 +0000 UTC m=+5592.680644330" observedRunningTime="2026-01-26 14:16:57.305290723 +0000 UTC m=+5594.238658375" watchObservedRunningTime="2026-01-26 14:17:07.120914703 +0000 UTC m=+5604.054282315" Jan 26 14:17:07 crc kubenswrapper[4844]: I0126 14:17:07.457648 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ttxlz" Jan 26 14:17:07 crc kubenswrapper[4844]: I0126 14:17:07.516161 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ttxlz"] Jan 26 14:17:09 crc kubenswrapper[4844]: I0126 14:17:09.430653 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-ttxlz" podUID="4afb1850-d34c-470e-94e1-244eb7107cbc" containerName="registry-server" containerID="cri-o://382fcce9f2617a93d49ea0048ff8fb5c9a80387285a4d304c00b96c9ba349810" gracePeriod=2 Jan 26 14:17:10 crc kubenswrapper[4844]: I0126 14:17:10.444567 4844 generic.go:334] "Generic (PLEG): container finished" podID="4afb1850-d34c-470e-94e1-244eb7107cbc" containerID="382fcce9f2617a93d49ea0048ff8fb5c9a80387285a4d304c00b96c9ba349810" exitCode=0 Jan 26 14:17:10 crc kubenswrapper[4844]: I0126 14:17:10.444686 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ttxlz" event={"ID":"4afb1850-d34c-470e-94e1-244eb7107cbc","Type":"ContainerDied","Data":"382fcce9f2617a93d49ea0048ff8fb5c9a80387285a4d304c00b96c9ba349810"} Jan 26 14:17:11 crc kubenswrapper[4844]: I0126 14:17:11.286903 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ttxlz" Jan 26 14:17:11 crc kubenswrapper[4844]: I0126 14:17:11.401727 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4afb1850-d34c-470e-94e1-244eb7107cbc-catalog-content\") pod \"4afb1850-d34c-470e-94e1-244eb7107cbc\" (UID: \"4afb1850-d34c-470e-94e1-244eb7107cbc\") " Jan 26 14:17:11 crc kubenswrapper[4844]: I0126 14:17:11.402004 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hvp4c\" (UniqueName: \"kubernetes.io/projected/4afb1850-d34c-470e-94e1-244eb7107cbc-kube-api-access-hvp4c\") pod \"4afb1850-d34c-470e-94e1-244eb7107cbc\" (UID: \"4afb1850-d34c-470e-94e1-244eb7107cbc\") " Jan 26 14:17:11 crc kubenswrapper[4844]: I0126 14:17:11.402109 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4afb1850-d34c-470e-94e1-244eb7107cbc-utilities\") pod \"4afb1850-d34c-470e-94e1-244eb7107cbc\" (UID: \"4afb1850-d34c-470e-94e1-244eb7107cbc\") " Jan 26 14:17:11 crc kubenswrapper[4844]: I0126 14:17:11.404041 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4afb1850-d34c-470e-94e1-244eb7107cbc-utilities" (OuterVolumeSpecName: "utilities") pod "4afb1850-d34c-470e-94e1-244eb7107cbc" (UID: "4afb1850-d34c-470e-94e1-244eb7107cbc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:17:11 crc kubenswrapper[4844]: I0126 14:17:11.408382 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4afb1850-d34c-470e-94e1-244eb7107cbc-kube-api-access-hvp4c" (OuterVolumeSpecName: "kube-api-access-hvp4c") pod "4afb1850-d34c-470e-94e1-244eb7107cbc" (UID: "4afb1850-d34c-470e-94e1-244eb7107cbc"). InnerVolumeSpecName "kube-api-access-hvp4c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:17:11 crc kubenswrapper[4844]: I0126 14:17:11.462735 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ttxlz" event={"ID":"4afb1850-d34c-470e-94e1-244eb7107cbc","Type":"ContainerDied","Data":"8f5f8d6c09d4f1d81c369ace57e77857bbef2e0f749511ea2a7d3d515c0450bd"} Jan 26 14:17:11 crc kubenswrapper[4844]: I0126 14:17:11.462792 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ttxlz" Jan 26 14:17:11 crc kubenswrapper[4844]: I0126 14:17:11.462801 4844 scope.go:117] "RemoveContainer" containerID="382fcce9f2617a93d49ea0048ff8fb5c9a80387285a4d304c00b96c9ba349810" Jan 26 14:17:11 crc kubenswrapper[4844]: I0126 14:17:11.494487 4844 scope.go:117] "RemoveContainer" containerID="cd52758fc1a04b8419cf6a9b13f0279d634dd517b6e717d099e9e197b321ff46" Jan 26 14:17:11 crc kubenswrapper[4844]: I0126 14:17:11.504956 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hvp4c\" (UniqueName: \"kubernetes.io/projected/4afb1850-d34c-470e-94e1-244eb7107cbc-kube-api-access-hvp4c\") on node \"crc\" DevicePath \"\"" Jan 26 14:17:11 crc kubenswrapper[4844]: I0126 14:17:11.505001 4844 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4afb1850-d34c-470e-94e1-244eb7107cbc-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 14:17:11 crc kubenswrapper[4844]: I0126 14:17:11.523073 4844 scope.go:117] "RemoveContainer" containerID="03b2f27d0e14f0ee2e565f9193f90eb28f9c486ada379a5835f814cdc3e30b5e" Jan 26 14:17:11 crc kubenswrapper[4844]: I0126 14:17:11.537995 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4afb1850-d34c-470e-94e1-244eb7107cbc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4afb1850-d34c-470e-94e1-244eb7107cbc" (UID: "4afb1850-d34c-470e-94e1-244eb7107cbc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:17:11 crc kubenswrapper[4844]: I0126 14:17:11.607736 4844 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4afb1850-d34c-470e-94e1-244eb7107cbc-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 14:17:11 crc kubenswrapper[4844]: I0126 14:17:11.804315 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ttxlz"] Jan 26 14:17:11 crc kubenswrapper[4844]: I0126 14:17:11.815443 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ttxlz"] Jan 26 14:17:13 crc kubenswrapper[4844]: I0126 14:17:13.329315 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4afb1850-d34c-470e-94e1-244eb7107cbc" path="/var/lib/kubelet/pods/4afb1850-d34c-470e-94e1-244eb7107cbc/volumes" Jan 26 14:17:36 crc kubenswrapper[4844]: I0126 14:17:36.365074 4844 patch_prober.go:28] interesting pod/machine-config-daemon-j7r9j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 14:17:36 crc kubenswrapper[4844]: I0126 14:17:36.365696 4844 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 14:18:06 crc kubenswrapper[4844]: I0126 14:18:06.364701 4844 patch_prober.go:28] interesting pod/machine-config-daemon-j7r9j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Jan 26 14:18:06 crc kubenswrapper[4844]: I0126 14:18:06.365417 4844 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 14:18:36 crc kubenswrapper[4844]: I0126 14:18:36.364718 4844 patch_prober.go:28] interesting pod/machine-config-daemon-j7r9j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 14:18:36 crc kubenswrapper[4844]: I0126 14:18:36.365276 4844 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 14:18:36 crc kubenswrapper[4844]: I0126 14:18:36.365323 4844 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" Jan 26 14:18:36 crc kubenswrapper[4844]: I0126 14:18:36.366245 4844 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"948482ceaf80a246ed76115843be1b6302beb191713cbddd64571580f4f215d8"} pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 14:18:36 crc kubenswrapper[4844]: I0126 14:18:36.366308 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" containerName="machine-config-daemon" containerID="cri-o://948482ceaf80a246ed76115843be1b6302beb191713cbddd64571580f4f215d8" gracePeriod=600 Jan 26 14:18:36 crc kubenswrapper[4844]: E0126 14:18:36.501144 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 14:18:36 crc kubenswrapper[4844]: I0126 14:18:36.654096 4844 generic.go:334] "Generic (PLEG): container finished" podID="e3602fc7-397b-4d73-ab0c-45acc047397b" containerID="948482ceaf80a246ed76115843be1b6302beb191713cbddd64571580f4f215d8" exitCode=0 Jan 26 14:18:36 crc kubenswrapper[4844]: I0126 14:18:36.654192 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" event={"ID":"e3602fc7-397b-4d73-ab0c-45acc047397b","Type":"ContainerDied","Data":"948482ceaf80a246ed76115843be1b6302beb191713cbddd64571580f4f215d8"} Jan 26 14:18:36 crc kubenswrapper[4844]: I0126 14:18:36.654509 4844 scope.go:117] "RemoveContainer" containerID="559718bf083b95e9b7324cd4620495d37586d77e83a89c34fb9c0332383889a7" Jan 26 14:18:36 crc kubenswrapper[4844]: I0126 14:18:36.655095 4844 scope.go:117] "RemoveContainer" 
containerID="948482ceaf80a246ed76115843be1b6302beb191713cbddd64571580f4f215d8" Jan 26 14:18:36 crc kubenswrapper[4844]: E0126 14:18:36.655440 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 14:18:50 crc kubenswrapper[4844]: I0126 14:18:50.313271 4844 scope.go:117] "RemoveContainer" containerID="948482ceaf80a246ed76115843be1b6302beb191713cbddd64571580f4f215d8" Jan 26 14:18:50 crc kubenswrapper[4844]: E0126 14:18:50.314055 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 14:19:03 crc kubenswrapper[4844]: I0126 14:19:03.329871 4844 scope.go:117] "RemoveContainer" containerID="948482ceaf80a246ed76115843be1b6302beb191713cbddd64571580f4f215d8" Jan 26 14:19:03 crc kubenswrapper[4844]: E0126 14:19:03.331113 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 14:19:14 crc kubenswrapper[4844]: I0126 14:19:14.314735 4844 scope.go:117] "RemoveContainer" containerID="948482ceaf80a246ed76115843be1b6302beb191713cbddd64571580f4f215d8" Jan 26 14:19:14 crc kubenswrapper[4844]: E0126 14:19:14.317208 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 14:19:26 crc kubenswrapper[4844]: I0126 14:19:26.314240 4844 scope.go:117] "RemoveContainer" containerID="948482ceaf80a246ed76115843be1b6302beb191713cbddd64571580f4f215d8" Jan 26 14:19:26 crc kubenswrapper[4844]: E0126 14:19:26.315310 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 14:19:41 crc kubenswrapper[4844]: I0126 14:19:41.313228 4844 scope.go:117] "RemoveContainer" containerID="948482ceaf80a246ed76115843be1b6302beb191713cbddd64571580f4f215d8" Jan 26 14:19:41 crc kubenswrapper[4844]: E0126 14:19:41.314024 4844 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 14:19:56 crc kubenswrapper[4844]: I0126 14:19:56.313557 4844 scope.go:117] "RemoveContainer" containerID="948482ceaf80a246ed76115843be1b6302beb191713cbddd64571580f4f215d8" Jan 26 14:19:56 crc kubenswrapper[4844]: E0126 14:19:56.314297 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 14:20:08 crc kubenswrapper[4844]: I0126 14:20:08.314157 4844 scope.go:117] "RemoveContainer" containerID="948482ceaf80a246ed76115843be1b6302beb191713cbddd64571580f4f215d8" Jan 26 14:20:08 crc kubenswrapper[4844]: E0126 14:20:08.315569 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 14:20:11 crc kubenswrapper[4844]: I0126 14:20:11.626235 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-q7lf9"] Jan 26 14:20:11 crc kubenswrapper[4844]: E0126 14:20:11.627141 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4afb1850-d34c-470e-94e1-244eb7107cbc" containerName="extract-content" Jan 26 14:20:11 crc kubenswrapper[4844]: I0126 14:20:11.627154 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="4afb1850-d34c-470e-94e1-244eb7107cbc" containerName="extract-content" Jan 26 14:20:11 crc kubenswrapper[4844]: E0126 14:20:11.627186 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4afb1850-d34c-470e-94e1-244eb7107cbc" containerName="extract-utilities" Jan 26 14:20:11 crc kubenswrapper[4844]: I0126 14:20:11.627193 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="4afb1850-d34c-470e-94e1-244eb7107cbc" containerName="extract-utilities" Jan 26 14:20:11 crc kubenswrapper[4844]: E0126 14:20:11.627201 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4afb1850-d34c-470e-94e1-244eb7107cbc" containerName="registry-server" Jan 26 14:20:11 crc kubenswrapper[4844]: I0126 14:20:11.627209 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="4afb1850-d34c-470e-94e1-244eb7107cbc" containerName="registry-server" Jan 26 14:20:11 crc kubenswrapper[4844]: I0126 14:20:11.627427 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="4afb1850-d34c-470e-94e1-244eb7107cbc" containerName="registry-server" Jan 26 14:20:11 crc kubenswrapper[4844]: I0126 14:20:11.629150 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q7lf9" Jan 26 14:20:11 crc kubenswrapper[4844]: I0126 14:20:11.643174 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-q7lf9"] Jan 26 14:20:11 crc kubenswrapper[4844]: I0126 14:20:11.727778 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6746fd8d-29c2-4b96-b607-c26dcdcd1437-utilities\") pod \"redhat-marketplace-q7lf9\" (UID: \"6746fd8d-29c2-4b96-b607-c26dcdcd1437\") " pod="openshift-marketplace/redhat-marketplace-q7lf9" Jan 26 14:20:11 crc kubenswrapper[4844]: I0126 14:20:11.727862 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6746fd8d-29c2-4b96-b607-c26dcdcd1437-catalog-content\") pod \"redhat-marketplace-q7lf9\" (UID: \"6746fd8d-29c2-4b96-b607-c26dcdcd1437\") " pod="openshift-marketplace/redhat-marketplace-q7lf9" Jan 26 14:20:11 crc kubenswrapper[4844]: I0126 14:20:11.727900 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbzlr\" (UniqueName: \"kubernetes.io/projected/6746fd8d-29c2-4b96-b607-c26dcdcd1437-kube-api-access-nbzlr\") pod \"redhat-marketplace-q7lf9\" (UID: \"6746fd8d-29c2-4b96-b607-c26dcdcd1437\") " pod="openshift-marketplace/redhat-marketplace-q7lf9" Jan 26 14:20:11 crc kubenswrapper[4844]: I0126 14:20:11.829717 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6746fd8d-29c2-4b96-b607-c26dcdcd1437-utilities\") pod \"redhat-marketplace-q7lf9\" (UID: \"6746fd8d-29c2-4b96-b607-c26dcdcd1437\") " pod="openshift-marketplace/redhat-marketplace-q7lf9" Jan 26 14:20:11 crc kubenswrapper[4844]: I0126 14:20:11.829834 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6746fd8d-29c2-4b96-b607-c26dcdcd1437-catalog-content\") pod \"redhat-marketplace-q7lf9\" (UID: \"6746fd8d-29c2-4b96-b607-c26dcdcd1437\") " pod="openshift-marketplace/redhat-marketplace-q7lf9" Jan 26 14:20:11 crc kubenswrapper[4844]: I0126 14:20:11.829873 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbzlr\" (UniqueName: \"kubernetes.io/projected/6746fd8d-29c2-4b96-b607-c26dcdcd1437-kube-api-access-nbzlr\") pod \"redhat-marketplace-q7lf9\" (UID: \"6746fd8d-29c2-4b96-b607-c26dcdcd1437\") " pod="openshift-marketplace/redhat-marketplace-q7lf9" Jan 26 14:20:11 crc kubenswrapper[4844]: I0126 14:20:11.830158 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6746fd8d-29c2-4b96-b607-c26dcdcd1437-utilities\") pod \"redhat-marketplace-q7lf9\" (UID: \"6746fd8d-29c2-4b96-b607-c26dcdcd1437\") " pod="openshift-marketplace/redhat-marketplace-q7lf9" Jan 26 14:20:11 crc kubenswrapper[4844]: I0126 14:20:11.830495 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6746fd8d-29c2-4b96-b607-c26dcdcd1437-catalog-content\") pod \"redhat-marketplace-q7lf9\" (UID: \"6746fd8d-29c2-4b96-b607-c26dcdcd1437\") " pod="openshift-marketplace/redhat-marketplace-q7lf9" Jan 26 14:20:11 crc kubenswrapper[4844]: I0126 14:20:11.848100 4844 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-nbzlr\" (UniqueName: \"kubernetes.io/projected/6746fd8d-29c2-4b96-b607-c26dcdcd1437-kube-api-access-nbzlr\") pod \"redhat-marketplace-q7lf9\" (UID: \"6746fd8d-29c2-4b96-b607-c26dcdcd1437\") " pod="openshift-marketplace/redhat-marketplace-q7lf9" Jan 26 14:20:12 crc kubenswrapper[4844]: I0126 14:20:12.007181 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q7lf9" Jan 26 14:20:12 crc kubenswrapper[4844]: W0126 14:20:12.491552 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6746fd8d_29c2_4b96_b607_c26dcdcd1437.slice/crio-51dfbaea9240ba978f67ebcc0027ba98d0920bf729e23cca895558d527d4f399 WatchSource:0}: Error finding container 51dfbaea9240ba978f67ebcc0027ba98d0920bf729e23cca895558d527d4f399: Status 404 returned error can't find the container with id 51dfbaea9240ba978f67ebcc0027ba98d0920bf729e23cca895558d527d4f399 Jan 26 14:20:12 crc kubenswrapper[4844]: I0126 14:20:12.503306 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-q7lf9"] Jan 26 14:20:12 crc kubenswrapper[4844]: I0126 14:20:12.717589 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q7lf9" event={"ID":"6746fd8d-29c2-4b96-b607-c26dcdcd1437","Type":"ContainerStarted","Data":"4159b8088ae1ff03d95587cebbfa10217d8a9c5e915a80a8d6f74bbe7fb1439a"} Jan 26 14:20:12 crc kubenswrapper[4844]: I0126 14:20:12.717657 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q7lf9" event={"ID":"6746fd8d-29c2-4b96-b607-c26dcdcd1437","Type":"ContainerStarted","Data":"51dfbaea9240ba978f67ebcc0027ba98d0920bf729e23cca895558d527d4f399"} Jan 26 14:20:13 crc kubenswrapper[4844]: I0126 14:20:13.731429 4844 generic.go:334] "Generic (PLEG): container finished" podID="6746fd8d-29c2-4b96-b607-c26dcdcd1437" containerID="4159b8088ae1ff03d95587cebbfa10217d8a9c5e915a80a8d6f74bbe7fb1439a" exitCode=0 Jan 26 14:20:13 crc kubenswrapper[4844]: I0126 14:20:13.731619 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q7lf9" event={"ID":"6746fd8d-29c2-4b96-b607-c26dcdcd1437","Type":"ContainerDied","Data":"4159b8088ae1ff03d95587cebbfa10217d8a9c5e915a80a8d6f74bbe7fb1439a"} Jan 26 14:20:13 crc kubenswrapper[4844]: I0126 14:20:13.734688 4844 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 14:20:14 crc kubenswrapper[4844]: I0126 14:20:14.741752 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q7lf9" event={"ID":"6746fd8d-29c2-4b96-b607-c26dcdcd1437","Type":"ContainerStarted","Data":"d37f87d25a99d39c60d79ed7a5358f5d87eda0df7017167a36474b63eb0b8557"} Jan 26 14:20:15 crc kubenswrapper[4844]: I0126 14:20:15.760709 4844 generic.go:334] "Generic (PLEG): container finished" podID="6746fd8d-29c2-4b96-b607-c26dcdcd1437" containerID="d37f87d25a99d39c60d79ed7a5358f5d87eda0df7017167a36474b63eb0b8557" exitCode=0 Jan 26 14:20:15 crc kubenswrapper[4844]: I0126 14:20:15.760792 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q7lf9" event={"ID":"6746fd8d-29c2-4b96-b607-c26dcdcd1437","Type":"ContainerDied","Data":"d37f87d25a99d39c60d79ed7a5358f5d87eda0df7017167a36474b63eb0b8557"} Jan 26 14:20:17 crc kubenswrapper[4844]: I0126 14:20:17.787970 4844 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q7lf9" event={"ID":"6746fd8d-29c2-4b96-b607-c26dcdcd1437","Type":"ContainerStarted","Data":"be18e4eedd735178769c2b505213c2cf6032820006289f8146cc81b1faf555a9"} Jan 26 14:20:17 crc kubenswrapper[4844]: I0126 14:20:17.814254 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-q7lf9" podStartSLOduration=3.81375512 podStartE2EDuration="6.814232673s" podCreationTimestamp="2026-01-26 14:20:11 +0000 UTC" firstStartedPulling="2026-01-26 14:20:13.73411117 +0000 UTC m=+5790.667478812" lastFinishedPulling="2026-01-26 14:20:16.734588723 +0000 UTC m=+5793.667956365" observedRunningTime="2026-01-26 14:20:17.807037568 +0000 UTC m=+5794.740405190" watchObservedRunningTime="2026-01-26 14:20:17.814232673 +0000 UTC m=+5794.747600285" Jan 26 14:20:20 crc kubenswrapper[4844]: I0126 14:20:20.314845 4844 scope.go:117] "RemoveContainer" containerID="948482ceaf80a246ed76115843be1b6302beb191713cbddd64571580f4f215d8" Jan 26 14:20:20 crc kubenswrapper[4844]: E0126 14:20:20.315637 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 14:20:22 crc kubenswrapper[4844]: I0126 14:20:22.007466 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-q7lf9" Jan 26 14:20:22 crc kubenswrapper[4844]: I0126 14:20:22.007971 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-q7lf9" Jan 26 14:20:22 crc kubenswrapper[4844]: I0126 14:20:22.057514 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-q7lf9" Jan 26 14:20:22 crc kubenswrapper[4844]: I0126 14:20:22.905621 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-q7lf9" Jan 26 14:20:22 crc kubenswrapper[4844]: I0126 14:20:22.963690 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-q7lf9"] Jan 26 14:20:24 crc kubenswrapper[4844]: I0126 14:20:24.860226 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-q7lf9" podUID="6746fd8d-29c2-4b96-b607-c26dcdcd1437" containerName="registry-server" containerID="cri-o://be18e4eedd735178769c2b505213c2cf6032820006289f8146cc81b1faf555a9" gracePeriod=2 Jan 26 14:20:25 crc kubenswrapper[4844]: I0126 14:20:25.878009 4844 generic.go:334] "Generic (PLEG): container finished" podID="6746fd8d-29c2-4b96-b607-c26dcdcd1437" containerID="be18e4eedd735178769c2b505213c2cf6032820006289f8146cc81b1faf555a9" exitCode=0 Jan 26 14:20:25 crc kubenswrapper[4844]: I0126 14:20:25.878338 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q7lf9" event={"ID":"6746fd8d-29c2-4b96-b607-c26dcdcd1437","Type":"ContainerDied","Data":"be18e4eedd735178769c2b505213c2cf6032820006289f8146cc81b1faf555a9"} Jan 26 14:20:26 crc kubenswrapper[4844]: I0126 14:20:26.539737 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q7lf9" Jan 26 14:20:26 crc kubenswrapper[4844]: I0126 14:20:26.694389 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nbzlr\" (UniqueName: \"kubernetes.io/projected/6746fd8d-29c2-4b96-b607-c26dcdcd1437-kube-api-access-nbzlr\") pod \"6746fd8d-29c2-4b96-b607-c26dcdcd1437\" (UID: \"6746fd8d-29c2-4b96-b607-c26dcdcd1437\") " Jan 26 14:20:26 crc kubenswrapper[4844]: I0126 14:20:26.694570 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6746fd8d-29c2-4b96-b607-c26dcdcd1437-utilities\") pod \"6746fd8d-29c2-4b96-b607-c26dcdcd1437\" (UID: \"6746fd8d-29c2-4b96-b607-c26dcdcd1437\") " Jan 26 14:20:26 crc kubenswrapper[4844]: I0126 14:20:26.694663 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6746fd8d-29c2-4b96-b607-c26dcdcd1437-catalog-content\") pod \"6746fd8d-29c2-4b96-b607-c26dcdcd1437\" (UID: \"6746fd8d-29c2-4b96-b607-c26dcdcd1437\") " Jan 26 14:20:26 crc kubenswrapper[4844]: I0126 14:20:26.695854 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6746fd8d-29c2-4b96-b607-c26dcdcd1437-utilities" (OuterVolumeSpecName: "utilities") pod "6746fd8d-29c2-4b96-b607-c26dcdcd1437" (UID: "6746fd8d-29c2-4b96-b607-c26dcdcd1437"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:20:26 crc kubenswrapper[4844]: I0126 14:20:26.701722 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6746fd8d-29c2-4b96-b607-c26dcdcd1437-kube-api-access-nbzlr" (OuterVolumeSpecName: "kube-api-access-nbzlr") pod "6746fd8d-29c2-4b96-b607-c26dcdcd1437" (UID: "6746fd8d-29c2-4b96-b607-c26dcdcd1437"). InnerVolumeSpecName "kube-api-access-nbzlr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:20:26 crc kubenswrapper[4844]: I0126 14:20:26.736266 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6746fd8d-29c2-4b96-b607-c26dcdcd1437-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6746fd8d-29c2-4b96-b607-c26dcdcd1437" (UID: "6746fd8d-29c2-4b96-b607-c26dcdcd1437"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:20:26 crc kubenswrapper[4844]: I0126 14:20:26.797020 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nbzlr\" (UniqueName: \"kubernetes.io/projected/6746fd8d-29c2-4b96-b607-c26dcdcd1437-kube-api-access-nbzlr\") on node \"crc\" DevicePath \"\"" Jan 26 14:20:26 crc kubenswrapper[4844]: I0126 14:20:26.797064 4844 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6746fd8d-29c2-4b96-b607-c26dcdcd1437-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 14:20:26 crc kubenswrapper[4844]: I0126 14:20:26.797077 4844 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6746fd8d-29c2-4b96-b607-c26dcdcd1437-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 14:20:26 crc kubenswrapper[4844]: I0126 14:20:26.890084 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q7lf9" event={"ID":"6746fd8d-29c2-4b96-b607-c26dcdcd1437","Type":"ContainerDied","Data":"51dfbaea9240ba978f67ebcc0027ba98d0920bf729e23cca895558d527d4f399"} Jan 26 14:20:26 crc kubenswrapper[4844]: I0126 14:20:26.890131 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q7lf9" Jan 26 14:20:26 crc kubenswrapper[4844]: I0126 14:20:26.890138 4844 scope.go:117] "RemoveContainer" containerID="be18e4eedd735178769c2b505213c2cf6032820006289f8146cc81b1faf555a9" Jan 26 14:20:26 crc kubenswrapper[4844]: I0126 14:20:26.929263 4844 scope.go:117] "RemoveContainer" containerID="d37f87d25a99d39c60d79ed7a5358f5d87eda0df7017167a36474b63eb0b8557" Jan 26 14:20:26 crc kubenswrapper[4844]: I0126 14:20:26.931624 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-q7lf9"] Jan 26 14:20:26 crc kubenswrapper[4844]: I0126 14:20:26.942779 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-q7lf9"] Jan 26 14:20:26 crc kubenswrapper[4844]: I0126 14:20:26.956575 4844 scope.go:117] "RemoveContainer" containerID="4159b8088ae1ff03d95587cebbfa10217d8a9c5e915a80a8d6f74bbe7fb1439a" Jan 26 14:20:27 crc kubenswrapper[4844]: I0126 14:20:27.325702 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6746fd8d-29c2-4b96-b607-c26dcdcd1437" path="/var/lib/kubelet/pods/6746fd8d-29c2-4b96-b607-c26dcdcd1437/volumes" Jan 26 14:20:32 crc kubenswrapper[4844]: I0126 14:20:32.313416 4844 scope.go:117] "RemoveContainer" containerID="948482ceaf80a246ed76115843be1b6302beb191713cbddd64571580f4f215d8" Jan 26 14:20:32 crc kubenswrapper[4844]: E0126 14:20:32.314221 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 14:20:43 crc kubenswrapper[4844]: I0126 14:20:43.325195 4844 scope.go:117] "RemoveContainer" containerID="948482ceaf80a246ed76115843be1b6302beb191713cbddd64571580f4f215d8" Jan 26 14:20:43 crc kubenswrapper[4844]: E0126 14:20:43.326437 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 14:20:55 crc kubenswrapper[4844]: I0126 14:20:55.313990 4844 scope.go:117] "RemoveContainer" containerID="948482ceaf80a246ed76115843be1b6302beb191713cbddd64571580f4f215d8" Jan 26 14:20:55 crc kubenswrapper[4844]: E0126 14:20:55.315262 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 14:21:07 crc kubenswrapper[4844]: I0126 14:21:07.315536 4844 scope.go:117] "RemoveContainer" containerID="948482ceaf80a246ed76115843be1b6302beb191713cbddd64571580f4f215d8" Jan 26 14:21:07 crc kubenswrapper[4844]: E0126 14:21:07.317050 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 14:21:21 crc kubenswrapper[4844]: I0126 14:21:21.313924 4844 scope.go:117] "RemoveContainer" containerID="948482ceaf80a246ed76115843be1b6302beb191713cbddd64571580f4f215d8" Jan 26 14:21:21 crc kubenswrapper[4844]: E0126 14:21:21.316516 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 14:21:36 crc kubenswrapper[4844]: I0126 14:21:36.313920 4844 scope.go:117] "RemoveContainer" containerID="948482ceaf80a246ed76115843be1b6302beb191713cbddd64571580f4f215d8" Jan 26 14:21:36 crc kubenswrapper[4844]: E0126 14:21:36.315044 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 14:21:51 crc kubenswrapper[4844]: I0126 14:21:51.318000 4844 scope.go:117] "RemoveContainer" containerID="948482ceaf80a246ed76115843be1b6302beb191713cbddd64571580f4f215d8" Jan 26 14:21:51 crc kubenswrapper[4844]: E0126 14:21:51.319186 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 14:22:06 crc kubenswrapper[4844]: I0126 14:22:06.313470 4844 scope.go:117] "RemoveContainer" containerID="948482ceaf80a246ed76115843be1b6302beb191713cbddd64571580f4f215d8" Jan 26 14:22:06 crc kubenswrapper[4844]: E0126 14:22:06.314198 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 14:22:19 crc kubenswrapper[4844]: I0126 14:22:19.315306 4844 scope.go:117] "RemoveContainer" containerID="948482ceaf80a246ed76115843be1b6302beb191713cbddd64571580f4f215d8" Jan 26 14:22:19 crc kubenswrapper[4844]: E0126 14:22:19.316135 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 14:22:30 crc kubenswrapper[4844]: I0126 14:22:30.312842 4844 scope.go:117] "RemoveContainer" containerID="948482ceaf80a246ed76115843be1b6302beb191713cbddd64571580f4f215d8" Jan 26 14:22:30 crc kubenswrapper[4844]: E0126 14:22:30.313460 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 14:22:43 crc kubenswrapper[4844]: I0126 14:22:43.322399 4844 scope.go:117] "RemoveContainer" containerID="948482ceaf80a246ed76115843be1b6302beb191713cbddd64571580f4f215d8" Jan 26 14:22:43 crc kubenswrapper[4844]: E0126 14:22:43.323322 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 14:22:54 crc kubenswrapper[4844]: I0126 14:22:54.313350 4844 scope.go:117] "RemoveContainer" containerID="948482ceaf80a246ed76115843be1b6302beb191713cbddd64571580f4f215d8" Jan 26 14:22:54 crc kubenswrapper[4844]: E0126 14:22:54.314402 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 14:23:05 crc kubenswrapper[4844]: I0126 14:23:05.314106 4844 
scope.go:117] "RemoveContainer" containerID="948482ceaf80a246ed76115843be1b6302beb191713cbddd64571580f4f215d8" Jan 26 14:23:05 crc kubenswrapper[4844]: E0126 14:23:05.314893 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 14:23:20 crc kubenswrapper[4844]: I0126 14:23:20.314644 4844 scope.go:117] "RemoveContainer" containerID="948482ceaf80a246ed76115843be1b6302beb191713cbddd64571580f4f215d8" Jan 26 14:23:20 crc kubenswrapper[4844]: E0126 14:23:20.315672 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 14:23:34 crc kubenswrapper[4844]: I0126 14:23:34.313745 4844 scope.go:117] "RemoveContainer" containerID="948482ceaf80a246ed76115843be1b6302beb191713cbddd64571580f4f215d8" Jan 26 14:23:34 crc kubenswrapper[4844]: E0126 14:23:34.314724 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 14:23:45 crc kubenswrapper[4844]: I0126 14:23:45.313493 4844 scope.go:117] "RemoveContainer" containerID="948482ceaf80a246ed76115843be1b6302beb191713cbddd64571580f4f215d8" Jan 26 14:23:46 crc kubenswrapper[4844]: I0126 14:23:46.317331 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" event={"ID":"e3602fc7-397b-4d73-ab0c-45acc047397b","Type":"ContainerStarted","Data":"5f8eeb5cfa99d5ce0f9d0308a88bd5f39ff9898b65fecb6afb80daade636480f"} Jan 26 14:24:05 crc kubenswrapper[4844]: E0126 14:24:05.676934 4844 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.142:55522->38.102.83.142:35401: read tcp 38.102.83.142:55522->38.102.83.142:35401: read: connection reset by peer Jan 26 14:25:18 crc kubenswrapper[4844]: I0126 14:25:18.567832 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6t6ll"] Jan 26 14:25:18 crc kubenswrapper[4844]: E0126 14:25:18.569029 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6746fd8d-29c2-4b96-b607-c26dcdcd1437" containerName="extract-utilities" Jan 26 14:25:18 crc kubenswrapper[4844]: I0126 14:25:18.569044 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="6746fd8d-29c2-4b96-b607-c26dcdcd1437" containerName="extract-utilities" Jan 26 14:25:18 crc kubenswrapper[4844]: E0126 14:25:18.569060 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6746fd8d-29c2-4b96-b607-c26dcdcd1437" containerName="extract-content" Jan 26 14:25:18 crc kubenswrapper[4844]: 
I0126 14:25:18.569065 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="6746fd8d-29c2-4b96-b607-c26dcdcd1437" containerName="extract-content" Jan 26 14:25:18 crc kubenswrapper[4844]: E0126 14:25:18.569077 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6746fd8d-29c2-4b96-b607-c26dcdcd1437" containerName="registry-server" Jan 26 14:25:18 crc kubenswrapper[4844]: I0126 14:25:18.569084 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="6746fd8d-29c2-4b96-b607-c26dcdcd1437" containerName="registry-server" Jan 26 14:25:18 crc kubenswrapper[4844]: I0126 14:25:18.569289 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="6746fd8d-29c2-4b96-b607-c26dcdcd1437" containerName="registry-server" Jan 26 14:25:18 crc kubenswrapper[4844]: I0126 14:25:18.570714 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6t6ll" Jan 26 14:25:18 crc kubenswrapper[4844]: I0126 14:25:18.588049 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6t6ll"] Jan 26 14:25:18 crc kubenswrapper[4844]: I0126 14:25:18.598539 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4vld\" (UniqueName: \"kubernetes.io/projected/052d223b-c8e1-4303-b3dd-4856f68f9ee1-kube-api-access-s4vld\") pod \"certified-operators-6t6ll\" (UID: \"052d223b-c8e1-4303-b3dd-4856f68f9ee1\") " pod="openshift-marketplace/certified-operators-6t6ll" Jan 26 14:25:18 crc kubenswrapper[4844]: I0126 14:25:18.598903 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/052d223b-c8e1-4303-b3dd-4856f68f9ee1-utilities\") pod \"certified-operators-6t6ll\" (UID: \"052d223b-c8e1-4303-b3dd-4856f68f9ee1\") " pod="openshift-marketplace/certified-operators-6t6ll" Jan 26 14:25:18 crc kubenswrapper[4844]: I0126 14:25:18.598994 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/052d223b-c8e1-4303-b3dd-4856f68f9ee1-catalog-content\") pod \"certified-operators-6t6ll\" (UID: \"052d223b-c8e1-4303-b3dd-4856f68f9ee1\") " pod="openshift-marketplace/certified-operators-6t6ll" Jan 26 14:25:18 crc kubenswrapper[4844]: I0126 14:25:18.701129 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/052d223b-c8e1-4303-b3dd-4856f68f9ee1-catalog-content\") pod \"certified-operators-6t6ll\" (UID: \"052d223b-c8e1-4303-b3dd-4856f68f9ee1\") " pod="openshift-marketplace/certified-operators-6t6ll" Jan 26 14:25:18 crc kubenswrapper[4844]: I0126 14:25:18.701240 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4vld\" (UniqueName: \"kubernetes.io/projected/052d223b-c8e1-4303-b3dd-4856f68f9ee1-kube-api-access-s4vld\") pod \"certified-operators-6t6ll\" (UID: \"052d223b-c8e1-4303-b3dd-4856f68f9ee1\") " pod="openshift-marketplace/certified-operators-6t6ll" Jan 26 14:25:18 crc kubenswrapper[4844]: I0126 14:25:18.701361 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/052d223b-c8e1-4303-b3dd-4856f68f9ee1-utilities\") pod \"certified-operators-6t6ll\" (UID: \"052d223b-c8e1-4303-b3dd-4856f68f9ee1\") " 
pod="openshift-marketplace/certified-operators-6t6ll" Jan 26 14:25:18 crc kubenswrapper[4844]: I0126 14:25:18.701785 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/052d223b-c8e1-4303-b3dd-4856f68f9ee1-utilities\") pod \"certified-operators-6t6ll\" (UID: \"052d223b-c8e1-4303-b3dd-4856f68f9ee1\") " pod="openshift-marketplace/certified-operators-6t6ll" Jan 26 14:25:18 crc kubenswrapper[4844]: I0126 14:25:18.701991 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/052d223b-c8e1-4303-b3dd-4856f68f9ee1-catalog-content\") pod \"certified-operators-6t6ll\" (UID: \"052d223b-c8e1-4303-b3dd-4856f68f9ee1\") " pod="openshift-marketplace/certified-operators-6t6ll" Jan 26 14:25:18 crc kubenswrapper[4844]: I0126 14:25:18.720643 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4vld\" (UniqueName: \"kubernetes.io/projected/052d223b-c8e1-4303-b3dd-4856f68f9ee1-kube-api-access-s4vld\") pod \"certified-operators-6t6ll\" (UID: \"052d223b-c8e1-4303-b3dd-4856f68f9ee1\") " pod="openshift-marketplace/certified-operators-6t6ll" Jan 26 14:25:18 crc kubenswrapper[4844]: I0126 14:25:18.888055 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6t6ll" Jan 26 14:25:19 crc kubenswrapper[4844]: I0126 14:25:19.365679 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6t6ll"] Jan 26 14:25:19 crc kubenswrapper[4844]: I0126 14:25:19.385031 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6t6ll" event={"ID":"052d223b-c8e1-4303-b3dd-4856f68f9ee1","Type":"ContainerStarted","Data":"9198a318c7ad478ab4bb67a102ad1686647cee40afc21a60b2e9c70aa9e7872f"} Jan 26 14:25:20 crc kubenswrapper[4844]: I0126 14:25:20.399272 4844 generic.go:334] "Generic (PLEG): container finished" podID="052d223b-c8e1-4303-b3dd-4856f68f9ee1" containerID="7d87067fb40090edc6892bb60172c4d753f8fc24235bc33a142496de878c9746" exitCode=0 Jan 26 14:25:20 crc kubenswrapper[4844]: I0126 14:25:20.399396 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6t6ll" event={"ID":"052d223b-c8e1-4303-b3dd-4856f68f9ee1","Type":"ContainerDied","Data":"7d87067fb40090edc6892bb60172c4d753f8fc24235bc33a142496de878c9746"} Jan 26 14:25:20 crc kubenswrapper[4844]: I0126 14:25:20.402456 4844 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 14:25:21 crc kubenswrapper[4844]: I0126 14:25:21.946101 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bbrnw"] Jan 26 14:25:21 crc kubenswrapper[4844]: I0126 14:25:21.948805 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bbrnw" Jan 26 14:25:21 crc kubenswrapper[4844]: I0126 14:25:21.962384 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bbrnw"] Jan 26 14:25:22 crc kubenswrapper[4844]: I0126 14:25:22.083113 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4sd6g\" (UniqueName: \"kubernetes.io/projected/1ef12174-dab3-42ad-8a0a-9982d70f4f62-kube-api-access-4sd6g\") pod \"community-operators-bbrnw\" (UID: \"1ef12174-dab3-42ad-8a0a-9982d70f4f62\") " pod="openshift-marketplace/community-operators-bbrnw" Jan 26 14:25:22 crc kubenswrapper[4844]: I0126 14:25:22.083331 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ef12174-dab3-42ad-8a0a-9982d70f4f62-catalog-content\") pod \"community-operators-bbrnw\" (UID: \"1ef12174-dab3-42ad-8a0a-9982d70f4f62\") " pod="openshift-marketplace/community-operators-bbrnw" Jan 26 14:25:22 crc kubenswrapper[4844]: I0126 14:25:22.083420 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ef12174-dab3-42ad-8a0a-9982d70f4f62-utilities\") pod \"community-operators-bbrnw\" (UID: \"1ef12174-dab3-42ad-8a0a-9982d70f4f62\") " pod="openshift-marketplace/community-operators-bbrnw" Jan 26 14:25:22 crc kubenswrapper[4844]: I0126 14:25:22.185665 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4sd6g\" (UniqueName: \"kubernetes.io/projected/1ef12174-dab3-42ad-8a0a-9982d70f4f62-kube-api-access-4sd6g\") pod \"community-operators-bbrnw\" (UID: \"1ef12174-dab3-42ad-8a0a-9982d70f4f62\") " pod="openshift-marketplace/community-operators-bbrnw" Jan 26 14:25:22 crc kubenswrapper[4844]: I0126 14:25:22.185830 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ef12174-dab3-42ad-8a0a-9982d70f4f62-catalog-content\") pod \"community-operators-bbrnw\" (UID: \"1ef12174-dab3-42ad-8a0a-9982d70f4f62\") " pod="openshift-marketplace/community-operators-bbrnw" Jan 26 14:25:22 crc kubenswrapper[4844]: I0126 14:25:22.185908 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ef12174-dab3-42ad-8a0a-9982d70f4f62-utilities\") pod \"community-operators-bbrnw\" (UID: \"1ef12174-dab3-42ad-8a0a-9982d70f4f62\") " pod="openshift-marketplace/community-operators-bbrnw" Jan 26 14:25:22 crc kubenswrapper[4844]: I0126 14:25:22.186431 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ef12174-dab3-42ad-8a0a-9982d70f4f62-catalog-content\") pod \"community-operators-bbrnw\" (UID: \"1ef12174-dab3-42ad-8a0a-9982d70f4f62\") " pod="openshift-marketplace/community-operators-bbrnw" Jan 26 14:25:22 crc kubenswrapper[4844]: I0126 14:25:22.186720 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ef12174-dab3-42ad-8a0a-9982d70f4f62-utilities\") pod \"community-operators-bbrnw\" (UID: \"1ef12174-dab3-42ad-8a0a-9982d70f4f62\") " pod="openshift-marketplace/community-operators-bbrnw" Jan 26 14:25:22 crc kubenswrapper[4844]: I0126 14:25:22.207118 4844 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-4sd6g\" (UniqueName: \"kubernetes.io/projected/1ef12174-dab3-42ad-8a0a-9982d70f4f62-kube-api-access-4sd6g\") pod \"community-operators-bbrnw\" (UID: \"1ef12174-dab3-42ad-8a0a-9982d70f4f62\") " pod="openshift-marketplace/community-operators-bbrnw" Jan 26 14:25:22 crc kubenswrapper[4844]: I0126 14:25:22.285470 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bbrnw" Jan 26 14:25:22 crc kubenswrapper[4844]: I0126 14:25:22.437016 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6t6ll" event={"ID":"052d223b-c8e1-4303-b3dd-4856f68f9ee1","Type":"ContainerStarted","Data":"83e402a59f29d8e9b0b04de5ea9a6bac426fd34c629e9832467a819a608a509b"} Jan 26 14:25:22 crc kubenswrapper[4844]: I0126 14:25:22.846466 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bbrnw"] Jan 26 14:25:23 crc kubenswrapper[4844]: I0126 14:25:23.454055 4844 generic.go:334] "Generic (PLEG): container finished" podID="052d223b-c8e1-4303-b3dd-4856f68f9ee1" containerID="83e402a59f29d8e9b0b04de5ea9a6bac426fd34c629e9832467a819a608a509b" exitCode=0 Jan 26 14:25:23 crc kubenswrapper[4844]: I0126 14:25:23.454125 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6t6ll" event={"ID":"052d223b-c8e1-4303-b3dd-4856f68f9ee1","Type":"ContainerDied","Data":"83e402a59f29d8e9b0b04de5ea9a6bac426fd34c629e9832467a819a608a509b"} Jan 26 14:25:23 crc kubenswrapper[4844]: I0126 14:25:23.456662 4844 generic.go:334] "Generic (PLEG): container finished" podID="1ef12174-dab3-42ad-8a0a-9982d70f4f62" containerID="e169d785ab122ff49a38c0aacf717de571764f89dd0db4bd4712207317afaff3" exitCode=0 Jan 26 14:25:23 crc kubenswrapper[4844]: I0126 14:25:23.456708 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bbrnw" event={"ID":"1ef12174-dab3-42ad-8a0a-9982d70f4f62","Type":"ContainerDied","Data":"e169d785ab122ff49a38c0aacf717de571764f89dd0db4bd4712207317afaff3"} Jan 26 14:25:23 crc kubenswrapper[4844]: I0126 14:25:23.456748 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bbrnw" event={"ID":"1ef12174-dab3-42ad-8a0a-9982d70f4f62","Type":"ContainerStarted","Data":"7032d71facb65502b0fe68d6025c4600075d3c4a7096028d61ef47d1196008d8"} Jan 26 14:25:24 crc kubenswrapper[4844]: I0126 14:25:24.473376 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bbrnw" event={"ID":"1ef12174-dab3-42ad-8a0a-9982d70f4f62","Type":"ContainerStarted","Data":"c55609c7667662ab58faca1904a727bff27c0d79a0f1bf8573b2fdd8d28f0a48"} Jan 26 14:25:24 crc kubenswrapper[4844]: I0126 14:25:24.483460 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6t6ll" event={"ID":"052d223b-c8e1-4303-b3dd-4856f68f9ee1","Type":"ContainerStarted","Data":"951059adbb686255dd7881af7560037bdba8066a1a25dbd9bebe95d3fcba1824"} Jan 26 14:25:26 crc kubenswrapper[4844]: I0126 14:25:26.504202 4844 generic.go:334] "Generic (PLEG): container finished" podID="1ef12174-dab3-42ad-8a0a-9982d70f4f62" containerID="c55609c7667662ab58faca1904a727bff27c0d79a0f1bf8573b2fdd8d28f0a48" exitCode=0 Jan 26 14:25:26 crc kubenswrapper[4844]: I0126 14:25:26.504412 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bbrnw" 
event={"ID":"1ef12174-dab3-42ad-8a0a-9982d70f4f62","Type":"ContainerDied","Data":"c55609c7667662ab58faca1904a727bff27c0d79a0f1bf8573b2fdd8d28f0a48"} Jan 26 14:25:26 crc kubenswrapper[4844]: I0126 14:25:26.534441 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6t6ll" podStartSLOduration=5.019151161 podStartE2EDuration="8.534399286s" podCreationTimestamp="2026-01-26 14:25:18 +0000 UTC" firstStartedPulling="2026-01-26 14:25:20.401943943 +0000 UTC m=+6097.335311595" lastFinishedPulling="2026-01-26 14:25:23.917192108 +0000 UTC m=+6100.850559720" observedRunningTime="2026-01-26 14:25:24.520772377 +0000 UTC m=+6101.454139999" watchObservedRunningTime="2026-01-26 14:25:26.534399286 +0000 UTC m=+6103.467766908" Jan 26 14:25:27 crc kubenswrapper[4844]: I0126 14:25:27.519105 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bbrnw" event={"ID":"1ef12174-dab3-42ad-8a0a-9982d70f4f62","Type":"ContainerStarted","Data":"2a63e4b6deeddb1d4a3d6375e3b2685ec5ac56c9a83679dab4e4e97bd7bb9fdf"} Jan 26 14:25:27 crc kubenswrapper[4844]: I0126 14:25:27.543111 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bbrnw" podStartSLOduration=3.114692908 podStartE2EDuration="6.543094891s" podCreationTimestamp="2026-01-26 14:25:21 +0000 UTC" firstStartedPulling="2026-01-26 14:25:23.470370079 +0000 UTC m=+6100.403737731" lastFinishedPulling="2026-01-26 14:25:26.898772102 +0000 UTC m=+6103.832139714" observedRunningTime="2026-01-26 14:25:27.534904873 +0000 UTC m=+6104.468272485" watchObservedRunningTime="2026-01-26 14:25:27.543094891 +0000 UTC m=+6104.476462503" Jan 26 14:25:28 crc kubenswrapper[4844]: I0126 14:25:28.888553 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6t6ll" Jan 26 14:25:28 crc kubenswrapper[4844]: I0126 14:25:28.888950 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6t6ll" Jan 26 14:25:28 crc kubenswrapper[4844]: I0126 14:25:28.969521 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6t6ll" Jan 26 14:25:29 crc kubenswrapper[4844]: I0126 14:25:29.589154 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6t6ll" Jan 26 14:25:31 crc kubenswrapper[4844]: I0126 14:25:31.136060 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6t6ll"] Jan 26 14:25:31 crc kubenswrapper[4844]: I0126 14:25:31.562743 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6t6ll" podUID="052d223b-c8e1-4303-b3dd-4856f68f9ee1" containerName="registry-server" containerID="cri-o://951059adbb686255dd7881af7560037bdba8066a1a25dbd9bebe95d3fcba1824" gracePeriod=2 Jan 26 14:25:32 crc kubenswrapper[4844]: I0126 14:25:32.113589 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6t6ll" Jan 26 14:25:32 crc kubenswrapper[4844]: I0126 14:25:32.214773 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/052d223b-c8e1-4303-b3dd-4856f68f9ee1-utilities\") pod \"052d223b-c8e1-4303-b3dd-4856f68f9ee1\" (UID: \"052d223b-c8e1-4303-b3dd-4856f68f9ee1\") " Jan 26 14:25:32 crc kubenswrapper[4844]: I0126 14:25:32.215797 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/052d223b-c8e1-4303-b3dd-4856f68f9ee1-utilities" (OuterVolumeSpecName: "utilities") pod "052d223b-c8e1-4303-b3dd-4856f68f9ee1" (UID: "052d223b-c8e1-4303-b3dd-4856f68f9ee1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:25:32 crc kubenswrapper[4844]: I0126 14:25:32.216084 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/052d223b-c8e1-4303-b3dd-4856f68f9ee1-catalog-content\") pod \"052d223b-c8e1-4303-b3dd-4856f68f9ee1\" (UID: \"052d223b-c8e1-4303-b3dd-4856f68f9ee1\") " Jan 26 14:25:32 crc kubenswrapper[4844]: I0126 14:25:32.216225 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4vld\" (UniqueName: \"kubernetes.io/projected/052d223b-c8e1-4303-b3dd-4856f68f9ee1-kube-api-access-s4vld\") pod \"052d223b-c8e1-4303-b3dd-4856f68f9ee1\" (UID: \"052d223b-c8e1-4303-b3dd-4856f68f9ee1\") " Jan 26 14:25:32 crc kubenswrapper[4844]: I0126 14:25:32.217008 4844 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/052d223b-c8e1-4303-b3dd-4856f68f9ee1-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 14:25:32 crc kubenswrapper[4844]: I0126 14:25:32.222136 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/052d223b-c8e1-4303-b3dd-4856f68f9ee1-kube-api-access-s4vld" (OuterVolumeSpecName: "kube-api-access-s4vld") pod "052d223b-c8e1-4303-b3dd-4856f68f9ee1" (UID: "052d223b-c8e1-4303-b3dd-4856f68f9ee1"). InnerVolumeSpecName "kube-api-access-s4vld". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:25:32 crc kubenswrapper[4844]: I0126 14:25:32.282812 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/052d223b-c8e1-4303-b3dd-4856f68f9ee1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "052d223b-c8e1-4303-b3dd-4856f68f9ee1" (UID: "052d223b-c8e1-4303-b3dd-4856f68f9ee1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:25:32 crc kubenswrapper[4844]: I0126 14:25:32.286141 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bbrnw" Jan 26 14:25:32 crc kubenswrapper[4844]: I0126 14:25:32.286223 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bbrnw" Jan 26 14:25:32 crc kubenswrapper[4844]: I0126 14:25:32.318940 4844 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/052d223b-c8e1-4303-b3dd-4856f68f9ee1-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 14:25:32 crc kubenswrapper[4844]: I0126 14:25:32.319211 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4vld\" (UniqueName: \"kubernetes.io/projected/052d223b-c8e1-4303-b3dd-4856f68f9ee1-kube-api-access-s4vld\") on node \"crc\" DevicePath \"\"" Jan 26 14:25:32 crc kubenswrapper[4844]: I0126 14:25:32.336377 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bbrnw" Jan 26 14:25:32 crc kubenswrapper[4844]: I0126 14:25:32.574383 4844 generic.go:334] "Generic (PLEG): container finished" podID="052d223b-c8e1-4303-b3dd-4856f68f9ee1" containerID="951059adbb686255dd7881af7560037bdba8066a1a25dbd9bebe95d3fcba1824" exitCode=0 Jan 26 14:25:32 crc kubenswrapper[4844]: I0126 14:25:32.574891 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6t6ll" Jan 26 14:25:32 crc kubenswrapper[4844]: I0126 14:25:32.574880 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6t6ll" event={"ID":"052d223b-c8e1-4303-b3dd-4856f68f9ee1","Type":"ContainerDied","Data":"951059adbb686255dd7881af7560037bdba8066a1a25dbd9bebe95d3fcba1824"} Jan 26 14:25:32 crc kubenswrapper[4844]: I0126 14:25:32.574972 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6t6ll" event={"ID":"052d223b-c8e1-4303-b3dd-4856f68f9ee1","Type":"ContainerDied","Data":"9198a318c7ad478ab4bb67a102ad1686647cee40afc21a60b2e9c70aa9e7872f"} Jan 26 14:25:32 crc kubenswrapper[4844]: I0126 14:25:32.575017 4844 scope.go:117] "RemoveContainer" containerID="951059adbb686255dd7881af7560037bdba8066a1a25dbd9bebe95d3fcba1824" Jan 26 14:25:32 crc kubenswrapper[4844]: I0126 14:25:32.611742 4844 scope.go:117] "RemoveContainer" containerID="83e402a59f29d8e9b0b04de5ea9a6bac426fd34c629e9832467a819a608a509b" Jan 26 14:25:32 crc kubenswrapper[4844]: I0126 14:25:32.620825 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6t6ll"] Jan 26 14:25:32 crc kubenswrapper[4844]: I0126 14:25:32.633694 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6t6ll"] Jan 26 14:25:32 crc kubenswrapper[4844]: I0126 14:25:32.633870 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bbrnw" Jan 26 14:25:32 crc kubenswrapper[4844]: I0126 14:25:32.639398 4844 scope.go:117] "RemoveContainer" containerID="7d87067fb40090edc6892bb60172c4d753f8fc24235bc33a142496de878c9746" Jan 26 14:25:32 crc kubenswrapper[4844]: I0126 14:25:32.691553 4844 scope.go:117] "RemoveContainer" containerID="951059adbb686255dd7881af7560037bdba8066a1a25dbd9bebe95d3fcba1824" Jan 26 14:25:32 crc 
kubenswrapper[4844]: E0126 14:25:32.691988 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"951059adbb686255dd7881af7560037bdba8066a1a25dbd9bebe95d3fcba1824\": container with ID starting with 951059adbb686255dd7881af7560037bdba8066a1a25dbd9bebe95d3fcba1824 not found: ID does not exist" containerID="951059adbb686255dd7881af7560037bdba8066a1a25dbd9bebe95d3fcba1824" Jan 26 14:25:32 crc kubenswrapper[4844]: I0126 14:25:32.692047 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"951059adbb686255dd7881af7560037bdba8066a1a25dbd9bebe95d3fcba1824"} err="failed to get container status \"951059adbb686255dd7881af7560037bdba8066a1a25dbd9bebe95d3fcba1824\": rpc error: code = NotFound desc = could not find container \"951059adbb686255dd7881af7560037bdba8066a1a25dbd9bebe95d3fcba1824\": container with ID starting with 951059adbb686255dd7881af7560037bdba8066a1a25dbd9bebe95d3fcba1824 not found: ID does not exist" Jan 26 14:25:32 crc kubenswrapper[4844]: I0126 14:25:32.692073 4844 scope.go:117] "RemoveContainer" containerID="83e402a59f29d8e9b0b04de5ea9a6bac426fd34c629e9832467a819a608a509b" Jan 26 14:25:32 crc kubenswrapper[4844]: E0126 14:25:32.692405 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83e402a59f29d8e9b0b04de5ea9a6bac426fd34c629e9832467a819a608a509b\": container with ID starting with 83e402a59f29d8e9b0b04de5ea9a6bac426fd34c629e9832467a819a608a509b not found: ID does not exist" containerID="83e402a59f29d8e9b0b04de5ea9a6bac426fd34c629e9832467a819a608a509b" Jan 26 14:25:32 crc kubenswrapper[4844]: I0126 14:25:32.692439 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83e402a59f29d8e9b0b04de5ea9a6bac426fd34c629e9832467a819a608a509b"} err="failed to get container status \"83e402a59f29d8e9b0b04de5ea9a6bac426fd34c629e9832467a819a608a509b\": rpc error: code = NotFound desc = could not find container \"83e402a59f29d8e9b0b04de5ea9a6bac426fd34c629e9832467a819a608a509b\": container with ID starting with 83e402a59f29d8e9b0b04de5ea9a6bac426fd34c629e9832467a819a608a509b not found: ID does not exist" Jan 26 14:25:32 crc kubenswrapper[4844]: I0126 14:25:32.692461 4844 scope.go:117] "RemoveContainer" containerID="7d87067fb40090edc6892bb60172c4d753f8fc24235bc33a142496de878c9746" Jan 26 14:25:32 crc kubenswrapper[4844]: E0126 14:25:32.692783 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d87067fb40090edc6892bb60172c4d753f8fc24235bc33a142496de878c9746\": container with ID starting with 7d87067fb40090edc6892bb60172c4d753f8fc24235bc33a142496de878c9746 not found: ID does not exist" containerID="7d87067fb40090edc6892bb60172c4d753f8fc24235bc33a142496de878c9746" Jan 26 14:25:32 crc kubenswrapper[4844]: I0126 14:25:32.692847 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d87067fb40090edc6892bb60172c4d753f8fc24235bc33a142496de878c9746"} err="failed to get container status \"7d87067fb40090edc6892bb60172c4d753f8fc24235bc33a142496de878c9746\": rpc error: code = NotFound desc = could not find container \"7d87067fb40090edc6892bb60172c4d753f8fc24235bc33a142496de878c9746\": container with ID starting with 7d87067fb40090edc6892bb60172c4d753f8fc24235bc33a142496de878c9746 not found: ID does not exist" Jan 26 14:25:33 crc kubenswrapper[4844]: 
I0126 14:25:33.327528 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="052d223b-c8e1-4303-b3dd-4856f68f9ee1" path="/var/lib/kubelet/pods/052d223b-c8e1-4303-b3dd-4856f68f9ee1/volumes" Jan 26 14:25:34 crc kubenswrapper[4844]: I0126 14:25:34.740790 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bbrnw"] Jan 26 14:25:34 crc kubenswrapper[4844]: I0126 14:25:34.741519 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bbrnw" podUID="1ef12174-dab3-42ad-8a0a-9982d70f4f62" containerName="registry-server" containerID="cri-o://2a63e4b6deeddb1d4a3d6375e3b2685ec5ac56c9a83679dab4e4e97bd7bb9fdf" gracePeriod=2 Jan 26 14:25:35 crc kubenswrapper[4844]: I0126 14:25:35.343018 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bbrnw" Jan 26 14:25:35 crc kubenswrapper[4844]: I0126 14:25:35.390375 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ef12174-dab3-42ad-8a0a-9982d70f4f62-catalog-content\") pod \"1ef12174-dab3-42ad-8a0a-9982d70f4f62\" (UID: \"1ef12174-dab3-42ad-8a0a-9982d70f4f62\") " Jan 26 14:25:35 crc kubenswrapper[4844]: I0126 14:25:35.390437 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ef12174-dab3-42ad-8a0a-9982d70f4f62-utilities\") pod \"1ef12174-dab3-42ad-8a0a-9982d70f4f62\" (UID: \"1ef12174-dab3-42ad-8a0a-9982d70f4f62\") " Jan 26 14:25:35 crc kubenswrapper[4844]: I0126 14:25:35.390641 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4sd6g\" (UniqueName: \"kubernetes.io/projected/1ef12174-dab3-42ad-8a0a-9982d70f4f62-kube-api-access-4sd6g\") pod \"1ef12174-dab3-42ad-8a0a-9982d70f4f62\" (UID: \"1ef12174-dab3-42ad-8a0a-9982d70f4f62\") " Jan 26 14:25:35 crc kubenswrapper[4844]: I0126 14:25:35.391464 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ef12174-dab3-42ad-8a0a-9982d70f4f62-utilities" (OuterVolumeSpecName: "utilities") pod "1ef12174-dab3-42ad-8a0a-9982d70f4f62" (UID: "1ef12174-dab3-42ad-8a0a-9982d70f4f62"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:25:35 crc kubenswrapper[4844]: I0126 14:25:35.397260 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ef12174-dab3-42ad-8a0a-9982d70f4f62-kube-api-access-4sd6g" (OuterVolumeSpecName: "kube-api-access-4sd6g") pod "1ef12174-dab3-42ad-8a0a-9982d70f4f62" (UID: "1ef12174-dab3-42ad-8a0a-9982d70f4f62"). InnerVolumeSpecName "kube-api-access-4sd6g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:25:35 crc kubenswrapper[4844]: I0126 14:25:35.450959 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ef12174-dab3-42ad-8a0a-9982d70f4f62-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1ef12174-dab3-42ad-8a0a-9982d70f4f62" (UID: "1ef12174-dab3-42ad-8a0a-9982d70f4f62"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:25:35 crc kubenswrapper[4844]: I0126 14:25:35.493585 4844 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ef12174-dab3-42ad-8a0a-9982d70f4f62-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 14:25:35 crc kubenswrapper[4844]: I0126 14:25:35.493638 4844 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ef12174-dab3-42ad-8a0a-9982d70f4f62-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 14:25:35 crc kubenswrapper[4844]: I0126 14:25:35.493649 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4sd6g\" (UniqueName: \"kubernetes.io/projected/1ef12174-dab3-42ad-8a0a-9982d70f4f62-kube-api-access-4sd6g\") on node \"crc\" DevicePath \"\"" Jan 26 14:25:35 crc kubenswrapper[4844]: I0126 14:25:35.627659 4844 generic.go:334] "Generic (PLEG): container finished" podID="1ef12174-dab3-42ad-8a0a-9982d70f4f62" containerID="2a63e4b6deeddb1d4a3d6375e3b2685ec5ac56c9a83679dab4e4e97bd7bb9fdf" exitCode=0 Jan 26 14:25:35 crc kubenswrapper[4844]: I0126 14:25:35.627699 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bbrnw" event={"ID":"1ef12174-dab3-42ad-8a0a-9982d70f4f62","Type":"ContainerDied","Data":"2a63e4b6deeddb1d4a3d6375e3b2685ec5ac56c9a83679dab4e4e97bd7bb9fdf"} Jan 26 14:25:35 crc kubenswrapper[4844]: I0126 14:25:35.627733 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bbrnw" Jan 26 14:25:35 crc kubenswrapper[4844]: I0126 14:25:35.627747 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bbrnw" event={"ID":"1ef12174-dab3-42ad-8a0a-9982d70f4f62","Type":"ContainerDied","Data":"7032d71facb65502b0fe68d6025c4600075d3c4a7096028d61ef47d1196008d8"} Jan 26 14:25:35 crc kubenswrapper[4844]: I0126 14:25:35.627771 4844 scope.go:117] "RemoveContainer" containerID="2a63e4b6deeddb1d4a3d6375e3b2685ec5ac56c9a83679dab4e4e97bd7bb9fdf" Jan 26 14:25:35 crc kubenswrapper[4844]: I0126 14:25:35.662293 4844 scope.go:117] "RemoveContainer" containerID="c55609c7667662ab58faca1904a727bff27c0d79a0f1bf8573b2fdd8d28f0a48" Jan 26 14:25:35 crc kubenswrapper[4844]: I0126 14:25:35.668253 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bbrnw"] Jan 26 14:25:35 crc kubenswrapper[4844]: I0126 14:25:35.677503 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-bbrnw"] Jan 26 14:25:35 crc kubenswrapper[4844]: I0126 14:25:35.703632 4844 scope.go:117] "RemoveContainer" containerID="e169d785ab122ff49a38c0aacf717de571764f89dd0db4bd4712207317afaff3" Jan 26 14:25:35 crc kubenswrapper[4844]: I0126 14:25:35.741270 4844 scope.go:117] "RemoveContainer" containerID="2a63e4b6deeddb1d4a3d6375e3b2685ec5ac56c9a83679dab4e4e97bd7bb9fdf" Jan 26 14:25:35 crc kubenswrapper[4844]: E0126 14:25:35.741754 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a63e4b6deeddb1d4a3d6375e3b2685ec5ac56c9a83679dab4e4e97bd7bb9fdf\": container with ID starting with 2a63e4b6deeddb1d4a3d6375e3b2685ec5ac56c9a83679dab4e4e97bd7bb9fdf not found: ID does not exist" containerID="2a63e4b6deeddb1d4a3d6375e3b2685ec5ac56c9a83679dab4e4e97bd7bb9fdf" Jan 26 14:25:35 crc kubenswrapper[4844]: I0126 14:25:35.741785 
4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a63e4b6deeddb1d4a3d6375e3b2685ec5ac56c9a83679dab4e4e97bd7bb9fdf"} err="failed to get container status \"2a63e4b6deeddb1d4a3d6375e3b2685ec5ac56c9a83679dab4e4e97bd7bb9fdf\": rpc error: code = NotFound desc = could not find container \"2a63e4b6deeddb1d4a3d6375e3b2685ec5ac56c9a83679dab4e4e97bd7bb9fdf\": container with ID starting with 2a63e4b6deeddb1d4a3d6375e3b2685ec5ac56c9a83679dab4e4e97bd7bb9fdf not found: ID does not exist" Jan 26 14:25:35 crc kubenswrapper[4844]: I0126 14:25:35.741809 4844 scope.go:117] "RemoveContainer" containerID="c55609c7667662ab58faca1904a727bff27c0d79a0f1bf8573b2fdd8d28f0a48" Jan 26 14:25:35 crc kubenswrapper[4844]: E0126 14:25:35.742153 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c55609c7667662ab58faca1904a727bff27c0d79a0f1bf8573b2fdd8d28f0a48\": container with ID starting with c55609c7667662ab58faca1904a727bff27c0d79a0f1bf8573b2fdd8d28f0a48 not found: ID does not exist" containerID="c55609c7667662ab58faca1904a727bff27c0d79a0f1bf8573b2fdd8d28f0a48" Jan 26 14:25:35 crc kubenswrapper[4844]: I0126 14:25:35.742179 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c55609c7667662ab58faca1904a727bff27c0d79a0f1bf8573b2fdd8d28f0a48"} err="failed to get container status \"c55609c7667662ab58faca1904a727bff27c0d79a0f1bf8573b2fdd8d28f0a48\": rpc error: code = NotFound desc = could not find container \"c55609c7667662ab58faca1904a727bff27c0d79a0f1bf8573b2fdd8d28f0a48\": container with ID starting with c55609c7667662ab58faca1904a727bff27c0d79a0f1bf8573b2fdd8d28f0a48 not found: ID does not exist" Jan 26 14:25:35 crc kubenswrapper[4844]: I0126 14:25:35.742197 4844 scope.go:117] "RemoveContainer" containerID="e169d785ab122ff49a38c0aacf717de571764f89dd0db4bd4712207317afaff3" Jan 26 14:25:35 crc kubenswrapper[4844]: E0126 14:25:35.742491 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e169d785ab122ff49a38c0aacf717de571764f89dd0db4bd4712207317afaff3\": container with ID starting with e169d785ab122ff49a38c0aacf717de571764f89dd0db4bd4712207317afaff3 not found: ID does not exist" containerID="e169d785ab122ff49a38c0aacf717de571764f89dd0db4bd4712207317afaff3" Jan 26 14:25:35 crc kubenswrapper[4844]: I0126 14:25:35.742527 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e169d785ab122ff49a38c0aacf717de571764f89dd0db4bd4712207317afaff3"} err="failed to get container status \"e169d785ab122ff49a38c0aacf717de571764f89dd0db4bd4712207317afaff3\": rpc error: code = NotFound desc = could not find container \"e169d785ab122ff49a38c0aacf717de571764f89dd0db4bd4712207317afaff3\": container with ID starting with e169d785ab122ff49a38c0aacf717de571764f89dd0db4bd4712207317afaff3 not found: ID does not exist" Jan 26 14:25:37 crc kubenswrapper[4844]: I0126 14:25:37.387207 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ef12174-dab3-42ad-8a0a-9982d70f4f62" path="/var/lib/kubelet/pods/1ef12174-dab3-42ad-8a0a-9982d70f4f62/volumes" Jan 26 14:26:06 crc kubenswrapper[4844]: I0126 14:26:06.365196 4844 patch_prober.go:28] interesting pod/machine-config-daemon-j7r9j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 14:26:06 crc kubenswrapper[4844]: I0126 14:26:06.365914 4844 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 14:26:36 crc kubenswrapper[4844]: I0126 14:26:36.364424 4844 patch_prober.go:28] interesting pod/machine-config-daemon-j7r9j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 14:26:36 crc kubenswrapper[4844]: I0126 14:26:36.365027 4844 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 14:26:41 crc kubenswrapper[4844]: E0126 14:26:41.069563 4844 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.142:57662->38.102.83.142:35401: write tcp 38.102.83.142:57662->38.102.83.142:35401: write: broken pipe Jan 26 14:26:57 crc kubenswrapper[4844]: I0126 14:26:57.090541 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-gdf4v"] Jan 26 14:26:57 crc kubenswrapper[4844]: E0126 14:26:57.091532 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="052d223b-c8e1-4303-b3dd-4856f68f9ee1" containerName="registry-server" Jan 26 14:26:57 crc kubenswrapper[4844]: I0126 14:26:57.091549 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="052d223b-c8e1-4303-b3dd-4856f68f9ee1" containerName="registry-server" Jan 26 14:26:57 crc kubenswrapper[4844]: E0126 14:26:57.091577 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="052d223b-c8e1-4303-b3dd-4856f68f9ee1" containerName="extract-utilities" Jan 26 14:26:57 crc kubenswrapper[4844]: I0126 14:26:57.091585 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="052d223b-c8e1-4303-b3dd-4856f68f9ee1" containerName="extract-utilities" Jan 26 14:26:57 crc kubenswrapper[4844]: E0126 14:26:57.096473 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ef12174-dab3-42ad-8a0a-9982d70f4f62" containerName="extract-utilities" Jan 26 14:26:57 crc kubenswrapper[4844]: I0126 14:26:57.096502 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ef12174-dab3-42ad-8a0a-9982d70f4f62" containerName="extract-utilities" Jan 26 14:26:57 crc kubenswrapper[4844]: E0126 14:26:57.096521 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="052d223b-c8e1-4303-b3dd-4856f68f9ee1" containerName="extract-content" Jan 26 14:26:57 crc kubenswrapper[4844]: I0126 14:26:57.096528 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="052d223b-c8e1-4303-b3dd-4856f68f9ee1" containerName="extract-content" Jan 26 14:26:57 crc kubenswrapper[4844]: E0126 14:26:57.096558 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ef12174-dab3-42ad-8a0a-9982d70f4f62" containerName="registry-server" Jan 26 14:26:57 crc kubenswrapper[4844]: I0126 14:26:57.096564 4844 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="1ef12174-dab3-42ad-8a0a-9982d70f4f62" containerName="registry-server" Jan 26 14:26:57 crc kubenswrapper[4844]: E0126 14:26:57.096637 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ef12174-dab3-42ad-8a0a-9982d70f4f62" containerName="extract-content" Jan 26 14:26:57 crc kubenswrapper[4844]: I0126 14:26:57.096651 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ef12174-dab3-42ad-8a0a-9982d70f4f62" containerName="extract-content" Jan 26 14:26:57 crc kubenswrapper[4844]: I0126 14:26:57.097060 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="052d223b-c8e1-4303-b3dd-4856f68f9ee1" containerName="registry-server" Jan 26 14:26:57 crc kubenswrapper[4844]: I0126 14:26:57.097088 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ef12174-dab3-42ad-8a0a-9982d70f4f62" containerName="registry-server" Jan 26 14:26:57 crc kubenswrapper[4844]: I0126 14:26:57.098654 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gdf4v" Jan 26 14:26:57 crc kubenswrapper[4844]: I0126 14:26:57.118397 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gdf4v"] Jan 26 14:26:57 crc kubenswrapper[4844]: I0126 14:26:57.244280 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c51494d8-69db-436d-b570-25ec474d86bf-catalog-content\") pod \"redhat-operators-gdf4v\" (UID: \"c51494d8-69db-436d-b570-25ec474d86bf\") " pod="openshift-marketplace/redhat-operators-gdf4v" Jan 26 14:26:57 crc kubenswrapper[4844]: I0126 14:26:57.244389 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-872wd\" (UniqueName: \"kubernetes.io/projected/c51494d8-69db-436d-b570-25ec474d86bf-kube-api-access-872wd\") pod \"redhat-operators-gdf4v\" (UID: \"c51494d8-69db-436d-b570-25ec474d86bf\") " pod="openshift-marketplace/redhat-operators-gdf4v" Jan 26 14:26:57 crc kubenswrapper[4844]: I0126 14:26:57.244418 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c51494d8-69db-436d-b570-25ec474d86bf-utilities\") pod \"redhat-operators-gdf4v\" (UID: \"c51494d8-69db-436d-b570-25ec474d86bf\") " pod="openshift-marketplace/redhat-operators-gdf4v" Jan 26 14:26:57 crc kubenswrapper[4844]: I0126 14:26:57.347027 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-872wd\" (UniqueName: \"kubernetes.io/projected/c51494d8-69db-436d-b570-25ec474d86bf-kube-api-access-872wd\") pod \"redhat-operators-gdf4v\" (UID: \"c51494d8-69db-436d-b570-25ec474d86bf\") " pod="openshift-marketplace/redhat-operators-gdf4v" Jan 26 14:26:57 crc kubenswrapper[4844]: I0126 14:26:57.347096 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c51494d8-69db-436d-b570-25ec474d86bf-utilities\") pod \"redhat-operators-gdf4v\" (UID: \"c51494d8-69db-436d-b570-25ec474d86bf\") " pod="openshift-marketplace/redhat-operators-gdf4v" Jan 26 14:26:57 crc kubenswrapper[4844]: I0126 14:26:57.347260 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c51494d8-69db-436d-b570-25ec474d86bf-catalog-content\") pod \"redhat-operators-gdf4v\" (UID: 
\"c51494d8-69db-436d-b570-25ec474d86bf\") " pod="openshift-marketplace/redhat-operators-gdf4v" Jan 26 14:26:57 crc kubenswrapper[4844]: I0126 14:26:57.347820 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c51494d8-69db-436d-b570-25ec474d86bf-catalog-content\") pod \"redhat-operators-gdf4v\" (UID: \"c51494d8-69db-436d-b570-25ec474d86bf\") " pod="openshift-marketplace/redhat-operators-gdf4v" Jan 26 14:26:57 crc kubenswrapper[4844]: I0126 14:26:57.348079 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c51494d8-69db-436d-b570-25ec474d86bf-utilities\") pod \"redhat-operators-gdf4v\" (UID: \"c51494d8-69db-436d-b570-25ec474d86bf\") " pod="openshift-marketplace/redhat-operators-gdf4v" Jan 26 14:26:57 crc kubenswrapper[4844]: I0126 14:26:57.391548 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-872wd\" (UniqueName: \"kubernetes.io/projected/c51494d8-69db-436d-b570-25ec474d86bf-kube-api-access-872wd\") pod \"redhat-operators-gdf4v\" (UID: \"c51494d8-69db-436d-b570-25ec474d86bf\") " pod="openshift-marketplace/redhat-operators-gdf4v" Jan 26 14:26:57 crc kubenswrapper[4844]: I0126 14:26:57.434813 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gdf4v" Jan 26 14:26:57 crc kubenswrapper[4844]: I0126 14:26:57.989891 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gdf4v"] Jan 26 14:26:58 crc kubenswrapper[4844]: I0126 14:26:58.822161 4844 generic.go:334] "Generic (PLEG): container finished" podID="c51494d8-69db-436d-b570-25ec474d86bf" containerID="a444336c08001890c9467ab675bf6d96fcbe51bb0c1b70db78279a1666ebb0f5" exitCode=0 Jan 26 14:26:58 crc kubenswrapper[4844]: I0126 14:26:58.822250 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gdf4v" event={"ID":"c51494d8-69db-436d-b570-25ec474d86bf","Type":"ContainerDied","Data":"a444336c08001890c9467ab675bf6d96fcbe51bb0c1b70db78279a1666ebb0f5"} Jan 26 14:26:58 crc kubenswrapper[4844]: I0126 14:26:58.822470 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gdf4v" event={"ID":"c51494d8-69db-436d-b570-25ec474d86bf","Type":"ContainerStarted","Data":"c60d7914848c35b8a2bfcbc1fe4f5856e34f5b48bc3327d18320341d130b49c7"} Jan 26 14:27:00 crc kubenswrapper[4844]: I0126 14:27:00.841826 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gdf4v" event={"ID":"c51494d8-69db-436d-b570-25ec474d86bf","Type":"ContainerStarted","Data":"1191d0dfd3e9ca96b1bbbd9ccf379a8d9f593902198a745a8914e421f1b7dd43"} Jan 26 14:27:03 crc kubenswrapper[4844]: I0126 14:27:03.875751 4844 generic.go:334] "Generic (PLEG): container finished" podID="c51494d8-69db-436d-b570-25ec474d86bf" containerID="1191d0dfd3e9ca96b1bbbd9ccf379a8d9f593902198a745a8914e421f1b7dd43" exitCode=0 Jan 26 14:27:03 crc kubenswrapper[4844]: I0126 14:27:03.875867 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gdf4v" event={"ID":"c51494d8-69db-436d-b570-25ec474d86bf","Type":"ContainerDied","Data":"1191d0dfd3e9ca96b1bbbd9ccf379a8d9f593902198a745a8914e421f1b7dd43"} Jan 26 14:27:05 crc kubenswrapper[4844]: I0126 14:27:05.896567 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-gdf4v" event={"ID":"c51494d8-69db-436d-b570-25ec474d86bf","Type":"ContainerStarted","Data":"9b64043d3d0a117688202a2dd93e09ac74dd48878e52dd3b56d8e30d64187bbd"} Jan 26 14:27:05 crc kubenswrapper[4844]: I0126 14:27:05.935115 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-gdf4v" podStartSLOduration=2.587334959 podStartE2EDuration="8.935082765s" podCreationTimestamp="2026-01-26 14:26:57 +0000 UTC" firstStartedPulling="2026-01-26 14:26:58.823789612 +0000 UTC m=+6195.757157214" lastFinishedPulling="2026-01-26 14:27:05.171537408 +0000 UTC m=+6202.104905020" observedRunningTime="2026-01-26 14:27:05.919162718 +0000 UTC m=+6202.852530350" watchObservedRunningTime="2026-01-26 14:27:05.935082765 +0000 UTC m=+6202.868450397" Jan 26 14:27:06 crc kubenswrapper[4844]: I0126 14:27:06.365327 4844 patch_prober.go:28] interesting pod/machine-config-daemon-j7r9j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 14:27:06 crc kubenswrapper[4844]: I0126 14:27:06.365412 4844 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 14:27:06 crc kubenswrapper[4844]: I0126 14:27:06.365473 4844 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" Jan 26 14:27:06 crc kubenswrapper[4844]: I0126 14:27:06.366367 4844 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5f8eeb5cfa99d5ce0f9d0308a88bd5f39ff9898b65fecb6afb80daade636480f"} pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 14:27:06 crc kubenswrapper[4844]: I0126 14:27:06.366437 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" containerName="machine-config-daemon" containerID="cri-o://5f8eeb5cfa99d5ce0f9d0308a88bd5f39ff9898b65fecb6afb80daade636480f" gracePeriod=600 Jan 26 14:27:06 crc kubenswrapper[4844]: I0126 14:27:06.918938 4844 generic.go:334] "Generic (PLEG): container finished" podID="e3602fc7-397b-4d73-ab0c-45acc047397b" containerID="5f8eeb5cfa99d5ce0f9d0308a88bd5f39ff9898b65fecb6afb80daade636480f" exitCode=0 Jan 26 14:27:06 crc kubenswrapper[4844]: I0126 14:27:06.919010 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" event={"ID":"e3602fc7-397b-4d73-ab0c-45acc047397b","Type":"ContainerDied","Data":"5f8eeb5cfa99d5ce0f9d0308a88bd5f39ff9898b65fecb6afb80daade636480f"} Jan 26 14:27:06 crc kubenswrapper[4844]: I0126 14:27:06.919489 4844 scope.go:117] "RemoveContainer" containerID="948482ceaf80a246ed76115843be1b6302beb191713cbddd64571580f4f215d8" Jan 26 14:27:07 crc kubenswrapper[4844]: I0126 14:27:07.435741 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-operators-gdf4v" Jan 26 14:27:07 crc kubenswrapper[4844]: I0126 14:27:07.437047 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-gdf4v" Jan 26 14:27:07 crc kubenswrapper[4844]: I0126 14:27:07.931058 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" event={"ID":"e3602fc7-397b-4d73-ab0c-45acc047397b","Type":"ContainerStarted","Data":"288f3f15bd87ac8ddd2065e9e06186b3be457a3cedc257d2058cbbabe4ee3e74"} Jan 26 14:27:08 crc kubenswrapper[4844]: I0126 14:27:08.493893 4844 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gdf4v" podUID="c51494d8-69db-436d-b570-25ec474d86bf" containerName="registry-server" probeResult="failure" output=< Jan 26 14:27:08 crc kubenswrapper[4844]: timeout: failed to connect service ":50051" within 1s Jan 26 14:27:08 crc kubenswrapper[4844]: > Jan 26 14:27:17 crc kubenswrapper[4844]: I0126 14:27:17.518766 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-gdf4v" Jan 26 14:27:17 crc kubenswrapper[4844]: I0126 14:27:17.585697 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-gdf4v" Jan 26 14:27:17 crc kubenswrapper[4844]: I0126 14:27:17.776934 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gdf4v"] Jan 26 14:27:19 crc kubenswrapper[4844]: I0126 14:27:19.040184 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-gdf4v" podUID="c51494d8-69db-436d-b570-25ec474d86bf" containerName="registry-server" containerID="cri-o://9b64043d3d0a117688202a2dd93e09ac74dd48878e52dd3b56d8e30d64187bbd" gracePeriod=2 Jan 26 14:27:19 crc kubenswrapper[4844]: I0126 14:27:19.567569 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gdf4v" Jan 26 14:27:19 crc kubenswrapper[4844]: I0126 14:27:19.642696 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c51494d8-69db-436d-b570-25ec474d86bf-catalog-content\") pod \"c51494d8-69db-436d-b570-25ec474d86bf\" (UID: \"c51494d8-69db-436d-b570-25ec474d86bf\") " Jan 26 14:27:19 crc kubenswrapper[4844]: I0126 14:27:19.642981 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c51494d8-69db-436d-b570-25ec474d86bf-utilities\") pod \"c51494d8-69db-436d-b570-25ec474d86bf\" (UID: \"c51494d8-69db-436d-b570-25ec474d86bf\") " Jan 26 14:27:19 crc kubenswrapper[4844]: I0126 14:27:19.643058 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-872wd\" (UniqueName: \"kubernetes.io/projected/c51494d8-69db-436d-b570-25ec474d86bf-kube-api-access-872wd\") pod \"c51494d8-69db-436d-b570-25ec474d86bf\" (UID: \"c51494d8-69db-436d-b570-25ec474d86bf\") " Jan 26 14:27:19 crc kubenswrapper[4844]: I0126 14:27:19.643756 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c51494d8-69db-436d-b570-25ec474d86bf-utilities" (OuterVolumeSpecName: "utilities") pod "c51494d8-69db-436d-b570-25ec474d86bf" (UID: "c51494d8-69db-436d-b570-25ec474d86bf"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:27:19 crc kubenswrapper[4844]: I0126 14:27:19.650891 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c51494d8-69db-436d-b570-25ec474d86bf-kube-api-access-872wd" (OuterVolumeSpecName: "kube-api-access-872wd") pod "c51494d8-69db-436d-b570-25ec474d86bf" (UID: "c51494d8-69db-436d-b570-25ec474d86bf"). InnerVolumeSpecName "kube-api-access-872wd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:27:19 crc kubenswrapper[4844]: I0126 14:27:19.746224 4844 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c51494d8-69db-436d-b570-25ec474d86bf-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 14:27:19 crc kubenswrapper[4844]: I0126 14:27:19.746257 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-872wd\" (UniqueName: \"kubernetes.io/projected/c51494d8-69db-436d-b570-25ec474d86bf-kube-api-access-872wd\") on node \"crc\" DevicePath \"\"" Jan 26 14:27:19 crc kubenswrapper[4844]: I0126 14:27:19.770180 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c51494d8-69db-436d-b570-25ec474d86bf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c51494d8-69db-436d-b570-25ec474d86bf" (UID: "c51494d8-69db-436d-b570-25ec474d86bf"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:27:19 crc kubenswrapper[4844]: I0126 14:27:19.848255 4844 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c51494d8-69db-436d-b570-25ec474d86bf-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 14:27:20 crc kubenswrapper[4844]: I0126 14:27:20.050759 4844 generic.go:334] "Generic (PLEG): container finished" podID="c51494d8-69db-436d-b570-25ec474d86bf" containerID="9b64043d3d0a117688202a2dd93e09ac74dd48878e52dd3b56d8e30d64187bbd" exitCode=0 Jan 26 14:27:20 crc kubenswrapper[4844]: I0126 14:27:20.050827 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gdf4v" event={"ID":"c51494d8-69db-436d-b570-25ec474d86bf","Type":"ContainerDied","Data":"9b64043d3d0a117688202a2dd93e09ac74dd48878e52dd3b56d8e30d64187bbd"} Jan 26 14:27:20 crc kubenswrapper[4844]: I0126 14:27:20.050846 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gdf4v" Jan 26 14:27:20 crc kubenswrapper[4844]: I0126 14:27:20.050870 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gdf4v" event={"ID":"c51494d8-69db-436d-b570-25ec474d86bf","Type":"ContainerDied","Data":"c60d7914848c35b8a2bfcbc1fe4f5856e34f5b48bc3327d18320341d130b49c7"} Jan 26 14:27:20 crc kubenswrapper[4844]: I0126 14:27:20.050900 4844 scope.go:117] "RemoveContainer" containerID="9b64043d3d0a117688202a2dd93e09ac74dd48878e52dd3b56d8e30d64187bbd" Jan 26 14:27:20 crc kubenswrapper[4844]: I0126 14:27:20.076337 4844 scope.go:117] "RemoveContainer" containerID="1191d0dfd3e9ca96b1bbbd9ccf379a8d9f593902198a745a8914e421f1b7dd43" Jan 26 14:27:20 crc kubenswrapper[4844]: I0126 14:27:20.117355 4844 scope.go:117] "RemoveContainer" containerID="a444336c08001890c9467ab675bf6d96fcbe51bb0c1b70db78279a1666ebb0f5" Jan 26 14:27:20 crc kubenswrapper[4844]: I0126 14:27:20.118644 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gdf4v"] Jan 26 14:27:20 crc kubenswrapper[4844]: I0126 14:27:20.132806 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-gdf4v"] Jan 26 14:27:20 crc kubenswrapper[4844]: I0126 14:27:20.204784 4844 scope.go:117] "RemoveContainer" containerID="9b64043d3d0a117688202a2dd93e09ac74dd48878e52dd3b56d8e30d64187bbd" Jan 26 14:27:20 crc kubenswrapper[4844]: E0126 14:27:20.205439 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b64043d3d0a117688202a2dd93e09ac74dd48878e52dd3b56d8e30d64187bbd\": container with ID starting with 9b64043d3d0a117688202a2dd93e09ac74dd48878e52dd3b56d8e30d64187bbd not found: ID does not exist" containerID="9b64043d3d0a117688202a2dd93e09ac74dd48878e52dd3b56d8e30d64187bbd" Jan 26 14:27:20 crc kubenswrapper[4844]: I0126 14:27:20.205481 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b64043d3d0a117688202a2dd93e09ac74dd48878e52dd3b56d8e30d64187bbd"} err="failed to get container status \"9b64043d3d0a117688202a2dd93e09ac74dd48878e52dd3b56d8e30d64187bbd\": rpc error: code = NotFound desc = could not find container \"9b64043d3d0a117688202a2dd93e09ac74dd48878e52dd3b56d8e30d64187bbd\": container with ID starting with 9b64043d3d0a117688202a2dd93e09ac74dd48878e52dd3b56d8e30d64187bbd not found: ID does not exist" Jan 26 14:27:20 crc kubenswrapper[4844]: I0126 14:27:20.205512 4844 scope.go:117] "RemoveContainer" containerID="1191d0dfd3e9ca96b1bbbd9ccf379a8d9f593902198a745a8914e421f1b7dd43" Jan 26 14:27:20 crc kubenswrapper[4844]: E0126 14:27:20.206026 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1191d0dfd3e9ca96b1bbbd9ccf379a8d9f593902198a745a8914e421f1b7dd43\": container with ID starting with 1191d0dfd3e9ca96b1bbbd9ccf379a8d9f593902198a745a8914e421f1b7dd43 not found: ID does not exist" containerID="1191d0dfd3e9ca96b1bbbd9ccf379a8d9f593902198a745a8914e421f1b7dd43" Jan 26 14:27:20 crc kubenswrapper[4844]: I0126 14:27:20.206062 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1191d0dfd3e9ca96b1bbbd9ccf379a8d9f593902198a745a8914e421f1b7dd43"} err="failed to get container status \"1191d0dfd3e9ca96b1bbbd9ccf379a8d9f593902198a745a8914e421f1b7dd43\": rpc error: code = NotFound desc = could not find container 
\"1191d0dfd3e9ca96b1bbbd9ccf379a8d9f593902198a745a8914e421f1b7dd43\": container with ID starting with 1191d0dfd3e9ca96b1bbbd9ccf379a8d9f593902198a745a8914e421f1b7dd43 not found: ID does not exist" Jan 26 14:27:20 crc kubenswrapper[4844]: I0126 14:27:20.206085 4844 scope.go:117] "RemoveContainer" containerID="a444336c08001890c9467ab675bf6d96fcbe51bb0c1b70db78279a1666ebb0f5" Jan 26 14:27:20 crc kubenswrapper[4844]: E0126 14:27:20.206563 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a444336c08001890c9467ab675bf6d96fcbe51bb0c1b70db78279a1666ebb0f5\": container with ID starting with a444336c08001890c9467ab675bf6d96fcbe51bb0c1b70db78279a1666ebb0f5 not found: ID does not exist" containerID="a444336c08001890c9467ab675bf6d96fcbe51bb0c1b70db78279a1666ebb0f5" Jan 26 14:27:20 crc kubenswrapper[4844]: I0126 14:27:20.206620 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a444336c08001890c9467ab675bf6d96fcbe51bb0c1b70db78279a1666ebb0f5"} err="failed to get container status \"a444336c08001890c9467ab675bf6d96fcbe51bb0c1b70db78279a1666ebb0f5\": rpc error: code = NotFound desc = could not find container \"a444336c08001890c9467ab675bf6d96fcbe51bb0c1b70db78279a1666ebb0f5\": container with ID starting with a444336c08001890c9467ab675bf6d96fcbe51bb0c1b70db78279a1666ebb0f5 not found: ID does not exist" Jan 26 14:27:21 crc kubenswrapper[4844]: I0126 14:27:21.330899 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c51494d8-69db-436d-b570-25ec474d86bf" path="/var/lib/kubelet/pods/c51494d8-69db-436d-b570-25ec474d86bf/volumes" Jan 26 14:29:36 crc kubenswrapper[4844]: I0126 14:29:36.364740 4844 patch_prober.go:28] interesting pod/machine-config-daemon-j7r9j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 14:29:36 crc kubenswrapper[4844]: I0126 14:29:36.366403 4844 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 14:30:00 crc kubenswrapper[4844]: I0126 14:30:00.157650 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490630-nfd2r"] Jan 26 14:30:00 crc kubenswrapper[4844]: E0126 14:30:00.158660 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c51494d8-69db-436d-b570-25ec474d86bf" containerName="extract-content" Jan 26 14:30:00 crc kubenswrapper[4844]: I0126 14:30:00.158679 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="c51494d8-69db-436d-b570-25ec474d86bf" containerName="extract-content" Jan 26 14:30:00 crc kubenswrapper[4844]: E0126 14:30:00.158692 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c51494d8-69db-436d-b570-25ec474d86bf" containerName="extract-utilities" Jan 26 14:30:00 crc kubenswrapper[4844]: I0126 14:30:00.158700 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="c51494d8-69db-436d-b570-25ec474d86bf" containerName="extract-utilities" Jan 26 14:30:00 crc kubenswrapper[4844]: E0126 14:30:00.158739 4844 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="c51494d8-69db-436d-b570-25ec474d86bf" containerName="registry-server" Jan 26 14:30:00 crc kubenswrapper[4844]: I0126 14:30:00.158748 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="c51494d8-69db-436d-b570-25ec474d86bf" containerName="registry-server" Jan 26 14:30:00 crc kubenswrapper[4844]: I0126 14:30:00.158987 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="c51494d8-69db-436d-b570-25ec474d86bf" containerName="registry-server" Jan 26 14:30:00 crc kubenswrapper[4844]: I0126 14:30:00.159873 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490630-nfd2r" Jan 26 14:30:00 crc kubenswrapper[4844]: I0126 14:30:00.162815 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 26 14:30:00 crc kubenswrapper[4844]: I0126 14:30:00.163308 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 26 14:30:00 crc kubenswrapper[4844]: I0126 14:30:00.183742 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490630-nfd2r"] Jan 26 14:30:00 crc kubenswrapper[4844]: I0126 14:30:00.296069 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ce6264dd-fcae-4c45-be78-b7aaf8e2d713-config-volume\") pod \"collect-profiles-29490630-nfd2r\" (UID: \"ce6264dd-fcae-4c45-be78-b7aaf8e2d713\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490630-nfd2r" Jan 26 14:30:00 crc kubenswrapper[4844]: I0126 14:30:00.296159 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9d7d\" (UniqueName: \"kubernetes.io/projected/ce6264dd-fcae-4c45-be78-b7aaf8e2d713-kube-api-access-g9d7d\") pod \"collect-profiles-29490630-nfd2r\" (UID: \"ce6264dd-fcae-4c45-be78-b7aaf8e2d713\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490630-nfd2r" Jan 26 14:30:00 crc kubenswrapper[4844]: I0126 14:30:00.296223 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ce6264dd-fcae-4c45-be78-b7aaf8e2d713-secret-volume\") pod \"collect-profiles-29490630-nfd2r\" (UID: \"ce6264dd-fcae-4c45-be78-b7aaf8e2d713\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490630-nfd2r" Jan 26 14:30:00 crc kubenswrapper[4844]: I0126 14:30:00.397792 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9d7d\" (UniqueName: \"kubernetes.io/projected/ce6264dd-fcae-4c45-be78-b7aaf8e2d713-kube-api-access-g9d7d\") pod \"collect-profiles-29490630-nfd2r\" (UID: \"ce6264dd-fcae-4c45-be78-b7aaf8e2d713\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490630-nfd2r" Jan 26 14:30:00 crc kubenswrapper[4844]: I0126 14:30:00.397928 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ce6264dd-fcae-4c45-be78-b7aaf8e2d713-secret-volume\") pod \"collect-profiles-29490630-nfd2r\" (UID: \"ce6264dd-fcae-4c45-be78-b7aaf8e2d713\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490630-nfd2r" Jan 26 14:30:00 crc kubenswrapper[4844]: I0126 14:30:00.398119 4844 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ce6264dd-fcae-4c45-be78-b7aaf8e2d713-config-volume\") pod \"collect-profiles-29490630-nfd2r\" (UID: \"ce6264dd-fcae-4c45-be78-b7aaf8e2d713\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490630-nfd2r" Jan 26 14:30:00 crc kubenswrapper[4844]: I0126 14:30:00.399186 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ce6264dd-fcae-4c45-be78-b7aaf8e2d713-config-volume\") pod \"collect-profiles-29490630-nfd2r\" (UID: \"ce6264dd-fcae-4c45-be78-b7aaf8e2d713\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490630-nfd2r" Jan 26 14:30:00 crc kubenswrapper[4844]: I0126 14:30:00.421137 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ce6264dd-fcae-4c45-be78-b7aaf8e2d713-secret-volume\") pod \"collect-profiles-29490630-nfd2r\" (UID: \"ce6264dd-fcae-4c45-be78-b7aaf8e2d713\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490630-nfd2r" Jan 26 14:30:00 crc kubenswrapper[4844]: I0126 14:30:00.436421 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9d7d\" (UniqueName: \"kubernetes.io/projected/ce6264dd-fcae-4c45-be78-b7aaf8e2d713-kube-api-access-g9d7d\") pod \"collect-profiles-29490630-nfd2r\" (UID: \"ce6264dd-fcae-4c45-be78-b7aaf8e2d713\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490630-nfd2r" Jan 26 14:30:00 crc kubenswrapper[4844]: I0126 14:30:00.502041 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490630-nfd2r" Jan 26 14:30:00 crc kubenswrapper[4844]: I0126 14:30:00.980417 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490630-nfd2r"] Jan 26 14:30:01 crc kubenswrapper[4844]: I0126 14:30:01.523080 4844 generic.go:334] "Generic (PLEG): container finished" podID="ce6264dd-fcae-4c45-be78-b7aaf8e2d713" containerID="383f4617aec00c245bd5a3fbf6fe7aef099e9157c7f877c37c79e9c5606fb9e6" exitCode=0 Jan 26 14:30:01 crc kubenswrapper[4844]: I0126 14:30:01.523369 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490630-nfd2r" event={"ID":"ce6264dd-fcae-4c45-be78-b7aaf8e2d713","Type":"ContainerDied","Data":"383f4617aec00c245bd5a3fbf6fe7aef099e9157c7f877c37c79e9c5606fb9e6"} Jan 26 14:30:01 crc kubenswrapper[4844]: I0126 14:30:01.523400 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490630-nfd2r" event={"ID":"ce6264dd-fcae-4c45-be78-b7aaf8e2d713","Type":"ContainerStarted","Data":"1916b6e603dffcbc3062bc1553f064e0ae8ea99b1883c1b3cf9ef259a13cf9fa"} Jan 26 14:30:02 crc kubenswrapper[4844]: I0126 14:30:02.964877 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490630-nfd2r" Jan 26 14:30:03 crc kubenswrapper[4844]: I0126 14:30:03.071031 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ce6264dd-fcae-4c45-be78-b7aaf8e2d713-config-volume\") pod \"ce6264dd-fcae-4c45-be78-b7aaf8e2d713\" (UID: \"ce6264dd-fcae-4c45-be78-b7aaf8e2d713\") " Jan 26 14:30:03 crc kubenswrapper[4844]: I0126 14:30:03.071078 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ce6264dd-fcae-4c45-be78-b7aaf8e2d713-secret-volume\") pod \"ce6264dd-fcae-4c45-be78-b7aaf8e2d713\" (UID: \"ce6264dd-fcae-4c45-be78-b7aaf8e2d713\") " Jan 26 14:30:03 crc kubenswrapper[4844]: I0126 14:30:03.071295 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g9d7d\" (UniqueName: \"kubernetes.io/projected/ce6264dd-fcae-4c45-be78-b7aaf8e2d713-kube-api-access-g9d7d\") pod \"ce6264dd-fcae-4c45-be78-b7aaf8e2d713\" (UID: \"ce6264dd-fcae-4c45-be78-b7aaf8e2d713\") " Jan 26 14:30:03 crc kubenswrapper[4844]: I0126 14:30:03.071704 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce6264dd-fcae-4c45-be78-b7aaf8e2d713-config-volume" (OuterVolumeSpecName: "config-volume") pod "ce6264dd-fcae-4c45-be78-b7aaf8e2d713" (UID: "ce6264dd-fcae-4c45-be78-b7aaf8e2d713"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:30:03 crc kubenswrapper[4844]: I0126 14:30:03.077248 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce6264dd-fcae-4c45-be78-b7aaf8e2d713-kube-api-access-g9d7d" (OuterVolumeSpecName: "kube-api-access-g9d7d") pod "ce6264dd-fcae-4c45-be78-b7aaf8e2d713" (UID: "ce6264dd-fcae-4c45-be78-b7aaf8e2d713"). InnerVolumeSpecName "kube-api-access-g9d7d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:30:03 crc kubenswrapper[4844]: I0126 14:30:03.077420 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce6264dd-fcae-4c45-be78-b7aaf8e2d713-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ce6264dd-fcae-4c45-be78-b7aaf8e2d713" (UID: "ce6264dd-fcae-4c45-be78-b7aaf8e2d713"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:30:03 crc kubenswrapper[4844]: I0126 14:30:03.174057 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g9d7d\" (UniqueName: \"kubernetes.io/projected/ce6264dd-fcae-4c45-be78-b7aaf8e2d713-kube-api-access-g9d7d\") on node \"crc\" DevicePath \"\"" Jan 26 14:30:03 crc kubenswrapper[4844]: I0126 14:30:03.174090 4844 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ce6264dd-fcae-4c45-be78-b7aaf8e2d713-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 14:30:03 crc kubenswrapper[4844]: I0126 14:30:03.174099 4844 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ce6264dd-fcae-4c45-be78-b7aaf8e2d713-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 14:30:03 crc kubenswrapper[4844]: I0126 14:30:03.542453 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490630-nfd2r" event={"ID":"ce6264dd-fcae-4c45-be78-b7aaf8e2d713","Type":"ContainerDied","Data":"1916b6e603dffcbc3062bc1553f064e0ae8ea99b1883c1b3cf9ef259a13cf9fa"} Jan 26 14:30:03 crc kubenswrapper[4844]: I0126 14:30:03.542506 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490630-nfd2r" Jan 26 14:30:03 crc kubenswrapper[4844]: I0126 14:30:03.542507 4844 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1916b6e603dffcbc3062bc1553f064e0ae8ea99b1883c1b3cf9ef259a13cf9fa" Jan 26 14:30:04 crc kubenswrapper[4844]: I0126 14:30:04.056277 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490585-c9xnz"] Jan 26 14:30:04 crc kubenswrapper[4844]: I0126 14:30:04.068108 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490585-c9xnz"] Jan 26 14:30:05 crc kubenswrapper[4844]: I0126 14:30:05.324523 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="adf537bf-b6e3-434a-9974-0bdb96ad52ca" path="/var/lib/kubelet/pods/adf537bf-b6e3-434a-9974-0bdb96ad52ca/volumes" Jan 26 14:30:06 crc kubenswrapper[4844]: I0126 14:30:06.364554 4844 patch_prober.go:28] interesting pod/machine-config-daemon-j7r9j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 14:30:06 crc kubenswrapper[4844]: I0126 14:30:06.364648 4844 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 14:30:36 crc kubenswrapper[4844]: I0126 14:30:36.365229 4844 patch_prober.go:28] interesting pod/machine-config-daemon-j7r9j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 14:30:36 crc kubenswrapper[4844]: I0126 14:30:36.365779 4844 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 14:30:36 crc kubenswrapper[4844]: I0126 14:30:36.365826 4844 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" Jan 26 14:30:36 crc kubenswrapper[4844]: I0126 14:30:36.366898 4844 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"288f3f15bd87ac8ddd2065e9e06186b3be457a3cedc257d2058cbbabe4ee3e74"} pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 14:30:36 crc kubenswrapper[4844]: I0126 14:30:36.367001 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" containerName="machine-config-daemon" containerID="cri-o://288f3f15bd87ac8ddd2065e9e06186b3be457a3cedc257d2058cbbabe4ee3e74" gracePeriod=600 Jan 26 14:30:36 crc kubenswrapper[4844]: E0126 14:30:36.523659 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 14:30:36 crc kubenswrapper[4844]: I0126 14:30:36.934390 4844 generic.go:334] "Generic (PLEG): container finished" podID="e3602fc7-397b-4d73-ab0c-45acc047397b" containerID="288f3f15bd87ac8ddd2065e9e06186b3be457a3cedc257d2058cbbabe4ee3e74" exitCode=0 Jan 26 14:30:36 crc kubenswrapper[4844]: I0126 14:30:36.934458 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" event={"ID":"e3602fc7-397b-4d73-ab0c-45acc047397b","Type":"ContainerDied","Data":"288f3f15bd87ac8ddd2065e9e06186b3be457a3cedc257d2058cbbabe4ee3e74"} Jan 26 14:30:36 crc kubenswrapper[4844]: I0126 14:30:36.934527 4844 scope.go:117] "RemoveContainer" containerID="5f8eeb5cfa99d5ce0f9d0308a88bd5f39ff9898b65fecb6afb80daade636480f" Jan 26 14:30:36 crc kubenswrapper[4844]: I0126 14:30:36.935519 4844 scope.go:117] "RemoveContainer" containerID="288f3f15bd87ac8ddd2065e9e06186b3be457a3cedc257d2058cbbabe4ee3e74" Jan 26 14:30:36 crc kubenswrapper[4844]: E0126 14:30:36.936072 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 14:30:47 crc kubenswrapper[4844]: I0126 14:30:47.786556 4844 scope.go:117] "RemoveContainer" containerID="6703894b7ec317767939fa078e9e3a23439fc711550592254bb80e1104d38d36" Jan 26 14:30:50 crc kubenswrapper[4844]: I0126 14:30:50.313017 4844 scope.go:117] "RemoveContainer" 
containerID="288f3f15bd87ac8ddd2065e9e06186b3be457a3cedc257d2058cbbabe4ee3e74" Jan 26 14:30:50 crc kubenswrapper[4844]: E0126 14:30:50.313726 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 14:31:04 crc kubenswrapper[4844]: I0126 14:31:04.313555 4844 scope.go:117] "RemoveContainer" containerID="288f3f15bd87ac8ddd2065e9e06186b3be457a3cedc257d2058cbbabe4ee3e74" Jan 26 14:31:04 crc kubenswrapper[4844]: E0126 14:31:04.314378 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 14:31:08 crc kubenswrapper[4844]: I0126 14:31:08.448518 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-42bjx"] Jan 26 14:31:08 crc kubenswrapper[4844]: E0126 14:31:08.449695 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce6264dd-fcae-4c45-be78-b7aaf8e2d713" containerName="collect-profiles" Jan 26 14:31:08 crc kubenswrapper[4844]: I0126 14:31:08.449716 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce6264dd-fcae-4c45-be78-b7aaf8e2d713" containerName="collect-profiles" Jan 26 14:31:08 crc kubenswrapper[4844]: I0126 14:31:08.450045 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce6264dd-fcae-4c45-be78-b7aaf8e2d713" containerName="collect-profiles" Jan 26 14:31:08 crc kubenswrapper[4844]: I0126 14:31:08.452649 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-42bjx" Jan 26 14:31:08 crc kubenswrapper[4844]: I0126 14:31:08.476631 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-42bjx"] Jan 26 14:31:08 crc kubenswrapper[4844]: I0126 14:31:08.492926 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63ebdbee-3ad1-4c69-bcef-0d2b073c8b17-catalog-content\") pod \"redhat-marketplace-42bjx\" (UID: \"63ebdbee-3ad1-4c69-bcef-0d2b073c8b17\") " pod="openshift-marketplace/redhat-marketplace-42bjx" Jan 26 14:31:08 crc kubenswrapper[4844]: I0126 14:31:08.493227 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txm66\" (UniqueName: \"kubernetes.io/projected/63ebdbee-3ad1-4c69-bcef-0d2b073c8b17-kube-api-access-txm66\") pod \"redhat-marketplace-42bjx\" (UID: \"63ebdbee-3ad1-4c69-bcef-0d2b073c8b17\") " pod="openshift-marketplace/redhat-marketplace-42bjx" Jan 26 14:31:08 crc kubenswrapper[4844]: I0126 14:31:08.493398 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63ebdbee-3ad1-4c69-bcef-0d2b073c8b17-utilities\") pod \"redhat-marketplace-42bjx\" (UID: \"63ebdbee-3ad1-4c69-bcef-0d2b073c8b17\") " pod="openshift-marketplace/redhat-marketplace-42bjx" Jan 26 14:31:08 crc kubenswrapper[4844]: I0126 14:31:08.596747 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63ebdbee-3ad1-4c69-bcef-0d2b073c8b17-utilities\") pod \"redhat-marketplace-42bjx\" (UID: \"63ebdbee-3ad1-4c69-bcef-0d2b073c8b17\") " pod="openshift-marketplace/redhat-marketplace-42bjx" Jan 26 14:31:08 crc kubenswrapper[4844]: I0126 14:31:08.596976 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63ebdbee-3ad1-4c69-bcef-0d2b073c8b17-catalog-content\") pod \"redhat-marketplace-42bjx\" (UID: \"63ebdbee-3ad1-4c69-bcef-0d2b073c8b17\") " pod="openshift-marketplace/redhat-marketplace-42bjx" Jan 26 14:31:08 crc kubenswrapper[4844]: I0126 14:31:08.597032 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txm66\" (UniqueName: \"kubernetes.io/projected/63ebdbee-3ad1-4c69-bcef-0d2b073c8b17-kube-api-access-txm66\") pod \"redhat-marketplace-42bjx\" (UID: \"63ebdbee-3ad1-4c69-bcef-0d2b073c8b17\") " pod="openshift-marketplace/redhat-marketplace-42bjx" Jan 26 14:31:08 crc kubenswrapper[4844]: I0126 14:31:08.597687 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63ebdbee-3ad1-4c69-bcef-0d2b073c8b17-catalog-content\") pod \"redhat-marketplace-42bjx\" (UID: \"63ebdbee-3ad1-4c69-bcef-0d2b073c8b17\") " pod="openshift-marketplace/redhat-marketplace-42bjx" Jan 26 14:31:08 crc kubenswrapper[4844]: I0126 14:31:08.597860 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63ebdbee-3ad1-4c69-bcef-0d2b073c8b17-utilities\") pod \"redhat-marketplace-42bjx\" (UID: \"63ebdbee-3ad1-4c69-bcef-0d2b073c8b17\") " pod="openshift-marketplace/redhat-marketplace-42bjx" Jan 26 14:31:08 crc kubenswrapper[4844]: I0126 14:31:08.617817 4844 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-txm66\" (UniqueName: \"kubernetes.io/projected/63ebdbee-3ad1-4c69-bcef-0d2b073c8b17-kube-api-access-txm66\") pod \"redhat-marketplace-42bjx\" (UID: \"63ebdbee-3ad1-4c69-bcef-0d2b073c8b17\") " pod="openshift-marketplace/redhat-marketplace-42bjx" Jan 26 14:31:08 crc kubenswrapper[4844]: I0126 14:31:08.777077 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-42bjx" Jan 26 14:31:09 crc kubenswrapper[4844]: I0126 14:31:09.330856 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-42bjx"] Jan 26 14:31:10 crc kubenswrapper[4844]: I0126 14:31:10.278523 4844 generic.go:334] "Generic (PLEG): container finished" podID="63ebdbee-3ad1-4c69-bcef-0d2b073c8b17" containerID="cbfbbb35478244712f58da579210beda4e1cd1f1080e9a276a7b84b7fd66dc06" exitCode=0 Jan 26 14:31:10 crc kubenswrapper[4844]: I0126 14:31:10.278723 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-42bjx" event={"ID":"63ebdbee-3ad1-4c69-bcef-0d2b073c8b17","Type":"ContainerDied","Data":"cbfbbb35478244712f58da579210beda4e1cd1f1080e9a276a7b84b7fd66dc06"} Jan 26 14:31:10 crc kubenswrapper[4844]: I0126 14:31:10.278824 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-42bjx" event={"ID":"63ebdbee-3ad1-4c69-bcef-0d2b073c8b17","Type":"ContainerStarted","Data":"74629821fb0c51e814309ef456ec140c604a6f8b5d447f700d8b413aa00cfcbe"} Jan 26 14:31:10 crc kubenswrapper[4844]: I0126 14:31:10.280688 4844 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 14:31:11 crc kubenswrapper[4844]: I0126 14:31:11.291203 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-42bjx" event={"ID":"63ebdbee-3ad1-4c69-bcef-0d2b073c8b17","Type":"ContainerStarted","Data":"46b2a10af1a774f6af3819d3980d1801573b4ae6f2a9ef73a4d8c854c79aa3e1"} Jan 26 14:31:12 crc kubenswrapper[4844]: I0126 14:31:12.310046 4844 generic.go:334] "Generic (PLEG): container finished" podID="63ebdbee-3ad1-4c69-bcef-0d2b073c8b17" containerID="46b2a10af1a774f6af3819d3980d1801573b4ae6f2a9ef73a4d8c854c79aa3e1" exitCode=0 Jan 26 14:31:12 crc kubenswrapper[4844]: I0126 14:31:12.310361 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-42bjx" event={"ID":"63ebdbee-3ad1-4c69-bcef-0d2b073c8b17","Type":"ContainerDied","Data":"46b2a10af1a774f6af3819d3980d1801573b4ae6f2a9ef73a4d8c854c79aa3e1"} Jan 26 14:31:13 crc kubenswrapper[4844]: I0126 14:31:13.328337 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-42bjx" event={"ID":"63ebdbee-3ad1-4c69-bcef-0d2b073c8b17","Type":"ContainerStarted","Data":"35b7300fb675472598b390a039865f6740f85c1d0306e056b603ecb9ebed2df5"} Jan 26 14:31:13 crc kubenswrapper[4844]: I0126 14:31:13.364835 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-42bjx" podStartSLOduration=2.605866616 podStartE2EDuration="5.36481319s" podCreationTimestamp="2026-01-26 14:31:08 +0000 UTC" firstStartedPulling="2026-01-26 14:31:10.280330554 +0000 UTC m=+6447.213698166" lastFinishedPulling="2026-01-26 14:31:13.039277118 +0000 UTC m=+6449.972644740" observedRunningTime="2026-01-26 14:31:13.351354322 +0000 UTC m=+6450.284721934" watchObservedRunningTime="2026-01-26 14:31:13.36481319 +0000 UTC 
m=+6450.298180812" Jan 26 14:31:18 crc kubenswrapper[4844]: I0126 14:31:18.313722 4844 scope.go:117] "RemoveContainer" containerID="288f3f15bd87ac8ddd2065e9e06186b3be457a3cedc257d2058cbbabe4ee3e74" Jan 26 14:31:18 crc kubenswrapper[4844]: E0126 14:31:18.314398 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 14:31:18 crc kubenswrapper[4844]: I0126 14:31:18.777329 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-42bjx" Jan 26 14:31:18 crc kubenswrapper[4844]: I0126 14:31:18.778037 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-42bjx" Jan 26 14:31:18 crc kubenswrapper[4844]: I0126 14:31:18.845188 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-42bjx" Jan 26 14:31:19 crc kubenswrapper[4844]: I0126 14:31:19.454398 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-42bjx" Jan 26 14:31:19 crc kubenswrapper[4844]: I0126 14:31:19.535991 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-42bjx"] Jan 26 14:31:21 crc kubenswrapper[4844]: I0126 14:31:21.419996 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-42bjx" podUID="63ebdbee-3ad1-4c69-bcef-0d2b073c8b17" containerName="registry-server" containerID="cri-o://35b7300fb675472598b390a039865f6740f85c1d0306e056b603ecb9ebed2df5" gracePeriod=2 Jan 26 14:31:22 crc kubenswrapper[4844]: I0126 14:31:22.410405 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-42bjx" Jan 26 14:31:22 crc kubenswrapper[4844]: I0126 14:31:22.432061 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63ebdbee-3ad1-4c69-bcef-0d2b073c8b17-catalog-content\") pod \"63ebdbee-3ad1-4c69-bcef-0d2b073c8b17\" (UID: \"63ebdbee-3ad1-4c69-bcef-0d2b073c8b17\") " Jan 26 14:31:22 crc kubenswrapper[4844]: I0126 14:31:22.432177 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-txm66\" (UniqueName: \"kubernetes.io/projected/63ebdbee-3ad1-4c69-bcef-0d2b073c8b17-kube-api-access-txm66\") pod \"63ebdbee-3ad1-4c69-bcef-0d2b073c8b17\" (UID: \"63ebdbee-3ad1-4c69-bcef-0d2b073c8b17\") " Jan 26 14:31:22 crc kubenswrapper[4844]: I0126 14:31:22.432296 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63ebdbee-3ad1-4c69-bcef-0d2b073c8b17-utilities\") pod \"63ebdbee-3ad1-4c69-bcef-0d2b073c8b17\" (UID: \"63ebdbee-3ad1-4c69-bcef-0d2b073c8b17\") " Jan 26 14:31:22 crc kubenswrapper[4844]: I0126 14:31:22.434222 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63ebdbee-3ad1-4c69-bcef-0d2b073c8b17-utilities" (OuterVolumeSpecName: "utilities") pod "63ebdbee-3ad1-4c69-bcef-0d2b073c8b17" (UID: "63ebdbee-3ad1-4c69-bcef-0d2b073c8b17"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:31:22 crc kubenswrapper[4844]: I0126 14:31:22.439112 4844 generic.go:334] "Generic (PLEG): container finished" podID="63ebdbee-3ad1-4c69-bcef-0d2b073c8b17" containerID="35b7300fb675472598b390a039865f6740f85c1d0306e056b603ecb9ebed2df5" exitCode=0 Jan 26 14:31:22 crc kubenswrapper[4844]: I0126 14:31:22.439146 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63ebdbee-3ad1-4c69-bcef-0d2b073c8b17-kube-api-access-txm66" (OuterVolumeSpecName: "kube-api-access-txm66") pod "63ebdbee-3ad1-4c69-bcef-0d2b073c8b17" (UID: "63ebdbee-3ad1-4c69-bcef-0d2b073c8b17"). InnerVolumeSpecName "kube-api-access-txm66". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:31:22 crc kubenswrapper[4844]: I0126 14:31:22.439158 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-42bjx" event={"ID":"63ebdbee-3ad1-4c69-bcef-0d2b073c8b17","Type":"ContainerDied","Data":"35b7300fb675472598b390a039865f6740f85c1d0306e056b603ecb9ebed2df5"} Jan 26 14:31:22 crc kubenswrapper[4844]: I0126 14:31:22.439190 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-42bjx" event={"ID":"63ebdbee-3ad1-4c69-bcef-0d2b073c8b17","Type":"ContainerDied","Data":"74629821fb0c51e814309ef456ec140c604a6f8b5d447f700d8b413aa00cfcbe"} Jan 26 14:31:22 crc kubenswrapper[4844]: I0126 14:31:22.439220 4844 scope.go:117] "RemoveContainer" containerID="35b7300fb675472598b390a039865f6740f85c1d0306e056b603ecb9ebed2df5" Jan 26 14:31:22 crc kubenswrapper[4844]: I0126 14:31:22.439229 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-42bjx" Jan 26 14:31:22 crc kubenswrapper[4844]: I0126 14:31:22.458816 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63ebdbee-3ad1-4c69-bcef-0d2b073c8b17-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "63ebdbee-3ad1-4c69-bcef-0d2b073c8b17" (UID: "63ebdbee-3ad1-4c69-bcef-0d2b073c8b17"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:31:22 crc kubenswrapper[4844]: I0126 14:31:22.500388 4844 scope.go:117] "RemoveContainer" containerID="46b2a10af1a774f6af3819d3980d1801573b4ae6f2a9ef73a4d8c854c79aa3e1" Jan 26 14:31:22 crc kubenswrapper[4844]: I0126 14:31:22.524506 4844 scope.go:117] "RemoveContainer" containerID="cbfbbb35478244712f58da579210beda4e1cd1f1080e9a276a7b84b7fd66dc06" Jan 26 14:31:22 crc kubenswrapper[4844]: I0126 14:31:22.535979 4844 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63ebdbee-3ad1-4c69-bcef-0d2b073c8b17-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 14:31:22 crc kubenswrapper[4844]: I0126 14:31:22.536019 4844 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63ebdbee-3ad1-4c69-bcef-0d2b073c8b17-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 14:31:22 crc kubenswrapper[4844]: I0126 14:31:22.536034 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-txm66\" (UniqueName: \"kubernetes.io/projected/63ebdbee-3ad1-4c69-bcef-0d2b073c8b17-kube-api-access-txm66\") on node \"crc\" DevicePath \"\"" Jan 26 14:31:22 crc kubenswrapper[4844]: I0126 14:31:22.585220 4844 scope.go:117] "RemoveContainer" containerID="35b7300fb675472598b390a039865f6740f85c1d0306e056b603ecb9ebed2df5" Jan 26 14:31:22 crc kubenswrapper[4844]: E0126 14:31:22.585639 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35b7300fb675472598b390a039865f6740f85c1d0306e056b603ecb9ebed2df5\": container with ID starting with 35b7300fb675472598b390a039865f6740f85c1d0306e056b603ecb9ebed2df5 not found: ID does not exist" containerID="35b7300fb675472598b390a039865f6740f85c1d0306e056b603ecb9ebed2df5" Jan 26 14:31:22 crc kubenswrapper[4844]: I0126 14:31:22.585675 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35b7300fb675472598b390a039865f6740f85c1d0306e056b603ecb9ebed2df5"} err="failed to get container status \"35b7300fb675472598b390a039865f6740f85c1d0306e056b603ecb9ebed2df5\": rpc error: code = NotFound desc = could not find container \"35b7300fb675472598b390a039865f6740f85c1d0306e056b603ecb9ebed2df5\": container with ID starting with 35b7300fb675472598b390a039865f6740f85c1d0306e056b603ecb9ebed2df5 not found: ID does not exist" Jan 26 14:31:22 crc kubenswrapper[4844]: I0126 14:31:22.585696 4844 scope.go:117] "RemoveContainer" containerID="46b2a10af1a774f6af3819d3980d1801573b4ae6f2a9ef73a4d8c854c79aa3e1" Jan 26 14:31:22 crc kubenswrapper[4844]: E0126 14:31:22.585934 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46b2a10af1a774f6af3819d3980d1801573b4ae6f2a9ef73a4d8c854c79aa3e1\": container with ID starting with 46b2a10af1a774f6af3819d3980d1801573b4ae6f2a9ef73a4d8c854c79aa3e1 not found: ID does not exist" 
containerID="46b2a10af1a774f6af3819d3980d1801573b4ae6f2a9ef73a4d8c854c79aa3e1" Jan 26 14:31:22 crc kubenswrapper[4844]: I0126 14:31:22.585955 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46b2a10af1a774f6af3819d3980d1801573b4ae6f2a9ef73a4d8c854c79aa3e1"} err="failed to get container status \"46b2a10af1a774f6af3819d3980d1801573b4ae6f2a9ef73a4d8c854c79aa3e1\": rpc error: code = NotFound desc = could not find container \"46b2a10af1a774f6af3819d3980d1801573b4ae6f2a9ef73a4d8c854c79aa3e1\": container with ID starting with 46b2a10af1a774f6af3819d3980d1801573b4ae6f2a9ef73a4d8c854c79aa3e1 not found: ID does not exist" Jan 26 14:31:22 crc kubenswrapper[4844]: I0126 14:31:22.585967 4844 scope.go:117] "RemoveContainer" containerID="cbfbbb35478244712f58da579210beda4e1cd1f1080e9a276a7b84b7fd66dc06" Jan 26 14:31:22 crc kubenswrapper[4844]: E0126 14:31:22.586151 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cbfbbb35478244712f58da579210beda4e1cd1f1080e9a276a7b84b7fd66dc06\": container with ID starting with cbfbbb35478244712f58da579210beda4e1cd1f1080e9a276a7b84b7fd66dc06 not found: ID does not exist" containerID="cbfbbb35478244712f58da579210beda4e1cd1f1080e9a276a7b84b7fd66dc06" Jan 26 14:31:22 crc kubenswrapper[4844]: I0126 14:31:22.586170 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cbfbbb35478244712f58da579210beda4e1cd1f1080e9a276a7b84b7fd66dc06"} err="failed to get container status \"cbfbbb35478244712f58da579210beda4e1cd1f1080e9a276a7b84b7fd66dc06\": rpc error: code = NotFound desc = could not find container \"cbfbbb35478244712f58da579210beda4e1cd1f1080e9a276a7b84b7fd66dc06\": container with ID starting with cbfbbb35478244712f58da579210beda4e1cd1f1080e9a276a7b84b7fd66dc06 not found: ID does not exist" Jan 26 14:31:22 crc kubenswrapper[4844]: I0126 14:31:22.794839 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-42bjx"] Jan 26 14:31:22 crc kubenswrapper[4844]: I0126 14:31:22.811942 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-42bjx"] Jan 26 14:31:23 crc kubenswrapper[4844]: I0126 14:31:23.329966 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63ebdbee-3ad1-4c69-bcef-0d2b073c8b17" path="/var/lib/kubelet/pods/63ebdbee-3ad1-4c69-bcef-0d2b073c8b17/volumes" Jan 26 14:31:32 crc kubenswrapper[4844]: I0126 14:31:32.313768 4844 scope.go:117] "RemoveContainer" containerID="288f3f15bd87ac8ddd2065e9e06186b3be457a3cedc257d2058cbbabe4ee3e74" Jan 26 14:31:32 crc kubenswrapper[4844]: E0126 14:31:32.314420 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 14:31:44 crc kubenswrapper[4844]: I0126 14:31:44.313683 4844 scope.go:117] "RemoveContainer" containerID="288f3f15bd87ac8ddd2065e9e06186b3be457a3cedc257d2058cbbabe4ee3e74" Jan 26 14:31:44 crc kubenswrapper[4844]: E0126 14:31:44.314714 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 14:31:58 crc kubenswrapper[4844]: I0126 14:31:58.313539 4844 scope.go:117] "RemoveContainer" containerID="288f3f15bd87ac8ddd2065e9e06186b3be457a3cedc257d2058cbbabe4ee3e74" Jan 26 14:31:58 crc kubenswrapper[4844]: E0126 14:31:58.314480 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 14:32:12 crc kubenswrapper[4844]: I0126 14:32:12.313794 4844 scope.go:117] "RemoveContainer" containerID="288f3f15bd87ac8ddd2065e9e06186b3be457a3cedc257d2058cbbabe4ee3e74" Jan 26 14:32:12 crc kubenswrapper[4844]: E0126 14:32:12.315222 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 14:32:25 crc kubenswrapper[4844]: I0126 14:32:25.313500 4844 scope.go:117] "RemoveContainer" containerID="288f3f15bd87ac8ddd2065e9e06186b3be457a3cedc257d2058cbbabe4ee3e74" Jan 26 14:32:25 crc kubenswrapper[4844]: E0126 14:32:25.314496 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 14:32:36 crc kubenswrapper[4844]: I0126 14:32:36.313815 4844 scope.go:117] "RemoveContainer" containerID="288f3f15bd87ac8ddd2065e9e06186b3be457a3cedc257d2058cbbabe4ee3e74" Jan 26 14:32:36 crc kubenswrapper[4844]: E0126 14:32:36.314538 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 14:32:51 crc kubenswrapper[4844]: I0126 14:32:51.314322 4844 scope.go:117] "RemoveContainer" containerID="288f3f15bd87ac8ddd2065e9e06186b3be457a3cedc257d2058cbbabe4ee3e74" Jan 26 14:32:51 crc kubenswrapper[4844]: E0126 14:32:51.315457 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 14:33:05 crc kubenswrapper[4844]: I0126 14:33:05.313191 4844 scope.go:117] "RemoveContainer" containerID="288f3f15bd87ac8ddd2065e9e06186b3be457a3cedc257d2058cbbabe4ee3e74" Jan 26 14:33:05 crc kubenswrapper[4844]: E0126 14:33:05.314241 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 14:33:19 crc kubenswrapper[4844]: I0126 14:33:19.313544 4844 scope.go:117] "RemoveContainer" containerID="288f3f15bd87ac8ddd2065e9e06186b3be457a3cedc257d2058cbbabe4ee3e74" Jan 26 14:33:19 crc kubenswrapper[4844]: E0126 14:33:19.314671 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 14:33:31 crc kubenswrapper[4844]: I0126 14:33:31.314011 4844 scope.go:117] "RemoveContainer" containerID="288f3f15bd87ac8ddd2065e9e06186b3be457a3cedc257d2058cbbabe4ee3e74" Jan 26 14:33:31 crc kubenswrapper[4844]: E0126 14:33:31.314975 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 14:33:41 crc kubenswrapper[4844]: I0126 14:33:39.761331 4844 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="fb03b4d3-5582-4758-a585-5f8e82a306da" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Jan 26 14:33:42 crc kubenswrapper[4844]: I0126 14:33:42.313512 4844 scope.go:117] "RemoveContainer" containerID="288f3f15bd87ac8ddd2065e9e06186b3be457a3cedc257d2058cbbabe4ee3e74" Jan 26 14:33:42 crc kubenswrapper[4844]: E0126 14:33:42.314062 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 14:33:53 crc kubenswrapper[4844]: I0126 14:33:53.007823 4844 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-dgglg" podUID="915eea77-c5eb-4e5c-b9f2-404ba732dac8" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.93:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 14:33:53 crc kubenswrapper[4844]: I0126 
14:33:53.329088 4844 scope.go:117] "RemoveContainer" containerID="288f3f15bd87ac8ddd2065e9e06186b3be457a3cedc257d2058cbbabe4ee3e74" Jan 26 14:33:53 crc kubenswrapper[4844]: E0126 14:33:53.329564 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 14:33:59 crc kubenswrapper[4844]: I0126 14:33:59.761842 4844 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="fb03b4d3-5582-4758-a585-5f8e82a306da" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Jan 26 14:34:07 crc kubenswrapper[4844]: I0126 14:34:07.313835 4844 scope.go:117] "RemoveContainer" containerID="288f3f15bd87ac8ddd2065e9e06186b3be457a3cedc257d2058cbbabe4ee3e74" Jan 26 14:34:07 crc kubenswrapper[4844]: E0126 14:34:07.314524 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 14:34:19 crc kubenswrapper[4844]: I0126 14:34:19.313396 4844 scope.go:117] "RemoveContainer" containerID="288f3f15bd87ac8ddd2065e9e06186b3be457a3cedc257d2058cbbabe4ee3e74" Jan 26 14:34:19 crc kubenswrapper[4844]: E0126 14:34:19.314266 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 14:34:31 crc kubenswrapper[4844]: I0126 14:34:31.313245 4844 scope.go:117] "RemoveContainer" containerID="288f3f15bd87ac8ddd2065e9e06186b3be457a3cedc257d2058cbbabe4ee3e74" Jan 26 14:34:31 crc kubenswrapper[4844]: E0126 14:34:31.314067 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 14:34:42 crc kubenswrapper[4844]: I0126 14:34:42.313924 4844 scope.go:117] "RemoveContainer" containerID="288f3f15bd87ac8ddd2065e9e06186b3be457a3cedc257d2058cbbabe4ee3e74" Jan 26 14:34:42 crc kubenswrapper[4844]: E0126 14:34:42.314788 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" 
podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 14:34:53 crc kubenswrapper[4844]: I0126 14:34:53.325618 4844 scope.go:117] "RemoveContainer" containerID="288f3f15bd87ac8ddd2065e9e06186b3be457a3cedc257d2058cbbabe4ee3e74" Jan 26 14:34:53 crc kubenswrapper[4844]: E0126 14:34:53.326372 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 14:35:04 crc kubenswrapper[4844]: I0126 14:35:04.312933 4844 scope.go:117] "RemoveContainer" containerID="288f3f15bd87ac8ddd2065e9e06186b3be457a3cedc257d2058cbbabe4ee3e74" Jan 26 14:35:04 crc kubenswrapper[4844]: E0126 14:35:04.314055 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 14:35:16 crc kubenswrapper[4844]: I0126 14:35:16.313660 4844 scope.go:117] "RemoveContainer" containerID="288f3f15bd87ac8ddd2065e9e06186b3be457a3cedc257d2058cbbabe4ee3e74" Jan 26 14:35:16 crc kubenswrapper[4844]: E0126 14:35:16.314639 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 14:35:28 crc kubenswrapper[4844]: I0126 14:35:28.314403 4844 scope.go:117] "RemoveContainer" containerID="288f3f15bd87ac8ddd2065e9e06186b3be457a3cedc257d2058cbbabe4ee3e74" Jan 26 14:35:28 crc kubenswrapper[4844]: E0126 14:35:28.315493 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 14:35:40 crc kubenswrapper[4844]: I0126 14:35:40.313736 4844 scope.go:117] "RemoveContainer" containerID="288f3f15bd87ac8ddd2065e9e06186b3be457a3cedc257d2058cbbabe4ee3e74" Jan 26 14:35:41 crc kubenswrapper[4844]: I0126 14:35:41.210570 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" event={"ID":"e3602fc7-397b-4d73-ab0c-45acc047397b","Type":"ContainerStarted","Data":"1b662f3876628db4e3e14d2a4b83b69e591a54d9e073c177db60f5cee583d50b"} Jan 26 14:36:09 crc kubenswrapper[4844]: I0126 14:36:09.912389 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-ntgkj"] Jan 26 14:36:09 crc kubenswrapper[4844]: E0126 14:36:09.913541 4844 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="63ebdbee-3ad1-4c69-bcef-0d2b073c8b17" containerName="registry-server" Jan 26 14:36:09 crc kubenswrapper[4844]: I0126 14:36:09.913558 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="63ebdbee-3ad1-4c69-bcef-0d2b073c8b17" containerName="registry-server" Jan 26 14:36:09 crc kubenswrapper[4844]: E0126 14:36:09.913615 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63ebdbee-3ad1-4c69-bcef-0d2b073c8b17" containerName="extract-content" Jan 26 14:36:09 crc kubenswrapper[4844]: I0126 14:36:09.913625 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="63ebdbee-3ad1-4c69-bcef-0d2b073c8b17" containerName="extract-content" Jan 26 14:36:09 crc kubenswrapper[4844]: E0126 14:36:09.913644 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63ebdbee-3ad1-4c69-bcef-0d2b073c8b17" containerName="extract-utilities" Jan 26 14:36:09 crc kubenswrapper[4844]: I0126 14:36:09.913653 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="63ebdbee-3ad1-4c69-bcef-0d2b073c8b17" containerName="extract-utilities" Jan 26 14:36:09 crc kubenswrapper[4844]: I0126 14:36:09.913894 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="63ebdbee-3ad1-4c69-bcef-0d2b073c8b17" containerName="registry-server" Jan 26 14:36:09 crc kubenswrapper[4844]: I0126 14:36:09.915733 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ntgkj" Jan 26 14:36:09 crc kubenswrapper[4844]: I0126 14:36:09.932650 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ntgkj"] Jan 26 14:36:10 crc kubenswrapper[4844]: I0126 14:36:10.084733 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpc8w\" (UniqueName: \"kubernetes.io/projected/72e8effa-04fc-44ec-8c29-661788db235f-kube-api-access-wpc8w\") pod \"certified-operators-ntgkj\" (UID: \"72e8effa-04fc-44ec-8c29-661788db235f\") " pod="openshift-marketplace/certified-operators-ntgkj" Jan 26 14:36:10 crc kubenswrapper[4844]: I0126 14:36:10.085083 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72e8effa-04fc-44ec-8c29-661788db235f-catalog-content\") pod \"certified-operators-ntgkj\" (UID: \"72e8effa-04fc-44ec-8c29-661788db235f\") " pod="openshift-marketplace/certified-operators-ntgkj" Jan 26 14:36:10 crc kubenswrapper[4844]: I0126 14:36:10.085685 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72e8effa-04fc-44ec-8c29-661788db235f-utilities\") pod \"certified-operators-ntgkj\" (UID: \"72e8effa-04fc-44ec-8c29-661788db235f\") " pod="openshift-marketplace/certified-operators-ntgkj" Jan 26 14:36:10 crc kubenswrapper[4844]: I0126 14:36:10.187732 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72e8effa-04fc-44ec-8c29-661788db235f-catalog-content\") pod \"certified-operators-ntgkj\" (UID: \"72e8effa-04fc-44ec-8c29-661788db235f\") " pod="openshift-marketplace/certified-operators-ntgkj" Jan 26 14:36:10 crc kubenswrapper[4844]: I0126 14:36:10.188250 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72e8effa-04fc-44ec-8c29-661788db235f-catalog-content\") pod 
\"certified-operators-ntgkj\" (UID: \"72e8effa-04fc-44ec-8c29-661788db235f\") " pod="openshift-marketplace/certified-operators-ntgkj" Jan 26 14:36:10 crc kubenswrapper[4844]: I0126 14:36:10.188549 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72e8effa-04fc-44ec-8c29-661788db235f-utilities\") pod \"certified-operators-ntgkj\" (UID: \"72e8effa-04fc-44ec-8c29-661788db235f\") " pod="openshift-marketplace/certified-operators-ntgkj" Jan 26 14:36:10 crc kubenswrapper[4844]: I0126 14:36:10.188825 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72e8effa-04fc-44ec-8c29-661788db235f-utilities\") pod \"certified-operators-ntgkj\" (UID: \"72e8effa-04fc-44ec-8c29-661788db235f\") " pod="openshift-marketplace/certified-operators-ntgkj" Jan 26 14:36:10 crc kubenswrapper[4844]: I0126 14:36:10.189060 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wpc8w\" (UniqueName: \"kubernetes.io/projected/72e8effa-04fc-44ec-8c29-661788db235f-kube-api-access-wpc8w\") pod \"certified-operators-ntgkj\" (UID: \"72e8effa-04fc-44ec-8c29-661788db235f\") " pod="openshift-marketplace/certified-operators-ntgkj" Jan 26 14:36:10 crc kubenswrapper[4844]: I0126 14:36:10.211909 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wpc8w\" (UniqueName: \"kubernetes.io/projected/72e8effa-04fc-44ec-8c29-661788db235f-kube-api-access-wpc8w\") pod \"certified-operators-ntgkj\" (UID: \"72e8effa-04fc-44ec-8c29-661788db235f\") " pod="openshift-marketplace/certified-operators-ntgkj" Jan 26 14:36:10 crc kubenswrapper[4844]: I0126 14:36:10.236243 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ntgkj" Jan 26 14:36:11 crc kubenswrapper[4844]: I0126 14:36:11.647860 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ntgkj"] Jan 26 14:36:12 crc kubenswrapper[4844]: I0126 14:36:12.322103 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rqpwz"] Jan 26 14:36:12 crc kubenswrapper[4844]: I0126 14:36:12.324708 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rqpwz" Jan 26 14:36:12 crc kubenswrapper[4844]: I0126 14:36:12.332112 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rqpwz"] Jan 26 14:36:12 crc kubenswrapper[4844]: I0126 14:36:12.433252 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/555fee01-0d10-4dcb-8604-01869ba6859a-catalog-content\") pod \"community-operators-rqpwz\" (UID: \"555fee01-0d10-4dcb-8604-01869ba6859a\") " pod="openshift-marketplace/community-operators-rqpwz" Jan 26 14:36:12 crc kubenswrapper[4844]: I0126 14:36:12.433368 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/555fee01-0d10-4dcb-8604-01869ba6859a-utilities\") pod \"community-operators-rqpwz\" (UID: \"555fee01-0d10-4dcb-8604-01869ba6859a\") " pod="openshift-marketplace/community-operators-rqpwz" Jan 26 14:36:12 crc kubenswrapper[4844]: I0126 14:36:12.433448 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54ds8\" (UniqueName: \"kubernetes.io/projected/555fee01-0d10-4dcb-8604-01869ba6859a-kube-api-access-54ds8\") pod \"community-operators-rqpwz\" (UID: \"555fee01-0d10-4dcb-8604-01869ba6859a\") " pod="openshift-marketplace/community-operators-rqpwz" Jan 26 14:36:12 crc kubenswrapper[4844]: I0126 14:36:12.520396 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ntgkj" event={"ID":"72e8effa-04fc-44ec-8c29-661788db235f","Type":"ContainerStarted","Data":"70f2ddccb9f92e0c415fc7c2c6f65b56bed3aa93ae945af0a66a042e36c24392"} Jan 26 14:36:12 crc kubenswrapper[4844]: I0126 14:36:12.535789 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/555fee01-0d10-4dcb-8604-01869ba6859a-utilities\") pod \"community-operators-rqpwz\" (UID: \"555fee01-0d10-4dcb-8604-01869ba6859a\") " pod="openshift-marketplace/community-operators-rqpwz" Jan 26 14:36:12 crc kubenswrapper[4844]: I0126 14:36:12.536129 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54ds8\" (UniqueName: \"kubernetes.io/projected/555fee01-0d10-4dcb-8604-01869ba6859a-kube-api-access-54ds8\") pod \"community-operators-rqpwz\" (UID: \"555fee01-0d10-4dcb-8604-01869ba6859a\") " pod="openshift-marketplace/community-operators-rqpwz" Jan 26 14:36:12 crc kubenswrapper[4844]: I0126 14:36:12.536323 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/555fee01-0d10-4dcb-8604-01869ba6859a-catalog-content\") pod \"community-operators-rqpwz\" (UID: \"555fee01-0d10-4dcb-8604-01869ba6859a\") " pod="openshift-marketplace/community-operators-rqpwz" Jan 26 14:36:12 crc kubenswrapper[4844]: I0126 14:36:12.536846 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/555fee01-0d10-4dcb-8604-01869ba6859a-catalog-content\") pod \"community-operators-rqpwz\" (UID: \"555fee01-0d10-4dcb-8604-01869ba6859a\") " pod="openshift-marketplace/community-operators-rqpwz" Jan 26 14:36:12 crc kubenswrapper[4844]: I0126 14:36:12.536972 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/555fee01-0d10-4dcb-8604-01869ba6859a-utilities\") pod \"community-operators-rqpwz\" (UID: \"555fee01-0d10-4dcb-8604-01869ba6859a\") " pod="openshift-marketplace/community-operators-rqpwz" Jan 26 14:36:12 crc kubenswrapper[4844]: I0126 14:36:12.556431 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54ds8\" (UniqueName: \"kubernetes.io/projected/555fee01-0d10-4dcb-8604-01869ba6859a-kube-api-access-54ds8\") pod \"community-operators-rqpwz\" (UID: \"555fee01-0d10-4dcb-8604-01869ba6859a\") " pod="openshift-marketplace/community-operators-rqpwz" Jan 26 14:36:12 crc kubenswrapper[4844]: I0126 14:36:12.685422 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rqpwz" Jan 26 14:36:13 crc kubenswrapper[4844]: I0126 14:36:13.349157 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rqpwz"] Jan 26 14:36:13 crc kubenswrapper[4844]: W0126 14:36:13.369652 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod555fee01_0d10_4dcb_8604_01869ba6859a.slice/crio-c9fb817d784773e5ec8e1c319232b8611fd2c02b03d42d26faaed233e8189e3b WatchSource:0}: Error finding container c9fb817d784773e5ec8e1c319232b8611fd2c02b03d42d26faaed233e8189e3b: Status 404 returned error can't find the container with id c9fb817d784773e5ec8e1c319232b8611fd2c02b03d42d26faaed233e8189e3b Jan 26 14:36:13 crc kubenswrapper[4844]: I0126 14:36:13.530842 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rqpwz" event={"ID":"555fee01-0d10-4dcb-8604-01869ba6859a","Type":"ContainerStarted","Data":"c9fb817d784773e5ec8e1c319232b8611fd2c02b03d42d26faaed233e8189e3b"} Jan 26 14:36:14 crc kubenswrapper[4844]: I0126 14:36:14.543475 4844 generic.go:334] "Generic (PLEG): container finished" podID="72e8effa-04fc-44ec-8c29-661788db235f" containerID="6a21eb13e0d9cd0495bc335d492150ed9e20ddc896e37a88eac7d1f451d54ab4" exitCode=0 Jan 26 14:36:14 crc kubenswrapper[4844]: I0126 14:36:14.543678 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ntgkj" event={"ID":"72e8effa-04fc-44ec-8c29-661788db235f","Type":"ContainerDied","Data":"6a21eb13e0d9cd0495bc335d492150ed9e20ddc896e37a88eac7d1f451d54ab4"} Jan 26 14:36:14 crc kubenswrapper[4844]: I0126 14:36:14.546039 4844 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 14:36:14 crc kubenswrapper[4844]: I0126 14:36:14.548625 4844 generic.go:334] "Generic (PLEG): container finished" podID="555fee01-0d10-4dcb-8604-01869ba6859a" containerID="c50f0ea27894d1049bddbc1b7f4594417c9e31d9e6faa8249b5c885b9b261f26" exitCode=0 Jan 26 14:36:14 crc kubenswrapper[4844]: I0126 14:36:14.548671 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rqpwz" event={"ID":"555fee01-0d10-4dcb-8604-01869ba6859a","Type":"ContainerDied","Data":"c50f0ea27894d1049bddbc1b7f4594417c9e31d9e6faa8249b5c885b9b261f26"} Jan 26 14:36:18 crc kubenswrapper[4844]: I0126 14:36:18.593588 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ntgkj" event={"ID":"72e8effa-04fc-44ec-8c29-661788db235f","Type":"ContainerStarted","Data":"552d403ac7e10b76cb7495ccb7e6eeda73c965c71be62e12514a5d7cc0eac2a6"} Jan 26 14:36:18 crc kubenswrapper[4844]: I0126 
14:36:18.595856 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rqpwz" event={"ID":"555fee01-0d10-4dcb-8604-01869ba6859a","Type":"ContainerStarted","Data":"221d0fe57ddf79a149adc4d04773b8ca0a02361586cd0047eb7e8bc9fb861ed2"} Jan 26 14:36:19 crc kubenswrapper[4844]: I0126 14:36:19.605170 4844 generic.go:334] "Generic (PLEG): container finished" podID="72e8effa-04fc-44ec-8c29-661788db235f" containerID="552d403ac7e10b76cb7495ccb7e6eeda73c965c71be62e12514a5d7cc0eac2a6" exitCode=0 Jan 26 14:36:19 crc kubenswrapper[4844]: I0126 14:36:19.605274 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ntgkj" event={"ID":"72e8effa-04fc-44ec-8c29-661788db235f","Type":"ContainerDied","Data":"552d403ac7e10b76cb7495ccb7e6eeda73c965c71be62e12514a5d7cc0eac2a6"} Jan 26 14:36:19 crc kubenswrapper[4844]: I0126 14:36:19.608182 4844 generic.go:334] "Generic (PLEG): container finished" podID="555fee01-0d10-4dcb-8604-01869ba6859a" containerID="221d0fe57ddf79a149adc4d04773b8ca0a02361586cd0047eb7e8bc9fb861ed2" exitCode=0 Jan 26 14:36:19 crc kubenswrapper[4844]: I0126 14:36:19.608222 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rqpwz" event={"ID":"555fee01-0d10-4dcb-8604-01869ba6859a","Type":"ContainerDied","Data":"221d0fe57ddf79a149adc4d04773b8ca0a02361586cd0047eb7e8bc9fb861ed2"} Jan 26 14:36:20 crc kubenswrapper[4844]: I0126 14:36:20.621040 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ntgkj" event={"ID":"72e8effa-04fc-44ec-8c29-661788db235f","Type":"ContainerStarted","Data":"24d3e23a385db5ebab3554809e865ff32453a811acc3e9a0508e0986e26d888e"} Jan 26 14:36:20 crc kubenswrapper[4844]: I0126 14:36:20.625247 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rqpwz" event={"ID":"555fee01-0d10-4dcb-8604-01869ba6859a","Type":"ContainerStarted","Data":"6a5c4418794b25585b8d1878ff01549737236fd580683ba0d726fce23a4b3fb3"} Jan 26 14:36:20 crc kubenswrapper[4844]: I0126 14:36:20.640865 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-ntgkj" podStartSLOduration=6.050033955 podStartE2EDuration="11.640847323s" podCreationTimestamp="2026-01-26 14:36:09 +0000 UTC" firstStartedPulling="2026-01-26 14:36:14.545522566 +0000 UTC m=+6751.478890198" lastFinishedPulling="2026-01-26 14:36:20.136335954 +0000 UTC m=+6757.069703566" observedRunningTime="2026-01-26 14:36:20.639178342 +0000 UTC m=+6757.572545984" watchObservedRunningTime="2026-01-26 14:36:20.640847323 +0000 UTC m=+6757.574214935" Jan 26 14:36:20 crc kubenswrapper[4844]: I0126 14:36:20.670311 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rqpwz" podStartSLOduration=2.9276835820000002 podStartE2EDuration="8.670280268s" podCreationTimestamp="2026-01-26 14:36:12 +0000 UTC" firstStartedPulling="2026-01-26 14:36:14.551256795 +0000 UTC m=+6751.484624417" lastFinishedPulling="2026-01-26 14:36:20.293853481 +0000 UTC m=+6757.227221103" observedRunningTime="2026-01-26 14:36:20.665879071 +0000 UTC m=+6757.599246703" watchObservedRunningTime="2026-01-26 14:36:20.670280268 +0000 UTC m=+6757.603647880" Jan 26 14:36:22 crc kubenswrapper[4844]: I0126 14:36:22.686569 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-rqpwz" Jan 26 
14:36:22 crc kubenswrapper[4844]: I0126 14:36:22.687085 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rqpwz" Jan 26 14:36:23 crc kubenswrapper[4844]: I0126 14:36:23.738804 4844 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-rqpwz" podUID="555fee01-0d10-4dcb-8604-01869ba6859a" containerName="registry-server" probeResult="failure" output=< Jan 26 14:36:23 crc kubenswrapper[4844]: timeout: failed to connect service ":50051" within 1s Jan 26 14:36:23 crc kubenswrapper[4844]: > Jan 26 14:36:30 crc kubenswrapper[4844]: I0126 14:36:30.237189 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-ntgkj" Jan 26 14:36:30 crc kubenswrapper[4844]: I0126 14:36:30.238039 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-ntgkj" Jan 26 14:36:30 crc kubenswrapper[4844]: I0126 14:36:30.309564 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-ntgkj" Jan 26 14:36:30 crc kubenswrapper[4844]: I0126 14:36:30.777314 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-ntgkj" Jan 26 14:36:30 crc kubenswrapper[4844]: I0126 14:36:30.840405 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ntgkj"] Jan 26 14:36:32 crc kubenswrapper[4844]: I0126 14:36:32.735827 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-ntgkj" podUID="72e8effa-04fc-44ec-8c29-661788db235f" containerName="registry-server" containerID="cri-o://24d3e23a385db5ebab3554809e865ff32453a811acc3e9a0508e0986e26d888e" gracePeriod=2 Jan 26 14:36:32 crc kubenswrapper[4844]: I0126 14:36:32.740588 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rqpwz" Jan 26 14:36:32 crc kubenswrapper[4844]: I0126 14:36:32.789289 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rqpwz" Jan 26 14:36:33 crc kubenswrapper[4844]: I0126 14:36:33.247208 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ntgkj" Jan 26 14:36:33 crc kubenswrapper[4844]: I0126 14:36:33.430800 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wpc8w\" (UniqueName: \"kubernetes.io/projected/72e8effa-04fc-44ec-8c29-661788db235f-kube-api-access-wpc8w\") pod \"72e8effa-04fc-44ec-8c29-661788db235f\" (UID: \"72e8effa-04fc-44ec-8c29-661788db235f\") " Jan 26 14:36:33 crc kubenswrapper[4844]: I0126 14:36:33.430983 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72e8effa-04fc-44ec-8c29-661788db235f-utilities\") pod \"72e8effa-04fc-44ec-8c29-661788db235f\" (UID: \"72e8effa-04fc-44ec-8c29-661788db235f\") " Jan 26 14:36:33 crc kubenswrapper[4844]: I0126 14:36:33.431103 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72e8effa-04fc-44ec-8c29-661788db235f-catalog-content\") pod \"72e8effa-04fc-44ec-8c29-661788db235f\" (UID: \"72e8effa-04fc-44ec-8c29-661788db235f\") " Jan 26 14:36:33 crc kubenswrapper[4844]: I0126 14:36:33.432027 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/72e8effa-04fc-44ec-8c29-661788db235f-utilities" (OuterVolumeSpecName: "utilities") pod "72e8effa-04fc-44ec-8c29-661788db235f" (UID: "72e8effa-04fc-44ec-8c29-661788db235f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:36:33 crc kubenswrapper[4844]: I0126 14:36:33.433136 4844 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72e8effa-04fc-44ec-8c29-661788db235f-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 14:36:33 crc kubenswrapper[4844]: I0126 14:36:33.437398 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72e8effa-04fc-44ec-8c29-661788db235f-kube-api-access-wpc8w" (OuterVolumeSpecName: "kube-api-access-wpc8w") pod "72e8effa-04fc-44ec-8c29-661788db235f" (UID: "72e8effa-04fc-44ec-8c29-661788db235f"). InnerVolumeSpecName "kube-api-access-wpc8w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:36:33 crc kubenswrapper[4844]: I0126 14:36:33.479880 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/72e8effa-04fc-44ec-8c29-661788db235f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "72e8effa-04fc-44ec-8c29-661788db235f" (UID: "72e8effa-04fc-44ec-8c29-661788db235f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:36:33 crc kubenswrapper[4844]: I0126 14:36:33.535559 4844 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72e8effa-04fc-44ec-8c29-661788db235f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 14:36:33 crc kubenswrapper[4844]: I0126 14:36:33.535655 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wpc8w\" (UniqueName: \"kubernetes.io/projected/72e8effa-04fc-44ec-8c29-661788db235f-kube-api-access-wpc8w\") on node \"crc\" DevicePath \"\"" Jan 26 14:36:33 crc kubenswrapper[4844]: I0126 14:36:33.747904 4844 generic.go:334] "Generic (PLEG): container finished" podID="72e8effa-04fc-44ec-8c29-661788db235f" containerID="24d3e23a385db5ebab3554809e865ff32453a811acc3e9a0508e0986e26d888e" exitCode=0 Jan 26 14:36:33 crc kubenswrapper[4844]: I0126 14:36:33.747970 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ntgkj" Jan 26 14:36:33 crc kubenswrapper[4844]: I0126 14:36:33.748005 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ntgkj" event={"ID":"72e8effa-04fc-44ec-8c29-661788db235f","Type":"ContainerDied","Data":"24d3e23a385db5ebab3554809e865ff32453a811acc3e9a0508e0986e26d888e"} Jan 26 14:36:33 crc kubenswrapper[4844]: I0126 14:36:33.748049 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ntgkj" event={"ID":"72e8effa-04fc-44ec-8c29-661788db235f","Type":"ContainerDied","Data":"70f2ddccb9f92e0c415fc7c2c6f65b56bed3aa93ae945af0a66a042e36c24392"} Jan 26 14:36:33 crc kubenswrapper[4844]: I0126 14:36:33.748072 4844 scope.go:117] "RemoveContainer" containerID="24d3e23a385db5ebab3554809e865ff32453a811acc3e9a0508e0986e26d888e" Jan 26 14:36:33 crc kubenswrapper[4844]: I0126 14:36:33.784798 4844 scope.go:117] "RemoveContainer" containerID="552d403ac7e10b76cb7495ccb7e6eeda73c965c71be62e12514a5d7cc0eac2a6" Jan 26 14:36:33 crc kubenswrapper[4844]: I0126 14:36:33.789642 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ntgkj"] Jan 26 14:36:33 crc kubenswrapper[4844]: I0126 14:36:33.800511 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-ntgkj"] Jan 26 14:36:33 crc kubenswrapper[4844]: I0126 14:36:33.806622 4844 scope.go:117] "RemoveContainer" containerID="6a21eb13e0d9cd0495bc335d492150ed9e20ddc896e37a88eac7d1f451d54ab4" Jan 26 14:36:33 crc kubenswrapper[4844]: I0126 14:36:33.853279 4844 scope.go:117] "RemoveContainer" containerID="24d3e23a385db5ebab3554809e865ff32453a811acc3e9a0508e0986e26d888e" Jan 26 14:36:33 crc kubenswrapper[4844]: E0126 14:36:33.854137 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24d3e23a385db5ebab3554809e865ff32453a811acc3e9a0508e0986e26d888e\": container with ID starting with 24d3e23a385db5ebab3554809e865ff32453a811acc3e9a0508e0986e26d888e not found: ID does not exist" containerID="24d3e23a385db5ebab3554809e865ff32453a811acc3e9a0508e0986e26d888e" Jan 26 14:36:33 crc kubenswrapper[4844]: I0126 14:36:33.854185 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24d3e23a385db5ebab3554809e865ff32453a811acc3e9a0508e0986e26d888e"} err="failed to get container status 
\"24d3e23a385db5ebab3554809e865ff32453a811acc3e9a0508e0986e26d888e\": rpc error: code = NotFound desc = could not find container \"24d3e23a385db5ebab3554809e865ff32453a811acc3e9a0508e0986e26d888e\": container with ID starting with 24d3e23a385db5ebab3554809e865ff32453a811acc3e9a0508e0986e26d888e not found: ID does not exist" Jan 26 14:36:33 crc kubenswrapper[4844]: I0126 14:36:33.854213 4844 scope.go:117] "RemoveContainer" containerID="552d403ac7e10b76cb7495ccb7e6eeda73c965c71be62e12514a5d7cc0eac2a6" Jan 26 14:36:33 crc kubenswrapper[4844]: E0126 14:36:33.855013 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"552d403ac7e10b76cb7495ccb7e6eeda73c965c71be62e12514a5d7cc0eac2a6\": container with ID starting with 552d403ac7e10b76cb7495ccb7e6eeda73c965c71be62e12514a5d7cc0eac2a6 not found: ID does not exist" containerID="552d403ac7e10b76cb7495ccb7e6eeda73c965c71be62e12514a5d7cc0eac2a6" Jan 26 14:36:33 crc kubenswrapper[4844]: I0126 14:36:33.855078 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"552d403ac7e10b76cb7495ccb7e6eeda73c965c71be62e12514a5d7cc0eac2a6"} err="failed to get container status \"552d403ac7e10b76cb7495ccb7e6eeda73c965c71be62e12514a5d7cc0eac2a6\": rpc error: code = NotFound desc = could not find container \"552d403ac7e10b76cb7495ccb7e6eeda73c965c71be62e12514a5d7cc0eac2a6\": container with ID starting with 552d403ac7e10b76cb7495ccb7e6eeda73c965c71be62e12514a5d7cc0eac2a6 not found: ID does not exist" Jan 26 14:36:33 crc kubenswrapper[4844]: I0126 14:36:33.855101 4844 scope.go:117] "RemoveContainer" containerID="6a21eb13e0d9cd0495bc335d492150ed9e20ddc896e37a88eac7d1f451d54ab4" Jan 26 14:36:33 crc kubenswrapper[4844]: E0126 14:36:33.855450 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a21eb13e0d9cd0495bc335d492150ed9e20ddc896e37a88eac7d1f451d54ab4\": container with ID starting with 6a21eb13e0d9cd0495bc335d492150ed9e20ddc896e37a88eac7d1f451d54ab4 not found: ID does not exist" containerID="6a21eb13e0d9cd0495bc335d492150ed9e20ddc896e37a88eac7d1f451d54ab4" Jan 26 14:36:33 crc kubenswrapper[4844]: I0126 14:36:33.855476 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a21eb13e0d9cd0495bc335d492150ed9e20ddc896e37a88eac7d1f451d54ab4"} err="failed to get container status \"6a21eb13e0d9cd0495bc335d492150ed9e20ddc896e37a88eac7d1f451d54ab4\": rpc error: code = NotFound desc = could not find container \"6a21eb13e0d9cd0495bc335d492150ed9e20ddc896e37a88eac7d1f451d54ab4\": container with ID starting with 6a21eb13e0d9cd0495bc335d492150ed9e20ddc896e37a88eac7d1f451d54ab4 not found: ID does not exist" Jan 26 14:36:35 crc kubenswrapper[4844]: I0126 14:36:35.338563 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72e8effa-04fc-44ec-8c29-661788db235f" path="/var/lib/kubelet/pods/72e8effa-04fc-44ec-8c29-661788db235f/volumes" Jan 26 14:36:35 crc kubenswrapper[4844]: I0126 14:36:35.952034 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rqpwz"] Jan 26 14:36:35 crc kubenswrapper[4844]: I0126 14:36:35.952662 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-rqpwz" podUID="555fee01-0d10-4dcb-8604-01869ba6859a" containerName="registry-server" 
containerID="cri-o://6a5c4418794b25585b8d1878ff01549737236fd580683ba0d726fce23a4b3fb3" gracePeriod=2 Jan 26 14:36:36 crc kubenswrapper[4844]: I0126 14:36:36.418043 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rqpwz" Jan 26 14:36:36 crc kubenswrapper[4844]: I0126 14:36:36.598323 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/555fee01-0d10-4dcb-8604-01869ba6859a-catalog-content\") pod \"555fee01-0d10-4dcb-8604-01869ba6859a\" (UID: \"555fee01-0d10-4dcb-8604-01869ba6859a\") " Jan 26 14:36:36 crc kubenswrapper[4844]: I0126 14:36:36.598422 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-54ds8\" (UniqueName: \"kubernetes.io/projected/555fee01-0d10-4dcb-8604-01869ba6859a-kube-api-access-54ds8\") pod \"555fee01-0d10-4dcb-8604-01869ba6859a\" (UID: \"555fee01-0d10-4dcb-8604-01869ba6859a\") " Jan 26 14:36:36 crc kubenswrapper[4844]: I0126 14:36:36.598510 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/555fee01-0d10-4dcb-8604-01869ba6859a-utilities\") pod \"555fee01-0d10-4dcb-8604-01869ba6859a\" (UID: \"555fee01-0d10-4dcb-8604-01869ba6859a\") " Jan 26 14:36:36 crc kubenswrapper[4844]: I0126 14:36:36.599452 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/555fee01-0d10-4dcb-8604-01869ba6859a-utilities" (OuterVolumeSpecName: "utilities") pod "555fee01-0d10-4dcb-8604-01869ba6859a" (UID: "555fee01-0d10-4dcb-8604-01869ba6859a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:36:36 crc kubenswrapper[4844]: I0126 14:36:36.606880 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/555fee01-0d10-4dcb-8604-01869ba6859a-kube-api-access-54ds8" (OuterVolumeSpecName: "kube-api-access-54ds8") pod "555fee01-0d10-4dcb-8604-01869ba6859a" (UID: "555fee01-0d10-4dcb-8604-01869ba6859a"). InnerVolumeSpecName "kube-api-access-54ds8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:36:36 crc kubenswrapper[4844]: I0126 14:36:36.654464 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/555fee01-0d10-4dcb-8604-01869ba6859a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "555fee01-0d10-4dcb-8604-01869ba6859a" (UID: "555fee01-0d10-4dcb-8604-01869ba6859a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:36:36 crc kubenswrapper[4844]: I0126 14:36:36.700840 4844 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/555fee01-0d10-4dcb-8604-01869ba6859a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 14:36:36 crc kubenswrapper[4844]: I0126 14:36:36.700876 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-54ds8\" (UniqueName: \"kubernetes.io/projected/555fee01-0d10-4dcb-8604-01869ba6859a-kube-api-access-54ds8\") on node \"crc\" DevicePath \"\"" Jan 26 14:36:36 crc kubenswrapper[4844]: I0126 14:36:36.700888 4844 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/555fee01-0d10-4dcb-8604-01869ba6859a-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 14:36:36 crc kubenswrapper[4844]: I0126 14:36:36.781784 4844 generic.go:334] "Generic (PLEG): container finished" podID="555fee01-0d10-4dcb-8604-01869ba6859a" containerID="6a5c4418794b25585b8d1878ff01549737236fd580683ba0d726fce23a4b3fb3" exitCode=0 Jan 26 14:36:36 crc kubenswrapper[4844]: I0126 14:36:36.781829 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rqpwz" event={"ID":"555fee01-0d10-4dcb-8604-01869ba6859a","Type":"ContainerDied","Data":"6a5c4418794b25585b8d1878ff01549737236fd580683ba0d726fce23a4b3fb3"} Jan 26 14:36:36 crc kubenswrapper[4844]: I0126 14:36:36.781856 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rqpwz" event={"ID":"555fee01-0d10-4dcb-8604-01869ba6859a","Type":"ContainerDied","Data":"c9fb817d784773e5ec8e1c319232b8611fd2c02b03d42d26faaed233e8189e3b"} Jan 26 14:36:36 crc kubenswrapper[4844]: I0126 14:36:36.781872 4844 scope.go:117] "RemoveContainer" containerID="6a5c4418794b25585b8d1878ff01549737236fd580683ba0d726fce23a4b3fb3" Jan 26 14:36:36 crc kubenswrapper[4844]: I0126 14:36:36.781993 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rqpwz" Jan 26 14:36:36 crc kubenswrapper[4844]: I0126 14:36:36.820822 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rqpwz"] Jan 26 14:36:36 crc kubenswrapper[4844]: I0126 14:36:36.827148 4844 scope.go:117] "RemoveContainer" containerID="221d0fe57ddf79a149adc4d04773b8ca0a02361586cd0047eb7e8bc9fb861ed2" Jan 26 14:36:36 crc kubenswrapper[4844]: I0126 14:36:36.828742 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-rqpwz"] Jan 26 14:36:36 crc kubenswrapper[4844]: I0126 14:36:36.857033 4844 scope.go:117] "RemoveContainer" containerID="c50f0ea27894d1049bddbc1b7f4594417c9e31d9e6faa8249b5c885b9b261f26" Jan 26 14:36:36 crc kubenswrapper[4844]: I0126 14:36:36.900641 4844 scope.go:117] "RemoveContainer" containerID="6a5c4418794b25585b8d1878ff01549737236fd580683ba0d726fce23a4b3fb3" Jan 26 14:36:36 crc kubenswrapper[4844]: E0126 14:36:36.900998 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a5c4418794b25585b8d1878ff01549737236fd580683ba0d726fce23a4b3fb3\": container with ID starting with 6a5c4418794b25585b8d1878ff01549737236fd580683ba0d726fce23a4b3fb3 not found: ID does not exist" containerID="6a5c4418794b25585b8d1878ff01549737236fd580683ba0d726fce23a4b3fb3" Jan 26 14:36:36 crc kubenswrapper[4844]: I0126 14:36:36.901046 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a5c4418794b25585b8d1878ff01549737236fd580683ba0d726fce23a4b3fb3"} err="failed to get container status \"6a5c4418794b25585b8d1878ff01549737236fd580683ba0d726fce23a4b3fb3\": rpc error: code = NotFound desc = could not find container \"6a5c4418794b25585b8d1878ff01549737236fd580683ba0d726fce23a4b3fb3\": container with ID starting with 6a5c4418794b25585b8d1878ff01549737236fd580683ba0d726fce23a4b3fb3 not found: ID does not exist" Jan 26 14:36:36 crc kubenswrapper[4844]: I0126 14:36:36.901073 4844 scope.go:117] "RemoveContainer" containerID="221d0fe57ddf79a149adc4d04773b8ca0a02361586cd0047eb7e8bc9fb861ed2" Jan 26 14:36:36 crc kubenswrapper[4844]: E0126 14:36:36.901450 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"221d0fe57ddf79a149adc4d04773b8ca0a02361586cd0047eb7e8bc9fb861ed2\": container with ID starting with 221d0fe57ddf79a149adc4d04773b8ca0a02361586cd0047eb7e8bc9fb861ed2 not found: ID does not exist" containerID="221d0fe57ddf79a149adc4d04773b8ca0a02361586cd0047eb7e8bc9fb861ed2" Jan 26 14:36:36 crc kubenswrapper[4844]: I0126 14:36:36.901490 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"221d0fe57ddf79a149adc4d04773b8ca0a02361586cd0047eb7e8bc9fb861ed2"} err="failed to get container status \"221d0fe57ddf79a149adc4d04773b8ca0a02361586cd0047eb7e8bc9fb861ed2\": rpc error: code = NotFound desc = could not find container \"221d0fe57ddf79a149adc4d04773b8ca0a02361586cd0047eb7e8bc9fb861ed2\": container with ID starting with 221d0fe57ddf79a149adc4d04773b8ca0a02361586cd0047eb7e8bc9fb861ed2 not found: ID does not exist" Jan 26 14:36:36 crc kubenswrapper[4844]: I0126 14:36:36.901522 4844 scope.go:117] "RemoveContainer" containerID="c50f0ea27894d1049bddbc1b7f4594417c9e31d9e6faa8249b5c885b9b261f26" Jan 26 14:36:36 crc kubenswrapper[4844]: E0126 14:36:36.901798 4844 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"c50f0ea27894d1049bddbc1b7f4594417c9e31d9e6faa8249b5c885b9b261f26\": container with ID starting with c50f0ea27894d1049bddbc1b7f4594417c9e31d9e6faa8249b5c885b9b261f26 not found: ID does not exist" containerID="c50f0ea27894d1049bddbc1b7f4594417c9e31d9e6faa8249b5c885b9b261f26" Jan 26 14:36:36 crc kubenswrapper[4844]: I0126 14:36:36.901827 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c50f0ea27894d1049bddbc1b7f4594417c9e31d9e6faa8249b5c885b9b261f26"} err="failed to get container status \"c50f0ea27894d1049bddbc1b7f4594417c9e31d9e6faa8249b5c885b9b261f26\": rpc error: code = NotFound desc = could not find container \"c50f0ea27894d1049bddbc1b7f4594417c9e31d9e6faa8249b5c885b9b261f26\": container with ID starting with c50f0ea27894d1049bddbc1b7f4594417c9e31d9e6faa8249b5c885b9b261f26 not found: ID does not exist" Jan 26 14:36:37 crc kubenswrapper[4844]: I0126 14:36:37.340673 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="555fee01-0d10-4dcb-8604-01869ba6859a" path="/var/lib/kubelet/pods/555fee01-0d10-4dcb-8604-01869ba6859a/volumes" Jan 26 14:37:27 crc kubenswrapper[4844]: I0126 14:37:27.324587 4844 generic.go:334] "Generic (PLEG): container finished" podID="f617457c-8f1e-4508-926e-bb6b77ea7444" containerID="16c2280421c445b588fa8215f65a400cc022d8f73da61eb52339462ea12392b6" exitCode=0 Jan 26 14:37:27 crc kubenswrapper[4844]: I0126 14:37:27.326264 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"f617457c-8f1e-4508-926e-bb6b77ea7444","Type":"ContainerDied","Data":"16c2280421c445b588fa8215f65a400cc022d8f73da61eb52339462ea12392b6"} Jan 26 14:37:28 crc kubenswrapper[4844]: I0126 14:37:28.736996 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 26 14:37:28 crc kubenswrapper[4844]: I0126 14:37:28.904842 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/f617457c-8f1e-4508-926e-bb6b77ea7444-openstack-config\") pod \"f617457c-8f1e-4508-926e-bb6b77ea7444\" (UID: \"f617457c-8f1e-4508-926e-bb6b77ea7444\") " Jan 26 14:37:28 crc kubenswrapper[4844]: I0126 14:37:28.904921 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f617457c-8f1e-4508-926e-bb6b77ea7444-config-data\") pod \"f617457c-8f1e-4508-926e-bb6b77ea7444\" (UID: \"f617457c-8f1e-4508-926e-bb6b77ea7444\") " Jan 26 14:37:28 crc kubenswrapper[4844]: I0126 14:37:28.904955 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/f617457c-8f1e-4508-926e-bb6b77ea7444-test-operator-ephemeral-temporary\") pod \"f617457c-8f1e-4508-926e-bb6b77ea7444\" (UID: \"f617457c-8f1e-4508-926e-bb6b77ea7444\") " Jan 26 14:37:28 crc kubenswrapper[4844]: I0126 14:37:28.905088 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"f617457c-8f1e-4508-926e-bb6b77ea7444\" (UID: \"f617457c-8f1e-4508-926e-bb6b77ea7444\") " Jan 26 14:37:28 crc kubenswrapper[4844]: I0126 14:37:28.905137 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/f617457c-8f1e-4508-926e-bb6b77ea7444-ca-certs\") pod \"f617457c-8f1e-4508-926e-bb6b77ea7444\" (UID: \"f617457c-8f1e-4508-926e-bb6b77ea7444\") " Jan 26 14:37:28 crc kubenswrapper[4844]: I0126 14:37:28.905164 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-trrvz\" (UniqueName: \"kubernetes.io/projected/f617457c-8f1e-4508-926e-bb6b77ea7444-kube-api-access-trrvz\") pod \"f617457c-8f1e-4508-926e-bb6b77ea7444\" (UID: \"f617457c-8f1e-4508-926e-bb6b77ea7444\") " Jan 26 14:37:28 crc kubenswrapper[4844]: I0126 14:37:28.905338 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f617457c-8f1e-4508-926e-bb6b77ea7444-openstack-config-secret\") pod \"f617457c-8f1e-4508-926e-bb6b77ea7444\" (UID: \"f617457c-8f1e-4508-926e-bb6b77ea7444\") " Jan 26 14:37:28 crc kubenswrapper[4844]: I0126 14:37:28.905405 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f617457c-8f1e-4508-926e-bb6b77ea7444-ssh-key\") pod \"f617457c-8f1e-4508-926e-bb6b77ea7444\" (UID: \"f617457c-8f1e-4508-926e-bb6b77ea7444\") " Jan 26 14:37:28 crc kubenswrapper[4844]: I0126 14:37:28.905439 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/f617457c-8f1e-4508-926e-bb6b77ea7444-test-operator-ephemeral-workdir\") pod \"f617457c-8f1e-4508-926e-bb6b77ea7444\" (UID: \"f617457c-8f1e-4508-926e-bb6b77ea7444\") " Jan 26 14:37:28 crc kubenswrapper[4844]: I0126 14:37:28.905887 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f617457c-8f1e-4508-926e-bb6b77ea7444-test-operator-ephemeral-temporary" (OuterVolumeSpecName: 
"test-operator-ephemeral-temporary") pod "f617457c-8f1e-4508-926e-bb6b77ea7444" (UID: "f617457c-8f1e-4508-926e-bb6b77ea7444"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:37:28 crc kubenswrapper[4844]: I0126 14:37:28.906100 4844 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/f617457c-8f1e-4508-926e-bb6b77ea7444-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Jan 26 14:37:28 crc kubenswrapper[4844]: I0126 14:37:28.906968 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f617457c-8f1e-4508-926e-bb6b77ea7444-config-data" (OuterVolumeSpecName: "config-data") pod "f617457c-8f1e-4508-926e-bb6b77ea7444" (UID: "f617457c-8f1e-4508-926e-bb6b77ea7444"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:37:28 crc kubenswrapper[4844]: I0126 14:37:28.913936 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "test-operator-logs") pod "f617457c-8f1e-4508-926e-bb6b77ea7444" (UID: "f617457c-8f1e-4508-926e-bb6b77ea7444"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 26 14:37:28 crc kubenswrapper[4844]: I0126 14:37:28.914101 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f617457c-8f1e-4508-926e-bb6b77ea7444-kube-api-access-trrvz" (OuterVolumeSpecName: "kube-api-access-trrvz") pod "f617457c-8f1e-4508-926e-bb6b77ea7444" (UID: "f617457c-8f1e-4508-926e-bb6b77ea7444"). InnerVolumeSpecName "kube-api-access-trrvz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:37:28 crc kubenswrapper[4844]: I0126 14:37:28.915581 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f617457c-8f1e-4508-926e-bb6b77ea7444-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "f617457c-8f1e-4508-926e-bb6b77ea7444" (UID: "f617457c-8f1e-4508-926e-bb6b77ea7444"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:37:28 crc kubenswrapper[4844]: I0126 14:37:28.971327 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f617457c-8f1e-4508-926e-bb6b77ea7444-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "f617457c-8f1e-4508-926e-bb6b77ea7444" (UID: "f617457c-8f1e-4508-926e-bb6b77ea7444"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:37:28 crc kubenswrapper[4844]: I0126 14:37:28.972928 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f617457c-8f1e-4508-926e-bb6b77ea7444-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "f617457c-8f1e-4508-926e-bb6b77ea7444" (UID: "f617457c-8f1e-4508-926e-bb6b77ea7444"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:37:29 crc kubenswrapper[4844]: I0126 14:37:29.007224 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f617457c-8f1e-4508-926e-bb6b77ea7444-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "f617457c-8f1e-4508-926e-bb6b77ea7444" (UID: "f617457c-8f1e-4508-926e-bb6b77ea7444"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:37:29 crc kubenswrapper[4844]: I0126 14:37:29.007382 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/f617457c-8f1e-4508-926e-bb6b77ea7444-openstack-config\") pod \"f617457c-8f1e-4508-926e-bb6b77ea7444\" (UID: \"f617457c-8f1e-4508-926e-bb6b77ea7444\") " Jan 26 14:37:29 crc kubenswrapper[4844]: I0126 14:37:29.008049 4844 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f617457c-8f1e-4508-926e-bb6b77ea7444-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 26 14:37:29 crc kubenswrapper[4844]: I0126 14:37:29.008075 4844 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/f617457c-8f1e-4508-926e-bb6b77ea7444-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Jan 26 14:37:29 crc kubenswrapper[4844]: I0126 14:37:29.008089 4844 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f617457c-8f1e-4508-926e-bb6b77ea7444-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 14:37:29 crc kubenswrapper[4844]: I0126 14:37:29.008122 4844 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Jan 26 14:37:29 crc kubenswrapper[4844]: I0126 14:37:29.008135 4844 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/f617457c-8f1e-4508-926e-bb6b77ea7444-ca-certs\") on node \"crc\" DevicePath \"\"" Jan 26 14:37:29 crc kubenswrapper[4844]: I0126 14:37:29.008146 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-trrvz\" (UniqueName: \"kubernetes.io/projected/f617457c-8f1e-4508-926e-bb6b77ea7444-kube-api-access-trrvz\") on node \"crc\" DevicePath \"\"" Jan 26 14:37:29 crc kubenswrapper[4844]: W0126 14:37:29.009357 4844 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/f617457c-8f1e-4508-926e-bb6b77ea7444/volumes/kubernetes.io~configmap/openstack-config Jan 26 14:37:29 crc kubenswrapper[4844]: I0126 14:37:29.009381 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f617457c-8f1e-4508-926e-bb6b77ea7444-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "f617457c-8f1e-4508-926e-bb6b77ea7444" (UID: "f617457c-8f1e-4508-926e-bb6b77ea7444"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:37:29 crc kubenswrapper[4844]: I0126 14:37:29.017752 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f617457c-8f1e-4508-926e-bb6b77ea7444-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "f617457c-8f1e-4508-926e-bb6b77ea7444" (UID: "f617457c-8f1e-4508-926e-bb6b77ea7444"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:37:29 crc kubenswrapper[4844]: I0126 14:37:29.040129 4844 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Jan 26 14:37:29 crc kubenswrapper[4844]: I0126 14:37:29.110171 4844 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f617457c-8f1e-4508-926e-bb6b77ea7444-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 26 14:37:29 crc kubenswrapper[4844]: I0126 14:37:29.110638 4844 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/f617457c-8f1e-4508-926e-bb6b77ea7444-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 26 14:37:29 crc kubenswrapper[4844]: I0126 14:37:29.110653 4844 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Jan 26 14:37:29 crc kubenswrapper[4844]: I0126 14:37:29.349346 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"f617457c-8f1e-4508-926e-bb6b77ea7444","Type":"ContainerDied","Data":"4cbc8cbd3237ba23738eb4e3e827c47fd792e471d4c4100dceada17ef6fcdb90"} Jan 26 14:37:29 crc kubenswrapper[4844]: I0126 14:37:29.349391 4844 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4cbc8cbd3237ba23738eb4e3e827c47fd792e471d4c4100dceada17ef6fcdb90" Jan 26 14:37:29 crc kubenswrapper[4844]: I0126 14:37:29.349472 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 26 14:37:41 crc kubenswrapper[4844]: I0126 14:37:41.456456 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 26 14:37:41 crc kubenswrapper[4844]: E0126 14:37:41.457484 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72e8effa-04fc-44ec-8c29-661788db235f" containerName="registry-server" Jan 26 14:37:41 crc kubenswrapper[4844]: I0126 14:37:41.457503 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="72e8effa-04fc-44ec-8c29-661788db235f" containerName="registry-server" Jan 26 14:37:41 crc kubenswrapper[4844]: E0126 14:37:41.457520 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="555fee01-0d10-4dcb-8604-01869ba6859a" containerName="extract-utilities" Jan 26 14:37:41 crc kubenswrapper[4844]: I0126 14:37:41.457529 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="555fee01-0d10-4dcb-8604-01869ba6859a" containerName="extract-utilities" Jan 26 14:37:41 crc kubenswrapper[4844]: E0126 14:37:41.457550 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72e8effa-04fc-44ec-8c29-661788db235f" containerName="extract-utilities" Jan 26 14:37:41 crc kubenswrapper[4844]: I0126 14:37:41.457558 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="72e8effa-04fc-44ec-8c29-661788db235f" containerName="extract-utilities" Jan 26 14:37:41 crc kubenswrapper[4844]: E0126 14:37:41.457572 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="555fee01-0d10-4dcb-8604-01869ba6859a" containerName="extract-content" Jan 26 14:37:41 crc kubenswrapper[4844]: I0126 14:37:41.457582 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="555fee01-0d10-4dcb-8604-01869ba6859a" containerName="extract-content" Jan 26 14:37:41 
crc kubenswrapper[4844]: E0126 14:37:41.457621 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72e8effa-04fc-44ec-8c29-661788db235f" containerName="extract-content" Jan 26 14:37:41 crc kubenswrapper[4844]: I0126 14:37:41.457632 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="72e8effa-04fc-44ec-8c29-661788db235f" containerName="extract-content" Jan 26 14:37:41 crc kubenswrapper[4844]: E0126 14:37:41.457651 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="555fee01-0d10-4dcb-8604-01869ba6859a" containerName="registry-server" Jan 26 14:37:41 crc kubenswrapper[4844]: I0126 14:37:41.457659 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="555fee01-0d10-4dcb-8604-01869ba6859a" containerName="registry-server" Jan 26 14:37:41 crc kubenswrapper[4844]: E0126 14:37:41.457690 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f617457c-8f1e-4508-926e-bb6b77ea7444" containerName="tempest-tests-tempest-tests-runner" Jan 26 14:37:41 crc kubenswrapper[4844]: I0126 14:37:41.457698 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="f617457c-8f1e-4508-926e-bb6b77ea7444" containerName="tempest-tests-tempest-tests-runner" Jan 26 14:37:41 crc kubenswrapper[4844]: I0126 14:37:41.457957 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="f617457c-8f1e-4508-926e-bb6b77ea7444" containerName="tempest-tests-tempest-tests-runner" Jan 26 14:37:41 crc kubenswrapper[4844]: I0126 14:37:41.457982 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="555fee01-0d10-4dcb-8604-01869ba6859a" containerName="registry-server" Jan 26 14:37:41 crc kubenswrapper[4844]: I0126 14:37:41.457994 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="72e8effa-04fc-44ec-8c29-661788db235f" containerName="registry-server" Jan 26 14:37:41 crc kubenswrapper[4844]: I0126 14:37:41.458862 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 26 14:37:41 crc kubenswrapper[4844]: I0126 14:37:41.461372 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-j2592" Jan 26 14:37:41 crc kubenswrapper[4844]: I0126 14:37:41.467815 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 26 14:37:41 crc kubenswrapper[4844]: I0126 14:37:41.502785 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jxfd\" (UniqueName: \"kubernetes.io/projected/a4920a59-74e4-4ac3-b437-3dbd074758d7-kube-api-access-4jxfd\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"a4920a59-74e4-4ac3-b437-3dbd074758d7\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 26 14:37:41 crc kubenswrapper[4844]: I0126 14:37:41.503106 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"a4920a59-74e4-4ac3-b437-3dbd074758d7\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 26 14:37:41 crc kubenswrapper[4844]: I0126 14:37:41.605238 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"a4920a59-74e4-4ac3-b437-3dbd074758d7\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 26 14:37:41 crc kubenswrapper[4844]: I0126 14:37:41.605394 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jxfd\" (UniqueName: \"kubernetes.io/projected/a4920a59-74e4-4ac3-b437-3dbd074758d7-kube-api-access-4jxfd\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"a4920a59-74e4-4ac3-b437-3dbd074758d7\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 26 14:37:41 crc kubenswrapper[4844]: I0126 14:37:41.605687 4844 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"a4920a59-74e4-4ac3-b437-3dbd074758d7\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 26 14:37:41 crc kubenswrapper[4844]: I0126 14:37:41.625550 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jxfd\" (UniqueName: \"kubernetes.io/projected/a4920a59-74e4-4ac3-b437-3dbd074758d7-kube-api-access-4jxfd\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"a4920a59-74e4-4ac3-b437-3dbd074758d7\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 26 14:37:41 crc kubenswrapper[4844]: I0126 14:37:41.640317 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"a4920a59-74e4-4ac3-b437-3dbd074758d7\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 26 14:37:41 crc 
kubenswrapper[4844]: I0126 14:37:41.789289 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 26 14:37:42 crc kubenswrapper[4844]: I0126 14:37:42.314979 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 26 14:37:42 crc kubenswrapper[4844]: I0126 14:37:42.492009 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"a4920a59-74e4-4ac3-b437-3dbd074758d7","Type":"ContainerStarted","Data":"fe4d96fc4226665ed5622d41e82a0dceed1ed43b600eda07fd126889b09d951d"} Jan 26 14:37:44 crc kubenswrapper[4844]: I0126 14:37:44.521101 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"a4920a59-74e4-4ac3-b437-3dbd074758d7","Type":"ContainerStarted","Data":"e9e6febf82872ada73cbf56024356392dd5a2250d9860101371c5e5a15dc8b1b"} Jan 26 14:37:44 crc kubenswrapper[4844]: I0126 14:37:44.545754 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=1.652725261 podStartE2EDuration="3.545577943s" podCreationTimestamp="2026-01-26 14:37:41 +0000 UTC" firstStartedPulling="2026-01-26 14:37:42.327387875 +0000 UTC m=+6839.260755487" lastFinishedPulling="2026-01-26 14:37:44.220240557 +0000 UTC m=+6841.153608169" observedRunningTime="2026-01-26 14:37:44.533757425 +0000 UTC m=+6841.467125057" watchObservedRunningTime="2026-01-26 14:37:44.545577943 +0000 UTC m=+6841.478945555" Jan 26 14:38:06 crc kubenswrapper[4844]: I0126 14:38:06.364457 4844 patch_prober.go:28] interesting pod/machine-config-daemon-j7r9j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 14:38:06 crc kubenswrapper[4844]: I0126 14:38:06.365011 4844 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 14:38:06 crc kubenswrapper[4844]: I0126 14:38:06.515517 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jzmk8"] Jan 26 14:38:06 crc kubenswrapper[4844]: I0126 14:38:06.519187 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jzmk8" Jan 26 14:38:06 crc kubenswrapper[4844]: I0126 14:38:06.527236 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jzmk8"] Jan 26 14:38:06 crc kubenswrapper[4844]: I0126 14:38:06.706881 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e27db865-6e34-4b87-8ff2-03d0554daae5-utilities\") pod \"redhat-operators-jzmk8\" (UID: \"e27db865-6e34-4b87-8ff2-03d0554daae5\") " pod="openshift-marketplace/redhat-operators-jzmk8" Jan 26 14:38:06 crc kubenswrapper[4844]: I0126 14:38:06.707173 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e27db865-6e34-4b87-8ff2-03d0554daae5-catalog-content\") pod \"redhat-operators-jzmk8\" (UID: \"e27db865-6e34-4b87-8ff2-03d0554daae5\") " pod="openshift-marketplace/redhat-operators-jzmk8" Jan 26 14:38:06 crc kubenswrapper[4844]: I0126 14:38:06.707245 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5sj8\" (UniqueName: \"kubernetes.io/projected/e27db865-6e34-4b87-8ff2-03d0554daae5-kube-api-access-h5sj8\") pod \"redhat-operators-jzmk8\" (UID: \"e27db865-6e34-4b87-8ff2-03d0554daae5\") " pod="openshift-marketplace/redhat-operators-jzmk8" Jan 26 14:38:06 crc kubenswrapper[4844]: I0126 14:38:06.809518 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e27db865-6e34-4b87-8ff2-03d0554daae5-utilities\") pod \"redhat-operators-jzmk8\" (UID: \"e27db865-6e34-4b87-8ff2-03d0554daae5\") " pod="openshift-marketplace/redhat-operators-jzmk8" Jan 26 14:38:06 crc kubenswrapper[4844]: I0126 14:38:06.809635 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e27db865-6e34-4b87-8ff2-03d0554daae5-catalog-content\") pod \"redhat-operators-jzmk8\" (UID: \"e27db865-6e34-4b87-8ff2-03d0554daae5\") " pod="openshift-marketplace/redhat-operators-jzmk8" Jan 26 14:38:06 crc kubenswrapper[4844]: I0126 14:38:06.809706 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5sj8\" (UniqueName: \"kubernetes.io/projected/e27db865-6e34-4b87-8ff2-03d0554daae5-kube-api-access-h5sj8\") pod \"redhat-operators-jzmk8\" (UID: \"e27db865-6e34-4b87-8ff2-03d0554daae5\") " pod="openshift-marketplace/redhat-operators-jzmk8" Jan 26 14:38:06 crc kubenswrapper[4844]: I0126 14:38:06.810244 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e27db865-6e34-4b87-8ff2-03d0554daae5-catalog-content\") pod \"redhat-operators-jzmk8\" (UID: \"e27db865-6e34-4b87-8ff2-03d0554daae5\") " pod="openshift-marketplace/redhat-operators-jzmk8" Jan 26 14:38:06 crc kubenswrapper[4844]: I0126 14:38:06.810372 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e27db865-6e34-4b87-8ff2-03d0554daae5-utilities\") pod \"redhat-operators-jzmk8\" (UID: \"e27db865-6e34-4b87-8ff2-03d0554daae5\") " pod="openshift-marketplace/redhat-operators-jzmk8" Jan 26 14:38:06 crc kubenswrapper[4844]: I0126 14:38:06.835085 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-h5sj8\" (UniqueName: \"kubernetes.io/projected/e27db865-6e34-4b87-8ff2-03d0554daae5-kube-api-access-h5sj8\") pod \"redhat-operators-jzmk8\" (UID: \"e27db865-6e34-4b87-8ff2-03d0554daae5\") " pod="openshift-marketplace/redhat-operators-jzmk8" Jan 26 14:38:06 crc kubenswrapper[4844]: I0126 14:38:06.849091 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jzmk8" Jan 26 14:38:07 crc kubenswrapper[4844]: I0126 14:38:07.324224 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jzmk8"] Jan 26 14:38:07 crc kubenswrapper[4844]: W0126 14:38:07.330677 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode27db865_6e34_4b87_8ff2_03d0554daae5.slice/crio-c79ca2977eae04b6be2b7c4dad968d03679cbada79c3b5c9ab4e644dea54395d WatchSource:0}: Error finding container c79ca2977eae04b6be2b7c4dad968d03679cbada79c3b5c9ab4e644dea54395d: Status 404 returned error can't find the container with id c79ca2977eae04b6be2b7c4dad968d03679cbada79c3b5c9ab4e644dea54395d Jan 26 14:38:07 crc kubenswrapper[4844]: I0126 14:38:07.770333 4844 generic.go:334] "Generic (PLEG): container finished" podID="e27db865-6e34-4b87-8ff2-03d0554daae5" containerID="121610d4d32f4c2251b256dc2ec431f4f7d61d81a6f1de57264074cc112283bf" exitCode=0 Jan 26 14:38:07 crc kubenswrapper[4844]: I0126 14:38:07.770423 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jzmk8" event={"ID":"e27db865-6e34-4b87-8ff2-03d0554daae5","Type":"ContainerDied","Data":"121610d4d32f4c2251b256dc2ec431f4f7d61d81a6f1de57264074cc112283bf"} Jan 26 14:38:07 crc kubenswrapper[4844]: I0126 14:38:07.770652 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jzmk8" event={"ID":"e27db865-6e34-4b87-8ff2-03d0554daae5","Type":"ContainerStarted","Data":"c79ca2977eae04b6be2b7c4dad968d03679cbada79c3b5c9ab4e644dea54395d"} Jan 26 14:38:08 crc kubenswrapper[4844]: I0126 14:38:08.782676 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jzmk8" event={"ID":"e27db865-6e34-4b87-8ff2-03d0554daae5","Type":"ContainerStarted","Data":"7aba1a6cbcb05b849a2fe6fd4c30e61dc6ff92469b0bc440d3a70b6e2b1a9b17"} Jan 26 14:38:10 crc kubenswrapper[4844]: I0126 14:38:10.162076 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-cwwwg/must-gather-dhvsk"] Jan 26 14:38:10 crc kubenswrapper[4844]: I0126 14:38:10.164127 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-cwwwg/must-gather-dhvsk" Jan 26 14:38:10 crc kubenswrapper[4844]: I0126 14:38:10.166968 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-cwwwg"/"default-dockercfg-74pn5" Jan 26 14:38:10 crc kubenswrapper[4844]: I0126 14:38:10.167061 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-cwwwg"/"openshift-service-ca.crt" Jan 26 14:38:10 crc kubenswrapper[4844]: I0126 14:38:10.167743 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-cwwwg"/"kube-root-ca.crt" Jan 26 14:38:10 crc kubenswrapper[4844]: I0126 14:38:10.180105 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-cwwwg/must-gather-dhvsk"] Jan 26 14:38:10 crc kubenswrapper[4844]: I0126 14:38:10.292096 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhr8g\" (UniqueName: \"kubernetes.io/projected/1674b4f8-c352-44c3-a14a-f81e006c3586-kube-api-access-bhr8g\") pod \"must-gather-dhvsk\" (UID: \"1674b4f8-c352-44c3-a14a-f81e006c3586\") " pod="openshift-must-gather-cwwwg/must-gather-dhvsk" Jan 26 14:38:10 crc kubenswrapper[4844]: I0126 14:38:10.292311 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/1674b4f8-c352-44c3-a14a-f81e006c3586-must-gather-output\") pod \"must-gather-dhvsk\" (UID: \"1674b4f8-c352-44c3-a14a-f81e006c3586\") " pod="openshift-must-gather-cwwwg/must-gather-dhvsk" Jan 26 14:38:10 crc kubenswrapper[4844]: I0126 14:38:10.394894 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/1674b4f8-c352-44c3-a14a-f81e006c3586-must-gather-output\") pod \"must-gather-dhvsk\" (UID: \"1674b4f8-c352-44c3-a14a-f81e006c3586\") " pod="openshift-must-gather-cwwwg/must-gather-dhvsk" Jan 26 14:38:10 crc kubenswrapper[4844]: I0126 14:38:10.395130 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhr8g\" (UniqueName: \"kubernetes.io/projected/1674b4f8-c352-44c3-a14a-f81e006c3586-kube-api-access-bhr8g\") pod \"must-gather-dhvsk\" (UID: \"1674b4f8-c352-44c3-a14a-f81e006c3586\") " pod="openshift-must-gather-cwwwg/must-gather-dhvsk" Jan 26 14:38:10 crc kubenswrapper[4844]: I0126 14:38:10.395337 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/1674b4f8-c352-44c3-a14a-f81e006c3586-must-gather-output\") pod \"must-gather-dhvsk\" (UID: \"1674b4f8-c352-44c3-a14a-f81e006c3586\") " pod="openshift-must-gather-cwwwg/must-gather-dhvsk" Jan 26 14:38:10 crc kubenswrapper[4844]: I0126 14:38:10.417618 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhr8g\" (UniqueName: \"kubernetes.io/projected/1674b4f8-c352-44c3-a14a-f81e006c3586-kube-api-access-bhr8g\") pod \"must-gather-dhvsk\" (UID: \"1674b4f8-c352-44c3-a14a-f81e006c3586\") " pod="openshift-must-gather-cwwwg/must-gather-dhvsk" Jan 26 14:38:10 crc kubenswrapper[4844]: I0126 14:38:10.479631 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-cwwwg/must-gather-dhvsk" Jan 26 14:38:10 crc kubenswrapper[4844]: I0126 14:38:10.931455 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-cwwwg/must-gather-dhvsk"] Jan 26 14:38:11 crc kubenswrapper[4844]: I0126 14:38:11.808698 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cwwwg/must-gather-dhvsk" event={"ID":"1674b4f8-c352-44c3-a14a-f81e006c3586","Type":"ContainerStarted","Data":"cb2b87b44fbccf0a4ef4c1fe78b812913a9b15608192633011cabcb5f5ca84e3"} Jan 26 14:38:13 crc kubenswrapper[4844]: I0126 14:38:13.833547 4844 generic.go:334] "Generic (PLEG): container finished" podID="e27db865-6e34-4b87-8ff2-03d0554daae5" containerID="7aba1a6cbcb05b849a2fe6fd4c30e61dc6ff92469b0bc440d3a70b6e2b1a9b17" exitCode=0 Jan 26 14:38:13 crc kubenswrapper[4844]: I0126 14:38:13.833625 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jzmk8" event={"ID":"e27db865-6e34-4b87-8ff2-03d0554daae5","Type":"ContainerDied","Data":"7aba1a6cbcb05b849a2fe6fd4c30e61dc6ff92469b0bc440d3a70b6e2b1a9b17"} Jan 26 14:38:15 crc kubenswrapper[4844]: I0126 14:38:15.872767 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jzmk8" event={"ID":"e27db865-6e34-4b87-8ff2-03d0554daae5","Type":"ContainerStarted","Data":"9ff5afa5d821164119f1b2cb7ae674b3b8a4d45fb1f5f88297acd7eaf6db13b8"} Jan 26 14:38:15 crc kubenswrapper[4844]: I0126 14:38:15.897111 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jzmk8" podStartSLOduration=2.372570059 podStartE2EDuration="9.897086893s" podCreationTimestamp="2026-01-26 14:38:06 +0000 UTC" firstStartedPulling="2026-01-26 14:38:07.772266522 +0000 UTC m=+6864.705634154" lastFinishedPulling="2026-01-26 14:38:15.296783376 +0000 UTC m=+6872.230150988" observedRunningTime="2026-01-26 14:38:15.892524951 +0000 UTC m=+6872.825892563" watchObservedRunningTime="2026-01-26 14:38:15.897086893 +0000 UTC m=+6872.830454505" Jan 26 14:38:16 crc kubenswrapper[4844]: I0126 14:38:16.850083 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jzmk8" Jan 26 14:38:16 crc kubenswrapper[4844]: I0126 14:38:16.850464 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jzmk8" Jan 26 14:38:17 crc kubenswrapper[4844]: I0126 14:38:17.895004 4844 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jzmk8" podUID="e27db865-6e34-4b87-8ff2-03d0554daae5" containerName="registry-server" probeResult="failure" output=< Jan 26 14:38:17 crc kubenswrapper[4844]: timeout: failed to connect service ":50051" within 1s Jan 26 14:38:17 crc kubenswrapper[4844]: > Jan 26 14:38:23 crc kubenswrapper[4844]: I0126 14:38:23.968887 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cwwwg/must-gather-dhvsk" event={"ID":"1674b4f8-c352-44c3-a14a-f81e006c3586","Type":"ContainerStarted","Data":"10405af140884f01e92023eb147986bc6696b13c12350b1eae03ca6376d1e90f"} Jan 26 14:38:23 crc kubenswrapper[4844]: I0126 14:38:23.969665 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cwwwg/must-gather-dhvsk" event={"ID":"1674b4f8-c352-44c3-a14a-f81e006c3586","Type":"ContainerStarted","Data":"4a69a08b8984212e046f23d1d0ae2f908bde143d91582d552c7c9ea8404e9554"} Jan 26 14:38:23 crc 
kubenswrapper[4844]: I0126 14:38:23.997775 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-cwwwg/must-gather-dhvsk" podStartSLOduration=1.834819695 podStartE2EDuration="13.997748516s" podCreationTimestamp="2026-01-26 14:38:10 +0000 UTC" firstStartedPulling="2026-01-26 14:38:10.938368283 +0000 UTC m=+6867.871735895" lastFinishedPulling="2026-01-26 14:38:23.101297094 +0000 UTC m=+6880.034664716" observedRunningTime="2026-01-26 14:38:23.991573066 +0000 UTC m=+6880.924940688" watchObservedRunningTime="2026-01-26 14:38:23.997748516 +0000 UTC m=+6880.931116148" Jan 26 14:38:27 crc kubenswrapper[4844]: I0126 14:38:27.191647 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-cwwwg/crc-debug-75gst"] Jan 26 14:38:27 crc kubenswrapper[4844]: I0126 14:38:27.194354 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-cwwwg/crc-debug-75gst" Jan 26 14:38:27 crc kubenswrapper[4844]: I0126 14:38:27.292115 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e4772ac3-d441-467c-9f50-161ac1604fc9-host\") pod \"crc-debug-75gst\" (UID: \"e4772ac3-d441-467c-9f50-161ac1604fc9\") " pod="openshift-must-gather-cwwwg/crc-debug-75gst" Jan 26 14:38:27 crc kubenswrapper[4844]: I0126 14:38:27.292296 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rz9xl\" (UniqueName: \"kubernetes.io/projected/e4772ac3-d441-467c-9f50-161ac1604fc9-kube-api-access-rz9xl\") pod \"crc-debug-75gst\" (UID: \"e4772ac3-d441-467c-9f50-161ac1604fc9\") " pod="openshift-must-gather-cwwwg/crc-debug-75gst" Jan 26 14:38:27 crc kubenswrapper[4844]: I0126 14:38:27.393828 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e4772ac3-d441-467c-9f50-161ac1604fc9-host\") pod \"crc-debug-75gst\" (UID: \"e4772ac3-d441-467c-9f50-161ac1604fc9\") " pod="openshift-must-gather-cwwwg/crc-debug-75gst" Jan 26 14:38:27 crc kubenswrapper[4844]: I0126 14:38:27.393965 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rz9xl\" (UniqueName: \"kubernetes.io/projected/e4772ac3-d441-467c-9f50-161ac1604fc9-kube-api-access-rz9xl\") pod \"crc-debug-75gst\" (UID: \"e4772ac3-d441-467c-9f50-161ac1604fc9\") " pod="openshift-must-gather-cwwwg/crc-debug-75gst" Jan 26 14:38:27 crc kubenswrapper[4844]: I0126 14:38:27.394007 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e4772ac3-d441-467c-9f50-161ac1604fc9-host\") pod \"crc-debug-75gst\" (UID: \"e4772ac3-d441-467c-9f50-161ac1604fc9\") " pod="openshift-must-gather-cwwwg/crc-debug-75gst" Jan 26 14:38:27 crc kubenswrapper[4844]: I0126 14:38:27.413201 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rz9xl\" (UniqueName: \"kubernetes.io/projected/e4772ac3-d441-467c-9f50-161ac1604fc9-kube-api-access-rz9xl\") pod \"crc-debug-75gst\" (UID: \"e4772ac3-d441-467c-9f50-161ac1604fc9\") " pod="openshift-must-gather-cwwwg/crc-debug-75gst" Jan 26 14:38:27 crc kubenswrapper[4844]: I0126 14:38:27.513760 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-cwwwg/crc-debug-75gst" Jan 26 14:38:27 crc kubenswrapper[4844]: W0126 14:38:27.545336 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode4772ac3_d441_467c_9f50_161ac1604fc9.slice/crio-9f351249d5925589dec09d2f3dc8c64f16074043cf9ec8b26711b2e721f338cd WatchSource:0}: Error finding container 9f351249d5925589dec09d2f3dc8c64f16074043cf9ec8b26711b2e721f338cd: Status 404 returned error can't find the container with id 9f351249d5925589dec09d2f3dc8c64f16074043cf9ec8b26711b2e721f338cd Jan 26 14:38:27 crc kubenswrapper[4844]: I0126 14:38:27.903925 4844 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jzmk8" podUID="e27db865-6e34-4b87-8ff2-03d0554daae5" containerName="registry-server" probeResult="failure" output=< Jan 26 14:38:27 crc kubenswrapper[4844]: timeout: failed to connect service ":50051" within 1s Jan 26 14:38:27 crc kubenswrapper[4844]: > Jan 26 14:38:28 crc kubenswrapper[4844]: I0126 14:38:28.006005 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cwwwg/crc-debug-75gst" event={"ID":"e4772ac3-d441-467c-9f50-161ac1604fc9","Type":"ContainerStarted","Data":"9f351249d5925589dec09d2f3dc8c64f16074043cf9ec8b26711b2e721f338cd"} Jan 26 14:38:36 crc kubenswrapper[4844]: I0126 14:38:36.364538 4844 patch_prober.go:28] interesting pod/machine-config-daemon-j7r9j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 14:38:36 crc kubenswrapper[4844]: I0126 14:38:36.365020 4844 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 14:38:36 crc kubenswrapper[4844]: I0126 14:38:36.907214 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jzmk8" Jan 26 14:38:36 crc kubenswrapper[4844]: I0126 14:38:36.961272 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jzmk8" Jan 26 14:38:37 crc kubenswrapper[4844]: I0126 14:38:37.706132 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jzmk8"] Jan 26 14:38:38 crc kubenswrapper[4844]: I0126 14:38:38.120768 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jzmk8" podUID="e27db865-6e34-4b87-8ff2-03d0554daae5" containerName="registry-server" containerID="cri-o://9ff5afa5d821164119f1b2cb7ae674b3b8a4d45fb1f5f88297acd7eaf6db13b8" gracePeriod=2 Jan 26 14:38:38 crc kubenswrapper[4844]: I0126 14:38:38.644510 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jzmk8" Jan 26 14:38:38 crc kubenswrapper[4844]: I0126 14:38:38.742436 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h5sj8\" (UniqueName: \"kubernetes.io/projected/e27db865-6e34-4b87-8ff2-03d0554daae5-kube-api-access-h5sj8\") pod \"e27db865-6e34-4b87-8ff2-03d0554daae5\" (UID: \"e27db865-6e34-4b87-8ff2-03d0554daae5\") " Jan 26 14:38:38 crc kubenswrapper[4844]: I0126 14:38:38.742664 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e27db865-6e34-4b87-8ff2-03d0554daae5-utilities\") pod \"e27db865-6e34-4b87-8ff2-03d0554daae5\" (UID: \"e27db865-6e34-4b87-8ff2-03d0554daae5\") " Jan 26 14:38:38 crc kubenswrapper[4844]: I0126 14:38:38.742826 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e27db865-6e34-4b87-8ff2-03d0554daae5-catalog-content\") pod \"e27db865-6e34-4b87-8ff2-03d0554daae5\" (UID: \"e27db865-6e34-4b87-8ff2-03d0554daae5\") " Jan 26 14:38:38 crc kubenswrapper[4844]: I0126 14:38:38.744300 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e27db865-6e34-4b87-8ff2-03d0554daae5-utilities" (OuterVolumeSpecName: "utilities") pod "e27db865-6e34-4b87-8ff2-03d0554daae5" (UID: "e27db865-6e34-4b87-8ff2-03d0554daae5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:38:38 crc kubenswrapper[4844]: I0126 14:38:38.749314 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e27db865-6e34-4b87-8ff2-03d0554daae5-kube-api-access-h5sj8" (OuterVolumeSpecName: "kube-api-access-h5sj8") pod "e27db865-6e34-4b87-8ff2-03d0554daae5" (UID: "e27db865-6e34-4b87-8ff2-03d0554daae5"). InnerVolumeSpecName "kube-api-access-h5sj8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:38:38 crc kubenswrapper[4844]: I0126 14:38:38.840325 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e27db865-6e34-4b87-8ff2-03d0554daae5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e27db865-6e34-4b87-8ff2-03d0554daae5" (UID: "e27db865-6e34-4b87-8ff2-03d0554daae5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:38:38 crc kubenswrapper[4844]: I0126 14:38:38.844949 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h5sj8\" (UniqueName: \"kubernetes.io/projected/e27db865-6e34-4b87-8ff2-03d0554daae5-kube-api-access-h5sj8\") on node \"crc\" DevicePath \"\"" Jan 26 14:38:38 crc kubenswrapper[4844]: I0126 14:38:38.844977 4844 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e27db865-6e34-4b87-8ff2-03d0554daae5-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 14:38:38 crc kubenswrapper[4844]: I0126 14:38:38.844986 4844 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e27db865-6e34-4b87-8ff2-03d0554daae5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 14:38:39 crc kubenswrapper[4844]: I0126 14:38:39.133136 4844 generic.go:334] "Generic (PLEG): container finished" podID="e27db865-6e34-4b87-8ff2-03d0554daae5" containerID="9ff5afa5d821164119f1b2cb7ae674b3b8a4d45fb1f5f88297acd7eaf6db13b8" exitCode=0 Jan 26 14:38:39 crc kubenswrapper[4844]: I0126 14:38:39.133216 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jzmk8" event={"ID":"e27db865-6e34-4b87-8ff2-03d0554daae5","Type":"ContainerDied","Data":"9ff5afa5d821164119f1b2cb7ae674b3b8a4d45fb1f5f88297acd7eaf6db13b8"} Jan 26 14:38:39 crc kubenswrapper[4844]: I0126 14:38:39.133514 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jzmk8" event={"ID":"e27db865-6e34-4b87-8ff2-03d0554daae5","Type":"ContainerDied","Data":"c79ca2977eae04b6be2b7c4dad968d03679cbada79c3b5c9ab4e644dea54395d"} Jan 26 14:38:39 crc kubenswrapper[4844]: I0126 14:38:39.133249 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jzmk8" Jan 26 14:38:39 crc kubenswrapper[4844]: I0126 14:38:39.133545 4844 scope.go:117] "RemoveContainer" containerID="9ff5afa5d821164119f1b2cb7ae674b3b8a4d45fb1f5f88297acd7eaf6db13b8" Jan 26 14:38:39 crc kubenswrapper[4844]: I0126 14:38:39.137999 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cwwwg/crc-debug-75gst" event={"ID":"e4772ac3-d441-467c-9f50-161ac1604fc9","Type":"ContainerStarted","Data":"f09b14eab4abf34efec4429cc8d2f18629a17ec34d81e6fa6a6dbab439131a23"} Jan 26 14:38:39 crc kubenswrapper[4844]: I0126 14:38:39.163783 4844 scope.go:117] "RemoveContainer" containerID="7aba1a6cbcb05b849a2fe6fd4c30e61dc6ff92469b0bc440d3a70b6e2b1a9b17" Jan 26 14:38:39 crc kubenswrapper[4844]: I0126 14:38:39.168959 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-cwwwg/crc-debug-75gst" podStartSLOduration=1.442673041 podStartE2EDuration="12.168939051s" podCreationTimestamp="2026-01-26 14:38:27 +0000 UTC" firstStartedPulling="2026-01-26 14:38:27.54791111 +0000 UTC m=+6884.481278722" lastFinishedPulling="2026-01-26 14:38:38.27417712 +0000 UTC m=+6895.207544732" observedRunningTime="2026-01-26 14:38:39.156941369 +0000 UTC m=+6896.090308991" watchObservedRunningTime="2026-01-26 14:38:39.168939051 +0000 UTC m=+6896.102306673" Jan 26 14:38:39 crc kubenswrapper[4844]: I0126 14:38:39.186478 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jzmk8"] Jan 26 14:38:39 crc kubenswrapper[4844]: I0126 14:38:39.196311 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jzmk8"] Jan 26 14:38:39 crc kubenswrapper[4844]: I0126 14:38:39.197226 4844 scope.go:117] "RemoveContainer" containerID="121610d4d32f4c2251b256dc2ec431f4f7d61d81a6f1de57264074cc112283bf" Jan 26 14:38:39 crc kubenswrapper[4844]: I0126 14:38:39.219713 4844 scope.go:117] "RemoveContainer" containerID="9ff5afa5d821164119f1b2cb7ae674b3b8a4d45fb1f5f88297acd7eaf6db13b8" Jan 26 14:38:39 crc kubenswrapper[4844]: E0126 14:38:39.220251 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ff5afa5d821164119f1b2cb7ae674b3b8a4d45fb1f5f88297acd7eaf6db13b8\": container with ID starting with 9ff5afa5d821164119f1b2cb7ae674b3b8a4d45fb1f5f88297acd7eaf6db13b8 not found: ID does not exist" containerID="9ff5afa5d821164119f1b2cb7ae674b3b8a4d45fb1f5f88297acd7eaf6db13b8" Jan 26 14:38:39 crc kubenswrapper[4844]: I0126 14:38:39.220328 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ff5afa5d821164119f1b2cb7ae674b3b8a4d45fb1f5f88297acd7eaf6db13b8"} err="failed to get container status \"9ff5afa5d821164119f1b2cb7ae674b3b8a4d45fb1f5f88297acd7eaf6db13b8\": rpc error: code = NotFound desc = could not find container \"9ff5afa5d821164119f1b2cb7ae674b3b8a4d45fb1f5f88297acd7eaf6db13b8\": container with ID starting with 9ff5afa5d821164119f1b2cb7ae674b3b8a4d45fb1f5f88297acd7eaf6db13b8 not found: ID does not exist" Jan 26 14:38:39 crc kubenswrapper[4844]: I0126 14:38:39.220387 4844 scope.go:117] "RemoveContainer" containerID="7aba1a6cbcb05b849a2fe6fd4c30e61dc6ff92469b0bc440d3a70b6e2b1a9b17" Jan 26 14:38:39 crc kubenswrapper[4844]: E0126 14:38:39.220827 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"7aba1a6cbcb05b849a2fe6fd4c30e61dc6ff92469b0bc440d3a70b6e2b1a9b17\": container with ID starting with 7aba1a6cbcb05b849a2fe6fd4c30e61dc6ff92469b0bc440d3a70b6e2b1a9b17 not found: ID does not exist" containerID="7aba1a6cbcb05b849a2fe6fd4c30e61dc6ff92469b0bc440d3a70b6e2b1a9b17" Jan 26 14:38:39 crc kubenswrapper[4844]: I0126 14:38:39.220878 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7aba1a6cbcb05b849a2fe6fd4c30e61dc6ff92469b0bc440d3a70b6e2b1a9b17"} err="failed to get container status \"7aba1a6cbcb05b849a2fe6fd4c30e61dc6ff92469b0bc440d3a70b6e2b1a9b17\": rpc error: code = NotFound desc = could not find container \"7aba1a6cbcb05b849a2fe6fd4c30e61dc6ff92469b0bc440d3a70b6e2b1a9b17\": container with ID starting with 7aba1a6cbcb05b849a2fe6fd4c30e61dc6ff92469b0bc440d3a70b6e2b1a9b17 not found: ID does not exist" Jan 26 14:38:39 crc kubenswrapper[4844]: I0126 14:38:39.220898 4844 scope.go:117] "RemoveContainer" containerID="121610d4d32f4c2251b256dc2ec431f4f7d61d81a6f1de57264074cc112283bf" Jan 26 14:38:39 crc kubenswrapper[4844]: E0126 14:38:39.221203 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"121610d4d32f4c2251b256dc2ec431f4f7d61d81a6f1de57264074cc112283bf\": container with ID starting with 121610d4d32f4c2251b256dc2ec431f4f7d61d81a6f1de57264074cc112283bf not found: ID does not exist" containerID="121610d4d32f4c2251b256dc2ec431f4f7d61d81a6f1de57264074cc112283bf" Jan 26 14:38:39 crc kubenswrapper[4844]: I0126 14:38:39.221235 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"121610d4d32f4c2251b256dc2ec431f4f7d61d81a6f1de57264074cc112283bf"} err="failed to get container status \"121610d4d32f4c2251b256dc2ec431f4f7d61d81a6f1de57264074cc112283bf\": rpc error: code = NotFound desc = could not find container \"121610d4d32f4c2251b256dc2ec431f4f7d61d81a6f1de57264074cc112283bf\": container with ID starting with 121610d4d32f4c2251b256dc2ec431f4f7d61d81a6f1de57264074cc112283bf not found: ID does not exist" Jan 26 14:38:39 crc kubenswrapper[4844]: I0126 14:38:39.324836 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e27db865-6e34-4b87-8ff2-03d0554daae5" path="/var/lib/kubelet/pods/e27db865-6e34-4b87-8ff2-03d0554daae5/volumes" Jan 26 14:39:06 crc kubenswrapper[4844]: I0126 14:39:06.364539 4844 patch_prober.go:28] interesting pod/machine-config-daemon-j7r9j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 14:39:06 crc kubenswrapper[4844]: I0126 14:39:06.365084 4844 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 14:39:06 crc kubenswrapper[4844]: I0126 14:39:06.365124 4844 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" Jan 26 14:39:06 crc kubenswrapper[4844]: I0126 14:39:06.365832 4844 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"1b662f3876628db4e3e14d2a4b83b69e591a54d9e073c177db60f5cee583d50b"} pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 14:39:06 crc kubenswrapper[4844]: I0126 14:39:06.365882 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" containerName="machine-config-daemon" containerID="cri-o://1b662f3876628db4e3e14d2a4b83b69e591a54d9e073c177db60f5cee583d50b" gracePeriod=600 Jan 26 14:39:07 crc kubenswrapper[4844]: I0126 14:39:07.434290 4844 generic.go:334] "Generic (PLEG): container finished" podID="e3602fc7-397b-4d73-ab0c-45acc047397b" containerID="1b662f3876628db4e3e14d2a4b83b69e591a54d9e073c177db60f5cee583d50b" exitCode=0 Jan 26 14:39:07 crc kubenswrapper[4844]: I0126 14:39:07.434363 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" event={"ID":"e3602fc7-397b-4d73-ab0c-45acc047397b","Type":"ContainerDied","Data":"1b662f3876628db4e3e14d2a4b83b69e591a54d9e073c177db60f5cee583d50b"} Jan 26 14:39:07 crc kubenswrapper[4844]: I0126 14:39:07.434936 4844 scope.go:117] "RemoveContainer" containerID="288f3f15bd87ac8ddd2065e9e06186b3be457a3cedc257d2058cbbabe4ee3e74" Jan 26 14:39:08 crc kubenswrapper[4844]: I0126 14:39:08.444018 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" event={"ID":"e3602fc7-397b-4d73-ab0c-45acc047397b","Type":"ContainerStarted","Data":"c12eeb6b5514c431e6633f26b4b62fb527ef75940286b5eb2ed1e213af12264a"} Jan 26 14:39:30 crc kubenswrapper[4844]: I0126 14:39:30.671000 4844 generic.go:334] "Generic (PLEG): container finished" podID="e4772ac3-d441-467c-9f50-161ac1604fc9" containerID="f09b14eab4abf34efec4429cc8d2f18629a17ec34d81e6fa6a6dbab439131a23" exitCode=0 Jan 26 14:39:30 crc kubenswrapper[4844]: I0126 14:39:30.671139 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cwwwg/crc-debug-75gst" event={"ID":"e4772ac3-d441-467c-9f50-161ac1604fc9","Type":"ContainerDied","Data":"f09b14eab4abf34efec4429cc8d2f18629a17ec34d81e6fa6a6dbab439131a23"} Jan 26 14:39:31 crc kubenswrapper[4844]: I0126 14:39:31.817565 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-cwwwg/crc-debug-75gst" Jan 26 14:39:31 crc kubenswrapper[4844]: I0126 14:39:31.853278 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-cwwwg/crc-debug-75gst"] Jan 26 14:39:31 crc kubenswrapper[4844]: I0126 14:39:31.861585 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-cwwwg/crc-debug-75gst"] Jan 26 14:39:31 crc kubenswrapper[4844]: I0126 14:39:31.950726 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e4772ac3-d441-467c-9f50-161ac1604fc9-host\") pod \"e4772ac3-d441-467c-9f50-161ac1604fc9\" (UID: \"e4772ac3-d441-467c-9f50-161ac1604fc9\") " Jan 26 14:39:31 crc kubenswrapper[4844]: I0126 14:39:31.950861 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e4772ac3-d441-467c-9f50-161ac1604fc9-host" (OuterVolumeSpecName: "host") pod "e4772ac3-d441-467c-9f50-161ac1604fc9" (UID: "e4772ac3-d441-467c-9f50-161ac1604fc9"). 
InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 14:39:31 crc kubenswrapper[4844]: I0126 14:39:31.950888 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rz9xl\" (UniqueName: \"kubernetes.io/projected/e4772ac3-d441-467c-9f50-161ac1604fc9-kube-api-access-rz9xl\") pod \"e4772ac3-d441-467c-9f50-161ac1604fc9\" (UID: \"e4772ac3-d441-467c-9f50-161ac1604fc9\") " Jan 26 14:39:31 crc kubenswrapper[4844]: I0126 14:39:31.951387 4844 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e4772ac3-d441-467c-9f50-161ac1604fc9-host\") on node \"crc\" DevicePath \"\"" Jan 26 14:39:31 crc kubenswrapper[4844]: I0126 14:39:31.957414 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4772ac3-d441-467c-9f50-161ac1604fc9-kube-api-access-rz9xl" (OuterVolumeSpecName: "kube-api-access-rz9xl") pod "e4772ac3-d441-467c-9f50-161ac1604fc9" (UID: "e4772ac3-d441-467c-9f50-161ac1604fc9"). InnerVolumeSpecName "kube-api-access-rz9xl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:39:32 crc kubenswrapper[4844]: I0126 14:39:32.053300 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rz9xl\" (UniqueName: \"kubernetes.io/projected/e4772ac3-d441-467c-9f50-161ac1604fc9-kube-api-access-rz9xl\") on node \"crc\" DevicePath \"\"" Jan 26 14:39:32 crc kubenswrapper[4844]: I0126 14:39:32.693092 4844 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9f351249d5925589dec09d2f3dc8c64f16074043cf9ec8b26711b2e721f338cd" Jan 26 14:39:32 crc kubenswrapper[4844]: I0126 14:39:32.693112 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-cwwwg/crc-debug-75gst" Jan 26 14:39:33 crc kubenswrapper[4844]: I0126 14:39:33.078552 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-cwwwg/crc-debug-rfqqz"] Jan 26 14:39:33 crc kubenswrapper[4844]: E0126 14:39:33.079166 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e27db865-6e34-4b87-8ff2-03d0554daae5" containerName="extract-content" Jan 26 14:39:33 crc kubenswrapper[4844]: I0126 14:39:33.079188 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="e27db865-6e34-4b87-8ff2-03d0554daae5" containerName="extract-content" Jan 26 14:39:33 crc kubenswrapper[4844]: E0126 14:39:33.079216 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4772ac3-d441-467c-9f50-161ac1604fc9" containerName="container-00" Jan 26 14:39:33 crc kubenswrapper[4844]: I0126 14:39:33.079228 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4772ac3-d441-467c-9f50-161ac1604fc9" containerName="container-00" Jan 26 14:39:33 crc kubenswrapper[4844]: E0126 14:39:33.079251 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e27db865-6e34-4b87-8ff2-03d0554daae5" containerName="extract-utilities" Jan 26 14:39:33 crc kubenswrapper[4844]: I0126 14:39:33.079264 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="e27db865-6e34-4b87-8ff2-03d0554daae5" containerName="extract-utilities" Jan 26 14:39:33 crc kubenswrapper[4844]: E0126 14:39:33.079306 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e27db865-6e34-4b87-8ff2-03d0554daae5" containerName="registry-server" Jan 26 14:39:33 crc kubenswrapper[4844]: I0126 14:39:33.079316 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="e27db865-6e34-4b87-8ff2-03d0554daae5" containerName="registry-server" Jan 26 14:39:33 crc kubenswrapper[4844]: I0126 14:39:33.079681 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4772ac3-d441-467c-9f50-161ac1604fc9" containerName="container-00" Jan 26 14:39:33 crc kubenswrapper[4844]: I0126 14:39:33.079716 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="e27db865-6e34-4b87-8ff2-03d0554daae5" containerName="registry-server" Jan 26 14:39:33 crc kubenswrapper[4844]: I0126 14:39:33.080765 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-cwwwg/crc-debug-rfqqz" Jan 26 14:39:33 crc kubenswrapper[4844]: I0126 14:39:33.177675 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e840abbe-c808-40cf-ad96-7cdfdb256d86-host\") pod \"crc-debug-rfqqz\" (UID: \"e840abbe-c808-40cf-ad96-7cdfdb256d86\") " pod="openshift-must-gather-cwwwg/crc-debug-rfqqz" Jan 26 14:39:33 crc kubenswrapper[4844]: I0126 14:39:33.178051 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpb2l\" (UniqueName: \"kubernetes.io/projected/e840abbe-c808-40cf-ad96-7cdfdb256d86-kube-api-access-xpb2l\") pod \"crc-debug-rfqqz\" (UID: \"e840abbe-c808-40cf-ad96-7cdfdb256d86\") " pod="openshift-must-gather-cwwwg/crc-debug-rfqqz" Jan 26 14:39:33 crc kubenswrapper[4844]: I0126 14:39:33.280315 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e840abbe-c808-40cf-ad96-7cdfdb256d86-host\") pod \"crc-debug-rfqqz\" (UID: \"e840abbe-c808-40cf-ad96-7cdfdb256d86\") " pod="openshift-must-gather-cwwwg/crc-debug-rfqqz" Jan 26 14:39:33 crc kubenswrapper[4844]: I0126 14:39:33.280720 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xpb2l\" (UniqueName: \"kubernetes.io/projected/e840abbe-c808-40cf-ad96-7cdfdb256d86-kube-api-access-xpb2l\") pod \"crc-debug-rfqqz\" (UID: \"e840abbe-c808-40cf-ad96-7cdfdb256d86\") " pod="openshift-must-gather-cwwwg/crc-debug-rfqqz" Jan 26 14:39:33 crc kubenswrapper[4844]: I0126 14:39:33.280466 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e840abbe-c808-40cf-ad96-7cdfdb256d86-host\") pod \"crc-debug-rfqqz\" (UID: \"e840abbe-c808-40cf-ad96-7cdfdb256d86\") " pod="openshift-must-gather-cwwwg/crc-debug-rfqqz" Jan 26 14:39:33 crc kubenswrapper[4844]: I0126 14:39:33.308121 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xpb2l\" (UniqueName: \"kubernetes.io/projected/e840abbe-c808-40cf-ad96-7cdfdb256d86-kube-api-access-xpb2l\") pod \"crc-debug-rfqqz\" (UID: \"e840abbe-c808-40cf-ad96-7cdfdb256d86\") " pod="openshift-must-gather-cwwwg/crc-debug-rfqqz" Jan 26 14:39:33 crc kubenswrapper[4844]: I0126 14:39:33.326457 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4772ac3-d441-467c-9f50-161ac1604fc9" path="/var/lib/kubelet/pods/e4772ac3-d441-467c-9f50-161ac1604fc9/volumes" Jan 26 14:39:33 crc kubenswrapper[4844]: I0126 14:39:33.401117 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-cwwwg/crc-debug-rfqqz" Jan 26 14:39:33 crc kubenswrapper[4844]: I0126 14:39:33.704845 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cwwwg/crc-debug-rfqqz" event={"ID":"e840abbe-c808-40cf-ad96-7cdfdb256d86","Type":"ContainerStarted","Data":"a0c6e92d4312df0b2b9ace06991a7e10255cc6459b37180c5fc2d779fa2b3db5"} Jan 26 14:39:34 crc kubenswrapper[4844]: I0126 14:39:34.717737 4844 generic.go:334] "Generic (PLEG): container finished" podID="e840abbe-c808-40cf-ad96-7cdfdb256d86" containerID="90a6b279aa5e19da593518440b2b3ac34fe08ef95d1b389c379e2c1ef94a8bc0" exitCode=0 Jan 26 14:39:34 crc kubenswrapper[4844]: I0126 14:39:34.717816 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cwwwg/crc-debug-rfqqz" event={"ID":"e840abbe-c808-40cf-ad96-7cdfdb256d86","Type":"ContainerDied","Data":"90a6b279aa5e19da593518440b2b3ac34fe08ef95d1b389c379e2c1ef94a8bc0"} Jan 26 14:39:35 crc kubenswrapper[4844]: I0126 14:39:35.838967 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-cwwwg/crc-debug-rfqqz" Jan 26 14:39:35 crc kubenswrapper[4844]: I0126 14:39:35.932817 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xpb2l\" (UniqueName: \"kubernetes.io/projected/e840abbe-c808-40cf-ad96-7cdfdb256d86-kube-api-access-xpb2l\") pod \"e840abbe-c808-40cf-ad96-7cdfdb256d86\" (UID: \"e840abbe-c808-40cf-ad96-7cdfdb256d86\") " Jan 26 14:39:35 crc kubenswrapper[4844]: I0126 14:39:35.933067 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e840abbe-c808-40cf-ad96-7cdfdb256d86-host\") pod \"e840abbe-c808-40cf-ad96-7cdfdb256d86\" (UID: \"e840abbe-c808-40cf-ad96-7cdfdb256d86\") " Jan 26 14:39:35 crc kubenswrapper[4844]: I0126 14:39:35.933228 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e840abbe-c808-40cf-ad96-7cdfdb256d86-host" (OuterVolumeSpecName: "host") pod "e840abbe-c808-40cf-ad96-7cdfdb256d86" (UID: "e840abbe-c808-40cf-ad96-7cdfdb256d86"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 14:39:35 crc kubenswrapper[4844]: I0126 14:39:35.933554 4844 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e840abbe-c808-40cf-ad96-7cdfdb256d86-host\") on node \"crc\" DevicePath \"\"" Jan 26 14:39:35 crc kubenswrapper[4844]: I0126 14:39:35.938135 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e840abbe-c808-40cf-ad96-7cdfdb256d86-kube-api-access-xpb2l" (OuterVolumeSpecName: "kube-api-access-xpb2l") pod "e840abbe-c808-40cf-ad96-7cdfdb256d86" (UID: "e840abbe-c808-40cf-ad96-7cdfdb256d86"). InnerVolumeSpecName "kube-api-access-xpb2l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:39:36 crc kubenswrapper[4844]: I0126 14:39:36.034903 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xpb2l\" (UniqueName: \"kubernetes.io/projected/e840abbe-c808-40cf-ad96-7cdfdb256d86-kube-api-access-xpb2l\") on node \"crc\" DevicePath \"\"" Jan 26 14:39:36 crc kubenswrapper[4844]: I0126 14:39:36.733774 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cwwwg/crc-debug-rfqqz" event={"ID":"e840abbe-c808-40cf-ad96-7cdfdb256d86","Type":"ContainerDied","Data":"a0c6e92d4312df0b2b9ace06991a7e10255cc6459b37180c5fc2d779fa2b3db5"} Jan 26 14:39:36 crc kubenswrapper[4844]: I0126 14:39:36.734008 4844 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a0c6e92d4312df0b2b9ace06991a7e10255cc6459b37180c5fc2d779fa2b3db5" Jan 26 14:39:36 crc kubenswrapper[4844]: I0126 14:39:36.733845 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-cwwwg/crc-debug-rfqqz" Jan 26 14:39:36 crc kubenswrapper[4844]: I0126 14:39:36.783701 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-cwwwg/crc-debug-rfqqz"] Jan 26 14:39:36 crc kubenswrapper[4844]: I0126 14:39:36.825183 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-cwwwg/crc-debug-rfqqz"] Jan 26 14:39:37 crc kubenswrapper[4844]: I0126 14:39:37.326176 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e840abbe-c808-40cf-ad96-7cdfdb256d86" path="/var/lib/kubelet/pods/e840abbe-c808-40cf-ad96-7cdfdb256d86/volumes" Jan 26 14:39:37 crc kubenswrapper[4844]: I0126 14:39:37.990012 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-cwwwg/crc-debug-dmv2h"] Jan 26 14:39:37 crc kubenswrapper[4844]: E0126 14:39:37.990767 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e840abbe-c808-40cf-ad96-7cdfdb256d86" containerName="container-00" Jan 26 14:39:37 crc kubenswrapper[4844]: I0126 14:39:37.990789 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="e840abbe-c808-40cf-ad96-7cdfdb256d86" containerName="container-00" Jan 26 14:39:37 crc kubenswrapper[4844]: I0126 14:39:37.991054 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="e840abbe-c808-40cf-ad96-7cdfdb256d86" containerName="container-00" Jan 26 14:39:37 crc kubenswrapper[4844]: I0126 14:39:37.991855 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-cwwwg/crc-debug-dmv2h" Jan 26 14:39:38 crc kubenswrapper[4844]: I0126 14:39:38.076702 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6t4nf\" (UniqueName: \"kubernetes.io/projected/2a7f7221-0fba-4c0a-9a2d-f9240935546e-kube-api-access-6t4nf\") pod \"crc-debug-dmv2h\" (UID: \"2a7f7221-0fba-4c0a-9a2d-f9240935546e\") " pod="openshift-must-gather-cwwwg/crc-debug-dmv2h" Jan 26 14:39:38 crc kubenswrapper[4844]: I0126 14:39:38.076960 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2a7f7221-0fba-4c0a-9a2d-f9240935546e-host\") pod \"crc-debug-dmv2h\" (UID: \"2a7f7221-0fba-4c0a-9a2d-f9240935546e\") " pod="openshift-must-gather-cwwwg/crc-debug-dmv2h" Jan 26 14:39:38 crc kubenswrapper[4844]: I0126 14:39:38.178985 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2a7f7221-0fba-4c0a-9a2d-f9240935546e-host\") pod \"crc-debug-dmv2h\" (UID: \"2a7f7221-0fba-4c0a-9a2d-f9240935546e\") " pod="openshift-must-gather-cwwwg/crc-debug-dmv2h" Jan 26 14:39:38 crc kubenswrapper[4844]: I0126 14:39:38.179091 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6t4nf\" (UniqueName: \"kubernetes.io/projected/2a7f7221-0fba-4c0a-9a2d-f9240935546e-kube-api-access-6t4nf\") pod \"crc-debug-dmv2h\" (UID: \"2a7f7221-0fba-4c0a-9a2d-f9240935546e\") " pod="openshift-must-gather-cwwwg/crc-debug-dmv2h" Jan 26 14:39:38 crc kubenswrapper[4844]: I0126 14:39:38.179098 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2a7f7221-0fba-4c0a-9a2d-f9240935546e-host\") pod \"crc-debug-dmv2h\" (UID: \"2a7f7221-0fba-4c0a-9a2d-f9240935546e\") " pod="openshift-must-gather-cwwwg/crc-debug-dmv2h" Jan 26 14:39:38 crc kubenswrapper[4844]: I0126 14:39:38.200149 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6t4nf\" (UniqueName: \"kubernetes.io/projected/2a7f7221-0fba-4c0a-9a2d-f9240935546e-kube-api-access-6t4nf\") pod \"crc-debug-dmv2h\" (UID: \"2a7f7221-0fba-4c0a-9a2d-f9240935546e\") " pod="openshift-must-gather-cwwwg/crc-debug-dmv2h" Jan 26 14:39:38 crc kubenswrapper[4844]: I0126 14:39:38.308575 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-cwwwg/crc-debug-dmv2h" Jan 26 14:39:38 crc kubenswrapper[4844]: W0126 14:39:38.346065 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2a7f7221_0fba_4c0a_9a2d_f9240935546e.slice/crio-0ab0bf47dbaf421044bbc08d583d2032b4c1c8dbc840913488a0c882320898b5 WatchSource:0}: Error finding container 0ab0bf47dbaf421044bbc08d583d2032b4c1c8dbc840913488a0c882320898b5: Status 404 returned error can't find the container with id 0ab0bf47dbaf421044bbc08d583d2032b4c1c8dbc840913488a0c882320898b5 Jan 26 14:39:38 crc kubenswrapper[4844]: I0126 14:39:38.761494 4844 generic.go:334] "Generic (PLEG): container finished" podID="2a7f7221-0fba-4c0a-9a2d-f9240935546e" containerID="22a1d5e0ece99973adc82fd4488b546a2fdeac75743dd1c7247b62e1918ace16" exitCode=0 Jan 26 14:39:38 crc kubenswrapper[4844]: I0126 14:39:38.761574 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cwwwg/crc-debug-dmv2h" event={"ID":"2a7f7221-0fba-4c0a-9a2d-f9240935546e","Type":"ContainerDied","Data":"22a1d5e0ece99973adc82fd4488b546a2fdeac75743dd1c7247b62e1918ace16"} Jan 26 14:39:38 crc kubenswrapper[4844]: I0126 14:39:38.761796 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cwwwg/crc-debug-dmv2h" event={"ID":"2a7f7221-0fba-4c0a-9a2d-f9240935546e","Type":"ContainerStarted","Data":"0ab0bf47dbaf421044bbc08d583d2032b4c1c8dbc840913488a0c882320898b5"} Jan 26 14:39:38 crc kubenswrapper[4844]: I0126 14:39:38.951301 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-cwwwg/crc-debug-dmv2h"] Jan 26 14:39:38 crc kubenswrapper[4844]: I0126 14:39:38.959315 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-cwwwg/crc-debug-dmv2h"] Jan 26 14:39:39 crc kubenswrapper[4844]: I0126 14:39:39.879422 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-cwwwg/crc-debug-dmv2h" Jan 26 14:39:40 crc kubenswrapper[4844]: I0126 14:39:40.013736 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2a7f7221-0fba-4c0a-9a2d-f9240935546e-host\") pod \"2a7f7221-0fba-4c0a-9a2d-f9240935546e\" (UID: \"2a7f7221-0fba-4c0a-9a2d-f9240935546e\") " Jan 26 14:39:40 crc kubenswrapper[4844]: I0126 14:39:40.013834 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2a7f7221-0fba-4c0a-9a2d-f9240935546e-host" (OuterVolumeSpecName: "host") pod "2a7f7221-0fba-4c0a-9a2d-f9240935546e" (UID: "2a7f7221-0fba-4c0a-9a2d-f9240935546e"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 14:39:40 crc kubenswrapper[4844]: I0126 14:39:40.013847 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6t4nf\" (UniqueName: \"kubernetes.io/projected/2a7f7221-0fba-4c0a-9a2d-f9240935546e-kube-api-access-6t4nf\") pod \"2a7f7221-0fba-4c0a-9a2d-f9240935546e\" (UID: \"2a7f7221-0fba-4c0a-9a2d-f9240935546e\") " Jan 26 14:39:40 crc kubenswrapper[4844]: I0126 14:39:40.014454 4844 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2a7f7221-0fba-4c0a-9a2d-f9240935546e-host\") on node \"crc\" DevicePath \"\"" Jan 26 14:39:40 crc kubenswrapper[4844]: I0126 14:39:40.019843 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a7f7221-0fba-4c0a-9a2d-f9240935546e-kube-api-access-6t4nf" (OuterVolumeSpecName: "kube-api-access-6t4nf") pod "2a7f7221-0fba-4c0a-9a2d-f9240935546e" (UID: "2a7f7221-0fba-4c0a-9a2d-f9240935546e"). InnerVolumeSpecName "kube-api-access-6t4nf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:39:40 crc kubenswrapper[4844]: I0126 14:39:40.116020 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6t4nf\" (UniqueName: \"kubernetes.io/projected/2a7f7221-0fba-4c0a-9a2d-f9240935546e-kube-api-access-6t4nf\") on node \"crc\" DevicePath \"\"" Jan 26 14:39:40 crc kubenswrapper[4844]: I0126 14:39:40.783130 4844 scope.go:117] "RemoveContainer" containerID="22a1d5e0ece99973adc82fd4488b546a2fdeac75743dd1c7247b62e1918ace16" Jan 26 14:39:40 crc kubenswrapper[4844]: I0126 14:39:40.783174 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-cwwwg/crc-debug-dmv2h" Jan 26 14:39:41 crc kubenswrapper[4844]: I0126 14:39:41.330891 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a7f7221-0fba-4c0a-9a2d-f9240935546e" path="/var/lib/kubelet/pods/2a7f7221-0fba-4c0a-9a2d-f9240935546e/volumes" Jan 26 14:40:05 crc kubenswrapper[4844]: I0126 14:40:05.834705 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-58b8c47bc6-5s5z9_7f2cf574-1917-4f2b-adba-02bcf6cb4dc8/barbican-api/0.log" Jan 26 14:40:05 crc kubenswrapper[4844]: I0126 14:40:05.948761 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-58b8c47bc6-5s5z9_7f2cf574-1917-4f2b-adba-02bcf6cb4dc8/barbican-api-log/0.log" Jan 26 14:40:06 crc kubenswrapper[4844]: I0126 14:40:06.057988 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-688b4ff97d-t5mvg_56958656-f467-485d-a3b6-9ecacb7edfeb/barbican-keystone-listener/0.log" Jan 26 14:40:06 crc kubenswrapper[4844]: I0126 14:40:06.148001 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-688b4ff97d-t5mvg_56958656-f467-485d-a3b6-9ecacb7edfeb/barbican-keystone-listener-log/0.log" Jan 26 14:40:06 crc kubenswrapper[4844]: I0126 14:40:06.224007 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5757498f95-q5d7h_f64e9d9a-09d6-4843-a829-d4fbdcaadb65/barbican-worker/0.log" Jan 26 14:40:06 crc kubenswrapper[4844]: I0126 14:40:06.310934 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5757498f95-q5d7h_f64e9d9a-09d6-4843-a829-d4fbdcaadb65/barbican-worker-log/0.log" Jan 26 14:40:06 crc kubenswrapper[4844]: I0126 14:40:06.374530 4844 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-88p79_c1079155-3798-4f39-ab56-dffea2038df8/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 14:40:06 crc kubenswrapper[4844]: I0126 14:40:06.644442 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_fb03b4d3-5582-4758-a585-5f8e82a306da/ceilometer-notification-agent/0.log" Jan 26 14:40:06 crc kubenswrapper[4844]: I0126 14:40:06.652590 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_fb03b4d3-5582-4758-a585-5f8e82a306da/ceilometer-central-agent/0.log" Jan 26 14:40:06 crc kubenswrapper[4844]: I0126 14:40:06.661682 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_fb03b4d3-5582-4758-a585-5f8e82a306da/proxy-httpd/0.log" Jan 26 14:40:06 crc kubenswrapper[4844]: I0126 14:40:06.673878 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_fb03b4d3-5582-4758-a585-5f8e82a306da/sg-core/0.log" Jan 26 14:40:06 crc kubenswrapper[4844]: I0126 14:40:06.927565 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_a34d9864-c377-4ca1-a4fe-512bf9292130/cinder-api-log/0.log" Jan 26 14:40:07 crc kubenswrapper[4844]: I0126 14:40:07.129690 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_2da46443-17b2-425a-ad97-c2dcae16074b/probe/0.log" Jan 26 14:40:07 crc kubenswrapper[4844]: I0126 14:40:07.324543 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_a34d9864-c377-4ca1-a4fe-512bf9292130/cinder-api/0.log" Jan 26 14:40:07 crc kubenswrapper[4844]: I0126 14:40:07.380971 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_2da46443-17b2-425a-ad97-c2dcae16074b/cinder-backup/0.log" Jan 26 14:40:07 crc kubenswrapper[4844]: I0126 14:40:07.448929 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_47c752dd-0b96-464c-9cb4-3251fc31556a/cinder-scheduler/0.log" Jan 26 14:40:07 crc kubenswrapper[4844]: I0126 14:40:07.472648 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_47c752dd-0b96-464c-9cb4-3251fc31556a/probe/0.log" Jan 26 14:40:07 crc kubenswrapper[4844]: I0126 14:40:07.678905 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-nfs-0_40715f48-d3b7-4cca-9f3d-cba20a94ed39/probe/0.log" Jan 26 14:40:07 crc kubenswrapper[4844]: I0126 14:40:07.737122 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-nfs-0_40715f48-d3b7-4cca-9f3d-cba20a94ed39/cinder-volume/0.log" Jan 26 14:40:07 crc kubenswrapper[4844]: I0126 14:40:07.901256 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-nfs-2-0_eacc0803-a775-4eb4-8f3a-a126716ddbb5/probe/0.log" Jan 26 14:40:07 crc kubenswrapper[4844]: I0126 14:40:07.976476 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-sjwwh_174270d5-d84e-4b4c-8602-31e455da67db/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 14:40:08 crc kubenswrapper[4844]: I0126 14:40:08.066610 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-nfs-2-0_eacc0803-a775-4eb4-8f3a-a126716ddbb5/cinder-volume/0.log" Jan 26 14:40:08 crc kubenswrapper[4844]: I0126 14:40:08.182256 4844 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-kw9mt_d3c8b898-d97e-461f-85df-f33653e393f7/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 14:40:08 crc kubenswrapper[4844]: I0126 14:40:08.233511 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-86587fb56f-wskms_3ae83571-dfc8-4d58-bb40-b527756013e7/init/0.log" Jan 26 14:40:08 crc kubenswrapper[4844]: I0126 14:40:08.450100 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-86587fb56f-wskms_3ae83571-dfc8-4d58-bb40-b527756013e7/init/0.log" Jan 26 14:40:08 crc kubenswrapper[4844]: I0126 14:40:08.573286 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-gkkmx_27022163-5166-48e2-afc4-e984baa40303/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 14:40:08 crc kubenswrapper[4844]: I0126 14:40:08.609739 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-86587fb56f-wskms_3ae83571-dfc8-4d58-bb40-b527756013e7/dnsmasq-dns/0.log" Jan 26 14:40:08 crc kubenswrapper[4844]: I0126 14:40:08.734463 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_65fceb02-1fd4-4b60-a767-f2d232539d43/glance-log/0.log" Jan 26 14:40:08 crc kubenswrapper[4844]: I0126 14:40:08.781893 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_65fceb02-1fd4-4b60-a767-f2d232539d43/glance-httpd/0.log" Jan 26 14:40:09 crc kubenswrapper[4844]: I0126 14:40:09.006693 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_403b5928-19b1-4dfd-97c9-75079d7de60e/glance-httpd/0.log" Jan 26 14:40:09 crc kubenswrapper[4844]: I0126 14:40:09.030955 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_403b5928-19b1-4dfd-97c9-75079d7de60e/glance-log/0.log" Jan 26 14:40:09 crc kubenswrapper[4844]: I0126 14:40:09.199911 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-77c8bf8786-w82f7_a0edac82-6db3-481f-8c9e-8826b5aac863/horizon/0.log" Jan 26 14:40:09 crc kubenswrapper[4844]: I0126 14:40:09.361017 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-9d8r4_e7abb699-d024-4829-8882-7272c3313c67/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 14:40:09 crc kubenswrapper[4844]: I0126 14:40:09.573300 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-wvxxg_5ecdea0f-9b03-400a-a835-4f93cd02b1de/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 14:40:09 crc kubenswrapper[4844]: I0126 14:40:09.848649 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29490601-dfzsv_9884c612-5868-41be-9d56-ad8f55bc68d6/keystone-cron/0.log" Jan 26 14:40:10 crc kubenswrapper[4844]: I0126 14:40:10.015357 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-77c8bf8786-w82f7_a0edac82-6db3-481f-8c9e-8826b5aac863/horizon-log/0.log" Jan 26 14:40:10 crc kubenswrapper[4844]: I0126 14:40:10.055773 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_0887ff47-06ad-4713-8a39-9cf1d0898a8d/kube-state-metrics/0.log" Jan 26 14:40:10 crc kubenswrapper[4844]: I0126 14:40:10.123461 4844 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-sttdt_2d88214a-d4b9-4885-ac32-cae7c7dcd3ba/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 14:40:10 crc kubenswrapper[4844]: I0126 14:40:10.190540 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-5db4cb7f67-85gvs_d2096862-de7b-4d51-aa62-bc55d339a9dc/keystone-api/0.log" Jan 26 14:40:10 crc kubenswrapper[4844]: I0126 14:40:10.641296 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5fcff84d65-flkjh_91acccd0-7b82-4ee7-afa7-549b7eeae8b6/neutron-httpd/0.log" Jan 26 14:40:10 crc kubenswrapper[4844]: I0126 14:40:10.656092 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-jjfm4_38602c96-9d47-46f7-b299-c5bfc616ba99/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 14:40:10 crc kubenswrapper[4844]: I0126 14:40:10.723490 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5fcff84d65-flkjh_91acccd0-7b82-4ee7-afa7-549b7eeae8b6/neutron-api/0.log" Jan 26 14:40:11 crc kubenswrapper[4844]: I0126 14:40:11.261154 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_1aa738a6-8d60-4c39-aa86-dc27720dc883/nova-cell0-conductor-conductor/0.log" Jan 26 14:40:11 crc kubenswrapper[4844]: I0126 14:40:11.694766 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_dc7e97d6-1a33-4c98-87bb-6c4d451121b6/nova-cell1-conductor-conductor/0.log" Jan 26 14:40:12 crc kubenswrapper[4844]: I0126 14:40:12.017317 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_7bcce5df-9655-46fe-8f82-5f226375500f/nova-cell1-novncproxy-novncproxy/0.log" Jan 26 14:40:12 crc kubenswrapper[4844]: I0126 14:40:12.247230 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-2xrbw_421111b7-6358-404a-b57f-b6529eb910f9/nova-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 14:40:12 crc kubenswrapper[4844]: I0126 14:40:12.313942 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_81ea8f8d-3955-4fc3-8e6b-412d0bec4995/nova-api-log/0.log" Jan 26 14:40:12 crc kubenswrapper[4844]: I0126 14:40:12.578212 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_86421d71-6636-4491-9b3e-7b4e3bf39ee9/nova-metadata-log/0.log" Jan 26 14:40:13 crc kubenswrapper[4844]: I0126 14:40:13.022572 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_f80a52fc-df6a-4218-913e-2ee03174e341/mysql-bootstrap/0.log" Jan 26 14:40:13 crc kubenswrapper[4844]: I0126 14:40:13.125253 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_81ea8f8d-3955-4fc3-8e6b-412d0bec4995/nova-api-api/0.log" Jan 26 14:40:13 crc kubenswrapper[4844]: I0126 14:40:13.184659 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_42cc1780-3fb5-4158-95f2-5a1bd4e1161f/nova-scheduler-scheduler/0.log" Jan 26 14:40:13 crc kubenswrapper[4844]: I0126 14:40:13.231910 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_f80a52fc-df6a-4218-913e-2ee03174e341/mysql-bootstrap/0.log" Jan 26 14:40:13 crc kubenswrapper[4844]: I0126 14:40:13.327408 4844 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstack-cell1-galera-0_f80a52fc-df6a-4218-913e-2ee03174e341/galera/0.log" Jan 26 14:40:13 crc kubenswrapper[4844]: I0126 14:40:13.472841 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_7e22ff40-cacd-405d-98f5-f603b17b4e4a/mysql-bootstrap/0.log" Jan 26 14:40:13 crc kubenswrapper[4844]: I0126 14:40:13.605919 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_7e22ff40-cacd-405d-98f5-f603b17b4e4a/mysql-bootstrap/0.log" Jan 26 14:40:13 crc kubenswrapper[4844]: I0126 14:40:13.697458 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_7e22ff40-cacd-405d-98f5-f603b17b4e4a/galera/0.log" Jan 26 14:40:13 crc kubenswrapper[4844]: I0126 14:40:13.808835 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_d831cf25-12e3-4375-88ae-4ce13c139248/openstackclient/0.log" Jan 26 14:40:14 crc kubenswrapper[4844]: I0126 14:40:14.023539 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-wnqpc_77361a0b-a3eb-49da-971b-705eca5894eb/openstack-network-exporter/0.log" Jan 26 14:40:14 crc kubenswrapper[4844]: I0126 14:40:14.210771 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-bq8zv_f0224b88-8aeb-4de5-b2d4-2d5f7b69cf8e/ovsdb-server-init/0.log" Jan 26 14:40:14 crc kubenswrapper[4844]: I0126 14:40:14.427187 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-bq8zv_f0224b88-8aeb-4de5-b2d4-2d5f7b69cf8e/ovsdb-server/0.log" Jan 26 14:40:14 crc kubenswrapper[4844]: I0126 14:40:14.471678 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-bq8zv_f0224b88-8aeb-4de5-b2d4-2d5f7b69cf8e/ovsdb-server-init/0.log" Jan 26 14:40:14 crc kubenswrapper[4844]: I0126 14:40:14.734133 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-vnff8_6696649d-b30c-4ef9-beda-3cec75d656b4/ovn-controller/0.log" Jan 26 14:40:14 crc kubenswrapper[4844]: I0126 14:40:14.842635 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-bq8zv_f0224b88-8aeb-4de5-b2d4-2d5f7b69cf8e/ovs-vswitchd/0.log" Jan 26 14:40:14 crc kubenswrapper[4844]: I0126 14:40:14.989523 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-svbzh_5161eb41-8d1f-405a-b40f-630aad7d1925/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 14:40:15 crc kubenswrapper[4844]: I0126 14:40:15.044091 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_86421d71-6636-4491-9b3e-7b4e3bf39ee9/nova-metadata-metadata/0.log" Jan 26 14:40:15 crc kubenswrapper[4844]: I0126 14:40:15.134282 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_a0913fcd-1ca6-46f8-80a8-0c2ced36fea9/openstack-network-exporter/0.log" Jan 26 14:40:15 crc kubenswrapper[4844]: I0126 14:40:15.184156 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_a0913fcd-1ca6-46f8-80a8-0c2ced36fea9/ovn-northd/0.log" Jan 26 14:40:15 crc kubenswrapper[4844]: I0126 14:40:15.284689 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_490e8905-58e4-44a6-a4a4-ea873a5eaa94/openstack-network-exporter/0.log" Jan 26 14:40:15 crc kubenswrapper[4844]: I0126 14:40:15.370620 4844 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovsdbserver-nb-0_490e8905-58e4-44a6-a4a4-ea873a5eaa94/ovsdbserver-nb/0.log" Jan 26 14:40:15 crc kubenswrapper[4844]: I0126 14:40:15.493079 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_6b89a5fa-2181-432a-a613-6bbeeb0f56bb/openstack-network-exporter/0.log" Jan 26 14:40:15 crc kubenswrapper[4844]: I0126 14:40:15.521211 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_6b89a5fa-2181-432a-a613-6bbeeb0f56bb/ovsdbserver-sb/0.log" Jan 26 14:40:15 crc kubenswrapper[4844]: I0126 14:40:15.842383 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_fcca7d88-f1d4-463b-a412-ecfee5f8724d/init-config-reloader/0.log" Jan 26 14:40:15 crc kubenswrapper[4844]: I0126 14:40:15.866117 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-7ff9fb4f5b-dz4mq_624dd95f-3ed5-4837-908b-b5e6d47a1edf/placement-api/0.log" Jan 26 14:40:15 crc kubenswrapper[4844]: I0126 14:40:15.927683 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-7ff9fb4f5b-dz4mq_624dd95f-3ed5-4837-908b-b5e6d47a1edf/placement-log/0.log" Jan 26 14:40:16 crc kubenswrapper[4844]: I0126 14:40:16.021571 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_fcca7d88-f1d4-463b-a412-ecfee5f8724d/init-config-reloader/0.log" Jan 26 14:40:16 crc kubenswrapper[4844]: I0126 14:40:16.086392 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_fcca7d88-f1d4-463b-a412-ecfee5f8724d/config-reloader/0.log" Jan 26 14:40:16 crc kubenswrapper[4844]: I0126 14:40:16.093879 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_fcca7d88-f1d4-463b-a412-ecfee5f8724d/prometheus/0.log" Jan 26 14:40:16 crc kubenswrapper[4844]: I0126 14:40:16.093921 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_fcca7d88-f1d4-463b-a412-ecfee5f8724d/thanos-sidecar/0.log" Jan 26 14:40:16 crc kubenswrapper[4844]: I0126 14:40:16.257515 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_463d25b4-7819-4947-925d-74c429093694/setup-container/0.log" Jan 26 14:40:16 crc kubenswrapper[4844]: I0126 14:40:16.449339 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_463d25b4-7819-4947-925d-74c429093694/setup-container/0.log" Jan 26 14:40:16 crc kubenswrapper[4844]: I0126 14:40:16.512273 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_463d25b4-7819-4947-925d-74c429093694/rabbitmq/0.log" Jan 26 14:40:16 crc kubenswrapper[4844]: I0126 14:40:16.538825 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-notifications-server-0_185637e1-efed-452c-ba52-7688909bad2c/setup-container/0.log" Jan 26 14:40:16 crc kubenswrapper[4844]: I0126 14:40:16.776326 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-notifications-server-0_185637e1-efed-452c-ba52-7688909bad2c/rabbitmq/0.log" Jan 26 14:40:16 crc kubenswrapper[4844]: I0126 14:40:16.777911 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-notifications-server-0_185637e1-efed-452c-ba52-7688909bad2c/setup-container/0.log" Jan 26 14:40:16 crc kubenswrapper[4844]: I0126 14:40:16.910843 4844 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_rabbitmq-server-0_38e1fc4a-33a4-443e-95bb-3e653d3f1a59/setup-container/0.log" Jan 26 14:40:17 crc kubenswrapper[4844]: I0126 14:40:17.061029 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_38e1fc4a-33a4-443e-95bb-3e653d3f1a59/setup-container/0.log" Jan 26 14:40:17 crc kubenswrapper[4844]: I0126 14:40:17.112101 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_38e1fc4a-33a4-443e-95bb-3e653d3f1a59/rabbitmq/0.log" Jan 26 14:40:17 crc kubenswrapper[4844]: I0126 14:40:17.320315 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-k8c2z_342e7682-6393-4c70-9c22-5108b5473dc0/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 14:40:17 crc kubenswrapper[4844]: I0126 14:40:17.345636 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-4z6gd_e02f083a-8dcb-4454-8050-752c996dadd7/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 14:40:17 crc kubenswrapper[4844]: I0126 14:40:17.522012 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-tqkgn_d135fda9-894e-41c5-94a3-57aca842c386/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 14:40:17 crc kubenswrapper[4844]: I0126 14:40:17.615614 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-8qp5q_3ff365e7-065a-41e7-a3cc-642e66989dc9/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 14:40:17 crc kubenswrapper[4844]: I0126 14:40:17.811917 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-4fkj8_d45310a6-48b5-455c-960c-5aaaa0a5b469/ssh-known-hosts-edpm-deployment/0.log" Jan 26 14:40:18 crc kubenswrapper[4844]: I0126 14:40:18.050048 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-5d969b7b55-l9p8p_e8e7e0c6-a150-4957-8e36-2f75d269e203/proxy-server/0.log" Jan 26 14:40:18 crc kubenswrapper[4844]: I0126 14:40:18.130829 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-dh9kj_82fe3a1a-10c2-4378-a36b-b42131a2df4d/swift-ring-rebalance/0.log" Jan 26 14:40:18 crc kubenswrapper[4844]: I0126 14:40:18.145615 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-5d969b7b55-l9p8p_e8e7e0c6-a150-4957-8e36-2f75d269e203/proxy-httpd/0.log" Jan 26 14:40:18 crc kubenswrapper[4844]: I0126 14:40:18.342591 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8606256a-c070-4b18-906b-a4557edd45e7/account-auditor/0.log" Jan 26 14:40:18 crc kubenswrapper[4844]: I0126 14:40:18.402434 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8606256a-c070-4b18-906b-a4557edd45e7/account-reaper/0.log" Jan 26 14:40:18 crc kubenswrapper[4844]: I0126 14:40:18.417686 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8606256a-c070-4b18-906b-a4557edd45e7/account-replicator/0.log" Jan 26 14:40:18 crc kubenswrapper[4844]: I0126 14:40:18.532352 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8606256a-c070-4b18-906b-a4557edd45e7/account-server/0.log" Jan 26 14:40:18 crc kubenswrapper[4844]: I0126 14:40:18.538634 4844 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_8606256a-c070-4b18-906b-a4557edd45e7/container-auditor/0.log" Jan 26 14:40:18 crc kubenswrapper[4844]: I0126 14:40:18.643794 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8606256a-c070-4b18-906b-a4557edd45e7/container-server/0.log" Jan 26 14:40:18 crc kubenswrapper[4844]: I0126 14:40:18.680792 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8606256a-c070-4b18-906b-a4557edd45e7/container-replicator/0.log" Jan 26 14:40:18 crc kubenswrapper[4844]: I0126 14:40:18.745146 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8606256a-c070-4b18-906b-a4557edd45e7/container-updater/0.log" Jan 26 14:40:18 crc kubenswrapper[4844]: I0126 14:40:18.793190 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8606256a-c070-4b18-906b-a4557edd45e7/object-auditor/0.log" Jan 26 14:40:18 crc kubenswrapper[4844]: I0126 14:40:18.863890 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8606256a-c070-4b18-906b-a4557edd45e7/object-expirer/0.log" Jan 26 14:40:18 crc kubenswrapper[4844]: I0126 14:40:18.903526 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8606256a-c070-4b18-906b-a4557edd45e7/object-replicator/0.log" Jan 26 14:40:19 crc kubenswrapper[4844]: I0126 14:40:19.164975 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8606256a-c070-4b18-906b-a4557edd45e7/object-server/0.log" Jan 26 14:40:19 crc kubenswrapper[4844]: I0126 14:40:19.251471 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8606256a-c070-4b18-906b-a4557edd45e7/object-updater/0.log" Jan 26 14:40:19 crc kubenswrapper[4844]: I0126 14:40:19.303820 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8606256a-c070-4b18-906b-a4557edd45e7/rsync/0.log" Jan 26 14:40:19 crc kubenswrapper[4844]: I0126 14:40:19.333158 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8606256a-c070-4b18-906b-a4557edd45e7/swift-recon-cron/0.log" Jan 26 14:40:19 crc kubenswrapper[4844]: I0126 14:40:19.591809 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-dc9zd_28d2f4e7-9d62-41ba-88db-fc0591ec6d43/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 14:40:19 crc kubenswrapper[4844]: I0126 14:40:19.691588 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_f617457c-8f1e-4508-926e-bb6b77ea7444/tempest-tests-tempest-tests-runner/0.log" Jan 26 14:40:19 crc kubenswrapper[4844]: I0126 14:40:19.781379 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_a4920a59-74e4-4ac3-b437-3dbd074758d7/test-operator-logs-container/0.log" Jan 26 14:40:19 crc kubenswrapper[4844]: I0126 14:40:19.957691 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-br56n_5a2f9b87-b8bf-456e-84a4-6e1736d30419/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 14:40:20 crc kubenswrapper[4844]: I0126 14:40:20.864277 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-applier-0_75853a49-c21a-4df8-bcdf-0b160524e203/watcher-applier/0.log" Jan 26 14:40:21 crc 
kubenswrapper[4844]: I0126 14:40:21.238876 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-api-0_33ecc4c6-320a-41d8-a7c2-608bdda02b0a/watcher-api-log/0.log" Jan 26 14:40:24 crc kubenswrapper[4844]: I0126 14:40:24.429814 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-decision-engine-0_fdfa0abd-53cc-4cd5-9dd0-8d6571ba0fea/watcher-decision-engine/0.log" Jan 26 14:40:25 crc kubenswrapper[4844]: I0126 14:40:25.583133 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-api-0_33ecc4c6-320a-41d8-a7c2-608bdda02b0a/watcher-api/0.log" Jan 26 14:40:27 crc kubenswrapper[4844]: I0126 14:40:27.021793 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_f2bd5019-39c7-4b78-8610-4a7db01f5a85/memcached/0.log" Jan 26 14:40:50 crc kubenswrapper[4844]: I0126 14:40:50.696563 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5cc6200ab3f1714125386de3d0e34486afaa28bf51d7ef9eb7880e967ef6vsq_22fcada7-92af-4edd-903e-8706cffecc6c/util/0.log" Jan 26 14:40:50 crc kubenswrapper[4844]: I0126 14:40:50.880228 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5cc6200ab3f1714125386de3d0e34486afaa28bf51d7ef9eb7880e967ef6vsq_22fcada7-92af-4edd-903e-8706cffecc6c/util/0.log" Jan 26 14:40:50 crc kubenswrapper[4844]: I0126 14:40:50.935363 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5cc6200ab3f1714125386de3d0e34486afaa28bf51d7ef9eb7880e967ef6vsq_22fcada7-92af-4edd-903e-8706cffecc6c/pull/0.log" Jan 26 14:40:50 crc kubenswrapper[4844]: I0126 14:40:50.950132 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5cc6200ab3f1714125386de3d0e34486afaa28bf51d7ef9eb7880e967ef6vsq_22fcada7-92af-4edd-903e-8706cffecc6c/pull/0.log" Jan 26 14:40:51 crc kubenswrapper[4844]: I0126 14:40:51.080912 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5cc6200ab3f1714125386de3d0e34486afaa28bf51d7ef9eb7880e967ef6vsq_22fcada7-92af-4edd-903e-8706cffecc6c/pull/0.log" Jan 26 14:40:51 crc kubenswrapper[4844]: I0126 14:40:51.108188 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5cc6200ab3f1714125386de3d0e34486afaa28bf51d7ef9eb7880e967ef6vsq_22fcada7-92af-4edd-903e-8706cffecc6c/util/0.log" Jan 26 14:40:51 crc kubenswrapper[4844]: I0126 14:40:51.155480 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5cc6200ab3f1714125386de3d0e34486afaa28bf51d7ef9eb7880e967ef6vsq_22fcada7-92af-4edd-903e-8706cffecc6c/extract/0.log" Jan 26 14:40:51 crc kubenswrapper[4844]: I0126 14:40:51.351946 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7f86f8796f-5tq86_a29e2eac-c303-4ae6-9c3b-439a258ce420/manager/0.log" Jan 26 14:40:51 crc kubenswrapper[4844]: I0126 14:40:51.378458 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-7478f7dbf9-sm4lj_aa463929-97db-4af2-8308-840d51ae717a/manager/0.log" Jan 26 14:40:51 crc kubenswrapper[4844]: I0126 14:40:51.498400 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-b45d7bf98-gmfsm_c39cee42-2147-463f-90f5-62b0ad31ec96/manager/0.log" Jan 26 14:40:51 crc kubenswrapper[4844]: I0126 14:40:51.620095 4844 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_glance-operator-controller-manager-78fdd796fd-mwszm_f8b1471a-3483-4c9e-b662-02906d9b18c0/manager/0.log" Jan 26 14:40:51 crc kubenswrapper[4844]: I0126 14:40:51.733645 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-k8f6n_9de97e7e-c381-4f7d-9380-9aadf848b3a6/manager/0.log" Jan 26 14:40:51 crc kubenswrapper[4844]: I0126 14:40:51.857861 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-rk7rt_981956b6-e5c7-4908-a72d-458026f29e4d/manager/0.log" Jan 26 14:40:52 crc kubenswrapper[4844]: I0126 14:40:52.050344 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-598f7747c9-krn66_1eca115f-b8cd-4a50-8adc-2d31e297657f/manager/0.log" Jan 26 14:40:52 crc kubenswrapper[4844]: I0126 14:40:52.260870 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b8b6d4659-ht7r9_a60ef848-810d-4c2c-8c23-341d8168e7e7/manager/0.log" Jan 26 14:40:52 crc kubenswrapper[4844]: I0126 14:40:52.280247 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-694cf4f878-vzncj_8b9f2639-4aaa-463a-b950-fc39fca31805/manager/0.log" Jan 26 14:40:52 crc kubenswrapper[4844]: I0126 14:40:52.347509 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-78c6999f6f-wtp6f_2a343b60-ecc4-4634-9a54-7814555dd3bc/manager/0.log" Jan 26 14:40:52 crc kubenswrapper[4844]: I0126 14:40:52.637735 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6b9fb5fdcb-bcdf4_154eb771-ca89-43f9-b002-e6f11d943cbe/manager/0.log" Jan 26 14:40:52 crc kubenswrapper[4844]: I0126 14:40:52.759752 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-78d58447c5-pffmq_8ac12453-5418-4c50-8b2a-61dfad6bf1e1/manager/0.log" Jan 26 14:40:52 crc kubenswrapper[4844]: I0126 14:40:52.934357 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-7bdb645866-x5shx_73721700-0f73-468c-9c69-2d3f078a7516/manager/0.log" Jan 26 14:40:53 crc kubenswrapper[4844]: I0126 14:40:53.001138 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-5f4cd88d46-566vm_4bf529eb-b7b9-4ca7-a55a-73fd7d58ac81/manager/0.log" Jan 26 14:40:53 crc kubenswrapper[4844]: I0126 14:40:53.120652 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b85478v8f_12e4b3b0-81a4-4752-8cea-e1a3178d38ba/manager/0.log" Jan 26 14:40:53 crc kubenswrapper[4844]: I0126 14:40:53.317294 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-54d8cfbbfb-9bfgj_d2118529-9df3-486e-9f15-3a54c55d9eb1/operator/0.log" Jan 26 14:40:53 crc kubenswrapper[4844]: I0126 14:40:53.603248 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-nql7g_bfb7276b-b13e-43c2-ae22-0165b6e3a68f/registry-server/0.log" Jan 26 14:40:53 crc kubenswrapper[4844]: I0126 14:40:53.674003 4844 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-6f75f45d54-l7w8f_89ab862c-0d6a-4a44-9f28-9195e0213328/manager/0.log" Jan 26 14:40:53 crc kubenswrapper[4844]: I0126 14:40:53.928618 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-79d5ccc684-mkcr9_3a13e1fa-35b1-4adc-a21d-a09aa4ec91a7/manager/0.log" Jan 26 14:40:54 crc kubenswrapper[4844]: I0126 14:40:54.107345 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-8s4vt_e99dde4f-0ab1-45ad-b6c0-e5225fbfc77d/operator/0.log" Jan 26 14:40:54 crc kubenswrapper[4844]: I0126 14:40:54.218000 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-547cbdb99f-88kvh_00b0af83-1dea-44ab-b074-fa7b5c9cf46d/manager/0.log" Jan 26 14:40:54 crc kubenswrapper[4844]: I0126 14:40:54.755211 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-dgglg_915eea77-c5eb-4e5c-b9f2-404ba732dac8/manager/0.log" Jan 26 14:40:55 crc kubenswrapper[4844]: I0126 14:40:55.079336 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-85cd9769bb-fj29j_9fb0454b-90d4-48f3-b069-86aada20e9f9/manager/0.log" Jan 26 14:40:55 crc kubenswrapper[4844]: I0126 14:40:55.110677 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-5fc5788b68-9qjpz_c74ba998-8b13-4a63-a4b3-d027f70ff41d/manager/0.log" Jan 26 14:40:55 crc kubenswrapper[4844]: I0126 14:40:55.194047 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-6b75585dc8-tzrcv_dd52b1ad-222e-4b57-91e0-869bd8094adc/manager/0.log" Jan 26 14:41:15 crc kubenswrapper[4844]: I0126 14:41:15.600626 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-qltc7_10b7b789-0c46-4e84-875e-f74c68981bca/control-plane-machine-set-operator/0.log" Jan 26 14:41:15 crc kubenswrapper[4844]: I0126 14:41:15.760013 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-zsn9c_4fd9b862-74de-4579-9b30-b51e5cbd3b56/machine-api-operator/0.log" Jan 26 14:41:15 crc kubenswrapper[4844]: I0126 14:41:15.772732 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-zsn9c_4fd9b862-74de-4579-9b30-b51e5cbd3b56/kube-rbac-proxy/0.log" Jan 26 14:41:28 crc kubenswrapper[4844]: I0126 14:41:28.662165 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-vhvzj_65d6aa35-f205-43c2-ad68-0bfa252093be/cert-manager-controller/0.log" Jan 26 14:41:28 crc kubenswrapper[4844]: I0126 14:41:28.797810 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-dv29d_a25263f7-0e4e-4253-abe6-20b223dc600e/cert-manager-cainjector/0.log" Jan 26 14:41:28 crc kubenswrapper[4844]: I0126 14:41:28.883090 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-7xbzs_97f29a7d-977c-41c6-8756-d6e5d6a35875/cert-manager-webhook/0.log" Jan 26 14:41:36 crc kubenswrapper[4844]: I0126 14:41:36.365182 4844 patch_prober.go:28] interesting pod/machine-config-daemon-j7r9j 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 14:41:36 crc kubenswrapper[4844]: I0126 14:41:36.365964 4844 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 14:41:42 crc kubenswrapper[4844]: I0126 14:41:42.110388 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-qdxvv_213e48c5-2b34-4d8a-af54-773da9caddb5/nmstate-console-plugin/0.log" Jan 26 14:41:42 crc kubenswrapper[4844]: I0126 14:41:42.264095 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-vgnf8_bcef572e-5718-4586-b0e3-907551cdf0ff/kube-rbac-proxy/0.log" Jan 26 14:41:42 crc kubenswrapper[4844]: I0126 14:41:42.267699 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-2d462_9baf25b3-6096-4215-9455-b9126c02ffcf/nmstate-handler/0.log" Jan 26 14:41:42 crc kubenswrapper[4844]: I0126 14:41:42.398491 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-vgnf8_bcef572e-5718-4586-b0e3-907551cdf0ff/nmstate-metrics/0.log" Jan 26 14:41:42 crc kubenswrapper[4844]: I0126 14:41:42.437259 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-9djrz_0c0a3ca8-870a-4c95-a1a0-002e4cdb3bb8/nmstate-operator/0.log" Jan 26 14:41:42 crc kubenswrapper[4844]: I0126 14:41:42.572799 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-blwvj_68790915-1674-4d77-8d03-d21698da101e/nmstate-webhook/0.log" Jan 26 14:41:56 crc kubenswrapper[4844]: I0126 14:41:56.287442 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-dg7zb_1dec1dad-33cd-4ea8-9f69-9e69e0f56e73/prometheus-operator/0.log" Jan 26 14:41:56 crc kubenswrapper[4844]: I0126 14:41:56.434374 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6b87948799-68hvv_321b4c21-0d4a-49d5-a14a-9f49e2ea5600/prometheus-operator-admission-webhook/0.log" Jan 26 14:41:56 crc kubenswrapper[4844]: I0126 14:41:56.467825 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6b87948799-mvsq5_b2533187-bdf5-44b9-a05d-ceb2e2ea467b/prometheus-operator-admission-webhook/0.log" Jan 26 14:41:56 crc kubenswrapper[4844]: I0126 14:41:56.648325 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-clgj9_50efd8fd-16d6-4d82-a9f0-ea82c4d50c4c/operator/0.log" Jan 26 14:41:56 crc kubenswrapper[4844]: I0126 14:41:56.651475 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-sjw9j_a9734a40-f918-40da-9931-7d55904a646a/perses-operator/0.log" Jan 26 14:42:06 crc kubenswrapper[4844]: I0126 14:42:06.365206 4844 patch_prober.go:28] interesting pod/machine-config-daemon-j7r9j container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 14:42:06 crc kubenswrapper[4844]: I0126 14:42:06.365849 4844 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 14:42:10 crc kubenswrapper[4844]: I0126 14:42:10.834264 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-6qx7f_a5381cf1-7e94-4ac0-9054-ed80ebf76624/kube-rbac-proxy/0.log" Jan 26 14:42:10 crc kubenswrapper[4844]: I0126 14:42:10.981190 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-6qx7f_a5381cf1-7e94-4ac0-9054-ed80ebf76624/controller/0.log" Jan 26 14:42:11 crc kubenswrapper[4844]: I0126 14:42:11.067942 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9wgh7_a82f578e-e9b6-4a4d-aade-25ba70bac11f/cp-frr-files/0.log" Jan 26 14:42:11 crc kubenswrapper[4844]: I0126 14:42:11.175885 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9wgh7_a82f578e-e9b6-4a4d-aade-25ba70bac11f/cp-frr-files/0.log" Jan 26 14:42:11 crc kubenswrapper[4844]: I0126 14:42:11.188226 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9wgh7_a82f578e-e9b6-4a4d-aade-25ba70bac11f/cp-reloader/0.log" Jan 26 14:42:11 crc kubenswrapper[4844]: I0126 14:42:11.241132 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9wgh7_a82f578e-e9b6-4a4d-aade-25ba70bac11f/cp-metrics/0.log" Jan 26 14:42:11 crc kubenswrapper[4844]: I0126 14:42:11.281340 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9wgh7_a82f578e-e9b6-4a4d-aade-25ba70bac11f/cp-reloader/0.log" Jan 26 14:42:11 crc kubenswrapper[4844]: I0126 14:42:11.428015 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9wgh7_a82f578e-e9b6-4a4d-aade-25ba70bac11f/cp-reloader/0.log" Jan 26 14:42:11 crc kubenswrapper[4844]: I0126 14:42:11.441291 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9wgh7_a82f578e-e9b6-4a4d-aade-25ba70bac11f/cp-frr-files/0.log" Jan 26 14:42:11 crc kubenswrapper[4844]: I0126 14:42:11.467562 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9wgh7_a82f578e-e9b6-4a4d-aade-25ba70bac11f/cp-metrics/0.log" Jan 26 14:42:11 crc kubenswrapper[4844]: I0126 14:42:11.522611 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9wgh7_a82f578e-e9b6-4a4d-aade-25ba70bac11f/cp-metrics/0.log" Jan 26 14:42:11 crc kubenswrapper[4844]: I0126 14:42:11.663983 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9wgh7_a82f578e-e9b6-4a4d-aade-25ba70bac11f/cp-frr-files/0.log" Jan 26 14:42:11 crc kubenswrapper[4844]: I0126 14:42:11.669639 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9wgh7_a82f578e-e9b6-4a4d-aade-25ba70bac11f/cp-reloader/0.log" Jan 26 14:42:11 crc kubenswrapper[4844]: I0126 14:42:11.680869 4844 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-9wgh7_a82f578e-e9b6-4a4d-aade-25ba70bac11f/cp-metrics/0.log" Jan 26 14:42:11 crc kubenswrapper[4844]: I0126 14:42:11.693897 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9wgh7_a82f578e-e9b6-4a4d-aade-25ba70bac11f/controller/0.log" Jan 26 14:42:11 crc kubenswrapper[4844]: I0126 14:42:11.897037 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9wgh7_a82f578e-e9b6-4a4d-aade-25ba70bac11f/kube-rbac-proxy-frr/0.log" Jan 26 14:42:11 crc kubenswrapper[4844]: I0126 14:42:11.980777 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9wgh7_a82f578e-e9b6-4a4d-aade-25ba70bac11f/frr-metrics/0.log" Jan 26 14:42:11 crc kubenswrapper[4844]: I0126 14:42:11.988389 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9wgh7_a82f578e-e9b6-4a4d-aade-25ba70bac11f/kube-rbac-proxy/0.log" Jan 26 14:42:12 crc kubenswrapper[4844]: I0126 14:42:12.096782 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9wgh7_a82f578e-e9b6-4a4d-aade-25ba70bac11f/reloader/0.log" Jan 26 14:42:12 crc kubenswrapper[4844]: I0126 14:42:12.218050 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-5tzp4_08638bb5-906c-4f51-9437-8667d323feae/frr-k8s-webhook-server/0.log" Jan 26 14:42:12 crc kubenswrapper[4844]: I0126 14:42:12.493662 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-59ccf49fff-tmmnh_03a2059f-ed6b-49f5-9476-bf21d424567f/manager/0.log" Jan 26 14:42:12 crc kubenswrapper[4844]: I0126 14:42:12.607324 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-56567ff486-jdjng_2d1458da-4eb4-4e5a-ae05-399cb9e40dda/webhook-server/0.log" Jan 26 14:42:12 crc kubenswrapper[4844]: I0126 14:42:12.776214 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-qtw5d_eadfd892-6882-4514-abcd-e68612f9eecf/kube-rbac-proxy/0.log" Jan 26 14:42:13 crc kubenswrapper[4844]: I0126 14:42:13.311920 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-qtw5d_eadfd892-6882-4514-abcd-e68612f9eecf/speaker/0.log" Jan 26 14:42:13 crc kubenswrapper[4844]: I0126 14:42:13.741177 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9wgh7_a82f578e-e9b6-4a4d-aade-25ba70bac11f/frr/0.log" Jan 26 14:42:26 crc kubenswrapper[4844]: I0126 14:42:26.902275 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqgt7j_a04410f5-0ebb-4519-9806-a0210b9fdfdc/util/0.log" Jan 26 14:42:27 crc kubenswrapper[4844]: I0126 14:42:27.284004 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqgt7j_a04410f5-0ebb-4519-9806-a0210b9fdfdc/util/0.log" Jan 26 14:42:27 crc kubenswrapper[4844]: I0126 14:42:27.288658 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqgt7j_a04410f5-0ebb-4519-9806-a0210b9fdfdc/pull/0.log" Jan 26 14:42:27 crc kubenswrapper[4844]: I0126 14:42:27.326636 4844 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqgt7j_a04410f5-0ebb-4519-9806-a0210b9fdfdc/pull/0.log" Jan 26 14:42:27 crc kubenswrapper[4844]: I0126 14:42:27.706658 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqgt7j_a04410f5-0ebb-4519-9806-a0210b9fdfdc/extract/0.log" Jan 26 14:42:27 crc kubenswrapper[4844]: I0126 14:42:27.801779 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqgt7j_a04410f5-0ebb-4519-9806-a0210b9fdfdc/util/0.log" Jan 26 14:42:27 crc kubenswrapper[4844]: I0126 14:42:27.843527 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqgt7j_a04410f5-0ebb-4519-9806-a0210b9fdfdc/pull/0.log" Jan 26 14:42:27 crc kubenswrapper[4844]: I0126 14:42:27.926257 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713d2kj2_b2b5f908-45d0-4977-93ce-6e5842a166cc/util/0.log" Jan 26 14:42:28 crc kubenswrapper[4844]: I0126 14:42:28.085429 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713d2kj2_b2b5f908-45d0-4977-93ce-6e5842a166cc/util/0.log" Jan 26 14:42:28 crc kubenswrapper[4844]: I0126 14:42:28.131930 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713d2kj2_b2b5f908-45d0-4977-93ce-6e5842a166cc/pull/0.log" Jan 26 14:42:28 crc kubenswrapper[4844]: I0126 14:42:28.135999 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713d2kj2_b2b5f908-45d0-4977-93ce-6e5842a166cc/pull/0.log" Jan 26 14:42:28 crc kubenswrapper[4844]: I0126 14:42:28.318146 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713d2kj2_b2b5f908-45d0-4977-93ce-6e5842a166cc/pull/0.log" Jan 26 14:42:28 crc kubenswrapper[4844]: I0126 14:42:28.355104 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713d2kj2_b2b5f908-45d0-4977-93ce-6e5842a166cc/util/0.log" Jan 26 14:42:28 crc kubenswrapper[4844]: I0126 14:42:28.436396 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713d2kj2_b2b5f908-45d0-4977-93ce-6e5842a166cc/extract/0.log" Jan 26 14:42:28 crc kubenswrapper[4844]: I0126 14:42:28.536629 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08vhrwh_bc9484bc-f8ef-463e-8d9e-c7d6e7f02cdc/util/0.log" Jan 26 14:42:28 crc kubenswrapper[4844]: I0126 14:42:28.738772 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08vhrwh_bc9484bc-f8ef-463e-8d9e-c7d6e7f02cdc/pull/0.log" Jan 26 14:42:28 crc kubenswrapper[4844]: I0126 14:42:28.756765 4844 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08vhrwh_bc9484bc-f8ef-463e-8d9e-c7d6e7f02cdc/pull/0.log" Jan 26 14:42:28 crc kubenswrapper[4844]: I0126 14:42:28.757883 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08vhrwh_bc9484bc-f8ef-463e-8d9e-c7d6e7f02cdc/util/0.log" Jan 26 14:42:28 crc kubenswrapper[4844]: I0126 14:42:28.949442 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08vhrwh_bc9484bc-f8ef-463e-8d9e-c7d6e7f02cdc/util/0.log" Jan 26 14:42:28 crc kubenswrapper[4844]: I0126 14:42:28.952327 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08vhrwh_bc9484bc-f8ef-463e-8d9e-c7d6e7f02cdc/extract/0.log" Jan 26 14:42:28 crc kubenswrapper[4844]: I0126 14:42:28.955248 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08vhrwh_bc9484bc-f8ef-463e-8d9e-c7d6e7f02cdc/pull/0.log" Jan 26 14:42:29 crc kubenswrapper[4844]: I0126 14:42:29.149765 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bnfk2_be6c04bd-58fc-41e9-bdfa-facc3fc12358/extract-utilities/0.log" Jan 26 14:42:29 crc kubenswrapper[4844]: I0126 14:42:29.378173 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bnfk2_be6c04bd-58fc-41e9-bdfa-facc3fc12358/extract-utilities/0.log" Jan 26 14:42:29 crc kubenswrapper[4844]: I0126 14:42:29.394278 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bnfk2_be6c04bd-58fc-41e9-bdfa-facc3fc12358/extract-content/0.log" Jan 26 14:42:29 crc kubenswrapper[4844]: I0126 14:42:29.396398 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bnfk2_be6c04bd-58fc-41e9-bdfa-facc3fc12358/extract-content/0.log" Jan 26 14:42:29 crc kubenswrapper[4844]: I0126 14:42:29.611511 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bnfk2_be6c04bd-58fc-41e9-bdfa-facc3fc12358/extract-utilities/0.log" Jan 26 14:42:29 crc kubenswrapper[4844]: I0126 14:42:29.622773 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bnfk2_be6c04bd-58fc-41e9-bdfa-facc3fc12358/extract-content/0.log" Jan 26 14:42:29 crc kubenswrapper[4844]: I0126 14:42:29.831039 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9ckcc_5fab62d0-54ca-4d28-b84b-5c66d8bf0887/extract-utilities/0.log" Jan 26 14:42:30 crc kubenswrapper[4844]: I0126 14:42:30.075173 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9ckcc_5fab62d0-54ca-4d28-b84b-5c66d8bf0887/extract-content/0.log" Jan 26 14:42:30 crc kubenswrapper[4844]: I0126 14:42:30.096226 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9ckcc_5fab62d0-54ca-4d28-b84b-5c66d8bf0887/extract-utilities/0.log" Jan 26 14:42:30 crc kubenswrapper[4844]: I0126 14:42:30.229766 4844 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-9ckcc_5fab62d0-54ca-4d28-b84b-5c66d8bf0887/extract-content/0.log" Jan 26 14:42:30 crc kubenswrapper[4844]: I0126 14:42:30.286136 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bnfk2_be6c04bd-58fc-41e9-bdfa-facc3fc12358/registry-server/0.log" Jan 26 14:42:30 crc kubenswrapper[4844]: I0126 14:42:30.353293 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9ckcc_5fab62d0-54ca-4d28-b84b-5c66d8bf0887/extract-content/0.log" Jan 26 14:42:30 crc kubenswrapper[4844]: I0126 14:42:30.354929 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9ckcc_5fab62d0-54ca-4d28-b84b-5c66d8bf0887/extract-utilities/0.log" Jan 26 14:42:30 crc kubenswrapper[4844]: I0126 14:42:30.571338 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-q4p7z_5374369b-4aee-4c66-98fe-7bb183b4fdfa/marketplace-operator/0.log" Jan 26 14:42:30 crc kubenswrapper[4844]: I0126 14:42:30.813027 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-jjx57_a4779355-4fd0-4b1d-adef-3e4ebba15903/extract-utilities/0.log" Jan 26 14:42:30 crc kubenswrapper[4844]: I0126 14:42:30.946076 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-jjx57_a4779355-4fd0-4b1d-adef-3e4ebba15903/extract-utilities/0.log" Jan 26 14:42:30 crc kubenswrapper[4844]: I0126 14:42:30.956755 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-jjx57_a4779355-4fd0-4b1d-adef-3e4ebba15903/extract-content/0.log" Jan 26 14:42:31 crc kubenswrapper[4844]: I0126 14:42:31.053732 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-jjx57_a4779355-4fd0-4b1d-adef-3e4ebba15903/extract-content/0.log" Jan 26 14:42:31 crc kubenswrapper[4844]: I0126 14:42:31.262687 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-jjx57_a4779355-4fd0-4b1d-adef-3e4ebba15903/extract-content/0.log" Jan 26 14:42:31 crc kubenswrapper[4844]: I0126 14:42:31.281106 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-jjx57_a4779355-4fd0-4b1d-adef-3e4ebba15903/extract-utilities/0.log" Jan 26 14:42:31 crc kubenswrapper[4844]: I0126 14:42:31.452863 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-m8rzx_9cf02a58-0976-482c-9e29-b8cb52254a3b/extract-utilities/0.log" Jan 26 14:42:31 crc kubenswrapper[4844]: I0126 14:42:31.639911 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-jjx57_a4779355-4fd0-4b1d-adef-3e4ebba15903/registry-server/0.log" Jan 26 14:42:31 crc kubenswrapper[4844]: I0126 14:42:31.670798 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9ckcc_5fab62d0-54ca-4d28-b84b-5c66d8bf0887/registry-server/0.log" Jan 26 14:42:31 crc kubenswrapper[4844]: I0126 14:42:31.714454 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-m8rzx_9cf02a58-0976-482c-9e29-b8cb52254a3b/extract-utilities/0.log" Jan 26 14:42:31 crc kubenswrapper[4844]: I0126 14:42:31.727428 4844 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-m8rzx_9cf02a58-0976-482c-9e29-b8cb52254a3b/extract-content/0.log" Jan 26 14:42:31 crc kubenswrapper[4844]: I0126 14:42:31.728643 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-m8rzx_9cf02a58-0976-482c-9e29-b8cb52254a3b/extract-content/0.log" Jan 26 14:42:31 crc kubenswrapper[4844]: I0126 14:42:31.953329 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-m8rzx_9cf02a58-0976-482c-9e29-b8cb52254a3b/extract-content/0.log" Jan 26 14:42:31 crc kubenswrapper[4844]: I0126 14:42:31.982951 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-m8rzx_9cf02a58-0976-482c-9e29-b8cb52254a3b/extract-utilities/0.log" Jan 26 14:42:32 crc kubenswrapper[4844]: I0126 14:42:32.690343 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-m8rzx_9cf02a58-0976-482c-9e29-b8cb52254a3b/registry-server/0.log" Jan 26 14:42:36 crc kubenswrapper[4844]: I0126 14:42:36.052462 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-hvrq2"] Jan 26 14:42:36 crc kubenswrapper[4844]: E0126 14:42:36.055707 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a7f7221-0fba-4c0a-9a2d-f9240935546e" containerName="container-00" Jan 26 14:42:36 crc kubenswrapper[4844]: I0126 14:42:36.055961 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a7f7221-0fba-4c0a-9a2d-f9240935546e" containerName="container-00" Jan 26 14:42:36 crc kubenswrapper[4844]: I0126 14:42:36.056372 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a7f7221-0fba-4c0a-9a2d-f9240935546e" containerName="container-00" Jan 26 14:42:36 crc kubenswrapper[4844]: I0126 14:42:36.058538 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hvrq2" Jan 26 14:42:36 crc kubenswrapper[4844]: I0126 14:42:36.074773 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hvrq2"] Jan 26 14:42:36 crc kubenswrapper[4844]: I0126 14:42:36.129271 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4k7mj\" (UniqueName: \"kubernetes.io/projected/95d45b0c-7e7f-4dd8-a8bc-84694255d656-kube-api-access-4k7mj\") pod \"redhat-marketplace-hvrq2\" (UID: \"95d45b0c-7e7f-4dd8-a8bc-84694255d656\") " pod="openshift-marketplace/redhat-marketplace-hvrq2" Jan 26 14:42:36 crc kubenswrapper[4844]: I0126 14:42:36.129355 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95d45b0c-7e7f-4dd8-a8bc-84694255d656-catalog-content\") pod \"redhat-marketplace-hvrq2\" (UID: \"95d45b0c-7e7f-4dd8-a8bc-84694255d656\") " pod="openshift-marketplace/redhat-marketplace-hvrq2" Jan 26 14:42:36 crc kubenswrapper[4844]: I0126 14:42:36.129505 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95d45b0c-7e7f-4dd8-a8bc-84694255d656-utilities\") pod \"redhat-marketplace-hvrq2\" (UID: \"95d45b0c-7e7f-4dd8-a8bc-84694255d656\") " pod="openshift-marketplace/redhat-marketplace-hvrq2" Jan 26 14:42:36 crc kubenswrapper[4844]: I0126 14:42:36.232054 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4k7mj\" (UniqueName: \"kubernetes.io/projected/95d45b0c-7e7f-4dd8-a8bc-84694255d656-kube-api-access-4k7mj\") pod \"redhat-marketplace-hvrq2\" (UID: \"95d45b0c-7e7f-4dd8-a8bc-84694255d656\") " pod="openshift-marketplace/redhat-marketplace-hvrq2" Jan 26 14:42:36 crc kubenswrapper[4844]: I0126 14:42:36.232180 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95d45b0c-7e7f-4dd8-a8bc-84694255d656-catalog-content\") pod \"redhat-marketplace-hvrq2\" (UID: \"95d45b0c-7e7f-4dd8-a8bc-84694255d656\") " pod="openshift-marketplace/redhat-marketplace-hvrq2" Jan 26 14:42:36 crc kubenswrapper[4844]: I0126 14:42:36.232257 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95d45b0c-7e7f-4dd8-a8bc-84694255d656-utilities\") pod \"redhat-marketplace-hvrq2\" (UID: \"95d45b0c-7e7f-4dd8-a8bc-84694255d656\") " pod="openshift-marketplace/redhat-marketplace-hvrq2" Jan 26 14:42:36 crc kubenswrapper[4844]: I0126 14:42:36.232768 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95d45b0c-7e7f-4dd8-a8bc-84694255d656-utilities\") pod \"redhat-marketplace-hvrq2\" (UID: \"95d45b0c-7e7f-4dd8-a8bc-84694255d656\") " pod="openshift-marketplace/redhat-marketplace-hvrq2" Jan 26 14:42:36 crc kubenswrapper[4844]: I0126 14:42:36.232765 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95d45b0c-7e7f-4dd8-a8bc-84694255d656-catalog-content\") pod \"redhat-marketplace-hvrq2\" (UID: \"95d45b0c-7e7f-4dd8-a8bc-84694255d656\") " pod="openshift-marketplace/redhat-marketplace-hvrq2" Jan 26 14:42:36 crc kubenswrapper[4844]: I0126 14:42:36.252230 4844 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-4k7mj\" (UniqueName: \"kubernetes.io/projected/95d45b0c-7e7f-4dd8-a8bc-84694255d656-kube-api-access-4k7mj\") pod \"redhat-marketplace-hvrq2\" (UID: \"95d45b0c-7e7f-4dd8-a8bc-84694255d656\") " pod="openshift-marketplace/redhat-marketplace-hvrq2" Jan 26 14:42:36 crc kubenswrapper[4844]: I0126 14:42:36.365316 4844 patch_prober.go:28] interesting pod/machine-config-daemon-j7r9j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 14:42:36 crc kubenswrapper[4844]: I0126 14:42:36.365684 4844 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 14:42:36 crc kubenswrapper[4844]: I0126 14:42:36.365733 4844 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" Jan 26 14:42:36 crc kubenswrapper[4844]: I0126 14:42:36.366506 4844 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c12eeb6b5514c431e6633f26b4b62fb527ef75940286b5eb2ed1e213af12264a"} pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 14:42:36 crc kubenswrapper[4844]: I0126 14:42:36.366562 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" containerName="machine-config-daemon" containerID="cri-o://c12eeb6b5514c431e6633f26b4b62fb527ef75940286b5eb2ed1e213af12264a" gracePeriod=600 Jan 26 14:42:36 crc kubenswrapper[4844]: I0126 14:42:36.379979 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hvrq2" Jan 26 14:42:36 crc kubenswrapper[4844]: E0126 14:42:36.503503 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 14:42:36 crc kubenswrapper[4844]: I0126 14:42:36.935570 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hvrq2"] Jan 26 14:42:37 crc kubenswrapper[4844]: I0126 14:42:37.489006 4844 generic.go:334] "Generic (PLEG): container finished" podID="e3602fc7-397b-4d73-ab0c-45acc047397b" containerID="c12eeb6b5514c431e6633f26b4b62fb527ef75940286b5eb2ed1e213af12264a" exitCode=0 Jan 26 14:42:37 crc kubenswrapper[4844]: I0126 14:42:37.489050 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" event={"ID":"e3602fc7-397b-4d73-ab0c-45acc047397b","Type":"ContainerDied","Data":"c12eeb6b5514c431e6633f26b4b62fb527ef75940286b5eb2ed1e213af12264a"} Jan 26 14:42:37 crc kubenswrapper[4844]: I0126 14:42:37.489355 4844 scope.go:117] "RemoveContainer" containerID="1b662f3876628db4e3e14d2a4b83b69e591a54d9e073c177db60f5cee583d50b" Jan 26 14:42:37 crc kubenswrapper[4844]: I0126 14:42:37.490091 4844 scope.go:117] "RemoveContainer" containerID="c12eeb6b5514c431e6633f26b4b62fb527ef75940286b5eb2ed1e213af12264a" Jan 26 14:42:37 crc kubenswrapper[4844]: E0126 14:42:37.490501 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 14:42:37 crc kubenswrapper[4844]: I0126 14:42:37.491714 4844 generic.go:334] "Generic (PLEG): container finished" podID="95d45b0c-7e7f-4dd8-a8bc-84694255d656" containerID="54173d4dbdfd390a30f082c376f33349cdc21a86b2b5c7b3776099b7521b6de9" exitCode=0 Jan 26 14:42:37 crc kubenswrapper[4844]: I0126 14:42:37.491750 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hvrq2" event={"ID":"95d45b0c-7e7f-4dd8-a8bc-84694255d656","Type":"ContainerDied","Data":"54173d4dbdfd390a30f082c376f33349cdc21a86b2b5c7b3776099b7521b6de9"} Jan 26 14:42:37 crc kubenswrapper[4844]: I0126 14:42:37.491781 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hvrq2" event={"ID":"95d45b0c-7e7f-4dd8-a8bc-84694255d656","Type":"ContainerStarted","Data":"5a11dbc68bbc670e0ef05fadb5cf68af47dd78822007b236e0f4220ef0b8d197"} Jan 26 14:42:37 crc kubenswrapper[4844]: I0126 14:42:37.495157 4844 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 14:42:38 crc kubenswrapper[4844]: I0126 14:42:38.502313 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hvrq2" 
event={"ID":"95d45b0c-7e7f-4dd8-a8bc-84694255d656","Type":"ContainerStarted","Data":"f280e8793c8447511ae6cfd4612fdac5abf48afd188e675410c15ec2d0dd2e1d"} Jan 26 14:42:39 crc kubenswrapper[4844]: I0126 14:42:39.523590 4844 generic.go:334] "Generic (PLEG): container finished" podID="95d45b0c-7e7f-4dd8-a8bc-84694255d656" containerID="f280e8793c8447511ae6cfd4612fdac5abf48afd188e675410c15ec2d0dd2e1d" exitCode=0 Jan 26 14:42:39 crc kubenswrapper[4844]: I0126 14:42:39.523958 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hvrq2" event={"ID":"95d45b0c-7e7f-4dd8-a8bc-84694255d656","Type":"ContainerDied","Data":"f280e8793c8447511ae6cfd4612fdac5abf48afd188e675410c15ec2d0dd2e1d"} Jan 26 14:42:41 crc kubenswrapper[4844]: I0126 14:42:41.548986 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hvrq2" event={"ID":"95d45b0c-7e7f-4dd8-a8bc-84694255d656","Type":"ContainerStarted","Data":"d1259f204d9b20b60c39591bb925104070e14c052930dade1e34a31d94b547bb"} Jan 26 14:42:41 crc kubenswrapper[4844]: I0126 14:42:41.582571 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-hvrq2" podStartSLOduration=2.372963674 podStartE2EDuration="5.582550252s" podCreationTimestamp="2026-01-26 14:42:36 +0000 UTC" firstStartedPulling="2026-01-26 14:42:37.494903658 +0000 UTC m=+7134.428271280" lastFinishedPulling="2026-01-26 14:42:40.704490236 +0000 UTC m=+7137.637857858" observedRunningTime="2026-01-26 14:42:41.570251413 +0000 UTC m=+7138.503619025" watchObservedRunningTime="2026-01-26 14:42:41.582550252 +0000 UTC m=+7138.515917874" Jan 26 14:42:46 crc kubenswrapper[4844]: I0126 14:42:46.380762 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hvrq2" Jan 26 14:42:46 crc kubenswrapper[4844]: I0126 14:42:46.381266 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-hvrq2" Jan 26 14:42:46 crc kubenswrapper[4844]: I0126 14:42:46.444264 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hvrq2" Jan 26 14:42:46 crc kubenswrapper[4844]: I0126 14:42:46.639575 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-hvrq2" Jan 26 14:42:47 crc kubenswrapper[4844]: I0126 14:42:47.036892 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-dg7zb_1dec1dad-33cd-4ea8-9f69-9e69e0f56e73/prometheus-operator/0.log" Jan 26 14:42:47 crc kubenswrapper[4844]: I0126 14:42:47.051404 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6b87948799-68hvv_321b4c21-0d4a-49d5-a14a-9f49e2ea5600/prometheus-operator-admission-webhook/0.log" Jan 26 14:42:47 crc kubenswrapper[4844]: I0126 14:42:47.052561 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6b87948799-mvsq5_b2533187-bdf5-44b9-a05d-ceb2e2ea467b/prometheus-operator-admission-webhook/0.log" Jan 26 14:42:47 crc kubenswrapper[4844]: I0126 14:42:47.278307 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-clgj9_50efd8fd-16d6-4d82-a9f0-ea82c4d50c4c/operator/0.log" Jan 26 14:42:47 crc kubenswrapper[4844]: 
I0126 14:42:47.398040 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-sjw9j_a9734a40-f918-40da-9931-7d55904a646a/perses-operator/0.log" Jan 26 14:42:49 crc kubenswrapper[4844]: I0126 14:42:49.918553 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hvrq2"] Jan 26 14:42:49 crc kubenswrapper[4844]: I0126 14:42:49.919255 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-hvrq2" podUID="95d45b0c-7e7f-4dd8-a8bc-84694255d656" containerName="registry-server" containerID="cri-o://d1259f204d9b20b60c39591bb925104070e14c052930dade1e34a31d94b547bb" gracePeriod=2 Jan 26 14:42:50 crc kubenswrapper[4844]: I0126 14:42:50.419861 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hvrq2" Jan 26 14:42:50 crc kubenswrapper[4844]: I0126 14:42:50.531765 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95d45b0c-7e7f-4dd8-a8bc-84694255d656-utilities\") pod \"95d45b0c-7e7f-4dd8-a8bc-84694255d656\" (UID: \"95d45b0c-7e7f-4dd8-a8bc-84694255d656\") " Jan 26 14:42:50 crc kubenswrapper[4844]: I0126 14:42:50.532092 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4k7mj\" (UniqueName: \"kubernetes.io/projected/95d45b0c-7e7f-4dd8-a8bc-84694255d656-kube-api-access-4k7mj\") pod \"95d45b0c-7e7f-4dd8-a8bc-84694255d656\" (UID: \"95d45b0c-7e7f-4dd8-a8bc-84694255d656\") " Jan 26 14:42:50 crc kubenswrapper[4844]: I0126 14:42:50.532145 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95d45b0c-7e7f-4dd8-a8bc-84694255d656-catalog-content\") pod \"95d45b0c-7e7f-4dd8-a8bc-84694255d656\" (UID: \"95d45b0c-7e7f-4dd8-a8bc-84694255d656\") " Jan 26 14:42:50 crc kubenswrapper[4844]: I0126 14:42:50.532692 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95d45b0c-7e7f-4dd8-a8bc-84694255d656-utilities" (OuterVolumeSpecName: "utilities") pod "95d45b0c-7e7f-4dd8-a8bc-84694255d656" (UID: "95d45b0c-7e7f-4dd8-a8bc-84694255d656"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:42:50 crc kubenswrapper[4844]: I0126 14:42:50.532953 4844 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95d45b0c-7e7f-4dd8-a8bc-84694255d656-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 14:42:50 crc kubenswrapper[4844]: I0126 14:42:50.547165 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95d45b0c-7e7f-4dd8-a8bc-84694255d656-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "95d45b0c-7e7f-4dd8-a8bc-84694255d656" (UID: "95d45b0c-7e7f-4dd8-a8bc-84694255d656"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:42:50 crc kubenswrapper[4844]: I0126 14:42:50.570701 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95d45b0c-7e7f-4dd8-a8bc-84694255d656-kube-api-access-4k7mj" (OuterVolumeSpecName: "kube-api-access-4k7mj") pod "95d45b0c-7e7f-4dd8-a8bc-84694255d656" (UID: "95d45b0c-7e7f-4dd8-a8bc-84694255d656"). InnerVolumeSpecName "kube-api-access-4k7mj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:42:50 crc kubenswrapper[4844]: I0126 14:42:50.634740 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4k7mj\" (UniqueName: \"kubernetes.io/projected/95d45b0c-7e7f-4dd8-a8bc-84694255d656-kube-api-access-4k7mj\") on node \"crc\" DevicePath \"\"" Jan 26 14:42:50 crc kubenswrapper[4844]: I0126 14:42:50.635011 4844 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95d45b0c-7e7f-4dd8-a8bc-84694255d656-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 14:42:50 crc kubenswrapper[4844]: I0126 14:42:50.636297 4844 generic.go:334] "Generic (PLEG): container finished" podID="95d45b0c-7e7f-4dd8-a8bc-84694255d656" containerID="d1259f204d9b20b60c39591bb925104070e14c052930dade1e34a31d94b547bb" exitCode=0 Jan 26 14:42:50 crc kubenswrapper[4844]: I0126 14:42:50.636341 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hvrq2" event={"ID":"95d45b0c-7e7f-4dd8-a8bc-84694255d656","Type":"ContainerDied","Data":"d1259f204d9b20b60c39591bb925104070e14c052930dade1e34a31d94b547bb"} Jan 26 14:42:50 crc kubenswrapper[4844]: I0126 14:42:50.636377 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hvrq2" event={"ID":"95d45b0c-7e7f-4dd8-a8bc-84694255d656","Type":"ContainerDied","Data":"5a11dbc68bbc670e0ef05fadb5cf68af47dd78822007b236e0f4220ef0b8d197"} Jan 26 14:42:50 crc kubenswrapper[4844]: I0126 14:42:50.636388 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hvrq2" Jan 26 14:42:50 crc kubenswrapper[4844]: I0126 14:42:50.636396 4844 scope.go:117] "RemoveContainer" containerID="d1259f204d9b20b60c39591bb925104070e14c052930dade1e34a31d94b547bb" Jan 26 14:42:50 crc kubenswrapper[4844]: I0126 14:42:50.655123 4844 scope.go:117] "RemoveContainer" containerID="f280e8793c8447511ae6cfd4612fdac5abf48afd188e675410c15ec2d0dd2e1d" Jan 26 14:42:50 crc kubenswrapper[4844]: I0126 14:42:50.674170 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hvrq2"] Jan 26 14:42:50 crc kubenswrapper[4844]: I0126 14:42:50.683659 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-hvrq2"] Jan 26 14:42:50 crc kubenswrapper[4844]: I0126 14:42:50.692902 4844 scope.go:117] "RemoveContainer" containerID="54173d4dbdfd390a30f082c376f33349cdc21a86b2b5c7b3776099b7521b6de9" Jan 26 14:42:50 crc kubenswrapper[4844]: I0126 14:42:50.729137 4844 scope.go:117] "RemoveContainer" containerID="d1259f204d9b20b60c39591bb925104070e14c052930dade1e34a31d94b547bb" Jan 26 14:42:50 crc kubenswrapper[4844]: E0126 14:42:50.730244 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d1259f204d9b20b60c39591bb925104070e14c052930dade1e34a31d94b547bb\": container with ID starting with d1259f204d9b20b60c39591bb925104070e14c052930dade1e34a31d94b547bb not found: ID does not exist" containerID="d1259f204d9b20b60c39591bb925104070e14c052930dade1e34a31d94b547bb" Jan 26 14:42:50 crc kubenswrapper[4844]: I0126 14:42:50.730288 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1259f204d9b20b60c39591bb925104070e14c052930dade1e34a31d94b547bb"} err="failed to get container status 
\"d1259f204d9b20b60c39591bb925104070e14c052930dade1e34a31d94b547bb\": rpc error: code = NotFound desc = could not find container \"d1259f204d9b20b60c39591bb925104070e14c052930dade1e34a31d94b547bb\": container with ID starting with d1259f204d9b20b60c39591bb925104070e14c052930dade1e34a31d94b547bb not found: ID does not exist" Jan 26 14:42:50 crc kubenswrapper[4844]: I0126 14:42:50.730315 4844 scope.go:117] "RemoveContainer" containerID="f280e8793c8447511ae6cfd4612fdac5abf48afd188e675410c15ec2d0dd2e1d" Jan 26 14:42:50 crc kubenswrapper[4844]: E0126 14:42:50.730725 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f280e8793c8447511ae6cfd4612fdac5abf48afd188e675410c15ec2d0dd2e1d\": container with ID starting with f280e8793c8447511ae6cfd4612fdac5abf48afd188e675410c15ec2d0dd2e1d not found: ID does not exist" containerID="f280e8793c8447511ae6cfd4612fdac5abf48afd188e675410c15ec2d0dd2e1d" Jan 26 14:42:50 crc kubenswrapper[4844]: I0126 14:42:50.730761 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f280e8793c8447511ae6cfd4612fdac5abf48afd188e675410c15ec2d0dd2e1d"} err="failed to get container status \"f280e8793c8447511ae6cfd4612fdac5abf48afd188e675410c15ec2d0dd2e1d\": rpc error: code = NotFound desc = could not find container \"f280e8793c8447511ae6cfd4612fdac5abf48afd188e675410c15ec2d0dd2e1d\": container with ID starting with f280e8793c8447511ae6cfd4612fdac5abf48afd188e675410c15ec2d0dd2e1d not found: ID does not exist" Jan 26 14:42:50 crc kubenswrapper[4844]: I0126 14:42:50.730785 4844 scope.go:117] "RemoveContainer" containerID="54173d4dbdfd390a30f082c376f33349cdc21a86b2b5c7b3776099b7521b6de9" Jan 26 14:42:50 crc kubenswrapper[4844]: E0126 14:42:50.731120 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54173d4dbdfd390a30f082c376f33349cdc21a86b2b5c7b3776099b7521b6de9\": container with ID starting with 54173d4dbdfd390a30f082c376f33349cdc21a86b2b5c7b3776099b7521b6de9 not found: ID does not exist" containerID="54173d4dbdfd390a30f082c376f33349cdc21a86b2b5c7b3776099b7521b6de9" Jan 26 14:42:50 crc kubenswrapper[4844]: I0126 14:42:50.731140 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54173d4dbdfd390a30f082c376f33349cdc21a86b2b5c7b3776099b7521b6de9"} err="failed to get container status \"54173d4dbdfd390a30f082c376f33349cdc21a86b2b5c7b3776099b7521b6de9\": rpc error: code = NotFound desc = could not find container \"54173d4dbdfd390a30f082c376f33349cdc21a86b2b5c7b3776099b7521b6de9\": container with ID starting with 54173d4dbdfd390a30f082c376f33349cdc21a86b2b5c7b3776099b7521b6de9 not found: ID does not exist" Jan 26 14:42:51 crc kubenswrapper[4844]: I0126 14:42:51.332358 4844 scope.go:117] "RemoveContainer" containerID="c12eeb6b5514c431e6633f26b4b62fb527ef75940286b5eb2ed1e213af12264a" Jan 26 14:42:51 crc kubenswrapper[4844]: E0126 14:42:51.332798 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 14:42:51 crc kubenswrapper[4844]: I0126 14:42:51.335782 4844 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95d45b0c-7e7f-4dd8-a8bc-84694255d656" path="/var/lib/kubelet/pods/95d45b0c-7e7f-4dd8-a8bc-84694255d656/volumes" Jan 26 14:43:03 crc kubenswrapper[4844]: I0126 14:43:03.319834 4844 scope.go:117] "RemoveContainer" containerID="c12eeb6b5514c431e6633f26b4b62fb527ef75940286b5eb2ed1e213af12264a" Jan 26 14:43:03 crc kubenswrapper[4844]: E0126 14:43:03.320725 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 14:43:17 crc kubenswrapper[4844]: I0126 14:43:17.314023 4844 scope.go:117] "RemoveContainer" containerID="c12eeb6b5514c431e6633f26b4b62fb527ef75940286b5eb2ed1e213af12264a" Jan 26 14:43:17 crc kubenswrapper[4844]: E0126 14:43:17.315437 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 14:43:30 crc kubenswrapper[4844]: I0126 14:43:30.317010 4844 scope.go:117] "RemoveContainer" containerID="c12eeb6b5514c431e6633f26b4b62fb527ef75940286b5eb2ed1e213af12264a" Jan 26 14:43:30 crc kubenswrapper[4844]: E0126 14:43:30.317967 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 14:43:44 crc kubenswrapper[4844]: I0126 14:43:44.314742 4844 scope.go:117] "RemoveContainer" containerID="c12eeb6b5514c431e6633f26b4b62fb527ef75940286b5eb2ed1e213af12264a" Jan 26 14:43:44 crc kubenswrapper[4844]: E0126 14:43:44.316279 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 14:43:55 crc kubenswrapper[4844]: I0126 14:43:55.313742 4844 scope.go:117] "RemoveContainer" containerID="c12eeb6b5514c431e6633f26b4b62fb527ef75940286b5eb2ed1e213af12264a" Jan 26 14:43:55 crc kubenswrapper[4844]: E0126 14:43:55.314759 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 
14:44:10 crc kubenswrapper[4844]: I0126 14:44:10.313421 4844 scope.go:117] "RemoveContainer" containerID="c12eeb6b5514c431e6633f26b4b62fb527ef75940286b5eb2ed1e213af12264a" Jan 26 14:44:10 crc kubenswrapper[4844]: E0126 14:44:10.314235 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 14:44:23 crc kubenswrapper[4844]: I0126 14:44:23.327549 4844 scope.go:117] "RemoveContainer" containerID="c12eeb6b5514c431e6633f26b4b62fb527ef75940286b5eb2ed1e213af12264a" Jan 26 14:44:23 crc kubenswrapper[4844]: E0126 14:44:23.328575 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 14:44:34 crc kubenswrapper[4844]: I0126 14:44:34.315277 4844 scope.go:117] "RemoveContainer" containerID="c12eeb6b5514c431e6633f26b4b62fb527ef75940286b5eb2ed1e213af12264a" Jan 26 14:44:34 crc kubenswrapper[4844]: E0126 14:44:34.316271 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 14:44:47 crc kubenswrapper[4844]: I0126 14:44:47.313846 4844 scope.go:117] "RemoveContainer" containerID="c12eeb6b5514c431e6633f26b4b62fb527ef75940286b5eb2ed1e213af12264a" Jan 26 14:44:47 crc kubenswrapper[4844]: E0126 14:44:47.314732 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 14:44:49 crc kubenswrapper[4844]: I0126 14:44:49.265000 4844 scope.go:117] "RemoveContainer" containerID="f09b14eab4abf34efec4429cc8d2f18629a17ec34d81e6fa6a6dbab439131a23" Jan 26 14:44:51 crc kubenswrapper[4844]: I0126 14:44:51.111535 4844 generic.go:334] "Generic (PLEG): container finished" podID="1674b4f8-c352-44c3-a14a-f81e006c3586" containerID="4a69a08b8984212e046f23d1d0ae2f908bde143d91582d552c7c9ea8404e9554" exitCode=0 Jan 26 14:44:51 crc kubenswrapper[4844]: I0126 14:44:51.111670 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cwwwg/must-gather-dhvsk" event={"ID":"1674b4f8-c352-44c3-a14a-f81e006c3586","Type":"ContainerDied","Data":"4a69a08b8984212e046f23d1d0ae2f908bde143d91582d552c7c9ea8404e9554"} Jan 26 14:44:51 crc kubenswrapper[4844]: I0126 14:44:51.112977 4844 scope.go:117] "RemoveContainer" 
containerID="4a69a08b8984212e046f23d1d0ae2f908bde143d91582d552c7c9ea8404e9554" Jan 26 14:44:51 crc kubenswrapper[4844]: I0126 14:44:51.329071 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-cwwwg_must-gather-dhvsk_1674b4f8-c352-44c3-a14a-f81e006c3586/gather/0.log" Jan 26 14:44:58 crc kubenswrapper[4844]: I0126 14:44:58.313495 4844 scope.go:117] "RemoveContainer" containerID="c12eeb6b5514c431e6633f26b4b62fb527ef75940286b5eb2ed1e213af12264a" Jan 26 14:44:58 crc kubenswrapper[4844]: E0126 14:44:58.314434 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 14:44:59 crc kubenswrapper[4844]: I0126 14:44:59.951412 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-cwwwg/must-gather-dhvsk"] Jan 26 14:44:59 crc kubenswrapper[4844]: I0126 14:44:59.952028 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-cwwwg/must-gather-dhvsk" podUID="1674b4f8-c352-44c3-a14a-f81e006c3586" containerName="copy" containerID="cri-o://10405af140884f01e92023eb147986bc6696b13c12350b1eae03ca6376d1e90f" gracePeriod=2 Jan 26 14:44:59 crc kubenswrapper[4844]: I0126 14:44:59.962841 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-cwwwg/must-gather-dhvsk"] Jan 26 14:45:00 crc kubenswrapper[4844]: I0126 14:45:00.192870 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490645-kltxn"] Jan 26 14:45:00 crc kubenswrapper[4844]: E0126 14:45:00.193266 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1674b4f8-c352-44c3-a14a-f81e006c3586" containerName="gather" Jan 26 14:45:00 crc kubenswrapper[4844]: I0126 14:45:00.193281 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="1674b4f8-c352-44c3-a14a-f81e006c3586" containerName="gather" Jan 26 14:45:00 crc kubenswrapper[4844]: E0126 14:45:00.193303 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95d45b0c-7e7f-4dd8-a8bc-84694255d656" containerName="registry-server" Jan 26 14:45:00 crc kubenswrapper[4844]: I0126 14:45:00.193310 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="95d45b0c-7e7f-4dd8-a8bc-84694255d656" containerName="registry-server" Jan 26 14:45:00 crc kubenswrapper[4844]: E0126 14:45:00.193327 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95d45b0c-7e7f-4dd8-a8bc-84694255d656" containerName="extract-utilities" Jan 26 14:45:00 crc kubenswrapper[4844]: I0126 14:45:00.193334 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="95d45b0c-7e7f-4dd8-a8bc-84694255d656" containerName="extract-utilities" Jan 26 14:45:00 crc kubenswrapper[4844]: E0126 14:45:00.193351 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1674b4f8-c352-44c3-a14a-f81e006c3586" containerName="copy" Jan 26 14:45:00 crc kubenswrapper[4844]: I0126 14:45:00.193357 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="1674b4f8-c352-44c3-a14a-f81e006c3586" containerName="copy" Jan 26 14:45:00 crc kubenswrapper[4844]: E0126 14:45:00.193376 4844 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="95d45b0c-7e7f-4dd8-a8bc-84694255d656" containerName="extract-content" Jan 26 14:45:00 crc kubenswrapper[4844]: I0126 14:45:00.193381 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="95d45b0c-7e7f-4dd8-a8bc-84694255d656" containerName="extract-content" Jan 26 14:45:00 crc kubenswrapper[4844]: I0126 14:45:00.193561 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="1674b4f8-c352-44c3-a14a-f81e006c3586" containerName="copy" Jan 26 14:45:00 crc kubenswrapper[4844]: I0126 14:45:00.193578 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="1674b4f8-c352-44c3-a14a-f81e006c3586" containerName="gather" Jan 26 14:45:00 crc kubenswrapper[4844]: I0126 14:45:00.193613 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="95d45b0c-7e7f-4dd8-a8bc-84694255d656" containerName="registry-server" Jan 26 14:45:00 crc kubenswrapper[4844]: I0126 14:45:00.194255 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490645-kltxn" Jan 26 14:45:00 crc kubenswrapper[4844]: I0126 14:45:00.196873 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 26 14:45:00 crc kubenswrapper[4844]: I0126 14:45:00.197079 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 26 14:45:00 crc kubenswrapper[4844]: I0126 14:45:00.210389 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490645-kltxn"] Jan 26 14:45:00 crc kubenswrapper[4844]: I0126 14:45:00.222259 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-cwwwg_must-gather-dhvsk_1674b4f8-c352-44c3-a14a-f81e006c3586/copy/0.log" Jan 26 14:45:00 crc kubenswrapper[4844]: I0126 14:45:00.222576 4844 generic.go:334] "Generic (PLEG): container finished" podID="1674b4f8-c352-44c3-a14a-f81e006c3586" containerID="10405af140884f01e92023eb147986bc6696b13c12350b1eae03ca6376d1e90f" exitCode=143 Jan 26 14:45:00 crc kubenswrapper[4844]: I0126 14:45:00.351060 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/833dfd1d-58e8-4e20-819d-7b0928a25740-secret-volume\") pod \"collect-profiles-29490645-kltxn\" (UID: \"833dfd1d-58e8-4e20-819d-7b0928a25740\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490645-kltxn" Jan 26 14:45:00 crc kubenswrapper[4844]: I0126 14:45:00.351333 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/833dfd1d-58e8-4e20-819d-7b0928a25740-config-volume\") pod \"collect-profiles-29490645-kltxn\" (UID: \"833dfd1d-58e8-4e20-819d-7b0928a25740\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490645-kltxn" Jan 26 14:45:00 crc kubenswrapper[4844]: I0126 14:45:00.351418 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvjxp\" (UniqueName: \"kubernetes.io/projected/833dfd1d-58e8-4e20-819d-7b0928a25740-kube-api-access-wvjxp\") pod \"collect-profiles-29490645-kltxn\" (UID: \"833dfd1d-58e8-4e20-819d-7b0928a25740\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490645-kltxn" Jan 26 14:45:00 crc kubenswrapper[4844]: I0126 14:45:00.453623 4844 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvjxp\" (UniqueName: \"kubernetes.io/projected/833dfd1d-58e8-4e20-819d-7b0928a25740-kube-api-access-wvjxp\") pod \"collect-profiles-29490645-kltxn\" (UID: \"833dfd1d-58e8-4e20-819d-7b0928a25740\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490645-kltxn" Jan 26 14:45:00 crc kubenswrapper[4844]: I0126 14:45:00.453827 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/833dfd1d-58e8-4e20-819d-7b0928a25740-secret-volume\") pod \"collect-profiles-29490645-kltxn\" (UID: \"833dfd1d-58e8-4e20-819d-7b0928a25740\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490645-kltxn" Jan 26 14:45:00 crc kubenswrapper[4844]: I0126 14:45:00.453903 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/833dfd1d-58e8-4e20-819d-7b0928a25740-config-volume\") pod \"collect-profiles-29490645-kltxn\" (UID: \"833dfd1d-58e8-4e20-819d-7b0928a25740\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490645-kltxn" Jan 26 14:45:00 crc kubenswrapper[4844]: I0126 14:45:00.454965 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/833dfd1d-58e8-4e20-819d-7b0928a25740-config-volume\") pod \"collect-profiles-29490645-kltxn\" (UID: \"833dfd1d-58e8-4e20-819d-7b0928a25740\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490645-kltxn" Jan 26 14:45:00 crc kubenswrapper[4844]: I0126 14:45:00.460163 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/833dfd1d-58e8-4e20-819d-7b0928a25740-secret-volume\") pod \"collect-profiles-29490645-kltxn\" (UID: \"833dfd1d-58e8-4e20-819d-7b0928a25740\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490645-kltxn" Jan 26 14:45:00 crc kubenswrapper[4844]: I0126 14:45:00.469183 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvjxp\" (UniqueName: \"kubernetes.io/projected/833dfd1d-58e8-4e20-819d-7b0928a25740-kube-api-access-wvjxp\") pod \"collect-profiles-29490645-kltxn\" (UID: \"833dfd1d-58e8-4e20-819d-7b0928a25740\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490645-kltxn" Jan 26 14:45:00 crc kubenswrapper[4844]: I0126 14:45:00.516564 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490645-kltxn" Jan 26 14:45:00 crc kubenswrapper[4844]: I0126 14:45:00.531156 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-cwwwg_must-gather-dhvsk_1674b4f8-c352-44c3-a14a-f81e006c3586/copy/0.log" Jan 26 14:45:00 crc kubenswrapper[4844]: I0126 14:45:00.532026 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-cwwwg/must-gather-dhvsk" Jan 26 14:45:00 crc kubenswrapper[4844]: I0126 14:45:00.658256 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/1674b4f8-c352-44c3-a14a-f81e006c3586-must-gather-output\") pod \"1674b4f8-c352-44c3-a14a-f81e006c3586\" (UID: \"1674b4f8-c352-44c3-a14a-f81e006c3586\") " Jan 26 14:45:00 crc kubenswrapper[4844]: I0126 14:45:00.658948 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bhr8g\" (UniqueName: \"kubernetes.io/projected/1674b4f8-c352-44c3-a14a-f81e006c3586-kube-api-access-bhr8g\") pod \"1674b4f8-c352-44c3-a14a-f81e006c3586\" (UID: \"1674b4f8-c352-44c3-a14a-f81e006c3586\") " Jan 26 14:45:00 crc kubenswrapper[4844]: I0126 14:45:00.666797 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1674b4f8-c352-44c3-a14a-f81e006c3586-kube-api-access-bhr8g" (OuterVolumeSpecName: "kube-api-access-bhr8g") pod "1674b4f8-c352-44c3-a14a-f81e006c3586" (UID: "1674b4f8-c352-44c3-a14a-f81e006c3586"). InnerVolumeSpecName "kube-api-access-bhr8g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:45:00 crc kubenswrapper[4844]: I0126 14:45:00.762065 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bhr8g\" (UniqueName: \"kubernetes.io/projected/1674b4f8-c352-44c3-a14a-f81e006c3586-kube-api-access-bhr8g\") on node \"crc\" DevicePath \"\"" Jan 26 14:45:00 crc kubenswrapper[4844]: I0126 14:45:00.873556 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1674b4f8-c352-44c3-a14a-f81e006c3586-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "1674b4f8-c352-44c3-a14a-f81e006c3586" (UID: "1674b4f8-c352-44c3-a14a-f81e006c3586"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:45:00 crc kubenswrapper[4844]: I0126 14:45:00.962011 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490645-kltxn"] Jan 26 14:45:00 crc kubenswrapper[4844]: I0126 14:45:00.966347 4844 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/1674b4f8-c352-44c3-a14a-f81e006c3586-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 26 14:45:01 crc kubenswrapper[4844]: I0126 14:45:01.233725 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-cwwwg_must-gather-dhvsk_1674b4f8-c352-44c3-a14a-f81e006c3586/copy/0.log" Jan 26 14:45:01 crc kubenswrapper[4844]: I0126 14:45:01.234719 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-cwwwg/must-gather-dhvsk" Jan 26 14:45:01 crc kubenswrapper[4844]: I0126 14:45:01.234741 4844 scope.go:117] "RemoveContainer" containerID="10405af140884f01e92023eb147986bc6696b13c12350b1eae03ca6376d1e90f" Jan 26 14:45:01 crc kubenswrapper[4844]: I0126 14:45:01.236238 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490645-kltxn" event={"ID":"833dfd1d-58e8-4e20-819d-7b0928a25740","Type":"ContainerStarted","Data":"86890dc27c80dcab41cd2e8d50fb31b90073c432fc0e2924bc52519d57969074"} Jan 26 14:45:01 crc kubenswrapper[4844]: I0126 14:45:01.236290 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490645-kltxn" event={"ID":"833dfd1d-58e8-4e20-819d-7b0928a25740","Type":"ContainerStarted","Data":"8d74d475c9b8bccd2c8433f4e3062d9dbaaadb3386d382592ac5b00cf8d2ba01"} Jan 26 14:45:01 crc kubenswrapper[4844]: I0126 14:45:01.270746 4844 scope.go:117] "RemoveContainer" containerID="4a69a08b8984212e046f23d1d0ae2f908bde143d91582d552c7c9ea8404e9554" Jan 26 14:45:01 crc kubenswrapper[4844]: I0126 14:45:01.277985 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29490645-kltxn" podStartSLOduration=1.277962038 podStartE2EDuration="1.277962038s" podCreationTimestamp="2026-01-26 14:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:45:01.269675386 +0000 UTC m=+7278.203043028" watchObservedRunningTime="2026-01-26 14:45:01.277962038 +0000 UTC m=+7278.211329670" Jan 26 14:45:01 crc kubenswrapper[4844]: I0126 14:45:01.327544 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1674b4f8-c352-44c3-a14a-f81e006c3586" path="/var/lib/kubelet/pods/1674b4f8-c352-44c3-a14a-f81e006c3586/volumes" Jan 26 14:45:02 crc kubenswrapper[4844]: I0126 14:45:02.249153 4844 generic.go:334] "Generic (PLEG): container finished" podID="833dfd1d-58e8-4e20-819d-7b0928a25740" containerID="86890dc27c80dcab41cd2e8d50fb31b90073c432fc0e2924bc52519d57969074" exitCode=0 Jan 26 14:45:02 crc kubenswrapper[4844]: I0126 14:45:02.249258 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490645-kltxn" event={"ID":"833dfd1d-58e8-4e20-819d-7b0928a25740","Type":"ContainerDied","Data":"86890dc27c80dcab41cd2e8d50fb31b90073c432fc0e2924bc52519d57969074"} Jan 26 14:45:03 crc kubenswrapper[4844]: I0126 14:45:03.638957 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490645-kltxn" Jan 26 14:45:03 crc kubenswrapper[4844]: I0126 14:45:03.721644 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wvjxp\" (UniqueName: \"kubernetes.io/projected/833dfd1d-58e8-4e20-819d-7b0928a25740-kube-api-access-wvjxp\") pod \"833dfd1d-58e8-4e20-819d-7b0928a25740\" (UID: \"833dfd1d-58e8-4e20-819d-7b0928a25740\") " Jan 26 14:45:03 crc kubenswrapper[4844]: I0126 14:45:03.721752 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/833dfd1d-58e8-4e20-819d-7b0928a25740-secret-volume\") pod \"833dfd1d-58e8-4e20-819d-7b0928a25740\" (UID: \"833dfd1d-58e8-4e20-819d-7b0928a25740\") " Jan 26 14:45:03 crc kubenswrapper[4844]: I0126 14:45:03.721802 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/833dfd1d-58e8-4e20-819d-7b0928a25740-config-volume\") pod \"833dfd1d-58e8-4e20-819d-7b0928a25740\" (UID: \"833dfd1d-58e8-4e20-819d-7b0928a25740\") " Jan 26 14:45:03 crc kubenswrapper[4844]: I0126 14:45:03.722820 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/833dfd1d-58e8-4e20-819d-7b0928a25740-config-volume" (OuterVolumeSpecName: "config-volume") pod "833dfd1d-58e8-4e20-819d-7b0928a25740" (UID: "833dfd1d-58e8-4e20-819d-7b0928a25740"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:45:03 crc kubenswrapper[4844]: I0126 14:45:03.728361 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/833dfd1d-58e8-4e20-819d-7b0928a25740-kube-api-access-wvjxp" (OuterVolumeSpecName: "kube-api-access-wvjxp") pod "833dfd1d-58e8-4e20-819d-7b0928a25740" (UID: "833dfd1d-58e8-4e20-819d-7b0928a25740"). InnerVolumeSpecName "kube-api-access-wvjxp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:45:03 crc kubenswrapper[4844]: I0126 14:45:03.728452 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/833dfd1d-58e8-4e20-819d-7b0928a25740-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "833dfd1d-58e8-4e20-819d-7b0928a25740" (UID: "833dfd1d-58e8-4e20-819d-7b0928a25740"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:45:03 crc kubenswrapper[4844]: I0126 14:45:03.824558 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wvjxp\" (UniqueName: \"kubernetes.io/projected/833dfd1d-58e8-4e20-819d-7b0928a25740-kube-api-access-wvjxp\") on node \"crc\" DevicePath \"\"" Jan 26 14:45:03 crc kubenswrapper[4844]: I0126 14:45:03.824848 4844 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/833dfd1d-58e8-4e20-819d-7b0928a25740-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 14:45:03 crc kubenswrapper[4844]: I0126 14:45:03.824862 4844 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/833dfd1d-58e8-4e20-819d-7b0928a25740-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 14:45:04 crc kubenswrapper[4844]: I0126 14:45:04.269572 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490645-kltxn" event={"ID":"833dfd1d-58e8-4e20-819d-7b0928a25740","Type":"ContainerDied","Data":"8d74d475c9b8bccd2c8433f4e3062d9dbaaadb3386d382592ac5b00cf8d2ba01"} Jan 26 14:45:04 crc kubenswrapper[4844]: I0126 14:45:04.269644 4844 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d74d475c9b8bccd2c8433f4e3062d9dbaaadb3386d382592ac5b00cf8d2ba01" Jan 26 14:45:04 crc kubenswrapper[4844]: I0126 14:45:04.269727 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490645-kltxn" Jan 26 14:45:04 crc kubenswrapper[4844]: I0126 14:45:04.707861 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490600-b9nq9"] Jan 26 14:45:04 crc kubenswrapper[4844]: I0126 14:45:04.716581 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490600-b9nq9"] Jan 26 14:45:05 crc kubenswrapper[4844]: I0126 14:45:05.323393 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60fa6053-be49-467a-9c66-92823955a811" path="/var/lib/kubelet/pods/60fa6053-be49-467a-9c66-92823955a811/volumes" Jan 26 14:45:12 crc kubenswrapper[4844]: I0126 14:45:12.314654 4844 scope.go:117] "RemoveContainer" containerID="c12eeb6b5514c431e6633f26b4b62fb527ef75940286b5eb2ed1e213af12264a" Jan 26 14:45:12 crc kubenswrapper[4844]: E0126 14:45:12.315870 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 14:45:24 crc kubenswrapper[4844]: I0126 14:45:24.313312 4844 scope.go:117] "RemoveContainer" containerID="c12eeb6b5514c431e6633f26b4b62fb527ef75940286b5eb2ed1e213af12264a" Jan 26 14:45:24 crc kubenswrapper[4844]: E0126 14:45:24.314287 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 14:45:37 crc kubenswrapper[4844]: I0126 14:45:37.314392 4844 scope.go:117] "RemoveContainer" containerID="c12eeb6b5514c431e6633f26b4b62fb527ef75940286b5eb2ed1e213af12264a" Jan 26 14:45:37 crc kubenswrapper[4844]: E0126 14:45:37.315777 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 14:45:49 crc kubenswrapper[4844]: I0126 14:45:49.333233 4844 scope.go:117] "RemoveContainer" containerID="90a6b279aa5e19da593518440b2b3ac34fe08ef95d1b389c379e2c1ef94a8bc0" Jan 26 14:45:49 crc kubenswrapper[4844]: I0126 14:45:49.359323 4844 scope.go:117] "RemoveContainer" containerID="d03cf587c05f4a93fc2fc7353d6de8c19326bb8bd2866dc91035415f7c551812" Jan 26 14:45:50 crc kubenswrapper[4844]: I0126 14:45:50.313454 4844 scope.go:117] "RemoveContainer" containerID="c12eeb6b5514c431e6633f26b4b62fb527ef75940286b5eb2ed1e213af12264a" Jan 26 14:45:50 crc kubenswrapper[4844]: E0126 14:45:50.314014 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 14:46:01 crc kubenswrapper[4844]: I0126 14:46:01.313283 4844 scope.go:117] "RemoveContainer" containerID="c12eeb6b5514c431e6633f26b4b62fb527ef75940286b5eb2ed1e213af12264a" Jan 26 14:46:01 crc kubenswrapper[4844]: E0126 14:46:01.315363 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 14:46:13 crc kubenswrapper[4844]: I0126 14:46:13.319155 4844 scope.go:117] "RemoveContainer" containerID="c12eeb6b5514c431e6633f26b4b62fb527ef75940286b5eb2ed1e213af12264a" Jan 26 14:46:13 crc kubenswrapper[4844]: E0126 14:46:13.319970 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 14:46:24 crc kubenswrapper[4844]: I0126 14:46:24.314683 4844 scope.go:117] "RemoveContainer" containerID="c12eeb6b5514c431e6633f26b4b62fb527ef75940286b5eb2ed1e213af12264a" Jan 26 14:46:24 crc kubenswrapper[4844]: E0126 14:46:24.317501 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 14:46:31 crc kubenswrapper[4844]: I0126 14:46:31.089078 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vp6xp"] Jan 26 14:46:31 crc kubenswrapper[4844]: E0126 14:46:31.091081 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="833dfd1d-58e8-4e20-819d-7b0928a25740" containerName="collect-profiles" Jan 26 14:46:31 crc kubenswrapper[4844]: I0126 14:46:31.091166 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="833dfd1d-58e8-4e20-819d-7b0928a25740" containerName="collect-profiles" Jan 26 14:46:31 crc kubenswrapper[4844]: I0126 14:46:31.091432 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="833dfd1d-58e8-4e20-819d-7b0928a25740" containerName="collect-profiles" Jan 26 14:46:31 crc kubenswrapper[4844]: I0126 14:46:31.092945 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vp6xp" Jan 26 14:46:31 crc kubenswrapper[4844]: I0126 14:46:31.100949 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vp6xp"] Jan 26 14:46:31 crc kubenswrapper[4844]: I0126 14:46:31.120706 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2aa1f075-38dc-41d6-a662-40c16c29ec51-catalog-content\") pod \"community-operators-vp6xp\" (UID: \"2aa1f075-38dc-41d6-a662-40c16c29ec51\") " pod="openshift-marketplace/community-operators-vp6xp" Jan 26 14:46:31 crc kubenswrapper[4844]: I0126 14:46:31.120869 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2aa1f075-38dc-41d6-a662-40c16c29ec51-utilities\") pod \"community-operators-vp6xp\" (UID: \"2aa1f075-38dc-41d6-a662-40c16c29ec51\") " pod="openshift-marketplace/community-operators-vp6xp" Jan 26 14:46:31 crc kubenswrapper[4844]: I0126 14:46:31.121025 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-574kj\" (UniqueName: \"kubernetes.io/projected/2aa1f075-38dc-41d6-a662-40c16c29ec51-kube-api-access-574kj\") pod \"community-operators-vp6xp\" (UID: \"2aa1f075-38dc-41d6-a662-40c16c29ec51\") " pod="openshift-marketplace/community-operators-vp6xp" Jan 26 14:46:31 crc kubenswrapper[4844]: I0126 14:46:31.223748 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-574kj\" (UniqueName: \"kubernetes.io/projected/2aa1f075-38dc-41d6-a662-40c16c29ec51-kube-api-access-574kj\") pod \"community-operators-vp6xp\" (UID: \"2aa1f075-38dc-41d6-a662-40c16c29ec51\") " pod="openshift-marketplace/community-operators-vp6xp" Jan 26 14:46:31 crc kubenswrapper[4844]: I0126 14:46:31.223895 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2aa1f075-38dc-41d6-a662-40c16c29ec51-catalog-content\") pod \"community-operators-vp6xp\" (UID: \"2aa1f075-38dc-41d6-a662-40c16c29ec51\") " pod="openshift-marketplace/community-operators-vp6xp" Jan 26 14:46:31 crc kubenswrapper[4844]: I0126 14:46:31.224016 4844 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2aa1f075-38dc-41d6-a662-40c16c29ec51-utilities\") pod \"community-operators-vp6xp\" (UID: \"2aa1f075-38dc-41d6-a662-40c16c29ec51\") " pod="openshift-marketplace/community-operators-vp6xp" Jan 26 14:46:31 crc kubenswrapper[4844]: I0126 14:46:31.224571 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2aa1f075-38dc-41d6-a662-40c16c29ec51-catalog-content\") pod \"community-operators-vp6xp\" (UID: \"2aa1f075-38dc-41d6-a662-40c16c29ec51\") " pod="openshift-marketplace/community-operators-vp6xp" Jan 26 14:46:31 crc kubenswrapper[4844]: I0126 14:46:31.224628 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2aa1f075-38dc-41d6-a662-40c16c29ec51-utilities\") pod \"community-operators-vp6xp\" (UID: \"2aa1f075-38dc-41d6-a662-40c16c29ec51\") " pod="openshift-marketplace/community-operators-vp6xp" Jan 26 14:46:31 crc kubenswrapper[4844]: I0126 14:46:31.243653 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-574kj\" (UniqueName: \"kubernetes.io/projected/2aa1f075-38dc-41d6-a662-40c16c29ec51-kube-api-access-574kj\") pod \"community-operators-vp6xp\" (UID: \"2aa1f075-38dc-41d6-a662-40c16c29ec51\") " pod="openshift-marketplace/community-operators-vp6xp" Jan 26 14:46:31 crc kubenswrapper[4844]: I0126 14:46:31.420659 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vp6xp" Jan 26 14:46:31 crc kubenswrapper[4844]: I0126 14:46:31.976847 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vp6xp"] Jan 26 14:46:32 crc kubenswrapper[4844]: I0126 14:46:32.181386 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vp6xp" event={"ID":"2aa1f075-38dc-41d6-a662-40c16c29ec51","Type":"ContainerStarted","Data":"5ed73ff9f54332c21668d58ed108eafb004e06cd1062b3bdfa0283be75a4cefc"} Jan 26 14:46:33 crc kubenswrapper[4844]: I0126 14:46:33.191497 4844 generic.go:334] "Generic (PLEG): container finished" podID="2aa1f075-38dc-41d6-a662-40c16c29ec51" containerID="2d0aaaf875f9d51bb5c68cf21c197fdd73b90c43e9b947570d22142b7ddcea45" exitCode=0 Jan 26 14:46:33 crc kubenswrapper[4844]: I0126 14:46:33.191583 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vp6xp" event={"ID":"2aa1f075-38dc-41d6-a662-40c16c29ec51","Type":"ContainerDied","Data":"2d0aaaf875f9d51bb5c68cf21c197fdd73b90c43e9b947570d22142b7ddcea45"} Jan 26 14:46:34 crc kubenswrapper[4844]: I0126 14:46:34.200997 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vp6xp" event={"ID":"2aa1f075-38dc-41d6-a662-40c16c29ec51","Type":"ContainerStarted","Data":"08cabbc763920fddefc7cfd0dcbb5f11cda5202cbdd6a215deafd32998db2bba"} Jan 26 14:46:35 crc kubenswrapper[4844]: I0126 14:46:35.214626 4844 generic.go:334] "Generic (PLEG): container finished" podID="2aa1f075-38dc-41d6-a662-40c16c29ec51" containerID="08cabbc763920fddefc7cfd0dcbb5f11cda5202cbdd6a215deafd32998db2bba" exitCode=0 Jan 26 14:46:35 crc kubenswrapper[4844]: I0126 14:46:35.214986 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vp6xp" 
event={"ID":"2aa1f075-38dc-41d6-a662-40c16c29ec51","Type":"ContainerDied","Data":"08cabbc763920fddefc7cfd0dcbb5f11cda5202cbdd6a215deafd32998db2bba"} Jan 26 14:46:36 crc kubenswrapper[4844]: I0126 14:46:36.227456 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vp6xp" event={"ID":"2aa1f075-38dc-41d6-a662-40c16c29ec51","Type":"ContainerStarted","Data":"ace6878ddc78717cad52839d699690f3a23d0ed8af371395418245fdc642ab92"} Jan 26 14:46:36 crc kubenswrapper[4844]: I0126 14:46:36.252898 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vp6xp" podStartSLOduration=2.456442347 podStartE2EDuration="5.252870056s" podCreationTimestamp="2026-01-26 14:46:31 +0000 UTC" firstStartedPulling="2026-01-26 14:46:33.193959289 +0000 UTC m=+7370.127326901" lastFinishedPulling="2026-01-26 14:46:35.990387008 +0000 UTC m=+7372.923754610" observedRunningTime="2026-01-26 14:46:36.244846321 +0000 UTC m=+7373.178213933" watchObservedRunningTime="2026-01-26 14:46:36.252870056 +0000 UTC m=+7373.186237668" Jan 26 14:46:38 crc kubenswrapper[4844]: I0126 14:46:38.313101 4844 scope.go:117] "RemoveContainer" containerID="c12eeb6b5514c431e6633f26b4b62fb527ef75940286b5eb2ed1e213af12264a" Jan 26 14:46:38 crc kubenswrapper[4844]: E0126 14:46:38.313832 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 14:46:41 crc kubenswrapper[4844]: I0126 14:46:41.421421 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vp6xp" Jan 26 14:46:41 crc kubenswrapper[4844]: I0126 14:46:41.422304 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vp6xp" Jan 26 14:46:41 crc kubenswrapper[4844]: I0126 14:46:41.478118 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vp6xp" Jan 26 14:46:42 crc kubenswrapper[4844]: I0126 14:46:42.394868 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vp6xp" Jan 26 14:46:42 crc kubenswrapper[4844]: I0126 14:46:42.455932 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vp6xp"] Jan 26 14:46:44 crc kubenswrapper[4844]: I0126 14:46:44.353697 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vp6xp" podUID="2aa1f075-38dc-41d6-a662-40c16c29ec51" containerName="registry-server" containerID="cri-o://ace6878ddc78717cad52839d699690f3a23d0ed8af371395418245fdc642ab92" gracePeriod=2 Jan 26 14:46:44 crc kubenswrapper[4844]: I0126 14:46:44.883354 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vp6xp" Jan 26 14:46:45 crc kubenswrapper[4844]: I0126 14:46:45.031305 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2aa1f075-38dc-41d6-a662-40c16c29ec51-utilities\") pod \"2aa1f075-38dc-41d6-a662-40c16c29ec51\" (UID: \"2aa1f075-38dc-41d6-a662-40c16c29ec51\") " Jan 26 14:46:45 crc kubenswrapper[4844]: I0126 14:46:45.031412 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-574kj\" (UniqueName: \"kubernetes.io/projected/2aa1f075-38dc-41d6-a662-40c16c29ec51-kube-api-access-574kj\") pod \"2aa1f075-38dc-41d6-a662-40c16c29ec51\" (UID: \"2aa1f075-38dc-41d6-a662-40c16c29ec51\") " Jan 26 14:46:45 crc kubenswrapper[4844]: I0126 14:46:45.031458 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2aa1f075-38dc-41d6-a662-40c16c29ec51-catalog-content\") pod \"2aa1f075-38dc-41d6-a662-40c16c29ec51\" (UID: \"2aa1f075-38dc-41d6-a662-40c16c29ec51\") " Jan 26 14:46:45 crc kubenswrapper[4844]: I0126 14:46:45.032387 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2aa1f075-38dc-41d6-a662-40c16c29ec51-utilities" (OuterVolumeSpecName: "utilities") pod "2aa1f075-38dc-41d6-a662-40c16c29ec51" (UID: "2aa1f075-38dc-41d6-a662-40c16c29ec51"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:46:45 crc kubenswrapper[4844]: I0126 14:46:45.037907 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2aa1f075-38dc-41d6-a662-40c16c29ec51-kube-api-access-574kj" (OuterVolumeSpecName: "kube-api-access-574kj") pod "2aa1f075-38dc-41d6-a662-40c16c29ec51" (UID: "2aa1f075-38dc-41d6-a662-40c16c29ec51"). InnerVolumeSpecName "kube-api-access-574kj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:46:45 crc kubenswrapper[4844]: I0126 14:46:45.087915 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2aa1f075-38dc-41d6-a662-40c16c29ec51-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2aa1f075-38dc-41d6-a662-40c16c29ec51" (UID: "2aa1f075-38dc-41d6-a662-40c16c29ec51"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:46:45 crc kubenswrapper[4844]: I0126 14:46:45.134260 4844 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2aa1f075-38dc-41d6-a662-40c16c29ec51-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 14:46:45 crc kubenswrapper[4844]: I0126 14:46:45.134311 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-574kj\" (UniqueName: \"kubernetes.io/projected/2aa1f075-38dc-41d6-a662-40c16c29ec51-kube-api-access-574kj\") on node \"crc\" DevicePath \"\"" Jan 26 14:46:45 crc kubenswrapper[4844]: I0126 14:46:45.134330 4844 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2aa1f075-38dc-41d6-a662-40c16c29ec51-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 14:46:45 crc kubenswrapper[4844]: I0126 14:46:45.369407 4844 generic.go:334] "Generic (PLEG): container finished" podID="2aa1f075-38dc-41d6-a662-40c16c29ec51" containerID="ace6878ddc78717cad52839d699690f3a23d0ed8af371395418245fdc642ab92" exitCode=0 Jan 26 14:46:45 crc kubenswrapper[4844]: I0126 14:46:45.369632 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vp6xp" Jan 26 14:46:45 crc kubenswrapper[4844]: I0126 14:46:45.369593 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vp6xp" event={"ID":"2aa1f075-38dc-41d6-a662-40c16c29ec51","Type":"ContainerDied","Data":"ace6878ddc78717cad52839d699690f3a23d0ed8af371395418245fdc642ab92"} Jan 26 14:46:45 crc kubenswrapper[4844]: I0126 14:46:45.370026 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vp6xp" event={"ID":"2aa1f075-38dc-41d6-a662-40c16c29ec51","Type":"ContainerDied","Data":"5ed73ff9f54332c21668d58ed108eafb004e06cd1062b3bdfa0283be75a4cefc"} Jan 26 14:46:45 crc kubenswrapper[4844]: I0126 14:46:45.370093 4844 scope.go:117] "RemoveContainer" containerID="ace6878ddc78717cad52839d699690f3a23d0ed8af371395418245fdc642ab92" Jan 26 14:46:45 crc kubenswrapper[4844]: I0126 14:46:45.399377 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vp6xp"] Jan 26 14:46:45 crc kubenswrapper[4844]: I0126 14:46:45.400392 4844 scope.go:117] "RemoveContainer" containerID="08cabbc763920fddefc7cfd0dcbb5f11cda5202cbdd6a215deafd32998db2bba" Jan 26 14:46:45 crc kubenswrapper[4844]: I0126 14:46:45.409374 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vp6xp"] Jan 26 14:46:45 crc kubenswrapper[4844]: I0126 14:46:45.427237 4844 scope.go:117] "RemoveContainer" containerID="2d0aaaf875f9d51bb5c68cf21c197fdd73b90c43e9b947570d22142b7ddcea45" Jan 26 14:46:45 crc kubenswrapper[4844]: I0126 14:46:45.465268 4844 scope.go:117] "RemoveContainer" containerID="ace6878ddc78717cad52839d699690f3a23d0ed8af371395418245fdc642ab92" Jan 26 14:46:45 crc kubenswrapper[4844]: E0126 14:46:45.465745 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ace6878ddc78717cad52839d699690f3a23d0ed8af371395418245fdc642ab92\": container with ID starting with ace6878ddc78717cad52839d699690f3a23d0ed8af371395418245fdc642ab92 not found: ID does not exist" containerID="ace6878ddc78717cad52839d699690f3a23d0ed8af371395418245fdc642ab92" Jan 26 14:46:45 crc kubenswrapper[4844]: I0126 14:46:45.465796 
4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ace6878ddc78717cad52839d699690f3a23d0ed8af371395418245fdc642ab92"} err="failed to get container status \"ace6878ddc78717cad52839d699690f3a23d0ed8af371395418245fdc642ab92\": rpc error: code = NotFound desc = could not find container \"ace6878ddc78717cad52839d699690f3a23d0ed8af371395418245fdc642ab92\": container with ID starting with ace6878ddc78717cad52839d699690f3a23d0ed8af371395418245fdc642ab92 not found: ID does not exist" Jan 26 14:46:45 crc kubenswrapper[4844]: I0126 14:46:45.465828 4844 scope.go:117] "RemoveContainer" containerID="08cabbc763920fddefc7cfd0dcbb5f11cda5202cbdd6a215deafd32998db2bba" Jan 26 14:46:45 crc kubenswrapper[4844]: E0126 14:46:45.466260 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08cabbc763920fddefc7cfd0dcbb5f11cda5202cbdd6a215deafd32998db2bba\": container with ID starting with 08cabbc763920fddefc7cfd0dcbb5f11cda5202cbdd6a215deafd32998db2bba not found: ID does not exist" containerID="08cabbc763920fddefc7cfd0dcbb5f11cda5202cbdd6a215deafd32998db2bba" Jan 26 14:46:45 crc kubenswrapper[4844]: I0126 14:46:45.466300 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08cabbc763920fddefc7cfd0dcbb5f11cda5202cbdd6a215deafd32998db2bba"} err="failed to get container status \"08cabbc763920fddefc7cfd0dcbb5f11cda5202cbdd6a215deafd32998db2bba\": rpc error: code = NotFound desc = could not find container \"08cabbc763920fddefc7cfd0dcbb5f11cda5202cbdd6a215deafd32998db2bba\": container with ID starting with 08cabbc763920fddefc7cfd0dcbb5f11cda5202cbdd6a215deafd32998db2bba not found: ID does not exist" Jan 26 14:46:45 crc kubenswrapper[4844]: I0126 14:46:45.466326 4844 scope.go:117] "RemoveContainer" containerID="2d0aaaf875f9d51bb5c68cf21c197fdd73b90c43e9b947570d22142b7ddcea45" Jan 26 14:46:45 crc kubenswrapper[4844]: E0126 14:46:45.466559 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d0aaaf875f9d51bb5c68cf21c197fdd73b90c43e9b947570d22142b7ddcea45\": container with ID starting with 2d0aaaf875f9d51bb5c68cf21c197fdd73b90c43e9b947570d22142b7ddcea45 not found: ID does not exist" containerID="2d0aaaf875f9d51bb5c68cf21c197fdd73b90c43e9b947570d22142b7ddcea45" Jan 26 14:46:45 crc kubenswrapper[4844]: I0126 14:46:45.466615 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d0aaaf875f9d51bb5c68cf21c197fdd73b90c43e9b947570d22142b7ddcea45"} err="failed to get container status \"2d0aaaf875f9d51bb5c68cf21c197fdd73b90c43e9b947570d22142b7ddcea45\": rpc error: code = NotFound desc = could not find container \"2d0aaaf875f9d51bb5c68cf21c197fdd73b90c43e9b947570d22142b7ddcea45\": container with ID starting with 2d0aaaf875f9d51bb5c68cf21c197fdd73b90c43e9b947570d22142b7ddcea45 not found: ID does not exist" Jan 26 14:46:47 crc kubenswrapper[4844]: I0126 14:46:47.325092 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2aa1f075-38dc-41d6-a662-40c16c29ec51" path="/var/lib/kubelet/pods/2aa1f075-38dc-41d6-a662-40c16c29ec51/volumes" Jan 26 14:46:52 crc kubenswrapper[4844]: I0126 14:46:52.313978 4844 scope.go:117] "RemoveContainer" containerID="c12eeb6b5514c431e6633f26b4b62fb527ef75940286b5eb2ed1e213af12264a" Jan 26 14:46:52 crc kubenswrapper[4844]: E0126 14:46:52.315196 4844 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 14:47:07 crc kubenswrapper[4844]: I0126 14:47:07.321437 4844 scope.go:117] "RemoveContainer" containerID="c12eeb6b5514c431e6633f26b4b62fb527ef75940286b5eb2ed1e213af12264a" Jan 26 14:47:07 crc kubenswrapper[4844]: E0126 14:47:07.322301 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 14:47:13 crc kubenswrapper[4844]: I0126 14:47:13.810791 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-26djn"] Jan 26 14:47:13 crc kubenswrapper[4844]: E0126 14:47:13.812078 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2aa1f075-38dc-41d6-a662-40c16c29ec51" containerName="extract-utilities" Jan 26 14:47:13 crc kubenswrapper[4844]: I0126 14:47:13.812104 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="2aa1f075-38dc-41d6-a662-40c16c29ec51" containerName="extract-utilities" Jan 26 14:47:13 crc kubenswrapper[4844]: E0126 14:47:13.812133 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2aa1f075-38dc-41d6-a662-40c16c29ec51" containerName="extract-content" Jan 26 14:47:13 crc kubenswrapper[4844]: I0126 14:47:13.812144 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="2aa1f075-38dc-41d6-a662-40c16c29ec51" containerName="extract-content" Jan 26 14:47:13 crc kubenswrapper[4844]: E0126 14:47:13.812206 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2aa1f075-38dc-41d6-a662-40c16c29ec51" containerName="registry-server" Jan 26 14:47:13 crc kubenswrapper[4844]: I0126 14:47:13.812219 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="2aa1f075-38dc-41d6-a662-40c16c29ec51" containerName="registry-server" Jan 26 14:47:13 crc kubenswrapper[4844]: I0126 14:47:13.812557 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="2aa1f075-38dc-41d6-a662-40c16c29ec51" containerName="registry-server" Jan 26 14:47:13 crc kubenswrapper[4844]: I0126 14:47:13.814990 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-26djn" Jan 26 14:47:13 crc kubenswrapper[4844]: I0126 14:47:13.849597 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-26djn"] Jan 26 14:47:13 crc kubenswrapper[4844]: I0126 14:47:13.903526 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gx2rv\" (UniqueName: \"kubernetes.io/projected/74a8fe2c-290a-49e5-9e9b-a948d48fbef9-kube-api-access-gx2rv\") pod \"certified-operators-26djn\" (UID: \"74a8fe2c-290a-49e5-9e9b-a948d48fbef9\") " pod="openshift-marketplace/certified-operators-26djn" Jan 26 14:47:13 crc kubenswrapper[4844]: I0126 14:47:13.903678 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74a8fe2c-290a-49e5-9e9b-a948d48fbef9-utilities\") pod \"certified-operators-26djn\" (UID: \"74a8fe2c-290a-49e5-9e9b-a948d48fbef9\") " pod="openshift-marketplace/certified-operators-26djn" Jan 26 14:47:13 crc kubenswrapper[4844]: I0126 14:47:13.903768 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74a8fe2c-290a-49e5-9e9b-a948d48fbef9-catalog-content\") pod \"certified-operators-26djn\" (UID: \"74a8fe2c-290a-49e5-9e9b-a948d48fbef9\") " pod="openshift-marketplace/certified-operators-26djn" Jan 26 14:47:14 crc kubenswrapper[4844]: I0126 14:47:14.005405 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gx2rv\" (UniqueName: \"kubernetes.io/projected/74a8fe2c-290a-49e5-9e9b-a948d48fbef9-kube-api-access-gx2rv\") pod \"certified-operators-26djn\" (UID: \"74a8fe2c-290a-49e5-9e9b-a948d48fbef9\") " pod="openshift-marketplace/certified-operators-26djn" Jan 26 14:47:14 crc kubenswrapper[4844]: I0126 14:47:14.005514 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74a8fe2c-290a-49e5-9e9b-a948d48fbef9-utilities\") pod \"certified-operators-26djn\" (UID: \"74a8fe2c-290a-49e5-9e9b-a948d48fbef9\") " pod="openshift-marketplace/certified-operators-26djn" Jan 26 14:47:14 crc kubenswrapper[4844]: I0126 14:47:14.005665 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74a8fe2c-290a-49e5-9e9b-a948d48fbef9-catalog-content\") pod \"certified-operators-26djn\" (UID: \"74a8fe2c-290a-49e5-9e9b-a948d48fbef9\") " pod="openshift-marketplace/certified-operators-26djn" Jan 26 14:47:14 crc kubenswrapper[4844]: I0126 14:47:14.006294 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74a8fe2c-290a-49e5-9e9b-a948d48fbef9-catalog-content\") pod \"certified-operators-26djn\" (UID: \"74a8fe2c-290a-49e5-9e9b-a948d48fbef9\") " pod="openshift-marketplace/certified-operators-26djn" Jan 26 14:47:14 crc kubenswrapper[4844]: I0126 14:47:14.006313 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74a8fe2c-290a-49e5-9e9b-a948d48fbef9-utilities\") pod \"certified-operators-26djn\" (UID: \"74a8fe2c-290a-49e5-9e9b-a948d48fbef9\") " pod="openshift-marketplace/certified-operators-26djn" Jan 26 14:47:14 crc kubenswrapper[4844]: I0126 14:47:14.042795 4844 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-gx2rv\" (UniqueName: \"kubernetes.io/projected/74a8fe2c-290a-49e5-9e9b-a948d48fbef9-kube-api-access-gx2rv\") pod \"certified-operators-26djn\" (UID: \"74a8fe2c-290a-49e5-9e9b-a948d48fbef9\") " pod="openshift-marketplace/certified-operators-26djn" Jan 26 14:47:14 crc kubenswrapper[4844]: I0126 14:47:14.150807 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-26djn" Jan 26 14:47:14 crc kubenswrapper[4844]: I0126 14:47:14.676223 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-26djn"] Jan 26 14:47:14 crc kubenswrapper[4844]: I0126 14:47:14.717302 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-26djn" event={"ID":"74a8fe2c-290a-49e5-9e9b-a948d48fbef9","Type":"ContainerStarted","Data":"98fd8e8309bb506e8f48b801db407bc5571dc4acf54ce009ced48c7f2bd51415"} Jan 26 14:47:15 crc kubenswrapper[4844]: I0126 14:47:15.729482 4844 generic.go:334] "Generic (PLEG): container finished" podID="74a8fe2c-290a-49e5-9e9b-a948d48fbef9" containerID="fe7763f236f2285bf969dc43d8f4d81e38dd250c77f2715c680cfdf1f5078a1f" exitCode=0 Jan 26 14:47:15 crc kubenswrapper[4844]: I0126 14:47:15.729548 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-26djn" event={"ID":"74a8fe2c-290a-49e5-9e9b-a948d48fbef9","Type":"ContainerDied","Data":"fe7763f236f2285bf969dc43d8f4d81e38dd250c77f2715c680cfdf1f5078a1f"} Jan 26 14:47:16 crc kubenswrapper[4844]: I0126 14:47:16.741343 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-26djn" event={"ID":"74a8fe2c-290a-49e5-9e9b-a948d48fbef9","Type":"ContainerStarted","Data":"7f9e619229efe999e4412f9bf58ae0aea00600f04fcbd637bf3f62a9e8d78d98"} Jan 26 14:47:17 crc kubenswrapper[4844]: E0126 14:47:17.366443 4844 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod74a8fe2c_290a_49e5_9e9b_a948d48fbef9.slice/crio-conmon-7f9e619229efe999e4412f9bf58ae0aea00600f04fcbd637bf3f62a9e8d78d98.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod74a8fe2c_290a_49e5_9e9b_a948d48fbef9.slice/crio-7f9e619229efe999e4412f9bf58ae0aea00600f04fcbd637bf3f62a9e8d78d98.scope\": RecentStats: unable to find data in memory cache]" Jan 26 14:47:17 crc kubenswrapper[4844]: I0126 14:47:17.754664 4844 generic.go:334] "Generic (PLEG): container finished" podID="74a8fe2c-290a-49e5-9e9b-a948d48fbef9" containerID="7f9e619229efe999e4412f9bf58ae0aea00600f04fcbd637bf3f62a9e8d78d98" exitCode=0 Jan 26 14:47:17 crc kubenswrapper[4844]: I0126 14:47:17.754762 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-26djn" event={"ID":"74a8fe2c-290a-49e5-9e9b-a948d48fbef9","Type":"ContainerDied","Data":"7f9e619229efe999e4412f9bf58ae0aea00600f04fcbd637bf3f62a9e8d78d98"} Jan 26 14:47:18 crc kubenswrapper[4844]: I0126 14:47:18.768785 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-26djn" event={"ID":"74a8fe2c-290a-49e5-9e9b-a948d48fbef9","Type":"ContainerStarted","Data":"8a4b05e2d7f151105a302e207559a640726d3c644dd90701d78b33a88ea75759"} Jan 26 14:47:18 crc kubenswrapper[4844]: I0126 14:47:18.799701 4844 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-26djn" podStartSLOduration=3.1982819239999998 podStartE2EDuration="5.799583472s" podCreationTimestamp="2026-01-26 14:47:13 +0000 UTC" firstStartedPulling="2026-01-26 14:47:15.732826725 +0000 UTC m=+7412.666194327" lastFinishedPulling="2026-01-26 14:47:18.334128233 +0000 UTC m=+7415.267495875" observedRunningTime="2026-01-26 14:47:18.791172248 +0000 UTC m=+7415.724539870" watchObservedRunningTime="2026-01-26 14:47:18.799583472 +0000 UTC m=+7415.732951084" Jan 26 14:47:19 crc kubenswrapper[4844]: I0126 14:47:19.313246 4844 scope.go:117] "RemoveContainer" containerID="c12eeb6b5514c431e6633f26b4b62fb527ef75940286b5eb2ed1e213af12264a" Jan 26 14:47:19 crc kubenswrapper[4844]: E0126 14:47:19.313559 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 14:47:24 crc kubenswrapper[4844]: I0126 14:47:24.152011 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-26djn" Jan 26 14:47:24 crc kubenswrapper[4844]: I0126 14:47:24.152574 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-26djn" Jan 26 14:47:24 crc kubenswrapper[4844]: I0126 14:47:24.210005 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-26djn" Jan 26 14:47:24 crc kubenswrapper[4844]: I0126 14:47:24.892056 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-26djn" Jan 26 14:47:24 crc kubenswrapper[4844]: I0126 14:47:24.967297 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-26djn"] Jan 26 14:47:26 crc kubenswrapper[4844]: I0126 14:47:26.852782 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-26djn" podUID="74a8fe2c-290a-49e5-9e9b-a948d48fbef9" containerName="registry-server" containerID="cri-o://8a4b05e2d7f151105a302e207559a640726d3c644dd90701d78b33a88ea75759" gracePeriod=2 Jan 26 14:47:27 crc kubenswrapper[4844]: I0126 14:47:27.384580 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-26djn" Jan 26 14:47:27 crc kubenswrapper[4844]: I0126 14:47:27.520690 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74a8fe2c-290a-49e5-9e9b-a948d48fbef9-catalog-content\") pod \"74a8fe2c-290a-49e5-9e9b-a948d48fbef9\" (UID: \"74a8fe2c-290a-49e5-9e9b-a948d48fbef9\") " Jan 26 14:47:27 crc kubenswrapper[4844]: I0126 14:47:27.521033 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gx2rv\" (UniqueName: \"kubernetes.io/projected/74a8fe2c-290a-49e5-9e9b-a948d48fbef9-kube-api-access-gx2rv\") pod \"74a8fe2c-290a-49e5-9e9b-a948d48fbef9\" (UID: \"74a8fe2c-290a-49e5-9e9b-a948d48fbef9\") " Jan 26 14:47:27 crc kubenswrapper[4844]: I0126 14:47:27.521239 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74a8fe2c-290a-49e5-9e9b-a948d48fbef9-utilities\") pod \"74a8fe2c-290a-49e5-9e9b-a948d48fbef9\" (UID: \"74a8fe2c-290a-49e5-9e9b-a948d48fbef9\") " Jan 26 14:47:27 crc kubenswrapper[4844]: I0126 14:47:27.522491 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74a8fe2c-290a-49e5-9e9b-a948d48fbef9-utilities" (OuterVolumeSpecName: "utilities") pod "74a8fe2c-290a-49e5-9e9b-a948d48fbef9" (UID: "74a8fe2c-290a-49e5-9e9b-a948d48fbef9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:47:27 crc kubenswrapper[4844]: I0126 14:47:27.527671 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74a8fe2c-290a-49e5-9e9b-a948d48fbef9-kube-api-access-gx2rv" (OuterVolumeSpecName: "kube-api-access-gx2rv") pod "74a8fe2c-290a-49e5-9e9b-a948d48fbef9" (UID: "74a8fe2c-290a-49e5-9e9b-a948d48fbef9"). InnerVolumeSpecName "kube-api-access-gx2rv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:47:27 crc kubenswrapper[4844]: I0126 14:47:27.528654 4844 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74a8fe2c-290a-49e5-9e9b-a948d48fbef9-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:27 crc kubenswrapper[4844]: I0126 14:47:27.588324 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74a8fe2c-290a-49e5-9e9b-a948d48fbef9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "74a8fe2c-290a-49e5-9e9b-a948d48fbef9" (UID: "74a8fe2c-290a-49e5-9e9b-a948d48fbef9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:47:27 crc kubenswrapper[4844]: I0126 14:47:27.630920 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gx2rv\" (UniqueName: \"kubernetes.io/projected/74a8fe2c-290a-49e5-9e9b-a948d48fbef9-kube-api-access-gx2rv\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:27 crc kubenswrapper[4844]: I0126 14:47:27.630951 4844 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74a8fe2c-290a-49e5-9e9b-a948d48fbef9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:27 crc kubenswrapper[4844]: I0126 14:47:27.873564 4844 generic.go:334] "Generic (PLEG): container finished" podID="74a8fe2c-290a-49e5-9e9b-a948d48fbef9" containerID="8a4b05e2d7f151105a302e207559a640726d3c644dd90701d78b33a88ea75759" exitCode=0 Jan 26 14:47:27 crc kubenswrapper[4844]: I0126 14:47:27.873688 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-26djn" Jan 26 14:47:27 crc kubenswrapper[4844]: I0126 14:47:27.873680 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-26djn" event={"ID":"74a8fe2c-290a-49e5-9e9b-a948d48fbef9","Type":"ContainerDied","Data":"8a4b05e2d7f151105a302e207559a640726d3c644dd90701d78b33a88ea75759"} Jan 26 14:47:27 crc kubenswrapper[4844]: I0126 14:47:27.874325 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-26djn" event={"ID":"74a8fe2c-290a-49e5-9e9b-a948d48fbef9","Type":"ContainerDied","Data":"98fd8e8309bb506e8f48b801db407bc5571dc4acf54ce009ced48c7f2bd51415"} Jan 26 14:47:27 crc kubenswrapper[4844]: I0126 14:47:27.874363 4844 scope.go:117] "RemoveContainer" containerID="8a4b05e2d7f151105a302e207559a640726d3c644dd90701d78b33a88ea75759" Jan 26 14:47:27 crc kubenswrapper[4844]: I0126 14:47:27.909672 4844 scope.go:117] "RemoveContainer" containerID="7f9e619229efe999e4412f9bf58ae0aea00600f04fcbd637bf3f62a9e8d78d98" Jan 26 14:47:27 crc kubenswrapper[4844]: I0126 14:47:27.950829 4844 scope.go:117] "RemoveContainer" containerID="fe7763f236f2285bf969dc43d8f4d81e38dd250c77f2715c680cfdf1f5078a1f" Jan 26 14:47:27 crc kubenswrapper[4844]: I0126 14:47:27.953934 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-26djn"] Jan 26 14:47:27 crc kubenswrapper[4844]: I0126 14:47:27.966350 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-26djn"] Jan 26 14:47:28 crc kubenswrapper[4844]: I0126 14:47:28.011032 4844 scope.go:117] "RemoveContainer" containerID="8a4b05e2d7f151105a302e207559a640726d3c644dd90701d78b33a88ea75759" Jan 26 14:47:28 crc kubenswrapper[4844]: E0126 14:47:28.011668 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a4b05e2d7f151105a302e207559a640726d3c644dd90701d78b33a88ea75759\": container with ID starting with 8a4b05e2d7f151105a302e207559a640726d3c644dd90701d78b33a88ea75759 not found: ID does not exist" containerID="8a4b05e2d7f151105a302e207559a640726d3c644dd90701d78b33a88ea75759" Jan 26 14:47:28 crc kubenswrapper[4844]: I0126 14:47:28.011705 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a4b05e2d7f151105a302e207559a640726d3c644dd90701d78b33a88ea75759"} err="failed to get container status 
\"8a4b05e2d7f151105a302e207559a640726d3c644dd90701d78b33a88ea75759\": rpc error: code = NotFound desc = could not find container \"8a4b05e2d7f151105a302e207559a640726d3c644dd90701d78b33a88ea75759\": container with ID starting with 8a4b05e2d7f151105a302e207559a640726d3c644dd90701d78b33a88ea75759 not found: ID does not exist" Jan 26 14:47:28 crc kubenswrapper[4844]: I0126 14:47:28.011729 4844 scope.go:117] "RemoveContainer" containerID="7f9e619229efe999e4412f9bf58ae0aea00600f04fcbd637bf3f62a9e8d78d98" Jan 26 14:47:28 crc kubenswrapper[4844]: E0126 14:47:28.012016 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f9e619229efe999e4412f9bf58ae0aea00600f04fcbd637bf3f62a9e8d78d98\": container with ID starting with 7f9e619229efe999e4412f9bf58ae0aea00600f04fcbd637bf3f62a9e8d78d98 not found: ID does not exist" containerID="7f9e619229efe999e4412f9bf58ae0aea00600f04fcbd637bf3f62a9e8d78d98" Jan 26 14:47:28 crc kubenswrapper[4844]: I0126 14:47:28.012044 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f9e619229efe999e4412f9bf58ae0aea00600f04fcbd637bf3f62a9e8d78d98"} err="failed to get container status \"7f9e619229efe999e4412f9bf58ae0aea00600f04fcbd637bf3f62a9e8d78d98\": rpc error: code = NotFound desc = could not find container \"7f9e619229efe999e4412f9bf58ae0aea00600f04fcbd637bf3f62a9e8d78d98\": container with ID starting with 7f9e619229efe999e4412f9bf58ae0aea00600f04fcbd637bf3f62a9e8d78d98 not found: ID does not exist" Jan 26 14:47:28 crc kubenswrapper[4844]: I0126 14:47:28.012062 4844 scope.go:117] "RemoveContainer" containerID="fe7763f236f2285bf969dc43d8f4d81e38dd250c77f2715c680cfdf1f5078a1f" Jan 26 14:47:28 crc kubenswrapper[4844]: E0126 14:47:28.012265 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe7763f236f2285bf969dc43d8f4d81e38dd250c77f2715c680cfdf1f5078a1f\": container with ID starting with fe7763f236f2285bf969dc43d8f4d81e38dd250c77f2715c680cfdf1f5078a1f not found: ID does not exist" containerID="fe7763f236f2285bf969dc43d8f4d81e38dd250c77f2715c680cfdf1f5078a1f" Jan 26 14:47:28 crc kubenswrapper[4844]: I0126 14:47:28.012295 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe7763f236f2285bf969dc43d8f4d81e38dd250c77f2715c680cfdf1f5078a1f"} err="failed to get container status \"fe7763f236f2285bf969dc43d8f4d81e38dd250c77f2715c680cfdf1f5078a1f\": rpc error: code = NotFound desc = could not find container \"fe7763f236f2285bf969dc43d8f4d81e38dd250c77f2715c680cfdf1f5078a1f\": container with ID starting with fe7763f236f2285bf969dc43d8f4d81e38dd250c77f2715c680cfdf1f5078a1f not found: ID does not exist" Jan 26 14:47:29 crc kubenswrapper[4844]: I0126 14:47:29.326126 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74a8fe2c-290a-49e5-9e9b-a948d48fbef9" path="/var/lib/kubelet/pods/74a8fe2c-290a-49e5-9e9b-a948d48fbef9/volumes" Jan 26 14:47:30 crc kubenswrapper[4844]: I0126 14:47:30.313921 4844 scope.go:117] "RemoveContainer" containerID="c12eeb6b5514c431e6633f26b4b62fb527ef75940286b5eb2ed1e213af12264a" Jan 26 14:47:30 crc kubenswrapper[4844]: E0126 14:47:30.315010 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 14:47:45 crc kubenswrapper[4844]: I0126 14:47:45.314317 4844 scope.go:117] "RemoveContainer" containerID="c12eeb6b5514c431e6633f26b4b62fb527ef75940286b5eb2ed1e213af12264a" Jan 26 14:47:46 crc kubenswrapper[4844]: I0126 14:47:46.088447 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" event={"ID":"e3602fc7-397b-4d73-ab0c-45acc047397b","Type":"ContainerStarted","Data":"f1d9ed368bf3314f0fcf60a78822b0e13b92dd28c5522c99c46976afa4696e06"} Jan 26 14:48:09 crc kubenswrapper[4844]: I0126 14:48:09.567924 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-kxrp2"] Jan 26 14:48:09 crc kubenswrapper[4844]: E0126 14:48:09.569927 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74a8fe2c-290a-49e5-9e9b-a948d48fbef9" containerName="extract-content" Jan 26 14:48:09 crc kubenswrapper[4844]: I0126 14:48:09.569946 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="74a8fe2c-290a-49e5-9e9b-a948d48fbef9" containerName="extract-content" Jan 26 14:48:09 crc kubenswrapper[4844]: E0126 14:48:09.569965 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74a8fe2c-290a-49e5-9e9b-a948d48fbef9" containerName="registry-server" Jan 26 14:48:09 crc kubenswrapper[4844]: I0126 14:48:09.569973 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="74a8fe2c-290a-49e5-9e9b-a948d48fbef9" containerName="registry-server" Jan 26 14:48:09 crc kubenswrapper[4844]: E0126 14:48:09.569991 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74a8fe2c-290a-49e5-9e9b-a948d48fbef9" containerName="extract-utilities" Jan 26 14:48:09 crc kubenswrapper[4844]: I0126 14:48:09.569999 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="74a8fe2c-290a-49e5-9e9b-a948d48fbef9" containerName="extract-utilities" Jan 26 14:48:09 crc kubenswrapper[4844]: I0126 14:48:09.570220 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="74a8fe2c-290a-49e5-9e9b-a948d48fbef9" containerName="registry-server" Jan 26 14:48:09 crc kubenswrapper[4844]: I0126 14:48:09.571865 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kxrp2" Jan 26 14:48:09 crc kubenswrapper[4844]: I0126 14:48:09.585557 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kxrp2"] Jan 26 14:48:09 crc kubenswrapper[4844]: I0126 14:48:09.720502 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnc9t\" (UniqueName: \"kubernetes.io/projected/c4a1db97-2bec-4496-9ff6-8604b2bea01b-kube-api-access-wnc9t\") pod \"redhat-operators-kxrp2\" (UID: \"c4a1db97-2bec-4496-9ff6-8604b2bea01b\") " pod="openshift-marketplace/redhat-operators-kxrp2" Jan 26 14:48:09 crc kubenswrapper[4844]: I0126 14:48:09.720931 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4a1db97-2bec-4496-9ff6-8604b2bea01b-catalog-content\") pod \"redhat-operators-kxrp2\" (UID: \"c4a1db97-2bec-4496-9ff6-8604b2bea01b\") " pod="openshift-marketplace/redhat-operators-kxrp2" Jan 26 14:48:09 crc kubenswrapper[4844]: I0126 14:48:09.721356 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4a1db97-2bec-4496-9ff6-8604b2bea01b-utilities\") pod \"redhat-operators-kxrp2\" (UID: \"c4a1db97-2bec-4496-9ff6-8604b2bea01b\") " pod="openshift-marketplace/redhat-operators-kxrp2" Jan 26 14:48:09 crc kubenswrapper[4844]: I0126 14:48:09.828307 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4a1db97-2bec-4496-9ff6-8604b2bea01b-catalog-content\") pod \"redhat-operators-kxrp2\" (UID: \"c4a1db97-2bec-4496-9ff6-8604b2bea01b\") " pod="openshift-marketplace/redhat-operators-kxrp2" Jan 26 14:48:09 crc kubenswrapper[4844]: I0126 14:48:09.829072 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4a1db97-2bec-4496-9ff6-8604b2bea01b-utilities\") pod \"redhat-operators-kxrp2\" (UID: \"c4a1db97-2bec-4496-9ff6-8604b2bea01b\") " pod="openshift-marketplace/redhat-operators-kxrp2" Jan 26 14:48:09 crc kubenswrapper[4844]: I0126 14:48:09.829113 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4a1db97-2bec-4496-9ff6-8604b2bea01b-utilities\") pod \"redhat-operators-kxrp2\" (UID: \"c4a1db97-2bec-4496-9ff6-8604b2bea01b\") " pod="openshift-marketplace/redhat-operators-kxrp2" Jan 26 14:48:09 crc kubenswrapper[4844]: I0126 14:48:09.829125 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4a1db97-2bec-4496-9ff6-8604b2bea01b-catalog-content\") pod \"redhat-operators-kxrp2\" (UID: \"c4a1db97-2bec-4496-9ff6-8604b2bea01b\") " pod="openshift-marketplace/redhat-operators-kxrp2" Jan 26 14:48:09 crc kubenswrapper[4844]: I0126 14:48:09.829398 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wnc9t\" (UniqueName: \"kubernetes.io/projected/c4a1db97-2bec-4496-9ff6-8604b2bea01b-kube-api-access-wnc9t\") pod \"redhat-operators-kxrp2\" (UID: \"c4a1db97-2bec-4496-9ff6-8604b2bea01b\") " pod="openshift-marketplace/redhat-operators-kxrp2" Jan 26 14:48:09 crc kubenswrapper[4844]: I0126 14:48:09.858275 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-wnc9t\" (UniqueName: \"kubernetes.io/projected/c4a1db97-2bec-4496-9ff6-8604b2bea01b-kube-api-access-wnc9t\") pod \"redhat-operators-kxrp2\" (UID: \"c4a1db97-2bec-4496-9ff6-8604b2bea01b\") " pod="openshift-marketplace/redhat-operators-kxrp2" Jan 26 14:48:09 crc kubenswrapper[4844]: I0126 14:48:09.922780 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kxrp2" Jan 26 14:48:10 crc kubenswrapper[4844]: I0126 14:48:10.407536 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kxrp2"] Jan 26 14:48:11 crc kubenswrapper[4844]: I0126 14:48:11.379592 4844 generic.go:334] "Generic (PLEG): container finished" podID="c4a1db97-2bec-4496-9ff6-8604b2bea01b" containerID="9acc60793ed7be2ed9c963d3f788b5a4394a56804ec69f7efc4b3f93b80ffdee" exitCode=0 Jan 26 14:48:11 crc kubenswrapper[4844]: I0126 14:48:11.379868 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kxrp2" event={"ID":"c4a1db97-2bec-4496-9ff6-8604b2bea01b","Type":"ContainerDied","Data":"9acc60793ed7be2ed9c963d3f788b5a4394a56804ec69f7efc4b3f93b80ffdee"} Jan 26 14:48:11 crc kubenswrapper[4844]: I0126 14:48:11.380470 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kxrp2" event={"ID":"c4a1db97-2bec-4496-9ff6-8604b2bea01b","Type":"ContainerStarted","Data":"b51af8af5553f3253a7fe08bf5f2aa1e43b3a4666b3e3022ee0b37303c9e34eb"} Jan 26 14:48:11 crc kubenswrapper[4844]: I0126 14:48:11.384392 4844 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 14:48:12 crc kubenswrapper[4844]: I0126 14:48:12.395188 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kxrp2" event={"ID":"c4a1db97-2bec-4496-9ff6-8604b2bea01b","Type":"ContainerStarted","Data":"1c46dce0d62b47ee86c1281c9cbeb42a9bd73769c80464e27085385126b99a2a"} Jan 26 14:48:14 crc kubenswrapper[4844]: I0126 14:48:14.420667 4844 generic.go:334] "Generic (PLEG): container finished" podID="c4a1db97-2bec-4496-9ff6-8604b2bea01b" containerID="1c46dce0d62b47ee86c1281c9cbeb42a9bd73769c80464e27085385126b99a2a" exitCode=0 Jan 26 14:48:14 crc kubenswrapper[4844]: I0126 14:48:14.420971 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kxrp2" event={"ID":"c4a1db97-2bec-4496-9ff6-8604b2bea01b","Type":"ContainerDied","Data":"1c46dce0d62b47ee86c1281c9cbeb42a9bd73769c80464e27085385126b99a2a"} Jan 26 14:48:15 crc kubenswrapper[4844]: I0126 14:48:15.434779 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kxrp2" event={"ID":"c4a1db97-2bec-4496-9ff6-8604b2bea01b","Type":"ContainerStarted","Data":"8b43d96aeab452f3a8c0474f5c46d6de19ddf6bffc927f1a3b8b723c0a05d179"} Jan 26 14:48:15 crc kubenswrapper[4844]: I0126 14:48:15.470632 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-kxrp2" podStartSLOduration=3.026972963 podStartE2EDuration="6.470612808s" podCreationTimestamp="2026-01-26 14:48:09 +0000 UTC" firstStartedPulling="2026-01-26 14:48:11.384092732 +0000 UTC m=+7468.317460344" lastFinishedPulling="2026-01-26 14:48:14.827732577 +0000 UTC m=+7471.761100189" observedRunningTime="2026-01-26 14:48:15.458557775 +0000 UTC m=+7472.391925387" watchObservedRunningTime="2026-01-26 14:48:15.470612808 +0000 UTC m=+7472.403980420" Jan 26 14:48:19 crc 
kubenswrapper[4844]: I0126 14:48:19.923949 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-kxrp2" Jan 26 14:48:19 crc kubenswrapper[4844]: I0126 14:48:19.924301 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-kxrp2" Jan 26 14:48:20 crc kubenswrapper[4844]: I0126 14:48:20.977116 4844 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-kxrp2" podUID="c4a1db97-2bec-4496-9ff6-8604b2bea01b" containerName="registry-server" probeResult="failure" output=< Jan 26 14:48:20 crc kubenswrapper[4844]: timeout: failed to connect service ":50051" within 1s Jan 26 14:48:20 crc kubenswrapper[4844]: > Jan 26 14:48:30 crc kubenswrapper[4844]: I0126 14:48:29.999417 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-kxrp2" Jan 26 14:48:30 crc kubenswrapper[4844]: I0126 14:48:30.055459 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-kxrp2" Jan 26 14:48:30 crc kubenswrapper[4844]: I0126 14:48:30.254034 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kxrp2"] Jan 26 14:48:31 crc kubenswrapper[4844]: I0126 14:48:31.617245 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-kxrp2" podUID="c4a1db97-2bec-4496-9ff6-8604b2bea01b" containerName="registry-server" containerID="cri-o://8b43d96aeab452f3a8c0474f5c46d6de19ddf6bffc927f1a3b8b723c0a05d179" gracePeriod=2 Jan 26 14:48:32 crc kubenswrapper[4844]: I0126 14:48:32.631094 4844 generic.go:334] "Generic (PLEG): container finished" podID="c4a1db97-2bec-4496-9ff6-8604b2bea01b" containerID="8b43d96aeab452f3a8c0474f5c46d6de19ddf6bffc927f1a3b8b723c0a05d179" exitCode=0 Jan 26 14:48:32 crc kubenswrapper[4844]: I0126 14:48:32.631146 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kxrp2" event={"ID":"c4a1db97-2bec-4496-9ff6-8604b2bea01b","Type":"ContainerDied","Data":"8b43d96aeab452f3a8c0474f5c46d6de19ddf6bffc927f1a3b8b723c0a05d179"} Jan 26 14:48:32 crc kubenswrapper[4844]: I0126 14:48:32.631421 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kxrp2" event={"ID":"c4a1db97-2bec-4496-9ff6-8604b2bea01b","Type":"ContainerDied","Data":"b51af8af5553f3253a7fe08bf5f2aa1e43b3a4666b3e3022ee0b37303c9e34eb"} Jan 26 14:48:32 crc kubenswrapper[4844]: I0126 14:48:32.631436 4844 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b51af8af5553f3253a7fe08bf5f2aa1e43b3a4666b3e3022ee0b37303c9e34eb" Jan 26 14:48:32 crc kubenswrapper[4844]: I0126 14:48:32.670007 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kxrp2" Jan 26 14:48:32 crc kubenswrapper[4844]: I0126 14:48:32.777955 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4a1db97-2bec-4496-9ff6-8604b2bea01b-utilities\") pod \"c4a1db97-2bec-4496-9ff6-8604b2bea01b\" (UID: \"c4a1db97-2bec-4496-9ff6-8604b2bea01b\") " Jan 26 14:48:32 crc kubenswrapper[4844]: I0126 14:48:32.778097 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4a1db97-2bec-4496-9ff6-8604b2bea01b-catalog-content\") pod \"c4a1db97-2bec-4496-9ff6-8604b2bea01b\" (UID: \"c4a1db97-2bec-4496-9ff6-8604b2bea01b\") " Jan 26 14:48:32 crc kubenswrapper[4844]: I0126 14:48:32.778132 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wnc9t\" (UniqueName: \"kubernetes.io/projected/c4a1db97-2bec-4496-9ff6-8604b2bea01b-kube-api-access-wnc9t\") pod \"c4a1db97-2bec-4496-9ff6-8604b2bea01b\" (UID: \"c4a1db97-2bec-4496-9ff6-8604b2bea01b\") " Jan 26 14:48:32 crc kubenswrapper[4844]: I0126 14:48:32.778757 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4a1db97-2bec-4496-9ff6-8604b2bea01b-utilities" (OuterVolumeSpecName: "utilities") pod "c4a1db97-2bec-4496-9ff6-8604b2bea01b" (UID: "c4a1db97-2bec-4496-9ff6-8604b2bea01b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:48:32 crc kubenswrapper[4844]: I0126 14:48:32.788023 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4a1db97-2bec-4496-9ff6-8604b2bea01b-kube-api-access-wnc9t" (OuterVolumeSpecName: "kube-api-access-wnc9t") pod "c4a1db97-2bec-4496-9ff6-8604b2bea01b" (UID: "c4a1db97-2bec-4496-9ff6-8604b2bea01b"). InnerVolumeSpecName "kube-api-access-wnc9t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:48:32 crc kubenswrapper[4844]: I0126 14:48:32.880388 4844 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4a1db97-2bec-4496-9ff6-8604b2bea01b-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 14:48:32 crc kubenswrapper[4844]: I0126 14:48:32.880732 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wnc9t\" (UniqueName: \"kubernetes.io/projected/c4a1db97-2bec-4496-9ff6-8604b2bea01b-kube-api-access-wnc9t\") on node \"crc\" DevicePath \"\"" Jan 26 14:48:32 crc kubenswrapper[4844]: I0126 14:48:32.900779 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4a1db97-2bec-4496-9ff6-8604b2bea01b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c4a1db97-2bec-4496-9ff6-8604b2bea01b" (UID: "c4a1db97-2bec-4496-9ff6-8604b2bea01b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:48:32 crc kubenswrapper[4844]: I0126 14:48:32.983528 4844 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4a1db97-2bec-4496-9ff6-8604b2bea01b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 14:48:33 crc kubenswrapper[4844]: I0126 14:48:33.644845 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kxrp2" Jan 26 14:48:33 crc kubenswrapper[4844]: I0126 14:48:33.683796 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kxrp2"] Jan 26 14:48:33 crc kubenswrapper[4844]: I0126 14:48:33.695739 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-kxrp2"] Jan 26 14:48:35 crc kubenswrapper[4844]: I0126 14:48:35.336149 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4a1db97-2bec-4496-9ff6-8604b2bea01b" path="/var/lib/kubelet/pods/c4a1db97-2bec-4496-9ff6-8604b2bea01b/volumes" Jan 26 14:49:17 crc kubenswrapper[4844]: I0126 14:49:17.741990 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-tcmth/must-gather-jcdp7"] Jan 26 14:49:17 crc kubenswrapper[4844]: E0126 14:49:17.742794 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4a1db97-2bec-4496-9ff6-8604b2bea01b" containerName="extract-content" Jan 26 14:49:17 crc kubenswrapper[4844]: I0126 14:49:17.742806 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4a1db97-2bec-4496-9ff6-8604b2bea01b" containerName="extract-content" Jan 26 14:49:17 crc kubenswrapper[4844]: E0126 14:49:17.742822 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4a1db97-2bec-4496-9ff6-8604b2bea01b" containerName="extract-utilities" Jan 26 14:49:17 crc kubenswrapper[4844]: I0126 14:49:17.742828 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4a1db97-2bec-4496-9ff6-8604b2bea01b" containerName="extract-utilities" Jan 26 14:49:17 crc kubenswrapper[4844]: E0126 14:49:17.742846 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4a1db97-2bec-4496-9ff6-8604b2bea01b" containerName="registry-server" Jan 26 14:49:17 crc kubenswrapper[4844]: I0126 14:49:17.742852 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4a1db97-2bec-4496-9ff6-8604b2bea01b" containerName="registry-server" Jan 26 14:49:17 crc kubenswrapper[4844]: I0126 14:49:17.743030 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4a1db97-2bec-4496-9ff6-8604b2bea01b" containerName="registry-server" Jan 26 14:49:17 crc kubenswrapper[4844]: I0126 14:49:17.744460 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-tcmth/must-gather-jcdp7" Jan 26 14:49:17 crc kubenswrapper[4844]: I0126 14:49:17.746338 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-tcmth"/"default-dockercfg-hzzkd" Jan 26 14:49:17 crc kubenswrapper[4844]: I0126 14:49:17.746832 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-tcmth"/"openshift-service-ca.crt" Jan 26 14:49:17 crc kubenswrapper[4844]: I0126 14:49:17.747395 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-tcmth"/"kube-root-ca.crt" Jan 26 14:49:17 crc kubenswrapper[4844]: I0126 14:49:17.761410 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-tcmth/must-gather-jcdp7"] Jan 26 14:49:17 crc kubenswrapper[4844]: I0126 14:49:17.846431 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/4c5fbe1a-040b-44a2-8468-00f0a257c5cd-must-gather-output\") pod \"must-gather-jcdp7\" (UID: \"4c5fbe1a-040b-44a2-8468-00f0a257c5cd\") " pod="openshift-must-gather-tcmth/must-gather-jcdp7" Jan 26 14:49:17 crc kubenswrapper[4844]: I0126 14:49:17.846581 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gz8q8\" (UniqueName: \"kubernetes.io/projected/4c5fbe1a-040b-44a2-8468-00f0a257c5cd-kube-api-access-gz8q8\") pod \"must-gather-jcdp7\" (UID: \"4c5fbe1a-040b-44a2-8468-00f0a257c5cd\") " pod="openshift-must-gather-tcmth/must-gather-jcdp7" Jan 26 14:49:17 crc kubenswrapper[4844]: I0126 14:49:17.948512 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gz8q8\" (UniqueName: \"kubernetes.io/projected/4c5fbe1a-040b-44a2-8468-00f0a257c5cd-kube-api-access-gz8q8\") pod \"must-gather-jcdp7\" (UID: \"4c5fbe1a-040b-44a2-8468-00f0a257c5cd\") " pod="openshift-must-gather-tcmth/must-gather-jcdp7" Jan 26 14:49:17 crc kubenswrapper[4844]: I0126 14:49:17.948669 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/4c5fbe1a-040b-44a2-8468-00f0a257c5cd-must-gather-output\") pod \"must-gather-jcdp7\" (UID: \"4c5fbe1a-040b-44a2-8468-00f0a257c5cd\") " pod="openshift-must-gather-tcmth/must-gather-jcdp7" Jan 26 14:49:17 crc kubenswrapper[4844]: I0126 14:49:17.949121 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/4c5fbe1a-040b-44a2-8468-00f0a257c5cd-must-gather-output\") pod \"must-gather-jcdp7\" (UID: \"4c5fbe1a-040b-44a2-8468-00f0a257c5cd\") " pod="openshift-must-gather-tcmth/must-gather-jcdp7" Jan 26 14:49:17 crc kubenswrapper[4844]: I0126 14:49:17.975791 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gz8q8\" (UniqueName: \"kubernetes.io/projected/4c5fbe1a-040b-44a2-8468-00f0a257c5cd-kube-api-access-gz8q8\") pod \"must-gather-jcdp7\" (UID: \"4c5fbe1a-040b-44a2-8468-00f0a257c5cd\") " pod="openshift-must-gather-tcmth/must-gather-jcdp7" Jan 26 14:49:18 crc kubenswrapper[4844]: I0126 14:49:18.074172 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-tcmth/must-gather-jcdp7" Jan 26 14:49:18 crc kubenswrapper[4844]: W0126 14:49:18.547990 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4c5fbe1a_040b_44a2_8468_00f0a257c5cd.slice/crio-ebd6875946fb4736de6418c4996cadd74495ec3ec6585c5ec4c527e488f0809d WatchSource:0}: Error finding container ebd6875946fb4736de6418c4996cadd74495ec3ec6585c5ec4c527e488f0809d: Status 404 returned error can't find the container with id ebd6875946fb4736de6418c4996cadd74495ec3ec6585c5ec4c527e488f0809d Jan 26 14:49:18 crc kubenswrapper[4844]: I0126 14:49:18.551018 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-tcmth/must-gather-jcdp7"] Jan 26 14:49:19 crc kubenswrapper[4844]: I0126 14:49:19.123391 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tcmth/must-gather-jcdp7" event={"ID":"4c5fbe1a-040b-44a2-8468-00f0a257c5cd","Type":"ContainerStarted","Data":"1aa93d32dba036a28b469ff493e141b09929990cc59a8f50444417a004223539"} Jan 26 14:49:19 crc kubenswrapper[4844]: I0126 14:49:19.123733 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tcmth/must-gather-jcdp7" event={"ID":"4c5fbe1a-040b-44a2-8468-00f0a257c5cd","Type":"ContainerStarted","Data":"ebd6875946fb4736de6418c4996cadd74495ec3ec6585c5ec4c527e488f0809d"} Jan 26 14:49:20 crc kubenswrapper[4844]: I0126 14:49:20.135755 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tcmth/must-gather-jcdp7" event={"ID":"4c5fbe1a-040b-44a2-8468-00f0a257c5cd","Type":"ContainerStarted","Data":"d60955e546efc5120e92a9ef2b72f58c7dda0308de6ac30cd1a58707b9ae1a0d"} Jan 26 14:49:20 crc kubenswrapper[4844]: I0126 14:49:20.163727 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-tcmth/must-gather-jcdp7" podStartSLOduration=3.163700461 podStartE2EDuration="3.163700461s" podCreationTimestamp="2026-01-26 14:49:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:49:20.152228462 +0000 UTC m=+7537.085596074" watchObservedRunningTime="2026-01-26 14:49:20.163700461 +0000 UTC m=+7537.097068073" Jan 26 14:49:23 crc kubenswrapper[4844]: I0126 14:49:23.054386 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-tcmth/crc-debug-vfntn"] Jan 26 14:49:23 crc kubenswrapper[4844]: I0126 14:49:23.056664 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-tcmth/crc-debug-vfntn" Jan 26 14:49:23 crc kubenswrapper[4844]: I0126 14:49:23.182656 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9drjd\" (UniqueName: \"kubernetes.io/projected/46ee2a33-c5b1-45a2-9e3f-26f7b30b95a5-kube-api-access-9drjd\") pod \"crc-debug-vfntn\" (UID: \"46ee2a33-c5b1-45a2-9e3f-26f7b30b95a5\") " pod="openshift-must-gather-tcmth/crc-debug-vfntn" Jan 26 14:49:23 crc kubenswrapper[4844]: I0126 14:49:23.182743 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/46ee2a33-c5b1-45a2-9e3f-26f7b30b95a5-host\") pod \"crc-debug-vfntn\" (UID: \"46ee2a33-c5b1-45a2-9e3f-26f7b30b95a5\") " pod="openshift-must-gather-tcmth/crc-debug-vfntn" Jan 26 14:49:23 crc kubenswrapper[4844]: I0126 14:49:23.285235 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/46ee2a33-c5b1-45a2-9e3f-26f7b30b95a5-host\") pod \"crc-debug-vfntn\" (UID: \"46ee2a33-c5b1-45a2-9e3f-26f7b30b95a5\") " pod="openshift-must-gather-tcmth/crc-debug-vfntn" Jan 26 14:49:23 crc kubenswrapper[4844]: I0126 14:49:23.285344 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/46ee2a33-c5b1-45a2-9e3f-26f7b30b95a5-host\") pod \"crc-debug-vfntn\" (UID: \"46ee2a33-c5b1-45a2-9e3f-26f7b30b95a5\") " pod="openshift-must-gather-tcmth/crc-debug-vfntn" Jan 26 14:49:23 crc kubenswrapper[4844]: I0126 14:49:23.285655 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9drjd\" (UniqueName: \"kubernetes.io/projected/46ee2a33-c5b1-45a2-9e3f-26f7b30b95a5-kube-api-access-9drjd\") pod \"crc-debug-vfntn\" (UID: \"46ee2a33-c5b1-45a2-9e3f-26f7b30b95a5\") " pod="openshift-must-gather-tcmth/crc-debug-vfntn" Jan 26 14:49:23 crc kubenswrapper[4844]: I0126 14:49:23.314989 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9drjd\" (UniqueName: \"kubernetes.io/projected/46ee2a33-c5b1-45a2-9e3f-26f7b30b95a5-kube-api-access-9drjd\") pod \"crc-debug-vfntn\" (UID: \"46ee2a33-c5b1-45a2-9e3f-26f7b30b95a5\") " pod="openshift-must-gather-tcmth/crc-debug-vfntn" Jan 26 14:49:23 crc kubenswrapper[4844]: I0126 14:49:23.376542 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-tcmth/crc-debug-vfntn" Jan 26 14:49:24 crc kubenswrapper[4844]: I0126 14:49:24.173586 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tcmth/crc-debug-vfntn" event={"ID":"46ee2a33-c5b1-45a2-9e3f-26f7b30b95a5","Type":"ContainerStarted","Data":"d3bc9110917b6f32b88b43a7e89f1ae2a5d1bd4d81aa595463745bd3783cca6c"} Jan 26 14:49:24 crc kubenswrapper[4844]: E0126 14:49:24.838792 4844 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.142:34222->38.102.83.142:35401: write tcp 38.102.83.142:34222->38.102.83.142:35401: write: broken pipe Jan 26 14:49:25 crc kubenswrapper[4844]: I0126 14:49:25.181977 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tcmth/crc-debug-vfntn" event={"ID":"46ee2a33-c5b1-45a2-9e3f-26f7b30b95a5","Type":"ContainerStarted","Data":"c16484c2a2d73b25ccbd0d0357d0b8e39f55b6daddea97443aa2f8c6ede64f97"} Jan 26 14:49:25 crc kubenswrapper[4844]: I0126 14:49:25.205119 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-tcmth/crc-debug-vfntn" podStartSLOduration=2.205104295 podStartE2EDuration="2.205104295s" podCreationTimestamp="2026-01-26 14:49:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:49:25.19295198 +0000 UTC m=+7542.126319582" watchObservedRunningTime="2026-01-26 14:49:25.205104295 +0000 UTC m=+7542.138471907" Jan 26 14:50:06 crc kubenswrapper[4844]: I0126 14:50:06.364883 4844 patch_prober.go:28] interesting pod/machine-config-daemon-j7r9j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 14:50:06 crc kubenswrapper[4844]: I0126 14:50:06.365429 4844 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 14:50:10 crc kubenswrapper[4844]: I0126 14:50:10.606018 4844 generic.go:334] "Generic (PLEG): container finished" podID="46ee2a33-c5b1-45a2-9e3f-26f7b30b95a5" containerID="c16484c2a2d73b25ccbd0d0357d0b8e39f55b6daddea97443aa2f8c6ede64f97" exitCode=0 Jan 26 14:50:10 crc kubenswrapper[4844]: I0126 14:50:10.606135 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tcmth/crc-debug-vfntn" event={"ID":"46ee2a33-c5b1-45a2-9e3f-26f7b30b95a5","Type":"ContainerDied","Data":"c16484c2a2d73b25ccbd0d0357d0b8e39f55b6daddea97443aa2f8c6ede64f97"} Jan 26 14:50:11 crc kubenswrapper[4844]: I0126 14:50:11.758707 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-tcmth/crc-debug-vfntn" Jan 26 14:50:11 crc kubenswrapper[4844]: I0126 14:50:11.797806 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-tcmth/crc-debug-vfntn"] Jan 26 14:50:11 crc kubenswrapper[4844]: I0126 14:50:11.809277 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-tcmth/crc-debug-vfntn"] Jan 26 14:50:11 crc kubenswrapper[4844]: I0126 14:50:11.854576 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9drjd\" (UniqueName: \"kubernetes.io/projected/46ee2a33-c5b1-45a2-9e3f-26f7b30b95a5-kube-api-access-9drjd\") pod \"46ee2a33-c5b1-45a2-9e3f-26f7b30b95a5\" (UID: \"46ee2a33-c5b1-45a2-9e3f-26f7b30b95a5\") " Jan 26 14:50:11 crc kubenswrapper[4844]: I0126 14:50:11.854661 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/46ee2a33-c5b1-45a2-9e3f-26f7b30b95a5-host\") pod \"46ee2a33-c5b1-45a2-9e3f-26f7b30b95a5\" (UID: \"46ee2a33-c5b1-45a2-9e3f-26f7b30b95a5\") " Jan 26 14:50:11 crc kubenswrapper[4844]: I0126 14:50:11.855018 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/46ee2a33-c5b1-45a2-9e3f-26f7b30b95a5-host" (OuterVolumeSpecName: "host") pod "46ee2a33-c5b1-45a2-9e3f-26f7b30b95a5" (UID: "46ee2a33-c5b1-45a2-9e3f-26f7b30b95a5"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 14:50:11 crc kubenswrapper[4844]: I0126 14:50:11.855231 4844 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/46ee2a33-c5b1-45a2-9e3f-26f7b30b95a5-host\") on node \"crc\" DevicePath \"\"" Jan 26 14:50:11 crc kubenswrapper[4844]: I0126 14:50:11.860924 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46ee2a33-c5b1-45a2-9e3f-26f7b30b95a5-kube-api-access-9drjd" (OuterVolumeSpecName: "kube-api-access-9drjd") pod "46ee2a33-c5b1-45a2-9e3f-26f7b30b95a5" (UID: "46ee2a33-c5b1-45a2-9e3f-26f7b30b95a5"). InnerVolumeSpecName "kube-api-access-9drjd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:50:11 crc kubenswrapper[4844]: I0126 14:50:11.957781 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9drjd\" (UniqueName: \"kubernetes.io/projected/46ee2a33-c5b1-45a2-9e3f-26f7b30b95a5-kube-api-access-9drjd\") on node \"crc\" DevicePath \"\"" Jan 26 14:50:12 crc kubenswrapper[4844]: I0126 14:50:12.625167 4844 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d3bc9110917b6f32b88b43a7e89f1ae2a5d1bd4d81aa595463745bd3783cca6c" Jan 26 14:50:12 crc kubenswrapper[4844]: I0126 14:50:12.625250 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-tcmth/crc-debug-vfntn" Jan 26 14:50:13 crc kubenswrapper[4844]: I0126 14:50:13.008088 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-tcmth/crc-debug-w9x8k"] Jan 26 14:50:13 crc kubenswrapper[4844]: E0126 14:50:13.008505 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46ee2a33-c5b1-45a2-9e3f-26f7b30b95a5" containerName="container-00" Jan 26 14:50:13 crc kubenswrapper[4844]: I0126 14:50:13.008518 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="46ee2a33-c5b1-45a2-9e3f-26f7b30b95a5" containerName="container-00" Jan 26 14:50:13 crc kubenswrapper[4844]: I0126 14:50:13.008721 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="46ee2a33-c5b1-45a2-9e3f-26f7b30b95a5" containerName="container-00" Jan 26 14:50:13 crc kubenswrapper[4844]: I0126 14:50:13.009340 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-tcmth/crc-debug-w9x8k" Jan 26 14:50:13 crc kubenswrapper[4844]: I0126 14:50:13.181243 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1cbe0ddc-8239-4783-9b07-a4d26ac2af21-host\") pod \"crc-debug-w9x8k\" (UID: \"1cbe0ddc-8239-4783-9b07-a4d26ac2af21\") " pod="openshift-must-gather-tcmth/crc-debug-w9x8k" Jan 26 14:50:13 crc kubenswrapper[4844]: I0126 14:50:13.181299 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wswqh\" (UniqueName: \"kubernetes.io/projected/1cbe0ddc-8239-4783-9b07-a4d26ac2af21-kube-api-access-wswqh\") pod \"crc-debug-w9x8k\" (UID: \"1cbe0ddc-8239-4783-9b07-a4d26ac2af21\") " pod="openshift-must-gather-tcmth/crc-debug-w9x8k" Jan 26 14:50:13 crc kubenswrapper[4844]: I0126 14:50:13.282660 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1cbe0ddc-8239-4783-9b07-a4d26ac2af21-host\") pod \"crc-debug-w9x8k\" (UID: \"1cbe0ddc-8239-4783-9b07-a4d26ac2af21\") " pod="openshift-must-gather-tcmth/crc-debug-w9x8k" Jan 26 14:50:13 crc kubenswrapper[4844]: I0126 14:50:13.282725 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wswqh\" (UniqueName: \"kubernetes.io/projected/1cbe0ddc-8239-4783-9b07-a4d26ac2af21-kube-api-access-wswqh\") pod \"crc-debug-w9x8k\" (UID: \"1cbe0ddc-8239-4783-9b07-a4d26ac2af21\") " pod="openshift-must-gather-tcmth/crc-debug-w9x8k" Jan 26 14:50:13 crc kubenswrapper[4844]: I0126 14:50:13.282847 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1cbe0ddc-8239-4783-9b07-a4d26ac2af21-host\") pod \"crc-debug-w9x8k\" (UID: \"1cbe0ddc-8239-4783-9b07-a4d26ac2af21\") " pod="openshift-must-gather-tcmth/crc-debug-w9x8k" Jan 26 14:50:13 crc kubenswrapper[4844]: I0126 14:50:13.312782 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wswqh\" (UniqueName: \"kubernetes.io/projected/1cbe0ddc-8239-4783-9b07-a4d26ac2af21-kube-api-access-wswqh\") pod \"crc-debug-w9x8k\" (UID: \"1cbe0ddc-8239-4783-9b07-a4d26ac2af21\") " pod="openshift-must-gather-tcmth/crc-debug-w9x8k" Jan 26 14:50:13 crc kubenswrapper[4844]: I0126 14:50:13.328275 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46ee2a33-c5b1-45a2-9e3f-26f7b30b95a5" 
path="/var/lib/kubelet/pods/46ee2a33-c5b1-45a2-9e3f-26f7b30b95a5/volumes" Jan 26 14:50:13 crc kubenswrapper[4844]: I0126 14:50:13.332933 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-tcmth/crc-debug-w9x8k" Jan 26 14:50:13 crc kubenswrapper[4844]: I0126 14:50:13.635586 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tcmth/crc-debug-w9x8k" event={"ID":"1cbe0ddc-8239-4783-9b07-a4d26ac2af21","Type":"ContainerStarted","Data":"9bc09bb912be6593d8631ee0f2ace4e17f017d06d5cb47dd3248398bf8ea4c0f"} Jan 26 14:50:14 crc kubenswrapper[4844]: I0126 14:50:14.647164 4844 generic.go:334] "Generic (PLEG): container finished" podID="1cbe0ddc-8239-4783-9b07-a4d26ac2af21" containerID="ac9189989649c27db698fb40330a6308e1590af7d181a0531b0c77aeaef25f1d" exitCode=0 Jan 26 14:50:14 crc kubenswrapper[4844]: I0126 14:50:14.647245 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tcmth/crc-debug-w9x8k" event={"ID":"1cbe0ddc-8239-4783-9b07-a4d26ac2af21","Type":"ContainerDied","Data":"ac9189989649c27db698fb40330a6308e1590af7d181a0531b0c77aeaef25f1d"} Jan 26 14:50:15 crc kubenswrapper[4844]: I0126 14:50:15.778966 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-tcmth/crc-debug-w9x8k" Jan 26 14:50:15 crc kubenswrapper[4844]: I0126 14:50:15.942192 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wswqh\" (UniqueName: \"kubernetes.io/projected/1cbe0ddc-8239-4783-9b07-a4d26ac2af21-kube-api-access-wswqh\") pod \"1cbe0ddc-8239-4783-9b07-a4d26ac2af21\" (UID: \"1cbe0ddc-8239-4783-9b07-a4d26ac2af21\") " Jan 26 14:50:15 crc kubenswrapper[4844]: I0126 14:50:15.942361 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1cbe0ddc-8239-4783-9b07-a4d26ac2af21-host\") pod \"1cbe0ddc-8239-4783-9b07-a4d26ac2af21\" (UID: \"1cbe0ddc-8239-4783-9b07-a4d26ac2af21\") " Jan 26 14:50:15 crc kubenswrapper[4844]: I0126 14:50:15.942916 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1cbe0ddc-8239-4783-9b07-a4d26ac2af21-host" (OuterVolumeSpecName: "host") pod "1cbe0ddc-8239-4783-9b07-a4d26ac2af21" (UID: "1cbe0ddc-8239-4783-9b07-a4d26ac2af21"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 14:50:15 crc kubenswrapper[4844]: I0126 14:50:15.957279 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1cbe0ddc-8239-4783-9b07-a4d26ac2af21-kube-api-access-wswqh" (OuterVolumeSpecName: "kube-api-access-wswqh") pod "1cbe0ddc-8239-4783-9b07-a4d26ac2af21" (UID: "1cbe0ddc-8239-4783-9b07-a4d26ac2af21"). InnerVolumeSpecName "kube-api-access-wswqh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:50:16 crc kubenswrapper[4844]: I0126 14:50:16.044507 4844 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1cbe0ddc-8239-4783-9b07-a4d26ac2af21-host\") on node \"crc\" DevicePath \"\"" Jan 26 14:50:16 crc kubenswrapper[4844]: I0126 14:50:16.044847 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wswqh\" (UniqueName: \"kubernetes.io/projected/1cbe0ddc-8239-4783-9b07-a4d26ac2af21-kube-api-access-wswqh\") on node \"crc\" DevicePath \"\"" Jan 26 14:50:16 crc kubenswrapper[4844]: I0126 14:50:16.668019 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tcmth/crc-debug-w9x8k" event={"ID":"1cbe0ddc-8239-4783-9b07-a4d26ac2af21","Type":"ContainerDied","Data":"9bc09bb912be6593d8631ee0f2ace4e17f017d06d5cb47dd3248398bf8ea4c0f"} Jan 26 14:50:16 crc kubenswrapper[4844]: I0126 14:50:16.668335 4844 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9bc09bb912be6593d8631ee0f2ace4e17f017d06d5cb47dd3248398bf8ea4c0f" Jan 26 14:50:16 crc kubenswrapper[4844]: I0126 14:50:16.668089 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-tcmth/crc-debug-w9x8k" Jan 26 14:50:17 crc kubenswrapper[4844]: I0126 14:50:17.212251 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-tcmth/crc-debug-w9x8k"] Jan 26 14:50:17 crc kubenswrapper[4844]: I0126 14:50:17.225154 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-tcmth/crc-debug-w9x8k"] Jan 26 14:50:17 crc kubenswrapper[4844]: I0126 14:50:17.326421 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1cbe0ddc-8239-4783-9b07-a4d26ac2af21" path="/var/lib/kubelet/pods/1cbe0ddc-8239-4783-9b07-a4d26ac2af21/volumes" Jan 26 14:50:18 crc kubenswrapper[4844]: I0126 14:50:18.415394 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-tcmth/crc-debug-xbh26"] Jan 26 14:50:18 crc kubenswrapper[4844]: E0126 14:50:18.416055 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cbe0ddc-8239-4783-9b07-a4d26ac2af21" containerName="container-00" Jan 26 14:50:18 crc kubenswrapper[4844]: I0126 14:50:18.416068 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cbe0ddc-8239-4783-9b07-a4d26ac2af21" containerName="container-00" Jan 26 14:50:18 crc kubenswrapper[4844]: I0126 14:50:18.416288 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="1cbe0ddc-8239-4783-9b07-a4d26ac2af21" containerName="container-00" Jan 26 14:50:18 crc kubenswrapper[4844]: I0126 14:50:18.417390 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-tcmth/crc-debug-xbh26" Jan 26 14:50:18 crc kubenswrapper[4844]: I0126 14:50:18.503387 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnkdz\" (UniqueName: \"kubernetes.io/projected/0c110c78-5474-48ff-8aac-dd6f56ce0426-kube-api-access-bnkdz\") pod \"crc-debug-xbh26\" (UID: \"0c110c78-5474-48ff-8aac-dd6f56ce0426\") " pod="openshift-must-gather-tcmth/crc-debug-xbh26" Jan 26 14:50:18 crc kubenswrapper[4844]: I0126 14:50:18.503528 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0c110c78-5474-48ff-8aac-dd6f56ce0426-host\") pod \"crc-debug-xbh26\" (UID: \"0c110c78-5474-48ff-8aac-dd6f56ce0426\") " pod="openshift-must-gather-tcmth/crc-debug-xbh26" Jan 26 14:50:18 crc kubenswrapper[4844]: I0126 14:50:18.605760 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0c110c78-5474-48ff-8aac-dd6f56ce0426-host\") pod \"crc-debug-xbh26\" (UID: \"0c110c78-5474-48ff-8aac-dd6f56ce0426\") " pod="openshift-must-gather-tcmth/crc-debug-xbh26" Jan 26 14:50:18 crc kubenswrapper[4844]: I0126 14:50:18.605932 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnkdz\" (UniqueName: \"kubernetes.io/projected/0c110c78-5474-48ff-8aac-dd6f56ce0426-kube-api-access-bnkdz\") pod \"crc-debug-xbh26\" (UID: \"0c110c78-5474-48ff-8aac-dd6f56ce0426\") " pod="openshift-must-gather-tcmth/crc-debug-xbh26" Jan 26 14:50:18 crc kubenswrapper[4844]: I0126 14:50:18.605949 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0c110c78-5474-48ff-8aac-dd6f56ce0426-host\") pod \"crc-debug-xbh26\" (UID: \"0c110c78-5474-48ff-8aac-dd6f56ce0426\") " pod="openshift-must-gather-tcmth/crc-debug-xbh26" Jan 26 14:50:18 crc kubenswrapper[4844]: I0126 14:50:18.623456 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnkdz\" (UniqueName: \"kubernetes.io/projected/0c110c78-5474-48ff-8aac-dd6f56ce0426-kube-api-access-bnkdz\") pod \"crc-debug-xbh26\" (UID: \"0c110c78-5474-48ff-8aac-dd6f56ce0426\") " pod="openshift-must-gather-tcmth/crc-debug-xbh26" Jan 26 14:50:18 crc kubenswrapper[4844]: I0126 14:50:18.736139 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-tcmth/crc-debug-xbh26" Jan 26 14:50:19 crc kubenswrapper[4844]: I0126 14:50:19.695452 4844 generic.go:334] "Generic (PLEG): container finished" podID="0c110c78-5474-48ff-8aac-dd6f56ce0426" containerID="753d53593735b49cd26c0de98cc323693b6dd832f9be16f73ee92bf7da496199" exitCode=0 Jan 26 14:50:19 crc kubenswrapper[4844]: I0126 14:50:19.695542 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tcmth/crc-debug-xbh26" event={"ID":"0c110c78-5474-48ff-8aac-dd6f56ce0426","Type":"ContainerDied","Data":"753d53593735b49cd26c0de98cc323693b6dd832f9be16f73ee92bf7da496199"} Jan 26 14:50:19 crc kubenswrapper[4844]: I0126 14:50:19.695999 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tcmth/crc-debug-xbh26" event={"ID":"0c110c78-5474-48ff-8aac-dd6f56ce0426","Type":"ContainerStarted","Data":"3c41279ac689899b0506269ba2400fd1699f62f6967fd57dda3e2f0fff981768"} Jan 26 14:50:19 crc kubenswrapper[4844]: I0126 14:50:19.737775 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-tcmth/crc-debug-xbh26"] Jan 26 14:50:19 crc kubenswrapper[4844]: I0126 14:50:19.746096 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-tcmth/crc-debug-xbh26"] Jan 26 14:50:20 crc kubenswrapper[4844]: I0126 14:50:20.803201 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-tcmth/crc-debug-xbh26" Jan 26 14:50:20 crc kubenswrapper[4844]: I0126 14:50:20.847974 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0c110c78-5474-48ff-8aac-dd6f56ce0426-host\") pod \"0c110c78-5474-48ff-8aac-dd6f56ce0426\" (UID: \"0c110c78-5474-48ff-8aac-dd6f56ce0426\") " Jan 26 14:50:20 crc kubenswrapper[4844]: I0126 14:50:20.848066 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bnkdz\" (UniqueName: \"kubernetes.io/projected/0c110c78-5474-48ff-8aac-dd6f56ce0426-kube-api-access-bnkdz\") pod \"0c110c78-5474-48ff-8aac-dd6f56ce0426\" (UID: \"0c110c78-5474-48ff-8aac-dd6f56ce0426\") " Jan 26 14:50:20 crc kubenswrapper[4844]: I0126 14:50:20.848097 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c110c78-5474-48ff-8aac-dd6f56ce0426-host" (OuterVolumeSpecName: "host") pod "0c110c78-5474-48ff-8aac-dd6f56ce0426" (UID: "0c110c78-5474-48ff-8aac-dd6f56ce0426"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 14:50:20 crc kubenswrapper[4844]: I0126 14:50:20.848677 4844 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0c110c78-5474-48ff-8aac-dd6f56ce0426-host\") on node \"crc\" DevicePath \"\"" Jan 26 14:50:20 crc kubenswrapper[4844]: I0126 14:50:20.854550 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c110c78-5474-48ff-8aac-dd6f56ce0426-kube-api-access-bnkdz" (OuterVolumeSpecName: "kube-api-access-bnkdz") pod "0c110c78-5474-48ff-8aac-dd6f56ce0426" (UID: "0c110c78-5474-48ff-8aac-dd6f56ce0426"). InnerVolumeSpecName "kube-api-access-bnkdz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:50:20 crc kubenswrapper[4844]: I0126 14:50:20.950514 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bnkdz\" (UniqueName: \"kubernetes.io/projected/0c110c78-5474-48ff-8aac-dd6f56ce0426-kube-api-access-bnkdz\") on node \"crc\" DevicePath \"\"" Jan 26 14:50:21 crc kubenswrapper[4844]: I0126 14:50:21.328360 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c110c78-5474-48ff-8aac-dd6f56ce0426" path="/var/lib/kubelet/pods/0c110c78-5474-48ff-8aac-dd6f56ce0426/volumes" Jan 26 14:50:21 crc kubenswrapper[4844]: I0126 14:50:21.715497 4844 scope.go:117] "RemoveContainer" containerID="753d53593735b49cd26c0de98cc323693b6dd832f9be16f73ee92bf7da496199" Jan 26 14:50:21 crc kubenswrapper[4844]: I0126 14:50:21.715886 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-tcmth/crc-debug-xbh26" Jan 26 14:50:36 crc kubenswrapper[4844]: I0126 14:50:36.365039 4844 patch_prober.go:28] interesting pod/machine-config-daemon-j7r9j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 14:50:36 crc kubenswrapper[4844]: I0126 14:50:36.366824 4844 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 14:51:05 crc kubenswrapper[4844]: I0126 14:51:05.110457 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-58b8c47bc6-5s5z9_7f2cf574-1917-4f2b-adba-02bcf6cb4dc8/barbican-api/0.log" Jan 26 14:51:05 crc kubenswrapper[4844]: I0126 14:51:05.303519 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-58b8c47bc6-5s5z9_7f2cf574-1917-4f2b-adba-02bcf6cb4dc8/barbican-api-log/0.log" Jan 26 14:51:05 crc kubenswrapper[4844]: I0126 14:51:05.387910 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-688b4ff97d-t5mvg_56958656-f467-485d-a3b6-9ecacb7edfeb/barbican-keystone-listener/0.log" Jan 26 14:51:05 crc kubenswrapper[4844]: I0126 14:51:05.393499 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-688b4ff97d-t5mvg_56958656-f467-485d-a3b6-9ecacb7edfeb/barbican-keystone-listener-log/0.log" Jan 26 14:51:05 crc kubenswrapper[4844]: I0126 14:51:05.581527 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5757498f95-q5d7h_f64e9d9a-09d6-4843-a829-d4fbdcaadb65/barbican-worker/0.log" Jan 26 14:51:05 crc kubenswrapper[4844]: I0126 14:51:05.612426 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5757498f95-q5d7h_f64e9d9a-09d6-4843-a829-d4fbdcaadb65/barbican-worker-log/0.log" Jan 26 14:51:05 crc kubenswrapper[4844]: I0126 14:51:05.741187 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-88p79_c1079155-3798-4f39-ab56-dffea2038df8/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 14:51:05 crc kubenswrapper[4844]: I0126 14:51:05.929230 4844 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ceilometer-0_fb03b4d3-5582-4758-a585-5f8e82a306da/ceilometer-notification-agent/0.log" Jan 26 14:51:05 crc kubenswrapper[4844]: I0126 14:51:05.943714 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_fb03b4d3-5582-4758-a585-5f8e82a306da/ceilometer-central-agent/0.log" Jan 26 14:51:05 crc kubenswrapper[4844]: I0126 14:51:05.994863 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_fb03b4d3-5582-4758-a585-5f8e82a306da/proxy-httpd/0.log" Jan 26 14:51:06 crc kubenswrapper[4844]: I0126 14:51:06.069819 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_fb03b4d3-5582-4758-a585-5f8e82a306da/sg-core/0.log" Jan 26 14:51:06 crc kubenswrapper[4844]: I0126 14:51:06.246504 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_a34d9864-c377-4ca1-a4fe-512bf9292130/cinder-api-log/0.log" Jan 26 14:51:06 crc kubenswrapper[4844]: I0126 14:51:06.364403 4844 patch_prober.go:28] interesting pod/machine-config-daemon-j7r9j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 14:51:06 crc kubenswrapper[4844]: I0126 14:51:06.364453 4844 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 14:51:06 crc kubenswrapper[4844]: I0126 14:51:06.364493 4844 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" Jan 26 14:51:06 crc kubenswrapper[4844]: I0126 14:51:06.365245 4844 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f1d9ed368bf3314f0fcf60a78822b0e13b92dd28c5522c99c46976afa4696e06"} pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 14:51:06 crc kubenswrapper[4844]: I0126 14:51:06.365299 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" containerName="machine-config-daemon" containerID="cri-o://f1d9ed368bf3314f0fcf60a78822b0e13b92dd28c5522c99c46976afa4696e06" gracePeriod=600 Jan 26 14:51:06 crc kubenswrapper[4844]: I0126 14:51:06.518156 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_2da46443-17b2-425a-ad97-c2dcae16074b/probe/0.log" Jan 26 14:51:06 crc kubenswrapper[4844]: I0126 14:51:06.850396 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_a34d9864-c377-4ca1-a4fe-512bf9292130/cinder-api/0.log" Jan 26 14:51:06 crc kubenswrapper[4844]: I0126 14:51:06.912431 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_47c752dd-0b96-464c-9cb4-3251fc31556a/cinder-scheduler/0.log" Jan 26 14:51:06 crc kubenswrapper[4844]: I0126 14:51:06.931261 4844 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_cinder-backup-0_2da46443-17b2-425a-ad97-c2dcae16074b/cinder-backup/0.log" Jan 26 14:51:06 crc kubenswrapper[4844]: I0126 14:51:06.958887 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_47c752dd-0b96-464c-9cb4-3251fc31556a/probe/0.log" Jan 26 14:51:07 crc kubenswrapper[4844]: I0126 14:51:07.177323 4844 generic.go:334] "Generic (PLEG): container finished" podID="e3602fc7-397b-4d73-ab0c-45acc047397b" containerID="f1d9ed368bf3314f0fcf60a78822b0e13b92dd28c5522c99c46976afa4696e06" exitCode=0 Jan 26 14:51:07 crc kubenswrapper[4844]: I0126 14:51:07.177369 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" event={"ID":"e3602fc7-397b-4d73-ab0c-45acc047397b","Type":"ContainerDied","Data":"f1d9ed368bf3314f0fcf60a78822b0e13b92dd28c5522c99c46976afa4696e06"} Jan 26 14:51:07 crc kubenswrapper[4844]: I0126 14:51:07.177404 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" event={"ID":"e3602fc7-397b-4d73-ab0c-45acc047397b","Type":"ContainerStarted","Data":"11b408fde86b2eff6057acce3a2882ce486e50f73c743f52ee4ee12015da3c9e"} Jan 26 14:51:07 crc kubenswrapper[4844]: I0126 14:51:07.177421 4844 scope.go:117] "RemoveContainer" containerID="c12eeb6b5514c431e6633f26b4b62fb527ef75940286b5eb2ed1e213af12264a" Jan 26 14:51:07 crc kubenswrapper[4844]: I0126 14:51:07.315321 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-nfs-0_40715f48-d3b7-4cca-9f3d-cba20a94ed39/probe/0.log" Jan 26 14:51:07 crc kubenswrapper[4844]: I0126 14:51:07.515799 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-nfs-0_40715f48-d3b7-4cca-9f3d-cba20a94ed39/cinder-volume/0.log" Jan 26 14:51:07 crc kubenswrapper[4844]: I0126 14:51:07.694413 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-nfs-2-0_eacc0803-a775-4eb4-8f3a-a126716ddbb5/probe/0.log" Jan 26 14:51:07 crc kubenswrapper[4844]: I0126 14:51:07.722105 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-nfs-2-0_eacc0803-a775-4eb4-8f3a-a126716ddbb5/cinder-volume/0.log" Jan 26 14:51:07 crc kubenswrapper[4844]: I0126 14:51:07.822827 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-sjwwh_174270d5-d84e-4b4c-8602-31e455da67db/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 14:51:07 crc kubenswrapper[4844]: I0126 14:51:07.984871 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-kw9mt_d3c8b898-d97e-461f-85df-f33653e393f7/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 14:51:08 crc kubenswrapper[4844]: I0126 14:51:08.160765 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-86587fb56f-wskms_3ae83571-dfc8-4d58-bb40-b527756013e7/init/0.log" Jan 26 14:51:08 crc kubenswrapper[4844]: I0126 14:51:08.587129 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-86587fb56f-wskms_3ae83571-dfc8-4d58-bb40-b527756013e7/init/0.log" Jan 26 14:51:08 crc kubenswrapper[4844]: I0126 14:51:08.718259 4844 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-gkkmx_27022163-5166-48e2-afc4-e984baa40303/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 14:51:08 crc kubenswrapper[4844]: I0126 14:51:08.758211 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-86587fb56f-wskms_3ae83571-dfc8-4d58-bb40-b527756013e7/dnsmasq-dns/0.log" Jan 26 14:51:08 crc kubenswrapper[4844]: I0126 14:51:08.988838 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_65fceb02-1fd4-4b60-a767-f2d232539d43/glance-httpd/0.log" Jan 26 14:51:09 crc kubenswrapper[4844]: I0126 14:51:09.027976 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_65fceb02-1fd4-4b60-a767-f2d232539d43/glance-log/0.log" Jan 26 14:51:09 crc kubenswrapper[4844]: I0126 14:51:09.213509 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_403b5928-19b1-4dfd-97c9-75079d7de60e/glance-httpd/0.log" Jan 26 14:51:09 crc kubenswrapper[4844]: I0126 14:51:09.258803 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_403b5928-19b1-4dfd-97c9-75079d7de60e/glance-log/0.log" Jan 26 14:51:09 crc kubenswrapper[4844]: I0126 14:51:09.527200 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-77c8bf8786-w82f7_a0edac82-6db3-481f-8c9e-8826b5aac863/horizon/0.log" Jan 26 14:51:09 crc kubenswrapper[4844]: I0126 14:51:09.658698 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-9d8r4_e7abb699-d024-4829-8882-7272c3313c67/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 14:51:09 crc kubenswrapper[4844]: I0126 14:51:09.803164 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-wvxxg_5ecdea0f-9b03-400a-a835-4f93cd02b1de/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 14:51:10 crc kubenswrapper[4844]: I0126 14:51:10.104818 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29490601-dfzsv_9884c612-5868-41be-9d56-ad8f55bc68d6/keystone-cron/0.log" Jan 26 14:51:10 crc kubenswrapper[4844]: I0126 14:51:10.233429 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-77c8bf8786-w82f7_a0edac82-6db3-481f-8c9e-8826b5aac863/horizon-log/0.log" Jan 26 14:51:10 crc kubenswrapper[4844]: I0126 14:51:10.281065 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_0887ff47-06ad-4713-8a39-9cf1d0898a8d/kube-state-metrics/0.log" Jan 26 14:51:10 crc kubenswrapper[4844]: I0126 14:51:10.360542 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-sttdt_2d88214a-d4b9-4885-ac32-cae7c7dcd3ba/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 14:51:10 crc kubenswrapper[4844]: I0126 14:51:10.473424 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-5db4cb7f67-85gvs_d2096862-de7b-4d51-aa62-bc55d339a9dc/keystone-api/0.log" Jan 26 14:51:10 crc kubenswrapper[4844]: I0126 14:51:10.931314 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5fcff84d65-flkjh_91acccd0-7b82-4ee7-afa7-549b7eeae8b6/neutron-api/0.log" Jan 26 14:51:10 crc kubenswrapper[4844]: I0126 14:51:10.939168 4844 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-jjfm4_38602c96-9d47-46f7-b299-c5bfc616ba99/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 14:51:11 crc kubenswrapper[4844]: I0126 14:51:11.054073 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5fcff84d65-flkjh_91acccd0-7b82-4ee7-afa7-549b7eeae8b6/neutron-httpd/0.log" Jan 26 14:51:11 crc kubenswrapper[4844]: I0126 14:51:11.697871 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_1aa738a6-8d60-4c39-aa86-dc27720dc883/nova-cell0-conductor-conductor/0.log" Jan 26 14:51:12 crc kubenswrapper[4844]: I0126 14:51:12.103732 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_dc7e97d6-1a33-4c98-87bb-6c4d451121b6/nova-cell1-conductor-conductor/0.log" Jan 26 14:51:12 crc kubenswrapper[4844]: I0126 14:51:12.716670 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_81ea8f8d-3955-4fc3-8e6b-412d0bec4995/nova-api-log/0.log" Jan 26 14:51:12 crc kubenswrapper[4844]: I0126 14:51:12.741863 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_7bcce5df-9655-46fe-8f82-5f226375500f/nova-cell1-novncproxy-novncproxy/0.log" Jan 26 14:51:12 crc kubenswrapper[4844]: I0126 14:51:12.898546 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-2xrbw_421111b7-6358-404a-b57f-b6529eb910f9/nova-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 14:51:13 crc kubenswrapper[4844]: I0126 14:51:13.056422 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_86421d71-6636-4491-9b3e-7b4e3bf39ee9/nova-metadata-log/0.log" Jan 26 14:51:13 crc kubenswrapper[4844]: I0126 14:51:13.547557 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_81ea8f8d-3955-4fc3-8e6b-412d0bec4995/nova-api-api/0.log" Jan 26 14:51:13 crc kubenswrapper[4844]: I0126 14:51:13.672086 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_42cc1780-3fb5-4158-95f2-5a1bd4e1161f/nova-scheduler-scheduler/0.log" Jan 26 14:51:13 crc kubenswrapper[4844]: I0126 14:51:13.792547 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_f80a52fc-df6a-4218-913e-2ee03174e341/mysql-bootstrap/0.log" Jan 26 14:51:13 crc kubenswrapper[4844]: I0126 14:51:13.933870 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_f80a52fc-df6a-4218-913e-2ee03174e341/galera/0.log" Jan 26 14:51:13 crc kubenswrapper[4844]: I0126 14:51:13.964773 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_f80a52fc-df6a-4218-913e-2ee03174e341/mysql-bootstrap/0.log" Jan 26 14:51:14 crc kubenswrapper[4844]: I0126 14:51:14.165209 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_7e22ff40-cacd-405d-98f5-f603b17b4e4a/mysql-bootstrap/0.log" Jan 26 14:51:14 crc kubenswrapper[4844]: I0126 14:51:14.321730 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_7e22ff40-cacd-405d-98f5-f603b17b4e4a/mysql-bootstrap/0.log" Jan 26 14:51:14 crc kubenswrapper[4844]: I0126 14:51:14.539045 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_d831cf25-12e3-4375-88ae-4ce13c139248/openstackclient/0.log" Jan 26 14:51:14 
crc kubenswrapper[4844]: I0126 14:51:14.567708 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_7e22ff40-cacd-405d-98f5-f603b17b4e4a/galera/0.log" Jan 26 14:51:14 crc kubenswrapper[4844]: I0126 14:51:14.736624 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-wnqpc_77361a0b-a3eb-49da-971b-705eca5894eb/openstack-network-exporter/0.log" Jan 26 14:51:14 crc kubenswrapper[4844]: I0126 14:51:14.948188 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-bq8zv_f0224b88-8aeb-4de5-b2d4-2d5f7b69cf8e/ovsdb-server-init/0.log" Jan 26 14:51:15 crc kubenswrapper[4844]: I0126 14:51:15.173988 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-bq8zv_f0224b88-8aeb-4de5-b2d4-2d5f7b69cf8e/ovsdb-server-init/0.log" Jan 26 14:51:15 crc kubenswrapper[4844]: I0126 14:51:15.252986 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-bq8zv_f0224b88-8aeb-4de5-b2d4-2d5f7b69cf8e/ovsdb-server/0.log" Jan 26 14:51:15 crc kubenswrapper[4844]: I0126 14:51:15.485302 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-vnff8_6696649d-b30c-4ef9-beda-3cec75d656b4/ovn-controller/0.log" Jan 26 14:51:15 crc kubenswrapper[4844]: I0126 14:51:15.614759 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-bq8zv_f0224b88-8aeb-4de5-b2d4-2d5f7b69cf8e/ovs-vswitchd/0.log" Jan 26 14:51:15 crc kubenswrapper[4844]: I0126 14:51:15.756674 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-svbzh_5161eb41-8d1f-405a-b40f-630aad7d1925/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 14:51:15 crc kubenswrapper[4844]: I0126 14:51:15.915473 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_a0913fcd-1ca6-46f8-80a8-0c2ced36fea9/openstack-network-exporter/0.log" Jan 26 14:51:16 crc kubenswrapper[4844]: I0126 14:51:16.003004 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_a0913fcd-1ca6-46f8-80a8-0c2ced36fea9/ovn-northd/0.log" Jan 26 14:51:16 crc kubenswrapper[4844]: I0126 14:51:16.181567 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_490e8905-58e4-44a6-a4a4-ea873a5eaa94/openstack-network-exporter/0.log" Jan 26 14:51:16 crc kubenswrapper[4844]: I0126 14:51:16.209582 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_490e8905-58e4-44a6-a4a4-ea873a5eaa94/ovsdbserver-nb/0.log" Jan 26 14:51:16 crc kubenswrapper[4844]: I0126 14:51:16.393818 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_86421d71-6636-4491-9b3e-7b4e3bf39ee9/nova-metadata-metadata/0.log" Jan 26 14:51:16 crc kubenswrapper[4844]: I0126 14:51:16.419828 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_6b89a5fa-2181-432a-a613-6bbeeb0f56bb/openstack-network-exporter/0.log" Jan 26 14:51:16 crc kubenswrapper[4844]: I0126 14:51:16.498999 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_6b89a5fa-2181-432a-a613-6bbeeb0f56bb/ovsdbserver-sb/0.log" Jan 26 14:51:17 crc kubenswrapper[4844]: I0126 14:51:17.122565 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_fcca7d88-f1d4-463b-a412-ecfee5f8724d/init-config-reloader/0.log" Jan 
26 14:51:17 crc kubenswrapper[4844]: I0126 14:51:17.132913 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-7ff9fb4f5b-dz4mq_624dd95f-3ed5-4837-908b-b5e6d47a1edf/placement-api/0.log" Jan 26 14:51:17 crc kubenswrapper[4844]: I0126 14:51:17.146949 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-7ff9fb4f5b-dz4mq_624dd95f-3ed5-4837-908b-b5e6d47a1edf/placement-log/0.log" Jan 26 14:51:17 crc kubenswrapper[4844]: I0126 14:51:17.328289 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_fcca7d88-f1d4-463b-a412-ecfee5f8724d/config-reloader/0.log" Jan 26 14:51:17 crc kubenswrapper[4844]: I0126 14:51:17.377801 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_fcca7d88-f1d4-463b-a412-ecfee5f8724d/init-config-reloader/0.log" Jan 26 14:51:17 crc kubenswrapper[4844]: I0126 14:51:17.383845 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_fcca7d88-f1d4-463b-a412-ecfee5f8724d/prometheus/0.log" Jan 26 14:51:17 crc kubenswrapper[4844]: I0126 14:51:17.423289 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_fcca7d88-f1d4-463b-a412-ecfee5f8724d/thanos-sidecar/0.log" Jan 26 14:51:17 crc kubenswrapper[4844]: I0126 14:51:17.589020 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_463d25b4-7819-4947-925d-74c429093694/setup-container/0.log" Jan 26 14:51:17 crc kubenswrapper[4844]: I0126 14:51:17.740887 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_463d25b4-7819-4947-925d-74c429093694/setup-container/0.log" Jan 26 14:51:17 crc kubenswrapper[4844]: I0126 14:51:17.862771 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_463d25b4-7819-4947-925d-74c429093694/rabbitmq/0.log" Jan 26 14:51:17 crc kubenswrapper[4844]: I0126 14:51:17.884465 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-notifications-server-0_185637e1-efed-452c-ba52-7688909bad2c/setup-container/0.log" Jan 26 14:51:18 crc kubenswrapper[4844]: I0126 14:51:18.136006 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-notifications-server-0_185637e1-efed-452c-ba52-7688909bad2c/rabbitmq/0.log" Jan 26 14:51:18 crc kubenswrapper[4844]: I0126 14:51:18.159742 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-notifications-server-0_185637e1-efed-452c-ba52-7688909bad2c/setup-container/0.log" Jan 26 14:51:18 crc kubenswrapper[4844]: I0126 14:51:18.177475 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_38e1fc4a-33a4-443e-95bb-3e653d3f1a59/setup-container/0.log" Jan 26 14:51:18 crc kubenswrapper[4844]: I0126 14:51:18.484398 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_38e1fc4a-33a4-443e-95bb-3e653d3f1a59/setup-container/0.log" Jan 26 14:51:18 crc kubenswrapper[4844]: I0126 14:51:18.505465 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_38e1fc4a-33a4-443e-95bb-3e653d3f1a59/rabbitmq/0.log" Jan 26 14:51:18 crc kubenswrapper[4844]: I0126 14:51:18.519026 4844 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-k8c2z_342e7682-6393-4c70-9c22-5108b5473dc0/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 14:51:18 crc kubenswrapper[4844]: I0126 14:51:18.797627 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-4z6gd_e02f083a-8dcb-4454-8050-752c996dadd7/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 14:51:18 crc kubenswrapper[4844]: I0126 14:51:18.824017 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-tqkgn_d135fda9-894e-41c5-94a3-57aca842c386/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 14:51:19 crc kubenswrapper[4844]: I0126 14:51:19.021923 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-8qp5q_3ff365e7-065a-41e7-a3cc-642e66989dc9/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 14:51:19 crc kubenswrapper[4844]: I0126 14:51:19.054179 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-4fkj8_d45310a6-48b5-455c-960c-5aaaa0a5b469/ssh-known-hosts-edpm-deployment/0.log" Jan 26 14:51:19 crc kubenswrapper[4844]: I0126 14:51:19.299094 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-5d969b7b55-l9p8p_e8e7e0c6-a150-4957-8e36-2f75d269e203/proxy-server/0.log" Jan 26 14:51:19 crc kubenswrapper[4844]: I0126 14:51:19.521455 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-5d969b7b55-l9p8p_e8e7e0c6-a150-4957-8e36-2f75d269e203/proxy-httpd/0.log" Jan 26 14:51:19 crc kubenswrapper[4844]: I0126 14:51:19.543799 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-dh9kj_82fe3a1a-10c2-4378-a36b-b42131a2df4d/swift-ring-rebalance/0.log" Jan 26 14:51:19 crc kubenswrapper[4844]: I0126 14:51:19.669276 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8606256a-c070-4b18-906b-a4557edd45e7/account-auditor/0.log" Jan 26 14:51:19 crc kubenswrapper[4844]: I0126 14:51:19.761439 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8606256a-c070-4b18-906b-a4557edd45e7/account-reaper/0.log" Jan 26 14:51:19 crc kubenswrapper[4844]: I0126 14:51:19.864270 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8606256a-c070-4b18-906b-a4557edd45e7/account-replicator/0.log" Jan 26 14:51:19 crc kubenswrapper[4844]: I0126 14:51:19.944246 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8606256a-c070-4b18-906b-a4557edd45e7/container-auditor/0.log" Jan 26 14:51:19 crc kubenswrapper[4844]: I0126 14:51:19.956753 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8606256a-c070-4b18-906b-a4557edd45e7/account-server/0.log" Jan 26 14:51:20 crc kubenswrapper[4844]: I0126 14:51:20.026551 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8606256a-c070-4b18-906b-a4557edd45e7/container-replicator/0.log" Jan 26 14:51:20 crc kubenswrapper[4844]: I0126 14:51:20.125463 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8606256a-c070-4b18-906b-a4557edd45e7/container-server/0.log" Jan 26 14:51:20 crc kubenswrapper[4844]: I0126 14:51:20.172430 4844 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_8606256a-c070-4b18-906b-a4557edd45e7/container-updater/0.log" Jan 26 14:51:20 crc kubenswrapper[4844]: I0126 14:51:20.215030 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8606256a-c070-4b18-906b-a4557edd45e7/object-auditor/0.log" Jan 26 14:51:20 crc kubenswrapper[4844]: I0126 14:51:20.261652 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8606256a-c070-4b18-906b-a4557edd45e7/object-expirer/0.log" Jan 26 14:51:20 crc kubenswrapper[4844]: I0126 14:51:20.384713 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8606256a-c070-4b18-906b-a4557edd45e7/object-server/0.log" Jan 26 14:51:20 crc kubenswrapper[4844]: I0126 14:51:20.410818 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8606256a-c070-4b18-906b-a4557edd45e7/object-replicator/0.log" Jan 26 14:51:20 crc kubenswrapper[4844]: I0126 14:51:20.450934 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8606256a-c070-4b18-906b-a4557edd45e7/rsync/0.log" Jan 26 14:51:20 crc kubenswrapper[4844]: I0126 14:51:20.511472 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8606256a-c070-4b18-906b-a4557edd45e7/object-updater/0.log" Jan 26 14:51:20 crc kubenswrapper[4844]: I0126 14:51:20.835512 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8606256a-c070-4b18-906b-a4557edd45e7/swift-recon-cron/0.log" Jan 26 14:51:20 crc kubenswrapper[4844]: I0126 14:51:20.987168 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-dc9zd_28d2f4e7-9d62-41ba-88db-fc0591ec6d43/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 14:51:21 crc kubenswrapper[4844]: I0126 14:51:21.178139 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_f617457c-8f1e-4508-926e-bb6b77ea7444/tempest-tests-tempest-tests-runner/0.log" Jan 26 14:51:21 crc kubenswrapper[4844]: I0126 14:51:21.193777 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_a4920a59-74e4-4ac3-b437-3dbd074758d7/test-operator-logs-container/0.log" Jan 26 14:51:21 crc kubenswrapper[4844]: I0126 14:51:21.468435 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-br56n_5a2f9b87-b8bf-456e-84a4-6e1736d30419/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 14:51:22 crc kubenswrapper[4844]: I0126 14:51:22.214987 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-applier-0_75853a49-c21a-4df8-bcdf-0b160524e203/watcher-applier/0.log" Jan 26 14:51:22 crc kubenswrapper[4844]: I0126 14:51:22.797803 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-api-0_33ecc4c6-320a-41d8-a7c2-608bdda02b0a/watcher-api-log/0.log" Jan 26 14:51:25 crc kubenswrapper[4844]: I0126 14:51:25.854042 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-decision-engine-0_fdfa0abd-53cc-4cd5-9dd0-8d6571ba0fea/watcher-decision-engine/0.log" Jan 26 14:51:27 crc kubenswrapper[4844]: I0126 14:51:27.687771 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-api-0_33ecc4c6-320a-41d8-a7c2-608bdda02b0a/watcher-api/0.log" Jan 26 14:51:31 crc 
kubenswrapper[4844]: I0126 14:51:31.098194 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_f2bd5019-39c7-4b78-8610-4a7db01f5a85/memcached/0.log" Jan 26 14:51:50 crc kubenswrapper[4844]: I0126 14:51:50.180513 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5cc6200ab3f1714125386de3d0e34486afaa28bf51d7ef9eb7880e967ef6vsq_22fcada7-92af-4edd-903e-8706cffecc6c/util/0.log" Jan 26 14:51:50 crc kubenswrapper[4844]: I0126 14:51:50.319147 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5cc6200ab3f1714125386de3d0e34486afaa28bf51d7ef9eb7880e967ef6vsq_22fcada7-92af-4edd-903e-8706cffecc6c/util/0.log" Jan 26 14:51:50 crc kubenswrapper[4844]: I0126 14:51:50.369561 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5cc6200ab3f1714125386de3d0e34486afaa28bf51d7ef9eb7880e967ef6vsq_22fcada7-92af-4edd-903e-8706cffecc6c/pull/0.log" Jan 26 14:51:50 crc kubenswrapper[4844]: I0126 14:51:50.391134 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5cc6200ab3f1714125386de3d0e34486afaa28bf51d7ef9eb7880e967ef6vsq_22fcada7-92af-4edd-903e-8706cffecc6c/pull/0.log" Jan 26 14:51:50 crc kubenswrapper[4844]: I0126 14:51:50.567954 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5cc6200ab3f1714125386de3d0e34486afaa28bf51d7ef9eb7880e967ef6vsq_22fcada7-92af-4edd-903e-8706cffecc6c/util/0.log" Jan 26 14:51:50 crc kubenswrapper[4844]: I0126 14:51:50.568454 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5cc6200ab3f1714125386de3d0e34486afaa28bf51d7ef9eb7880e967ef6vsq_22fcada7-92af-4edd-903e-8706cffecc6c/pull/0.log" Jan 26 14:51:50 crc kubenswrapper[4844]: I0126 14:51:50.573254 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5cc6200ab3f1714125386de3d0e34486afaa28bf51d7ef9eb7880e967ef6vsq_22fcada7-92af-4edd-903e-8706cffecc6c/extract/0.log" Jan 26 14:51:50 crc kubenswrapper[4844]: I0126 14:51:50.833409 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-7478f7dbf9-sm4lj_aa463929-97db-4af2-8308-840d51ae717a/manager/0.log" Jan 26 14:51:50 crc kubenswrapper[4844]: I0126 14:51:50.839782 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7f86f8796f-5tq86_a29e2eac-c303-4ae6-9c3b-439a258ce420/manager/0.log" Jan 26 14:51:50 crc kubenswrapper[4844]: I0126 14:51:50.985898 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-b45d7bf98-gmfsm_c39cee42-2147-463f-90f5-62b0ad31ec96/manager/0.log" Jan 26 14:51:51 crc kubenswrapper[4844]: I0126 14:51:51.035378 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-78fdd796fd-mwszm_f8b1471a-3483-4c9e-b662-02906d9b18c0/manager/0.log" Jan 26 14:51:51 crc kubenswrapper[4844]: I0126 14:51:51.210837 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-k8f6n_9de97e7e-c381-4f7d-9380-9aadf848b3a6/manager/0.log" Jan 26 14:51:51 crc kubenswrapper[4844]: I0126 14:51:51.278016 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-rk7rt_981956b6-e5c7-4908-a72d-458026f29e4d/manager/0.log" Jan 26 14:51:51 crc 
kubenswrapper[4844]: I0126 14:51:51.533910 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-598f7747c9-krn66_1eca115f-b8cd-4a50-8adc-2d31e297657f/manager/0.log" Jan 26 14:51:51 crc kubenswrapper[4844]: I0126 14:51:51.719133 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-694cf4f878-vzncj_8b9f2639-4aaa-463a-b950-fc39fca31805/manager/0.log" Jan 26 14:51:51 crc kubenswrapper[4844]: I0126 14:51:51.804692 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b8b6d4659-ht7r9_a60ef848-810d-4c2c-8c23-341d8168e7e7/manager/0.log" Jan 26 14:51:51 crc kubenswrapper[4844]: I0126 14:51:51.860340 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-78c6999f6f-wtp6f_2a343b60-ecc4-4634-9a54-7814555dd3bc/manager/0.log" Jan 26 14:51:52 crc kubenswrapper[4844]: I0126 14:51:52.031759 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6b9fb5fdcb-bcdf4_154eb771-ca89-43f9-b002-e6f11d943cbe/manager/0.log" Jan 26 14:51:52 crc kubenswrapper[4844]: I0126 14:51:52.113164 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-78d58447c5-pffmq_8ac12453-5418-4c50-8b2a-61dfad6bf1e1/manager/0.log" Jan 26 14:51:52 crc kubenswrapper[4844]: I0126 14:51:52.301216 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-7bdb645866-x5shx_73721700-0f73-468c-9c69-2d3f078a7516/manager/0.log" Jan 26 14:51:52 crc kubenswrapper[4844]: I0126 14:51:52.336534 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-5f4cd88d46-566vm_4bf529eb-b7b9-4ca7-a55a-73fd7d58ac81/manager/0.log" Jan 26 14:51:52 crc kubenswrapper[4844]: I0126 14:51:52.410095 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b85478v8f_12e4b3b0-81a4-4752-8cea-e1a3178d38ba/manager/0.log" Jan 26 14:51:52 crc kubenswrapper[4844]: I0126 14:51:52.662693 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-54d8cfbbfb-9bfgj_d2118529-9df3-486e-9f15-3a54c55d9eb1/operator/0.log" Jan 26 14:51:52 crc kubenswrapper[4844]: I0126 14:51:52.942076 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-nql7g_bfb7276b-b13e-43c2-ae22-0165b6e3a68f/registry-server/0.log" Jan 26 14:51:53 crc kubenswrapper[4844]: I0126 14:51:53.135961 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-6f75f45d54-l7w8f_89ab862c-0d6a-4a44-9f28-9195e0213328/manager/0.log" Jan 26 14:51:53 crc kubenswrapper[4844]: I0126 14:51:53.296408 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-79d5ccc684-mkcr9_3a13e1fa-35b1-4adc-a21d-a09aa4ec91a7/manager/0.log" Jan 26 14:51:53 crc kubenswrapper[4844]: I0126 14:51:53.738127 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-8s4vt_e99dde4f-0ab1-45ad-b6c0-e5225fbfc77d/operator/0.log" Jan 26 14:51:53 crc kubenswrapper[4844]: I0126 
14:51:53.902141 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-547cbdb99f-88kvh_00b0af83-1dea-44ab-b074-fa7b5c9cf46d/manager/0.log" Jan 26 14:51:53 crc kubenswrapper[4844]: I0126 14:51:53.928518 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-6b75585dc8-tzrcv_dd52b1ad-222e-4b57-91e0-869bd8094adc/manager/0.log" Jan 26 14:51:54 crc kubenswrapper[4844]: I0126 14:51:54.156021 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-85cd9769bb-fj29j_9fb0454b-90d4-48f3-b069-86aada20e9f9/manager/0.log" Jan 26 14:51:54 crc kubenswrapper[4844]: I0126 14:51:54.217462 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-dgglg_915eea77-c5eb-4e5c-b9f2-404ba732dac8/manager/0.log" Jan 26 14:51:54 crc kubenswrapper[4844]: I0126 14:51:54.394951 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-5fc5788b68-9qjpz_c74ba998-8b13-4a63-a4b3-d027f70ff41d/manager/0.log" Jan 26 14:52:13 crc kubenswrapper[4844]: I0126 14:52:13.248377 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-qltc7_10b7b789-0c46-4e84-875e-f74c68981bca/control-plane-machine-set-operator/0.log" Jan 26 14:52:13 crc kubenswrapper[4844]: I0126 14:52:13.528165 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-zsn9c_4fd9b862-74de-4579-9b30-b51e5cbd3b56/kube-rbac-proxy/0.log" Jan 26 14:52:13 crc kubenswrapper[4844]: I0126 14:52:13.537287 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-zsn9c_4fd9b862-74de-4579-9b30-b51e5cbd3b56/machine-api-operator/0.log" Jan 26 14:52:25 crc kubenswrapper[4844]: I0126 14:52:25.836094 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-vhvzj_65d6aa35-f205-43c2-ad68-0bfa252093be/cert-manager-controller/0.log" Jan 26 14:52:26 crc kubenswrapper[4844]: I0126 14:52:26.021619 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-dv29d_a25263f7-0e4e-4253-abe6-20b223dc600e/cert-manager-cainjector/0.log" Jan 26 14:52:26 crc kubenswrapper[4844]: I0126 14:52:26.042349 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-7xbzs_97f29a7d-977c-41c6-8756-d6e5d6a35875/cert-manager-webhook/0.log" Jan 26 14:52:38 crc kubenswrapper[4844]: I0126 14:52:38.695345 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-qdxvv_213e48c5-2b34-4d8a-af54-773da9caddb5/nmstate-console-plugin/0.log" Jan 26 14:52:38 crc kubenswrapper[4844]: I0126 14:52:38.863343 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-2d462_9baf25b3-6096-4215-9455-b9126c02ffcf/nmstate-handler/0.log" Jan 26 14:52:38 crc kubenswrapper[4844]: I0126 14:52:38.938720 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-vgnf8_bcef572e-5718-4586-b0e3-907551cdf0ff/kube-rbac-proxy/0.log" Jan 26 14:52:39 crc kubenswrapper[4844]: I0126 14:52:39.101190 4844 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-vgnf8_bcef572e-5718-4586-b0e3-907551cdf0ff/nmstate-metrics/0.log" Jan 26 14:52:39 crc kubenswrapper[4844]: I0126 14:52:39.158741 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-9djrz_0c0a3ca8-870a-4c95-a1a0-002e4cdb3bb8/nmstate-operator/0.log" Jan 26 14:52:39 crc kubenswrapper[4844]: I0126 14:52:39.306589 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-blwvj_68790915-1674-4d77-8d03-d21698da101e/nmstate-webhook/0.log" Jan 26 14:52:53 crc kubenswrapper[4844]: I0126 14:52:53.095569 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-dg7zb_1dec1dad-33cd-4ea8-9f69-9e69e0f56e73/prometheus-operator/0.log" Jan 26 14:52:53 crc kubenswrapper[4844]: I0126 14:52:53.282665 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6b87948799-68hvv_321b4c21-0d4a-49d5-a14a-9f49e2ea5600/prometheus-operator-admission-webhook/0.log" Jan 26 14:52:53 crc kubenswrapper[4844]: I0126 14:52:53.325116 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6b87948799-mvsq5_b2533187-bdf5-44b9-a05d-ceb2e2ea467b/prometheus-operator-admission-webhook/0.log" Jan 26 14:52:53 crc kubenswrapper[4844]: I0126 14:52:53.481766 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-clgj9_50efd8fd-16d6-4d82-a9f0-ea82c4d50c4c/operator/0.log" Jan 26 14:52:53 crc kubenswrapper[4844]: I0126 14:52:53.513915 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-sjw9j_a9734a40-f918-40da-9931-7d55904a646a/perses-operator/0.log" Jan 26 14:53:06 crc kubenswrapper[4844]: I0126 14:53:06.364648 4844 patch_prober.go:28] interesting pod/machine-config-daemon-j7r9j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 14:53:06 crc kubenswrapper[4844]: I0126 14:53:06.365253 4844 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 14:53:06 crc kubenswrapper[4844]: I0126 14:53:06.624331 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-6qx7f_a5381cf1-7e94-4ac0-9054-ed80ebf76624/kube-rbac-proxy/0.log" Jan 26 14:53:06 crc kubenswrapper[4844]: I0126 14:53:06.711860 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-6qx7f_a5381cf1-7e94-4ac0-9054-ed80ebf76624/controller/0.log" Jan 26 14:53:06 crc kubenswrapper[4844]: I0126 14:53:06.928065 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9wgh7_a82f578e-e9b6-4a4d-aade-25ba70bac11f/cp-frr-files/0.log" Jan 26 14:53:07 crc kubenswrapper[4844]: I0126 14:53:07.104503 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9wgh7_a82f578e-e9b6-4a4d-aade-25ba70bac11f/cp-reloader/0.log" Jan 26 
14:53:07 crc kubenswrapper[4844]: I0126 14:53:07.107393 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9wgh7_a82f578e-e9b6-4a4d-aade-25ba70bac11f/cp-frr-files/0.log" Jan 26 14:53:07 crc kubenswrapper[4844]: I0126 14:53:07.152098 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9wgh7_a82f578e-e9b6-4a4d-aade-25ba70bac11f/cp-reloader/0.log" Jan 26 14:53:07 crc kubenswrapper[4844]: I0126 14:53:07.172050 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9wgh7_a82f578e-e9b6-4a4d-aade-25ba70bac11f/cp-metrics/0.log" Jan 26 14:53:07 crc kubenswrapper[4844]: I0126 14:53:07.351412 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9wgh7_a82f578e-e9b6-4a4d-aade-25ba70bac11f/cp-metrics/0.log" Jan 26 14:53:07 crc kubenswrapper[4844]: I0126 14:53:07.363068 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9wgh7_a82f578e-e9b6-4a4d-aade-25ba70bac11f/cp-reloader/0.log" Jan 26 14:53:07 crc kubenswrapper[4844]: I0126 14:53:07.373834 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9wgh7_a82f578e-e9b6-4a4d-aade-25ba70bac11f/cp-metrics/0.log" Jan 26 14:53:07 crc kubenswrapper[4844]: I0126 14:53:07.403809 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9wgh7_a82f578e-e9b6-4a4d-aade-25ba70bac11f/cp-frr-files/0.log" Jan 26 14:53:07 crc kubenswrapper[4844]: I0126 14:53:07.577651 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9wgh7_a82f578e-e9b6-4a4d-aade-25ba70bac11f/cp-frr-files/0.log" Jan 26 14:53:07 crc kubenswrapper[4844]: I0126 14:53:07.577668 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9wgh7_a82f578e-e9b6-4a4d-aade-25ba70bac11f/cp-reloader/0.log" Jan 26 14:53:07 crc kubenswrapper[4844]: I0126 14:53:07.628467 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9wgh7_a82f578e-e9b6-4a4d-aade-25ba70bac11f/cp-metrics/0.log" Jan 26 14:53:07 crc kubenswrapper[4844]: I0126 14:53:07.661042 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9wgh7_a82f578e-e9b6-4a4d-aade-25ba70bac11f/controller/0.log" Jan 26 14:53:07 crc kubenswrapper[4844]: I0126 14:53:07.758109 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9wgh7_a82f578e-e9b6-4a4d-aade-25ba70bac11f/frr-metrics/0.log" Jan 26 14:53:07 crc kubenswrapper[4844]: I0126 14:53:07.816236 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9wgh7_a82f578e-e9b6-4a4d-aade-25ba70bac11f/kube-rbac-proxy/0.log" Jan 26 14:53:07 crc kubenswrapper[4844]: I0126 14:53:07.869889 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9wgh7_a82f578e-e9b6-4a4d-aade-25ba70bac11f/kube-rbac-proxy-frr/0.log" Jan 26 14:53:08 crc kubenswrapper[4844]: I0126 14:53:08.021967 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9wgh7_a82f578e-e9b6-4a4d-aade-25ba70bac11f/reloader/0.log" Jan 26 14:53:08 crc kubenswrapper[4844]: I0126 14:53:08.092333 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-5tzp4_08638bb5-906c-4f51-9437-8667d323feae/frr-k8s-webhook-server/0.log" Jan 26 14:53:08 crc kubenswrapper[4844]: I0126 14:53:08.332501 4844 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_metallb-operator-controller-manager-59ccf49fff-tmmnh_03a2059f-ed6b-49f5-9476-bf21d424567f/manager/0.log" Jan 26 14:53:08 crc kubenswrapper[4844]: I0126 14:53:08.522434 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-56567ff486-jdjng_2d1458da-4eb4-4e5a-ae05-399cb9e40dda/webhook-server/0.log" Jan 26 14:53:08 crc kubenswrapper[4844]: I0126 14:53:08.598031 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-qtw5d_eadfd892-6882-4514-abcd-e68612f9eecf/kube-rbac-proxy/0.log" Jan 26 14:53:09 crc kubenswrapper[4844]: I0126 14:53:09.228126 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-qtw5d_eadfd892-6882-4514-abcd-e68612f9eecf/speaker/0.log" Jan 26 14:53:09 crc kubenswrapper[4844]: I0126 14:53:09.881659 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9wgh7_a82f578e-e9b6-4a4d-aade-25ba70bac11f/frr/0.log" Jan 26 14:53:22 crc kubenswrapper[4844]: I0126 14:53:22.126623 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqgt7j_a04410f5-0ebb-4519-9806-a0210b9fdfdc/util/0.log" Jan 26 14:53:22 crc kubenswrapper[4844]: I0126 14:53:22.382249 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqgt7j_a04410f5-0ebb-4519-9806-a0210b9fdfdc/pull/0.log" Jan 26 14:53:22 crc kubenswrapper[4844]: I0126 14:53:22.421865 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqgt7j_a04410f5-0ebb-4519-9806-a0210b9fdfdc/pull/0.log" Jan 26 14:53:22 crc kubenswrapper[4844]: I0126 14:53:22.454170 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqgt7j_a04410f5-0ebb-4519-9806-a0210b9fdfdc/util/0.log" Jan 26 14:53:22 crc kubenswrapper[4844]: I0126 14:53:22.636064 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqgt7j_a04410f5-0ebb-4519-9806-a0210b9fdfdc/pull/0.log" Jan 26 14:53:22 crc kubenswrapper[4844]: I0126 14:53:22.644613 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqgt7j_a04410f5-0ebb-4519-9806-a0210b9fdfdc/extract/0.log" Jan 26 14:53:22 crc kubenswrapper[4844]: I0126 14:53:22.669325 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqgt7j_a04410f5-0ebb-4519-9806-a0210b9fdfdc/util/0.log" Jan 26 14:53:22 crc kubenswrapper[4844]: I0126 14:53:22.839025 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713d2kj2_b2b5f908-45d0-4977-93ce-6e5842a166cc/util/0.log" Jan 26 14:53:23 crc kubenswrapper[4844]: I0126 14:53:23.006256 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713d2kj2_b2b5f908-45d0-4977-93ce-6e5842a166cc/util/0.log" Jan 26 14:53:23 crc kubenswrapper[4844]: I0126 14:53:23.025692 4844 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713d2kj2_b2b5f908-45d0-4977-93ce-6e5842a166cc/pull/0.log" Jan 26 14:53:23 crc kubenswrapper[4844]: I0126 14:53:23.073164 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713d2kj2_b2b5f908-45d0-4977-93ce-6e5842a166cc/pull/0.log" Jan 26 14:53:23 crc kubenswrapper[4844]: I0126 14:53:23.197199 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713d2kj2_b2b5f908-45d0-4977-93ce-6e5842a166cc/util/0.log" Jan 26 14:53:23 crc kubenswrapper[4844]: I0126 14:53:23.217964 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713d2kj2_b2b5f908-45d0-4977-93ce-6e5842a166cc/pull/0.log" Jan 26 14:53:23 crc kubenswrapper[4844]: I0126 14:53:23.225355 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713d2kj2_b2b5f908-45d0-4977-93ce-6e5842a166cc/extract/0.log" Jan 26 14:53:23 crc kubenswrapper[4844]: I0126 14:53:23.380525 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08vhrwh_bc9484bc-f8ef-463e-8d9e-c7d6e7f02cdc/util/0.log" Jan 26 14:53:23 crc kubenswrapper[4844]: I0126 14:53:23.626703 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08vhrwh_bc9484bc-f8ef-463e-8d9e-c7d6e7f02cdc/pull/0.log" Jan 26 14:53:23 crc kubenswrapper[4844]: I0126 14:53:23.644212 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08vhrwh_bc9484bc-f8ef-463e-8d9e-c7d6e7f02cdc/util/0.log" Jan 26 14:53:23 crc kubenswrapper[4844]: I0126 14:53:23.645993 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08vhrwh_bc9484bc-f8ef-463e-8d9e-c7d6e7f02cdc/pull/0.log" Jan 26 14:53:23 crc kubenswrapper[4844]: I0126 14:53:23.806010 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08vhrwh_bc9484bc-f8ef-463e-8d9e-c7d6e7f02cdc/pull/0.log" Jan 26 14:53:23 crc kubenswrapper[4844]: I0126 14:53:23.817844 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08vhrwh_bc9484bc-f8ef-463e-8d9e-c7d6e7f02cdc/extract/0.log" Jan 26 14:53:23 crc kubenswrapper[4844]: I0126 14:53:23.833112 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08vhrwh_bc9484bc-f8ef-463e-8d9e-c7d6e7f02cdc/util/0.log" Jan 26 14:53:23 crc kubenswrapper[4844]: I0126 14:53:23.994695 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bnfk2_be6c04bd-58fc-41e9-bdfa-facc3fc12358/extract-utilities/0.log" Jan 26 14:53:24 crc kubenswrapper[4844]: I0126 14:53:24.177894 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bnfk2_be6c04bd-58fc-41e9-bdfa-facc3fc12358/extract-content/0.log" Jan 26 14:53:24 crc 
kubenswrapper[4844]: I0126 14:53:24.180842 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bnfk2_be6c04bd-58fc-41e9-bdfa-facc3fc12358/extract-utilities/0.log" Jan 26 14:53:24 crc kubenswrapper[4844]: I0126 14:53:24.203815 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bnfk2_be6c04bd-58fc-41e9-bdfa-facc3fc12358/extract-content/0.log" Jan 26 14:53:24 crc kubenswrapper[4844]: I0126 14:53:24.354581 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bnfk2_be6c04bd-58fc-41e9-bdfa-facc3fc12358/extract-utilities/0.log" Jan 26 14:53:24 crc kubenswrapper[4844]: I0126 14:53:24.380816 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bnfk2_be6c04bd-58fc-41e9-bdfa-facc3fc12358/extract-content/0.log" Jan 26 14:53:24 crc kubenswrapper[4844]: I0126 14:53:24.635162 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9ckcc_5fab62d0-54ca-4d28-b84b-5c66d8bf0887/extract-utilities/0.log" Jan 26 14:53:24 crc kubenswrapper[4844]: I0126 14:53:24.837970 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9ckcc_5fab62d0-54ca-4d28-b84b-5c66d8bf0887/extract-utilities/0.log" Jan 26 14:53:24 crc kubenswrapper[4844]: I0126 14:53:24.878185 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9ckcc_5fab62d0-54ca-4d28-b84b-5c66d8bf0887/extract-content/0.log" Jan 26 14:53:24 crc kubenswrapper[4844]: I0126 14:53:24.916132 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9ckcc_5fab62d0-54ca-4d28-b84b-5c66d8bf0887/extract-content/0.log" Jan 26 14:53:25 crc kubenswrapper[4844]: I0126 14:53:25.215095 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9ckcc_5fab62d0-54ca-4d28-b84b-5c66d8bf0887/extract-content/0.log" Jan 26 14:53:25 crc kubenswrapper[4844]: I0126 14:53:25.229432 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9ckcc_5fab62d0-54ca-4d28-b84b-5c66d8bf0887/extract-utilities/0.log" Jan 26 14:53:25 crc kubenswrapper[4844]: I0126 14:53:25.354918 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bnfk2_be6c04bd-58fc-41e9-bdfa-facc3fc12358/registry-server/0.log" Jan 26 14:53:25 crc kubenswrapper[4844]: I0126 14:53:25.493100 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-q4p7z_5374369b-4aee-4c66-98fe-7bb183b4fdfa/marketplace-operator/0.log" Jan 26 14:53:25 crc kubenswrapper[4844]: I0126 14:53:25.723388 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-jjx57_a4779355-4fd0-4b1d-adef-3e4ebba15903/extract-utilities/0.log" Jan 26 14:53:25 crc kubenswrapper[4844]: I0126 14:53:25.955765 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-jjx57_a4779355-4fd0-4b1d-adef-3e4ebba15903/extract-content/0.log" Jan 26 14:53:25 crc kubenswrapper[4844]: I0126 14:53:25.993663 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-jjx57_a4779355-4fd0-4b1d-adef-3e4ebba15903/extract-content/0.log" Jan 26 14:53:26 crc 
kubenswrapper[4844]: I0126 14:53:26.017423 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-jjx57_a4779355-4fd0-4b1d-adef-3e4ebba15903/extract-utilities/0.log" Jan 26 14:53:26 crc kubenswrapper[4844]: I0126 14:53:26.228742 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-jjx57_a4779355-4fd0-4b1d-adef-3e4ebba15903/extract-utilities/0.log" Jan 26 14:53:26 crc kubenswrapper[4844]: I0126 14:53:26.302785 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-jjx57_a4779355-4fd0-4b1d-adef-3e4ebba15903/extract-content/0.log" Jan 26 14:53:26 crc kubenswrapper[4844]: I0126 14:53:26.383047 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9ckcc_5fab62d0-54ca-4d28-b84b-5c66d8bf0887/registry-server/0.log" Jan 26 14:53:26 crc kubenswrapper[4844]: I0126 14:53:26.595862 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-m8rzx_9cf02a58-0976-482c-9e29-b8cb52254a3b/extract-utilities/0.log" Jan 26 14:53:26 crc kubenswrapper[4844]: I0126 14:53:26.600822 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-jjx57_a4779355-4fd0-4b1d-adef-3e4ebba15903/registry-server/0.log" Jan 26 14:53:26 crc kubenswrapper[4844]: I0126 14:53:26.768701 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-m8rzx_9cf02a58-0976-482c-9e29-b8cb52254a3b/extract-content/0.log" Jan 26 14:53:26 crc kubenswrapper[4844]: I0126 14:53:26.784155 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-m8rzx_9cf02a58-0976-482c-9e29-b8cb52254a3b/extract-utilities/0.log" Jan 26 14:53:26 crc kubenswrapper[4844]: I0126 14:53:26.790868 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-m8rzx_9cf02a58-0976-482c-9e29-b8cb52254a3b/extract-content/0.log" Jan 26 14:53:26 crc kubenswrapper[4844]: I0126 14:53:26.930993 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-m8rzx_9cf02a58-0976-482c-9e29-b8cb52254a3b/extract-utilities/0.log" Jan 26 14:53:26 crc kubenswrapper[4844]: I0126 14:53:26.955406 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-m8rzx_9cf02a58-0976-482c-9e29-b8cb52254a3b/extract-content/0.log" Jan 26 14:53:27 crc kubenswrapper[4844]: I0126 14:53:27.892571 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-m8rzx_9cf02a58-0976-482c-9e29-b8cb52254a3b/registry-server/0.log" Jan 26 14:53:36 crc kubenswrapper[4844]: I0126 14:53:36.365186 4844 patch_prober.go:28] interesting pod/machine-config-daemon-j7r9j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 14:53:36 crc kubenswrapper[4844]: I0126 14:53:36.365733 4844 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 14:53:39 crc 
kubenswrapper[4844]: I0126 14:53:39.619535 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6b87948799-mvsq5_b2533187-bdf5-44b9-a05d-ceb2e2ea467b/prometheus-operator-admission-webhook/0.log" Jan 26 14:53:39 crc kubenswrapper[4844]: I0126 14:53:39.659301 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6b87948799-68hvv_321b4c21-0d4a-49d5-a14a-9f49e2ea5600/prometheus-operator-admission-webhook/0.log" Jan 26 14:53:39 crc kubenswrapper[4844]: I0126 14:53:39.669350 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-dg7zb_1dec1dad-33cd-4ea8-9f69-9e69e0f56e73/prometheus-operator/0.log" Jan 26 14:53:39 crc kubenswrapper[4844]: I0126 14:53:39.834588 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-clgj9_50efd8fd-16d6-4d82-a9f0-ea82c4d50c4c/operator/0.log" Jan 26 14:53:39 crc kubenswrapper[4844]: I0126 14:53:39.885517 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-sjw9j_a9734a40-f918-40da-9931-7d55904a646a/perses-operator/0.log" Jan 26 14:54:06 crc kubenswrapper[4844]: I0126 14:54:06.365004 4844 patch_prober.go:28] interesting pod/machine-config-daemon-j7r9j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 14:54:06 crc kubenswrapper[4844]: I0126 14:54:06.365784 4844 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 14:54:06 crc kubenswrapper[4844]: I0126 14:54:06.365856 4844 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" Jan 26 14:54:06 crc kubenswrapper[4844]: I0126 14:54:06.367071 4844 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"11b408fde86b2eff6057acce3a2882ce486e50f73c743f52ee4ee12015da3c9e"} pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 14:54:06 crc kubenswrapper[4844]: I0126 14:54:06.367189 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" containerName="machine-config-daemon" containerID="cri-o://11b408fde86b2eff6057acce3a2882ce486e50f73c743f52ee4ee12015da3c9e" gracePeriod=600 Jan 26 14:54:07 crc kubenswrapper[4844]: E0126 14:54:07.010478 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" 
podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 14:54:07 crc kubenswrapper[4844]: I0126 14:54:07.035070 4844 generic.go:334] "Generic (PLEG): container finished" podID="e3602fc7-397b-4d73-ab0c-45acc047397b" containerID="11b408fde86b2eff6057acce3a2882ce486e50f73c743f52ee4ee12015da3c9e" exitCode=0 Jan 26 14:54:07 crc kubenswrapper[4844]: I0126 14:54:07.035119 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" event={"ID":"e3602fc7-397b-4d73-ab0c-45acc047397b","Type":"ContainerDied","Data":"11b408fde86b2eff6057acce3a2882ce486e50f73c743f52ee4ee12015da3c9e"} Jan 26 14:54:07 crc kubenswrapper[4844]: I0126 14:54:07.035158 4844 scope.go:117] "RemoveContainer" containerID="f1d9ed368bf3314f0fcf60a78822b0e13b92dd28c5522c99c46976afa4696e06" Jan 26 14:54:07 crc kubenswrapper[4844]: I0126 14:54:07.036070 4844 scope.go:117] "RemoveContainer" containerID="11b408fde86b2eff6057acce3a2882ce486e50f73c743f52ee4ee12015da3c9e" Jan 26 14:54:07 crc kubenswrapper[4844]: E0126 14:54:07.036388 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 14:54:20 crc kubenswrapper[4844]: I0126 14:54:20.314274 4844 scope.go:117] "RemoveContainer" containerID="11b408fde86b2eff6057acce3a2882ce486e50f73c743f52ee4ee12015da3c9e" Jan 26 14:54:20 crc kubenswrapper[4844]: E0126 14:54:20.315042 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 14:54:32 crc kubenswrapper[4844]: I0126 14:54:32.883171 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-ssbxj"] Jan 26 14:54:32 crc kubenswrapper[4844]: E0126 14:54:32.884237 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c110c78-5474-48ff-8aac-dd6f56ce0426" containerName="container-00" Jan 26 14:54:32 crc kubenswrapper[4844]: I0126 14:54:32.884257 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c110c78-5474-48ff-8aac-dd6f56ce0426" containerName="container-00" Jan 26 14:54:32 crc kubenswrapper[4844]: I0126 14:54:32.884548 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c110c78-5474-48ff-8aac-dd6f56ce0426" containerName="container-00" Jan 26 14:54:32 crc kubenswrapper[4844]: I0126 14:54:32.887425 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ssbxj" Jan 26 14:54:32 crc kubenswrapper[4844]: I0126 14:54:32.890591 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ssbxj"] Jan 26 14:54:33 crc kubenswrapper[4844]: I0126 14:54:33.020477 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tfnv\" (UniqueName: \"kubernetes.io/projected/163bb4dc-817f-4696-897a-c1fe4b0f09f8-kube-api-access-5tfnv\") pod \"redhat-marketplace-ssbxj\" (UID: \"163bb4dc-817f-4696-897a-c1fe4b0f09f8\") " pod="openshift-marketplace/redhat-marketplace-ssbxj" Jan 26 14:54:33 crc kubenswrapper[4844]: I0126 14:54:33.020544 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/163bb4dc-817f-4696-897a-c1fe4b0f09f8-utilities\") pod \"redhat-marketplace-ssbxj\" (UID: \"163bb4dc-817f-4696-897a-c1fe4b0f09f8\") " pod="openshift-marketplace/redhat-marketplace-ssbxj" Jan 26 14:54:33 crc kubenswrapper[4844]: I0126 14:54:33.020611 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/163bb4dc-817f-4696-897a-c1fe4b0f09f8-catalog-content\") pod \"redhat-marketplace-ssbxj\" (UID: \"163bb4dc-817f-4696-897a-c1fe4b0f09f8\") " pod="openshift-marketplace/redhat-marketplace-ssbxj" Jan 26 14:54:33 crc kubenswrapper[4844]: I0126 14:54:33.122961 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5tfnv\" (UniqueName: \"kubernetes.io/projected/163bb4dc-817f-4696-897a-c1fe4b0f09f8-kube-api-access-5tfnv\") pod \"redhat-marketplace-ssbxj\" (UID: \"163bb4dc-817f-4696-897a-c1fe4b0f09f8\") " pod="openshift-marketplace/redhat-marketplace-ssbxj" Jan 26 14:54:33 crc kubenswrapper[4844]: I0126 14:54:33.123036 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/163bb4dc-817f-4696-897a-c1fe4b0f09f8-utilities\") pod \"redhat-marketplace-ssbxj\" (UID: \"163bb4dc-817f-4696-897a-c1fe4b0f09f8\") " pod="openshift-marketplace/redhat-marketplace-ssbxj" Jan 26 14:54:33 crc kubenswrapper[4844]: I0126 14:54:33.123077 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/163bb4dc-817f-4696-897a-c1fe4b0f09f8-catalog-content\") pod \"redhat-marketplace-ssbxj\" (UID: \"163bb4dc-817f-4696-897a-c1fe4b0f09f8\") " pod="openshift-marketplace/redhat-marketplace-ssbxj" Jan 26 14:54:33 crc kubenswrapper[4844]: I0126 14:54:33.124167 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/163bb4dc-817f-4696-897a-c1fe4b0f09f8-utilities\") pod \"redhat-marketplace-ssbxj\" (UID: \"163bb4dc-817f-4696-897a-c1fe4b0f09f8\") " pod="openshift-marketplace/redhat-marketplace-ssbxj" Jan 26 14:54:33 crc kubenswrapper[4844]: I0126 14:54:33.124290 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/163bb4dc-817f-4696-897a-c1fe4b0f09f8-catalog-content\") pod \"redhat-marketplace-ssbxj\" (UID: \"163bb4dc-817f-4696-897a-c1fe4b0f09f8\") " pod="openshift-marketplace/redhat-marketplace-ssbxj" Jan 26 14:54:33 crc kubenswrapper[4844]: I0126 14:54:33.168230 4844 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-5tfnv\" (UniqueName: \"kubernetes.io/projected/163bb4dc-817f-4696-897a-c1fe4b0f09f8-kube-api-access-5tfnv\") pod \"redhat-marketplace-ssbxj\" (UID: \"163bb4dc-817f-4696-897a-c1fe4b0f09f8\") " pod="openshift-marketplace/redhat-marketplace-ssbxj" Jan 26 14:54:33 crc kubenswrapper[4844]: I0126 14:54:33.213522 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ssbxj" Jan 26 14:54:33 crc kubenswrapper[4844]: I0126 14:54:33.695004 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ssbxj"] Jan 26 14:54:34 crc kubenswrapper[4844]: I0126 14:54:34.352760 4844 generic.go:334] "Generic (PLEG): container finished" podID="163bb4dc-817f-4696-897a-c1fe4b0f09f8" containerID="0f7b6c0297d6d74b9e204bc55c31dba6e9f372c2204952a27c0a6383aa03bac3" exitCode=0 Jan 26 14:54:34 crc kubenswrapper[4844]: I0126 14:54:34.353014 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ssbxj" event={"ID":"163bb4dc-817f-4696-897a-c1fe4b0f09f8","Type":"ContainerDied","Data":"0f7b6c0297d6d74b9e204bc55c31dba6e9f372c2204952a27c0a6383aa03bac3"} Jan 26 14:54:34 crc kubenswrapper[4844]: I0126 14:54:34.353040 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ssbxj" event={"ID":"163bb4dc-817f-4696-897a-c1fe4b0f09f8","Type":"ContainerStarted","Data":"5ccff166ac492d81816a63637c73b820d96923d439d39c3b36e9c4eafc3aa9cb"} Jan 26 14:54:34 crc kubenswrapper[4844]: I0126 14:54:34.354811 4844 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 14:54:35 crc kubenswrapper[4844]: I0126 14:54:35.314737 4844 scope.go:117] "RemoveContainer" containerID="11b408fde86b2eff6057acce3a2882ce486e50f73c743f52ee4ee12015da3c9e" Jan 26 14:54:35 crc kubenswrapper[4844]: E0126 14:54:35.315443 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 14:54:36 crc kubenswrapper[4844]: I0126 14:54:36.375524 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ssbxj" event={"ID":"163bb4dc-817f-4696-897a-c1fe4b0f09f8","Type":"ContainerStarted","Data":"299798774cc0420f771acb9f8dd0474d368eb9ba15333f4eb0ce8b2c4bfea141"} Jan 26 14:54:37 crc kubenswrapper[4844]: I0126 14:54:37.388702 4844 generic.go:334] "Generic (PLEG): container finished" podID="163bb4dc-817f-4696-897a-c1fe4b0f09f8" containerID="299798774cc0420f771acb9f8dd0474d368eb9ba15333f4eb0ce8b2c4bfea141" exitCode=0 Jan 26 14:54:37 crc kubenswrapper[4844]: I0126 14:54:37.388897 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ssbxj" event={"ID":"163bb4dc-817f-4696-897a-c1fe4b0f09f8","Type":"ContainerDied","Data":"299798774cc0420f771acb9f8dd0474d368eb9ba15333f4eb0ce8b2c4bfea141"} Jan 26 14:54:41 crc kubenswrapper[4844]: I0126 14:54:41.436062 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ssbxj" 
event={"ID":"163bb4dc-817f-4696-897a-c1fe4b0f09f8","Type":"ContainerStarted","Data":"e86701c5506fc4eb3fa2b24e5ecaf02e8a09e154f13d385b38527106604ffb43"} Jan 26 14:54:41 crc kubenswrapper[4844]: I0126 14:54:41.456275 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-ssbxj" podStartSLOduration=3.351302553 podStartE2EDuration="9.456259896s" podCreationTimestamp="2026-01-26 14:54:32 +0000 UTC" firstStartedPulling="2026-01-26 14:54:34.354605999 +0000 UTC m=+7851.287973611" lastFinishedPulling="2026-01-26 14:54:40.459563342 +0000 UTC m=+7857.392930954" observedRunningTime="2026-01-26 14:54:41.455325693 +0000 UTC m=+7858.388693325" watchObservedRunningTime="2026-01-26 14:54:41.456259896 +0000 UTC m=+7858.389627508" Jan 26 14:54:43 crc kubenswrapper[4844]: I0126 14:54:43.213771 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-ssbxj" Jan 26 14:54:43 crc kubenswrapper[4844]: I0126 14:54:43.213852 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-ssbxj" Jan 26 14:54:43 crc kubenswrapper[4844]: I0126 14:54:43.261262 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-ssbxj" Jan 26 14:54:47 crc kubenswrapper[4844]: I0126 14:54:47.319509 4844 scope.go:117] "RemoveContainer" containerID="11b408fde86b2eff6057acce3a2882ce486e50f73c743f52ee4ee12015da3c9e" Jan 26 14:54:47 crc kubenswrapper[4844]: E0126 14:54:47.320248 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 14:54:49 crc kubenswrapper[4844]: I0126 14:54:49.752996 4844 scope.go:117] "RemoveContainer" containerID="1c46dce0d62b47ee86c1281c9cbeb42a9bd73769c80464e27085385126b99a2a" Jan 26 14:54:49 crc kubenswrapper[4844]: I0126 14:54:49.791647 4844 scope.go:117] "RemoveContainer" containerID="8b43d96aeab452f3a8c0474f5c46d6de19ddf6bffc927f1a3b8b723c0a05d179" Jan 26 14:54:49 crc kubenswrapper[4844]: I0126 14:54:49.821224 4844 scope.go:117] "RemoveContainer" containerID="9acc60793ed7be2ed9c963d3f788b5a4394a56804ec69f7efc4b3f93b80ffdee" Jan 26 14:54:53 crc kubenswrapper[4844]: I0126 14:54:53.263132 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-ssbxj" Jan 26 14:54:53 crc kubenswrapper[4844]: I0126 14:54:53.326858 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ssbxj"] Jan 26 14:54:53 crc kubenswrapper[4844]: I0126 14:54:53.575084 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-ssbxj" podUID="163bb4dc-817f-4696-897a-c1fe4b0f09f8" containerName="registry-server" containerID="cri-o://e86701c5506fc4eb3fa2b24e5ecaf02e8a09e154f13d385b38527106604ffb43" gracePeriod=2 Jan 26 14:54:54 crc kubenswrapper[4844]: I0126 14:54:54.013085 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ssbxj" Jan 26 14:54:54 crc kubenswrapper[4844]: I0126 14:54:54.140215 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5tfnv\" (UniqueName: \"kubernetes.io/projected/163bb4dc-817f-4696-897a-c1fe4b0f09f8-kube-api-access-5tfnv\") pod \"163bb4dc-817f-4696-897a-c1fe4b0f09f8\" (UID: \"163bb4dc-817f-4696-897a-c1fe4b0f09f8\") " Jan 26 14:54:54 crc kubenswrapper[4844]: I0126 14:54:54.140340 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/163bb4dc-817f-4696-897a-c1fe4b0f09f8-catalog-content\") pod \"163bb4dc-817f-4696-897a-c1fe4b0f09f8\" (UID: \"163bb4dc-817f-4696-897a-c1fe4b0f09f8\") " Jan 26 14:54:54 crc kubenswrapper[4844]: I0126 14:54:54.140457 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/163bb4dc-817f-4696-897a-c1fe4b0f09f8-utilities\") pod \"163bb4dc-817f-4696-897a-c1fe4b0f09f8\" (UID: \"163bb4dc-817f-4696-897a-c1fe4b0f09f8\") " Jan 26 14:54:54 crc kubenswrapper[4844]: I0126 14:54:54.141257 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/163bb4dc-817f-4696-897a-c1fe4b0f09f8-utilities" (OuterVolumeSpecName: "utilities") pod "163bb4dc-817f-4696-897a-c1fe4b0f09f8" (UID: "163bb4dc-817f-4696-897a-c1fe4b0f09f8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:54:54 crc kubenswrapper[4844]: I0126 14:54:54.146299 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/163bb4dc-817f-4696-897a-c1fe4b0f09f8-kube-api-access-5tfnv" (OuterVolumeSpecName: "kube-api-access-5tfnv") pod "163bb4dc-817f-4696-897a-c1fe4b0f09f8" (UID: "163bb4dc-817f-4696-897a-c1fe4b0f09f8"). InnerVolumeSpecName "kube-api-access-5tfnv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:54:54 crc kubenswrapper[4844]: I0126 14:54:54.163498 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/163bb4dc-817f-4696-897a-c1fe4b0f09f8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "163bb4dc-817f-4696-897a-c1fe4b0f09f8" (UID: "163bb4dc-817f-4696-897a-c1fe4b0f09f8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:54:54 crc kubenswrapper[4844]: I0126 14:54:54.243131 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5tfnv\" (UniqueName: \"kubernetes.io/projected/163bb4dc-817f-4696-897a-c1fe4b0f09f8-kube-api-access-5tfnv\") on node \"crc\" DevicePath \"\"" Jan 26 14:54:54 crc kubenswrapper[4844]: I0126 14:54:54.243167 4844 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/163bb4dc-817f-4696-897a-c1fe4b0f09f8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 14:54:54 crc kubenswrapper[4844]: I0126 14:54:54.243176 4844 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/163bb4dc-817f-4696-897a-c1fe4b0f09f8-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 14:54:54 crc kubenswrapper[4844]: I0126 14:54:54.585224 4844 generic.go:334] "Generic (PLEG): container finished" podID="163bb4dc-817f-4696-897a-c1fe4b0f09f8" containerID="e86701c5506fc4eb3fa2b24e5ecaf02e8a09e154f13d385b38527106604ffb43" exitCode=0 Jan 26 14:54:54 crc kubenswrapper[4844]: I0126 14:54:54.585263 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ssbxj" event={"ID":"163bb4dc-817f-4696-897a-c1fe4b0f09f8","Type":"ContainerDied","Data":"e86701c5506fc4eb3fa2b24e5ecaf02e8a09e154f13d385b38527106604ffb43"} Jan 26 14:54:54 crc kubenswrapper[4844]: I0126 14:54:54.585282 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ssbxj" Jan 26 14:54:54 crc kubenswrapper[4844]: I0126 14:54:54.585299 4844 scope.go:117] "RemoveContainer" containerID="e86701c5506fc4eb3fa2b24e5ecaf02e8a09e154f13d385b38527106604ffb43" Jan 26 14:54:54 crc kubenswrapper[4844]: I0126 14:54:54.585288 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ssbxj" event={"ID":"163bb4dc-817f-4696-897a-c1fe4b0f09f8","Type":"ContainerDied","Data":"5ccff166ac492d81816a63637c73b820d96923d439d39c3b36e9c4eafc3aa9cb"} Jan 26 14:54:54 crc kubenswrapper[4844]: I0126 14:54:54.619760 4844 scope.go:117] "RemoveContainer" containerID="299798774cc0420f771acb9f8dd0474d368eb9ba15333f4eb0ce8b2c4bfea141" Jan 26 14:54:54 crc kubenswrapper[4844]: I0126 14:54:54.639181 4844 scope.go:117] "RemoveContainer" containerID="0f7b6c0297d6d74b9e204bc55c31dba6e9f372c2204952a27c0a6383aa03bac3" Jan 26 14:54:54 crc kubenswrapper[4844]: I0126 14:54:54.651979 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ssbxj"] Jan 26 14:54:54 crc kubenswrapper[4844]: I0126 14:54:54.659878 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-ssbxj"] Jan 26 14:54:54 crc kubenswrapper[4844]: I0126 14:54:54.701459 4844 scope.go:117] "RemoveContainer" containerID="e86701c5506fc4eb3fa2b24e5ecaf02e8a09e154f13d385b38527106604ffb43" Jan 26 14:54:54 crc kubenswrapper[4844]: E0126 14:54:54.702350 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e86701c5506fc4eb3fa2b24e5ecaf02e8a09e154f13d385b38527106604ffb43\": container with ID starting with e86701c5506fc4eb3fa2b24e5ecaf02e8a09e154f13d385b38527106604ffb43 not found: ID does not exist" containerID="e86701c5506fc4eb3fa2b24e5ecaf02e8a09e154f13d385b38527106604ffb43" Jan 26 14:54:54 crc kubenswrapper[4844]: I0126 14:54:54.702376 4844 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e86701c5506fc4eb3fa2b24e5ecaf02e8a09e154f13d385b38527106604ffb43"} err="failed to get container status \"e86701c5506fc4eb3fa2b24e5ecaf02e8a09e154f13d385b38527106604ffb43\": rpc error: code = NotFound desc = could not find container \"e86701c5506fc4eb3fa2b24e5ecaf02e8a09e154f13d385b38527106604ffb43\": container with ID starting with e86701c5506fc4eb3fa2b24e5ecaf02e8a09e154f13d385b38527106604ffb43 not found: ID does not exist" Jan 26 14:54:54 crc kubenswrapper[4844]: I0126 14:54:54.702401 4844 scope.go:117] "RemoveContainer" containerID="299798774cc0420f771acb9f8dd0474d368eb9ba15333f4eb0ce8b2c4bfea141" Jan 26 14:54:54 crc kubenswrapper[4844]: E0126 14:54:54.702685 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"299798774cc0420f771acb9f8dd0474d368eb9ba15333f4eb0ce8b2c4bfea141\": container with ID starting with 299798774cc0420f771acb9f8dd0474d368eb9ba15333f4eb0ce8b2c4bfea141 not found: ID does not exist" containerID="299798774cc0420f771acb9f8dd0474d368eb9ba15333f4eb0ce8b2c4bfea141" Jan 26 14:54:54 crc kubenswrapper[4844]: I0126 14:54:54.702722 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"299798774cc0420f771acb9f8dd0474d368eb9ba15333f4eb0ce8b2c4bfea141"} err="failed to get container status \"299798774cc0420f771acb9f8dd0474d368eb9ba15333f4eb0ce8b2c4bfea141\": rpc error: code = NotFound desc = could not find container \"299798774cc0420f771acb9f8dd0474d368eb9ba15333f4eb0ce8b2c4bfea141\": container with ID starting with 299798774cc0420f771acb9f8dd0474d368eb9ba15333f4eb0ce8b2c4bfea141 not found: ID does not exist" Jan 26 14:54:54 crc kubenswrapper[4844]: I0126 14:54:54.702740 4844 scope.go:117] "RemoveContainer" containerID="0f7b6c0297d6d74b9e204bc55c31dba6e9f372c2204952a27c0a6383aa03bac3" Jan 26 14:54:54 crc kubenswrapper[4844]: E0126 14:54:54.702968 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f7b6c0297d6d74b9e204bc55c31dba6e9f372c2204952a27c0a6383aa03bac3\": container with ID starting with 0f7b6c0297d6d74b9e204bc55c31dba6e9f372c2204952a27c0a6383aa03bac3 not found: ID does not exist" containerID="0f7b6c0297d6d74b9e204bc55c31dba6e9f372c2204952a27c0a6383aa03bac3" Jan 26 14:54:54 crc kubenswrapper[4844]: I0126 14:54:54.702988 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f7b6c0297d6d74b9e204bc55c31dba6e9f372c2204952a27c0a6383aa03bac3"} err="failed to get container status \"0f7b6c0297d6d74b9e204bc55c31dba6e9f372c2204952a27c0a6383aa03bac3\": rpc error: code = NotFound desc = could not find container \"0f7b6c0297d6d74b9e204bc55c31dba6e9f372c2204952a27c0a6383aa03bac3\": container with ID starting with 0f7b6c0297d6d74b9e204bc55c31dba6e9f372c2204952a27c0a6383aa03bac3 not found: ID does not exist" Jan 26 14:54:55 crc kubenswrapper[4844]: I0126 14:54:55.326349 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="163bb4dc-817f-4696-897a-c1fe4b0f09f8" path="/var/lib/kubelet/pods/163bb4dc-817f-4696-897a-c1fe4b0f09f8/volumes" Jan 26 14:54:58 crc kubenswrapper[4844]: I0126 14:54:58.313258 4844 scope.go:117] "RemoveContainer" containerID="11b408fde86b2eff6057acce3a2882ce486e50f73c743f52ee4ee12015da3c9e" Jan 26 14:54:58 crc kubenswrapper[4844]: E0126 14:54:58.314122 4844 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 14:55:12 crc kubenswrapper[4844]: I0126 14:55:12.313515 4844 scope.go:117] "RemoveContainer" containerID="11b408fde86b2eff6057acce3a2882ce486e50f73c743f52ee4ee12015da3c9e" Jan 26 14:55:12 crc kubenswrapper[4844]: E0126 14:55:12.314305 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 14:55:26 crc kubenswrapper[4844]: I0126 14:55:26.313275 4844 scope.go:117] "RemoveContainer" containerID="11b408fde86b2eff6057acce3a2882ce486e50f73c743f52ee4ee12015da3c9e" Jan 26 14:55:26 crc kubenswrapper[4844]: E0126 14:55:26.314249 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 14:55:38 crc kubenswrapper[4844]: I0126 14:55:38.314359 4844 scope.go:117] "RemoveContainer" containerID="11b408fde86b2eff6057acce3a2882ce486e50f73c743f52ee4ee12015da3c9e" Jan 26 14:55:38 crc kubenswrapper[4844]: E0126 14:55:38.315017 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 14:55:49 crc kubenswrapper[4844]: I0126 14:55:49.873180 4844 scope.go:117] "RemoveContainer" containerID="c16484c2a2d73b25ccbd0d0357d0b8e39f55b6daddea97443aa2f8c6ede64f97" Jan 26 14:55:50 crc kubenswrapper[4844]: I0126 14:55:50.314524 4844 scope.go:117] "RemoveContainer" containerID="11b408fde86b2eff6057acce3a2882ce486e50f73c743f52ee4ee12015da3c9e" Jan 26 14:55:50 crc kubenswrapper[4844]: E0126 14:55:50.315575 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 14:55:51 crc kubenswrapper[4844]: I0126 14:55:51.204687 4844 generic.go:334] "Generic (PLEG): container finished" podID="4c5fbe1a-040b-44a2-8468-00f0a257c5cd" containerID="1aa93d32dba036a28b469ff493e141b09929990cc59a8f50444417a004223539" exitCode=0 Jan 26 14:55:51 crc kubenswrapper[4844]: I0126 
14:55:51.204763 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tcmth/must-gather-jcdp7" event={"ID":"4c5fbe1a-040b-44a2-8468-00f0a257c5cd","Type":"ContainerDied","Data":"1aa93d32dba036a28b469ff493e141b09929990cc59a8f50444417a004223539"} Jan 26 14:55:51 crc kubenswrapper[4844]: I0126 14:55:51.205644 4844 scope.go:117] "RemoveContainer" containerID="1aa93d32dba036a28b469ff493e141b09929990cc59a8f50444417a004223539" Jan 26 14:55:51 crc kubenswrapper[4844]: I0126 14:55:51.302810 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-tcmth_must-gather-jcdp7_4c5fbe1a-040b-44a2-8468-00f0a257c5cd/gather/0.log" Jan 26 14:56:02 crc kubenswrapper[4844]: I0126 14:56:02.313495 4844 scope.go:117] "RemoveContainer" containerID="11b408fde86b2eff6057acce3a2882ce486e50f73c743f52ee4ee12015da3c9e" Jan 26 14:56:02 crc kubenswrapper[4844]: E0126 14:56:02.314468 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 14:56:04 crc kubenswrapper[4844]: I0126 14:56:04.606366 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-tcmth/must-gather-jcdp7"] Jan 26 14:56:04 crc kubenswrapper[4844]: I0126 14:56:04.607794 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-tcmth/must-gather-jcdp7" podUID="4c5fbe1a-040b-44a2-8468-00f0a257c5cd" containerName="copy" containerID="cri-o://d60955e546efc5120e92a9ef2b72f58c7dda0308de6ac30cd1a58707b9ae1a0d" gracePeriod=2 Jan 26 14:56:04 crc kubenswrapper[4844]: I0126 14:56:04.616842 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-tcmth/must-gather-jcdp7"] Jan 26 14:56:05 crc kubenswrapper[4844]: I0126 14:56:05.102383 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-tcmth_must-gather-jcdp7_4c5fbe1a-040b-44a2-8468-00f0a257c5cd/copy/0.log" Jan 26 14:56:05 crc kubenswrapper[4844]: I0126 14:56:05.103187 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-tcmth/must-gather-jcdp7" Jan 26 14:56:05 crc kubenswrapper[4844]: I0126 14:56:05.175481 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gz8q8\" (UniqueName: \"kubernetes.io/projected/4c5fbe1a-040b-44a2-8468-00f0a257c5cd-kube-api-access-gz8q8\") pod \"4c5fbe1a-040b-44a2-8468-00f0a257c5cd\" (UID: \"4c5fbe1a-040b-44a2-8468-00f0a257c5cd\") " Jan 26 14:56:05 crc kubenswrapper[4844]: I0126 14:56:05.175702 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/4c5fbe1a-040b-44a2-8468-00f0a257c5cd-must-gather-output\") pod \"4c5fbe1a-040b-44a2-8468-00f0a257c5cd\" (UID: \"4c5fbe1a-040b-44a2-8468-00f0a257c5cd\") " Jan 26 14:56:05 crc kubenswrapper[4844]: I0126 14:56:05.181216 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c5fbe1a-040b-44a2-8468-00f0a257c5cd-kube-api-access-gz8q8" (OuterVolumeSpecName: "kube-api-access-gz8q8") pod "4c5fbe1a-040b-44a2-8468-00f0a257c5cd" (UID: "4c5fbe1a-040b-44a2-8468-00f0a257c5cd"). InnerVolumeSpecName "kube-api-access-gz8q8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:56:05 crc kubenswrapper[4844]: I0126 14:56:05.278086 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gz8q8\" (UniqueName: \"kubernetes.io/projected/4c5fbe1a-040b-44a2-8468-00f0a257c5cd-kube-api-access-gz8q8\") on node \"crc\" DevicePath \"\"" Jan 26 14:56:05 crc kubenswrapper[4844]: I0126 14:56:05.356278 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4c5fbe1a-040b-44a2-8468-00f0a257c5cd-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "4c5fbe1a-040b-44a2-8468-00f0a257c5cd" (UID: "4c5fbe1a-040b-44a2-8468-00f0a257c5cd"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:56:05 crc kubenswrapper[4844]: I0126 14:56:05.358283 4844 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-tcmth_must-gather-jcdp7_4c5fbe1a-040b-44a2-8468-00f0a257c5cd/copy/0.log" Jan 26 14:56:05 crc kubenswrapper[4844]: I0126 14:56:05.358711 4844 generic.go:334] "Generic (PLEG): container finished" podID="4c5fbe1a-040b-44a2-8468-00f0a257c5cd" containerID="d60955e546efc5120e92a9ef2b72f58c7dda0308de6ac30cd1a58707b9ae1a0d" exitCode=143 Jan 26 14:56:05 crc kubenswrapper[4844]: I0126 14:56:05.358764 4844 scope.go:117] "RemoveContainer" containerID="d60955e546efc5120e92a9ef2b72f58c7dda0308de6ac30cd1a58707b9ae1a0d" Jan 26 14:56:05 crc kubenswrapper[4844]: I0126 14:56:05.358765 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-tcmth/must-gather-jcdp7" Jan 26 14:56:05 crc kubenswrapper[4844]: I0126 14:56:05.380188 4844 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/4c5fbe1a-040b-44a2-8468-00f0a257c5cd-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 26 14:56:05 crc kubenswrapper[4844]: I0126 14:56:05.387104 4844 scope.go:117] "RemoveContainer" containerID="1aa93d32dba036a28b469ff493e141b09929990cc59a8f50444417a004223539" Jan 26 14:56:05 crc kubenswrapper[4844]: I0126 14:56:05.484214 4844 scope.go:117] "RemoveContainer" containerID="d60955e546efc5120e92a9ef2b72f58c7dda0308de6ac30cd1a58707b9ae1a0d" Jan 26 14:56:05 crc kubenswrapper[4844]: E0126 14:56:05.484727 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d60955e546efc5120e92a9ef2b72f58c7dda0308de6ac30cd1a58707b9ae1a0d\": container with ID starting with d60955e546efc5120e92a9ef2b72f58c7dda0308de6ac30cd1a58707b9ae1a0d not found: ID does not exist" containerID="d60955e546efc5120e92a9ef2b72f58c7dda0308de6ac30cd1a58707b9ae1a0d" Jan 26 14:56:05 crc kubenswrapper[4844]: I0126 14:56:05.484763 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d60955e546efc5120e92a9ef2b72f58c7dda0308de6ac30cd1a58707b9ae1a0d"} err="failed to get container status \"d60955e546efc5120e92a9ef2b72f58c7dda0308de6ac30cd1a58707b9ae1a0d\": rpc error: code = NotFound desc = could not find container \"d60955e546efc5120e92a9ef2b72f58c7dda0308de6ac30cd1a58707b9ae1a0d\": container with ID starting with d60955e546efc5120e92a9ef2b72f58c7dda0308de6ac30cd1a58707b9ae1a0d not found: ID does not exist" Jan 26 14:56:05 crc kubenswrapper[4844]: I0126 14:56:05.484786 4844 scope.go:117] "RemoveContainer" containerID="1aa93d32dba036a28b469ff493e141b09929990cc59a8f50444417a004223539" Jan 26 14:56:05 crc kubenswrapper[4844]: E0126 14:56:05.485100 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1aa93d32dba036a28b469ff493e141b09929990cc59a8f50444417a004223539\": container with ID starting with 1aa93d32dba036a28b469ff493e141b09929990cc59a8f50444417a004223539 not found: ID does not exist" containerID="1aa93d32dba036a28b469ff493e141b09929990cc59a8f50444417a004223539" Jan 26 14:56:05 crc kubenswrapper[4844]: I0126 14:56:05.485131 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1aa93d32dba036a28b469ff493e141b09929990cc59a8f50444417a004223539"} err="failed to get container status \"1aa93d32dba036a28b469ff493e141b09929990cc59a8f50444417a004223539\": rpc error: code = NotFound desc = could not find container \"1aa93d32dba036a28b469ff493e141b09929990cc59a8f50444417a004223539\": container with ID starting with 1aa93d32dba036a28b469ff493e141b09929990cc59a8f50444417a004223539 not found: ID does not exist" Jan 26 14:56:07 crc kubenswrapper[4844]: I0126 14:56:07.335281 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c5fbe1a-040b-44a2-8468-00f0a257c5cd" path="/var/lib/kubelet/pods/4c5fbe1a-040b-44a2-8468-00f0a257c5cd/volumes" Jan 26 14:56:13 crc kubenswrapper[4844]: I0126 14:56:13.328318 4844 scope.go:117] "RemoveContainer" containerID="11b408fde86b2eff6057acce3a2882ce486e50f73c743f52ee4ee12015da3c9e" Jan 26 14:56:13 crc kubenswrapper[4844]: E0126 14:56:13.333576 4844 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 14:56:26 crc kubenswrapper[4844]: I0126 14:56:26.314344 4844 scope.go:117] "RemoveContainer" containerID="11b408fde86b2eff6057acce3a2882ce486e50f73c743f52ee4ee12015da3c9e" Jan 26 14:56:26 crc kubenswrapper[4844]: E0126 14:56:26.317452 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 14:56:37 crc kubenswrapper[4844]: I0126 14:56:37.314170 4844 scope.go:117] "RemoveContainer" containerID="11b408fde86b2eff6057acce3a2882ce486e50f73c743f52ee4ee12015da3c9e" Jan 26 14:56:37 crc kubenswrapper[4844]: E0126 14:56:37.315005 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 14:56:48 crc kubenswrapper[4844]: I0126 14:56:48.314333 4844 scope.go:117] "RemoveContainer" containerID="11b408fde86b2eff6057acce3a2882ce486e50f73c743f52ee4ee12015da3c9e" Jan 26 14:56:48 crc kubenswrapper[4844]: E0126 14:56:48.315656 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 14:56:49 crc kubenswrapper[4844]: I0126 14:56:49.988990 4844 scope.go:117] "RemoveContainer" containerID="ac9189989649c27db698fb40330a6308e1590af7d181a0531b0c77aeaef25f1d" Jan 26 14:57:00 crc kubenswrapper[4844]: I0126 14:57:00.314422 4844 scope.go:117] "RemoveContainer" containerID="11b408fde86b2eff6057acce3a2882ce486e50f73c743f52ee4ee12015da3c9e" Jan 26 14:57:00 crc kubenswrapper[4844]: E0126 14:57:00.315184 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 14:57:10 crc kubenswrapper[4844]: I0126 14:57:10.687036 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-zw2ms"] Jan 26 14:57:10 crc kubenswrapper[4844]: E0126 14:57:10.688096 4844 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="163bb4dc-817f-4696-897a-c1fe4b0f09f8" containerName="extract-content" Jan 26 14:57:10 crc kubenswrapper[4844]: I0126 14:57:10.688111 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="163bb4dc-817f-4696-897a-c1fe4b0f09f8" containerName="extract-content" Jan 26 14:57:10 crc kubenswrapper[4844]: E0126 14:57:10.688137 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="163bb4dc-817f-4696-897a-c1fe4b0f09f8" containerName="registry-server" Jan 26 14:57:10 crc kubenswrapper[4844]: I0126 14:57:10.688142 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="163bb4dc-817f-4696-897a-c1fe4b0f09f8" containerName="registry-server" Jan 26 14:57:10 crc kubenswrapper[4844]: E0126 14:57:10.688153 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="163bb4dc-817f-4696-897a-c1fe4b0f09f8" containerName="extract-utilities" Jan 26 14:57:10 crc kubenswrapper[4844]: I0126 14:57:10.688159 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="163bb4dc-817f-4696-897a-c1fe4b0f09f8" containerName="extract-utilities" Jan 26 14:57:10 crc kubenswrapper[4844]: E0126 14:57:10.688187 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c5fbe1a-040b-44a2-8468-00f0a257c5cd" containerName="gather" Jan 26 14:57:10 crc kubenswrapper[4844]: I0126 14:57:10.688193 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c5fbe1a-040b-44a2-8468-00f0a257c5cd" containerName="gather" Jan 26 14:57:10 crc kubenswrapper[4844]: E0126 14:57:10.688204 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c5fbe1a-040b-44a2-8468-00f0a257c5cd" containerName="copy" Jan 26 14:57:10 crc kubenswrapper[4844]: I0126 14:57:10.688210 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c5fbe1a-040b-44a2-8468-00f0a257c5cd" containerName="copy" Jan 26 14:57:10 crc kubenswrapper[4844]: I0126 14:57:10.688428 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c5fbe1a-040b-44a2-8468-00f0a257c5cd" containerName="gather" Jan 26 14:57:10 crc kubenswrapper[4844]: I0126 14:57:10.688444 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c5fbe1a-040b-44a2-8468-00f0a257c5cd" containerName="copy" Jan 26 14:57:10 crc kubenswrapper[4844]: I0126 14:57:10.688462 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="163bb4dc-817f-4696-897a-c1fe4b0f09f8" containerName="registry-server" Jan 26 14:57:10 crc kubenswrapper[4844]: I0126 14:57:10.690047 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zw2ms" Jan 26 14:57:10 crc kubenswrapper[4844]: I0126 14:57:10.708396 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zw2ms"] Jan 26 14:57:10 crc kubenswrapper[4844]: I0126 14:57:10.791933 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/761aab7f-549e-467c-be5d-28d4707f8146-catalog-content\") pod \"community-operators-zw2ms\" (UID: \"761aab7f-549e-467c-be5d-28d4707f8146\") " pod="openshift-marketplace/community-operators-zw2ms" Jan 26 14:57:10 crc kubenswrapper[4844]: I0126 14:57:10.792199 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/761aab7f-549e-467c-be5d-28d4707f8146-utilities\") pod \"community-operators-zw2ms\" (UID: \"761aab7f-549e-467c-be5d-28d4707f8146\") " pod="openshift-marketplace/community-operators-zw2ms" Jan 26 14:57:10 crc kubenswrapper[4844]: I0126 14:57:10.792323 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49454\" (UniqueName: \"kubernetes.io/projected/761aab7f-549e-467c-be5d-28d4707f8146-kube-api-access-49454\") pod \"community-operators-zw2ms\" (UID: \"761aab7f-549e-467c-be5d-28d4707f8146\") " pod="openshift-marketplace/community-operators-zw2ms" Jan 26 14:57:10 crc kubenswrapper[4844]: I0126 14:57:10.894306 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/761aab7f-549e-467c-be5d-28d4707f8146-catalog-content\") pod \"community-operators-zw2ms\" (UID: \"761aab7f-549e-467c-be5d-28d4707f8146\") " pod="openshift-marketplace/community-operators-zw2ms" Jan 26 14:57:10 crc kubenswrapper[4844]: I0126 14:57:10.894464 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/761aab7f-549e-467c-be5d-28d4707f8146-utilities\") pod \"community-operators-zw2ms\" (UID: \"761aab7f-549e-467c-be5d-28d4707f8146\") " pod="openshift-marketplace/community-operators-zw2ms" Jan 26 14:57:10 crc kubenswrapper[4844]: I0126 14:57:10.894531 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49454\" (UniqueName: \"kubernetes.io/projected/761aab7f-549e-467c-be5d-28d4707f8146-kube-api-access-49454\") pod \"community-operators-zw2ms\" (UID: \"761aab7f-549e-467c-be5d-28d4707f8146\") " pod="openshift-marketplace/community-operators-zw2ms" Jan 26 14:57:10 crc kubenswrapper[4844]: I0126 14:57:10.894901 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/761aab7f-549e-467c-be5d-28d4707f8146-catalog-content\") pod \"community-operators-zw2ms\" (UID: \"761aab7f-549e-467c-be5d-28d4707f8146\") " pod="openshift-marketplace/community-operators-zw2ms" Jan 26 14:57:10 crc kubenswrapper[4844]: I0126 14:57:10.895108 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/761aab7f-549e-467c-be5d-28d4707f8146-utilities\") pod \"community-operators-zw2ms\" (UID: \"761aab7f-549e-467c-be5d-28d4707f8146\") " pod="openshift-marketplace/community-operators-zw2ms" Jan 26 14:57:10 crc kubenswrapper[4844]: I0126 14:57:10.921491 4844 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-49454\" (UniqueName: \"kubernetes.io/projected/761aab7f-549e-467c-be5d-28d4707f8146-kube-api-access-49454\") pod \"community-operators-zw2ms\" (UID: \"761aab7f-549e-467c-be5d-28d4707f8146\") " pod="openshift-marketplace/community-operators-zw2ms" Jan 26 14:57:11 crc kubenswrapper[4844]: I0126 14:57:11.016277 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zw2ms" Jan 26 14:57:11 crc kubenswrapper[4844]: I0126 14:57:11.455382 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zw2ms"] Jan 26 14:57:12 crc kubenswrapper[4844]: I0126 14:57:12.096120 4844 generic.go:334] "Generic (PLEG): container finished" podID="761aab7f-549e-467c-be5d-28d4707f8146" containerID="55e2aa316bf0851a535aef3477961a22dac5260bae10a7d8e7af0a61496845df" exitCode=0 Jan 26 14:57:12 crc kubenswrapper[4844]: I0126 14:57:12.096230 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zw2ms" event={"ID":"761aab7f-549e-467c-be5d-28d4707f8146","Type":"ContainerDied","Data":"55e2aa316bf0851a535aef3477961a22dac5260bae10a7d8e7af0a61496845df"} Jan 26 14:57:12 crc kubenswrapper[4844]: I0126 14:57:12.096580 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zw2ms" event={"ID":"761aab7f-549e-467c-be5d-28d4707f8146","Type":"ContainerStarted","Data":"d6cc9939db1920ad88012d29c5624eb90f644643c0c57afe8d6234e3eaaf7fbb"} Jan 26 14:57:14 crc kubenswrapper[4844]: I0126 14:57:14.118726 4844 generic.go:334] "Generic (PLEG): container finished" podID="761aab7f-549e-467c-be5d-28d4707f8146" containerID="b5e87927d2c2f0b6e4346255375a7a634b9b95cee8f4014875f0ebda90e1d617" exitCode=0 Jan 26 14:57:14 crc kubenswrapper[4844]: I0126 14:57:14.118822 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zw2ms" event={"ID":"761aab7f-549e-467c-be5d-28d4707f8146","Type":"ContainerDied","Data":"b5e87927d2c2f0b6e4346255375a7a634b9b95cee8f4014875f0ebda90e1d617"} Jan 26 14:57:14 crc kubenswrapper[4844]: I0126 14:57:14.313943 4844 scope.go:117] "RemoveContainer" containerID="11b408fde86b2eff6057acce3a2882ce486e50f73c743f52ee4ee12015da3c9e" Jan 26 14:57:14 crc kubenswrapper[4844]: E0126 14:57:14.314724 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 14:57:14 crc kubenswrapper[4844]: I0126 14:57:14.572737 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-l5qbm"] Jan 26 14:57:14 crc kubenswrapper[4844]: I0126 14:57:14.576407 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-l5qbm" Jan 26 14:57:14 crc kubenswrapper[4844]: I0126 14:57:14.591466 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-l5qbm"] Jan 26 14:57:14 crc kubenswrapper[4844]: I0126 14:57:14.680944 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec83fc21-f976-4b12-bbde-779ffe7071fd-catalog-content\") pod \"certified-operators-l5qbm\" (UID: \"ec83fc21-f976-4b12-bbde-779ffe7071fd\") " pod="openshift-marketplace/certified-operators-l5qbm" Jan 26 14:57:14 crc kubenswrapper[4844]: I0126 14:57:14.681280 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec83fc21-f976-4b12-bbde-779ffe7071fd-utilities\") pod \"certified-operators-l5qbm\" (UID: \"ec83fc21-f976-4b12-bbde-779ffe7071fd\") " pod="openshift-marketplace/certified-operators-l5qbm" Jan 26 14:57:14 crc kubenswrapper[4844]: I0126 14:57:14.681441 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28kpn\" (UniqueName: \"kubernetes.io/projected/ec83fc21-f976-4b12-bbde-779ffe7071fd-kube-api-access-28kpn\") pod \"certified-operators-l5qbm\" (UID: \"ec83fc21-f976-4b12-bbde-779ffe7071fd\") " pod="openshift-marketplace/certified-operators-l5qbm" Jan 26 14:57:14 crc kubenswrapper[4844]: I0126 14:57:14.783779 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec83fc21-f976-4b12-bbde-779ffe7071fd-utilities\") pod \"certified-operators-l5qbm\" (UID: \"ec83fc21-f976-4b12-bbde-779ffe7071fd\") " pod="openshift-marketplace/certified-operators-l5qbm" Jan 26 14:57:14 crc kubenswrapper[4844]: I0126 14:57:14.783958 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28kpn\" (UniqueName: \"kubernetes.io/projected/ec83fc21-f976-4b12-bbde-779ffe7071fd-kube-api-access-28kpn\") pod \"certified-operators-l5qbm\" (UID: \"ec83fc21-f976-4b12-bbde-779ffe7071fd\") " pod="openshift-marketplace/certified-operators-l5qbm" Jan 26 14:57:14 crc kubenswrapper[4844]: I0126 14:57:14.784047 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec83fc21-f976-4b12-bbde-779ffe7071fd-catalog-content\") pod \"certified-operators-l5qbm\" (UID: \"ec83fc21-f976-4b12-bbde-779ffe7071fd\") " pod="openshift-marketplace/certified-operators-l5qbm" Jan 26 14:57:14 crc kubenswrapper[4844]: I0126 14:57:14.784311 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec83fc21-f976-4b12-bbde-779ffe7071fd-utilities\") pod \"certified-operators-l5qbm\" (UID: \"ec83fc21-f976-4b12-bbde-779ffe7071fd\") " pod="openshift-marketplace/certified-operators-l5qbm" Jan 26 14:57:14 crc kubenswrapper[4844]: I0126 14:57:14.784758 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec83fc21-f976-4b12-bbde-779ffe7071fd-catalog-content\") pod \"certified-operators-l5qbm\" (UID: \"ec83fc21-f976-4b12-bbde-779ffe7071fd\") " pod="openshift-marketplace/certified-operators-l5qbm" Jan 26 14:57:14 crc kubenswrapper[4844]: I0126 14:57:14.813219 4844 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-28kpn\" (UniqueName: \"kubernetes.io/projected/ec83fc21-f976-4b12-bbde-779ffe7071fd-kube-api-access-28kpn\") pod \"certified-operators-l5qbm\" (UID: \"ec83fc21-f976-4b12-bbde-779ffe7071fd\") " pod="openshift-marketplace/certified-operators-l5qbm" Jan 26 14:57:14 crc kubenswrapper[4844]: I0126 14:57:14.905968 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-l5qbm" Jan 26 14:57:15 crc kubenswrapper[4844]: I0126 14:57:15.148395 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zw2ms" event={"ID":"761aab7f-549e-467c-be5d-28d4707f8146","Type":"ContainerStarted","Data":"17cb0faa6996764e97f03360b127e745de5b9822c0137da8481db16cde14a370"} Jan 26 14:57:15 crc kubenswrapper[4844]: I0126 14:57:15.180817 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-zw2ms" podStartSLOduration=2.534872998 podStartE2EDuration="5.180795033s" podCreationTimestamp="2026-01-26 14:57:10 +0000 UTC" firstStartedPulling="2026-01-26 14:57:12.098924502 +0000 UTC m=+8009.032292124" lastFinishedPulling="2026-01-26 14:57:14.744846547 +0000 UTC m=+8011.678214159" observedRunningTime="2026-01-26 14:57:15.1728718 +0000 UTC m=+8012.106239402" watchObservedRunningTime="2026-01-26 14:57:15.180795033 +0000 UTC m=+8012.114162645" Jan 26 14:57:15 crc kubenswrapper[4844]: I0126 14:57:15.436382 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-l5qbm"] Jan 26 14:57:16 crc kubenswrapper[4844]: I0126 14:57:16.161730 4844 generic.go:334] "Generic (PLEG): container finished" podID="ec83fc21-f976-4b12-bbde-779ffe7071fd" containerID="aa5462899c2954abdab5c9a61d4c5b0346ad8425257105fc923ec3160f452535" exitCode=0 Jan 26 14:57:16 crc kubenswrapper[4844]: I0126 14:57:16.161812 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l5qbm" event={"ID":"ec83fc21-f976-4b12-bbde-779ffe7071fd","Type":"ContainerDied","Data":"aa5462899c2954abdab5c9a61d4c5b0346ad8425257105fc923ec3160f452535"} Jan 26 14:57:16 crc kubenswrapper[4844]: I0126 14:57:16.162196 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l5qbm" event={"ID":"ec83fc21-f976-4b12-bbde-779ffe7071fd","Type":"ContainerStarted","Data":"6eb27d74841a06ddb97b63fb7d368b4a2fd78df7cb9bc9d96d13a624b618c06d"} Jan 26 14:57:18 crc kubenswrapper[4844]: I0126 14:57:18.186315 4844 generic.go:334] "Generic (PLEG): container finished" podID="ec83fc21-f976-4b12-bbde-779ffe7071fd" containerID="1fdf10602e78477afabdb9619e915f254e9157fcf3b4aadffc0b0ebf0fd458e6" exitCode=0 Jan 26 14:57:18 crc kubenswrapper[4844]: I0126 14:57:18.186357 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l5qbm" event={"ID":"ec83fc21-f976-4b12-bbde-779ffe7071fd","Type":"ContainerDied","Data":"1fdf10602e78477afabdb9619e915f254e9157fcf3b4aadffc0b0ebf0fd458e6"} Jan 26 14:57:21 crc kubenswrapper[4844]: I0126 14:57:21.016977 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-zw2ms" Jan 26 14:57:21 crc kubenswrapper[4844]: I0126 14:57:21.017627 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-zw2ms" Jan 26 14:57:21 crc kubenswrapper[4844]: I0126 14:57:21.081297 4844 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-zw2ms" Jan 26 14:57:21 crc kubenswrapper[4844]: I0126 14:57:21.222738 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l5qbm" event={"ID":"ec83fc21-f976-4b12-bbde-779ffe7071fd","Type":"ContainerStarted","Data":"c9191f8e01b4542d2f85de6af30086a0b57d61ede58288b9136861519da5de9f"} Jan 26 14:57:21 crc kubenswrapper[4844]: I0126 14:57:21.243213 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-l5qbm" podStartSLOduration=2.48417328 podStartE2EDuration="7.243184031s" podCreationTimestamp="2026-01-26 14:57:14 +0000 UTC" firstStartedPulling="2026-01-26 14:57:16.163787833 +0000 UTC m=+8013.097155445" lastFinishedPulling="2026-01-26 14:57:20.922798574 +0000 UTC m=+8017.856166196" observedRunningTime="2026-01-26 14:57:21.241306594 +0000 UTC m=+8018.174674206" watchObservedRunningTime="2026-01-26 14:57:21.243184031 +0000 UTC m=+8018.176551703" Jan 26 14:57:21 crc kubenswrapper[4844]: I0126 14:57:21.277611 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-zw2ms" Jan 26 14:57:24 crc kubenswrapper[4844]: I0126 14:57:24.907095 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-l5qbm" Jan 26 14:57:24 crc kubenswrapper[4844]: I0126 14:57:24.907586 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-l5qbm" Jan 26 14:57:24 crc kubenswrapper[4844]: I0126 14:57:24.958671 4844 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-l5qbm" Jan 26 14:57:25 crc kubenswrapper[4844]: I0126 14:57:25.672289 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zw2ms"] Jan 26 14:57:25 crc kubenswrapper[4844]: I0126 14:57:25.672837 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-zw2ms" podUID="761aab7f-549e-467c-be5d-28d4707f8146" containerName="registry-server" containerID="cri-o://17cb0faa6996764e97f03360b127e745de5b9822c0137da8481db16cde14a370" gracePeriod=2 Jan 26 14:57:26 crc kubenswrapper[4844]: I0126 14:57:26.198008 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zw2ms" Jan 26 14:57:26 crc kubenswrapper[4844]: I0126 14:57:26.257360 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/761aab7f-549e-467c-be5d-28d4707f8146-catalog-content\") pod \"761aab7f-549e-467c-be5d-28d4707f8146\" (UID: \"761aab7f-549e-467c-be5d-28d4707f8146\") " Jan 26 14:57:26 crc kubenswrapper[4844]: I0126 14:57:26.257713 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-49454\" (UniqueName: \"kubernetes.io/projected/761aab7f-549e-467c-be5d-28d4707f8146-kube-api-access-49454\") pod \"761aab7f-549e-467c-be5d-28d4707f8146\" (UID: \"761aab7f-549e-467c-be5d-28d4707f8146\") " Jan 26 14:57:26 crc kubenswrapper[4844]: I0126 14:57:26.257802 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/761aab7f-549e-467c-be5d-28d4707f8146-utilities\") pod \"761aab7f-549e-467c-be5d-28d4707f8146\" (UID: \"761aab7f-549e-467c-be5d-28d4707f8146\") " Jan 26 14:57:26 crc kubenswrapper[4844]: I0126 14:57:26.258706 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/761aab7f-549e-467c-be5d-28d4707f8146-utilities" (OuterVolumeSpecName: "utilities") pod "761aab7f-549e-467c-be5d-28d4707f8146" (UID: "761aab7f-549e-467c-be5d-28d4707f8146"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:57:26 crc kubenswrapper[4844]: I0126 14:57:26.267713 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/761aab7f-549e-467c-be5d-28d4707f8146-kube-api-access-49454" (OuterVolumeSpecName: "kube-api-access-49454") pod "761aab7f-549e-467c-be5d-28d4707f8146" (UID: "761aab7f-549e-467c-be5d-28d4707f8146"). InnerVolumeSpecName "kube-api-access-49454". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:57:26 crc kubenswrapper[4844]: I0126 14:57:26.275717 4844 generic.go:334] "Generic (PLEG): container finished" podID="761aab7f-549e-467c-be5d-28d4707f8146" containerID="17cb0faa6996764e97f03360b127e745de5b9822c0137da8481db16cde14a370" exitCode=0 Jan 26 14:57:26 crc kubenswrapper[4844]: I0126 14:57:26.275787 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zw2ms" Jan 26 14:57:26 crc kubenswrapper[4844]: I0126 14:57:26.275796 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zw2ms" event={"ID":"761aab7f-549e-467c-be5d-28d4707f8146","Type":"ContainerDied","Data":"17cb0faa6996764e97f03360b127e745de5b9822c0137da8481db16cde14a370"} Jan 26 14:57:26 crc kubenswrapper[4844]: I0126 14:57:26.275903 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zw2ms" event={"ID":"761aab7f-549e-467c-be5d-28d4707f8146","Type":"ContainerDied","Data":"d6cc9939db1920ad88012d29c5624eb90f644643c0c57afe8d6234e3eaaf7fbb"} Jan 26 14:57:26 crc kubenswrapper[4844]: I0126 14:57:26.275922 4844 scope.go:117] "RemoveContainer" containerID="17cb0faa6996764e97f03360b127e745de5b9822c0137da8481db16cde14a370" Jan 26 14:57:26 crc kubenswrapper[4844]: I0126 14:57:26.313100 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/761aab7f-549e-467c-be5d-28d4707f8146-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "761aab7f-549e-467c-be5d-28d4707f8146" (UID: "761aab7f-549e-467c-be5d-28d4707f8146"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:57:26 crc kubenswrapper[4844]: I0126 14:57:26.322923 4844 scope.go:117] "RemoveContainer" containerID="b5e87927d2c2f0b6e4346255375a7a634b9b95cee8f4014875f0ebda90e1d617" Jan 26 14:57:26 crc kubenswrapper[4844]: I0126 14:57:26.338790 4844 scope.go:117] "RemoveContainer" containerID="55e2aa316bf0851a535aef3477961a22dac5260bae10a7d8e7af0a61496845df" Jan 26 14:57:26 crc kubenswrapper[4844]: I0126 14:57:26.360148 4844 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/761aab7f-549e-467c-be5d-28d4707f8146-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 14:57:26 crc kubenswrapper[4844]: I0126 14:57:26.360945 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-49454\" (UniqueName: \"kubernetes.io/projected/761aab7f-549e-467c-be5d-28d4707f8146-kube-api-access-49454\") on node \"crc\" DevicePath \"\"" Jan 26 14:57:26 crc kubenswrapper[4844]: I0126 14:57:26.360964 4844 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/761aab7f-549e-467c-be5d-28d4707f8146-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 14:57:26 crc kubenswrapper[4844]: I0126 14:57:26.382382 4844 scope.go:117] "RemoveContainer" containerID="17cb0faa6996764e97f03360b127e745de5b9822c0137da8481db16cde14a370" Jan 26 14:57:26 crc kubenswrapper[4844]: E0126 14:57:26.382798 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"17cb0faa6996764e97f03360b127e745de5b9822c0137da8481db16cde14a370\": container with ID starting with 17cb0faa6996764e97f03360b127e745de5b9822c0137da8481db16cde14a370 not found: ID does not exist" containerID="17cb0faa6996764e97f03360b127e745de5b9822c0137da8481db16cde14a370" Jan 26 14:57:26 crc kubenswrapper[4844]: I0126 14:57:26.382843 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17cb0faa6996764e97f03360b127e745de5b9822c0137da8481db16cde14a370"} err="failed to get container status \"17cb0faa6996764e97f03360b127e745de5b9822c0137da8481db16cde14a370\": rpc error: code = NotFound desc = could not find container 
\"17cb0faa6996764e97f03360b127e745de5b9822c0137da8481db16cde14a370\": container with ID starting with 17cb0faa6996764e97f03360b127e745de5b9822c0137da8481db16cde14a370 not found: ID does not exist" Jan 26 14:57:26 crc kubenswrapper[4844]: I0126 14:57:26.382873 4844 scope.go:117] "RemoveContainer" containerID="b5e87927d2c2f0b6e4346255375a7a634b9b95cee8f4014875f0ebda90e1d617" Jan 26 14:57:26 crc kubenswrapper[4844]: E0126 14:57:26.383179 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5e87927d2c2f0b6e4346255375a7a634b9b95cee8f4014875f0ebda90e1d617\": container with ID starting with b5e87927d2c2f0b6e4346255375a7a634b9b95cee8f4014875f0ebda90e1d617 not found: ID does not exist" containerID="b5e87927d2c2f0b6e4346255375a7a634b9b95cee8f4014875f0ebda90e1d617" Jan 26 14:57:26 crc kubenswrapper[4844]: I0126 14:57:26.383211 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5e87927d2c2f0b6e4346255375a7a634b9b95cee8f4014875f0ebda90e1d617"} err="failed to get container status \"b5e87927d2c2f0b6e4346255375a7a634b9b95cee8f4014875f0ebda90e1d617\": rpc error: code = NotFound desc = could not find container \"b5e87927d2c2f0b6e4346255375a7a634b9b95cee8f4014875f0ebda90e1d617\": container with ID starting with b5e87927d2c2f0b6e4346255375a7a634b9b95cee8f4014875f0ebda90e1d617 not found: ID does not exist" Jan 26 14:57:26 crc kubenswrapper[4844]: I0126 14:57:26.383240 4844 scope.go:117] "RemoveContainer" containerID="55e2aa316bf0851a535aef3477961a22dac5260bae10a7d8e7af0a61496845df" Jan 26 14:57:26 crc kubenswrapper[4844]: E0126 14:57:26.383615 4844 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55e2aa316bf0851a535aef3477961a22dac5260bae10a7d8e7af0a61496845df\": container with ID starting with 55e2aa316bf0851a535aef3477961a22dac5260bae10a7d8e7af0a61496845df not found: ID does not exist" containerID="55e2aa316bf0851a535aef3477961a22dac5260bae10a7d8e7af0a61496845df" Jan 26 14:57:26 crc kubenswrapper[4844]: I0126 14:57:26.383644 4844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55e2aa316bf0851a535aef3477961a22dac5260bae10a7d8e7af0a61496845df"} err="failed to get container status \"55e2aa316bf0851a535aef3477961a22dac5260bae10a7d8e7af0a61496845df\": rpc error: code = NotFound desc = could not find container \"55e2aa316bf0851a535aef3477961a22dac5260bae10a7d8e7af0a61496845df\": container with ID starting with 55e2aa316bf0851a535aef3477961a22dac5260bae10a7d8e7af0a61496845df not found: ID does not exist" Jan 26 14:57:26 crc kubenswrapper[4844]: I0126 14:57:26.615190 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zw2ms"] Jan 26 14:57:26 crc kubenswrapper[4844]: I0126 14:57:26.623993 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-zw2ms"] Jan 26 14:57:27 crc kubenswrapper[4844]: I0126 14:57:27.328513 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="761aab7f-549e-467c-be5d-28d4707f8146" path="/var/lib/kubelet/pods/761aab7f-549e-467c-be5d-28d4707f8146/volumes" Jan 26 14:57:28 crc kubenswrapper[4844]: I0126 14:57:28.313573 4844 scope.go:117] "RemoveContainer" containerID="11b408fde86b2eff6057acce3a2882ce486e50f73c743f52ee4ee12015da3c9e" Jan 26 14:57:28 crc kubenswrapper[4844]: E0126 14:57:28.314306 4844 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 14:57:34 crc kubenswrapper[4844]: I0126 14:57:34.963049 4844 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-l5qbm" Jan 26 14:57:35 crc kubenswrapper[4844]: I0126 14:57:35.019110 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-l5qbm"] Jan 26 14:57:35 crc kubenswrapper[4844]: I0126 14:57:35.371819 4844 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-l5qbm" podUID="ec83fc21-f976-4b12-bbde-779ffe7071fd" containerName="registry-server" containerID="cri-o://c9191f8e01b4542d2f85de6af30086a0b57d61ede58288b9136861519da5de9f" gracePeriod=2 Jan 26 14:57:36 crc kubenswrapper[4844]: I0126 14:57:36.390270 4844 generic.go:334] "Generic (PLEG): container finished" podID="ec83fc21-f976-4b12-bbde-779ffe7071fd" containerID="c9191f8e01b4542d2f85de6af30086a0b57d61ede58288b9136861519da5de9f" exitCode=0 Jan 26 14:57:36 crc kubenswrapper[4844]: I0126 14:57:36.390352 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l5qbm" event={"ID":"ec83fc21-f976-4b12-bbde-779ffe7071fd","Type":"ContainerDied","Data":"c9191f8e01b4542d2f85de6af30086a0b57d61ede58288b9136861519da5de9f"} Jan 26 14:57:36 crc kubenswrapper[4844]: I0126 14:57:36.950650 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-l5qbm" Jan 26 14:57:36 crc kubenswrapper[4844]: I0126 14:57:36.996243 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec83fc21-f976-4b12-bbde-779ffe7071fd-utilities\") pod \"ec83fc21-f976-4b12-bbde-779ffe7071fd\" (UID: \"ec83fc21-f976-4b12-bbde-779ffe7071fd\") " Jan 26 14:57:36 crc kubenswrapper[4844]: I0126 14:57:36.996588 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec83fc21-f976-4b12-bbde-779ffe7071fd-catalog-content\") pod \"ec83fc21-f976-4b12-bbde-779ffe7071fd\" (UID: \"ec83fc21-f976-4b12-bbde-779ffe7071fd\") " Jan 26 14:57:36 crc kubenswrapper[4844]: I0126 14:57:36.996717 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-28kpn\" (UniqueName: \"kubernetes.io/projected/ec83fc21-f976-4b12-bbde-779ffe7071fd-kube-api-access-28kpn\") pod \"ec83fc21-f976-4b12-bbde-779ffe7071fd\" (UID: \"ec83fc21-f976-4b12-bbde-779ffe7071fd\") " Jan 26 14:57:37 crc kubenswrapper[4844]: I0126 14:57:37.001330 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec83fc21-f976-4b12-bbde-779ffe7071fd-utilities" (OuterVolumeSpecName: "utilities") pod "ec83fc21-f976-4b12-bbde-779ffe7071fd" (UID: "ec83fc21-f976-4b12-bbde-779ffe7071fd"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:57:37 crc kubenswrapper[4844]: I0126 14:57:37.003886 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec83fc21-f976-4b12-bbde-779ffe7071fd-kube-api-access-28kpn" (OuterVolumeSpecName: "kube-api-access-28kpn") pod "ec83fc21-f976-4b12-bbde-779ffe7071fd" (UID: "ec83fc21-f976-4b12-bbde-779ffe7071fd"). InnerVolumeSpecName "kube-api-access-28kpn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:57:37 crc kubenswrapper[4844]: I0126 14:57:37.068778 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec83fc21-f976-4b12-bbde-779ffe7071fd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ec83fc21-f976-4b12-bbde-779ffe7071fd" (UID: "ec83fc21-f976-4b12-bbde-779ffe7071fd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:57:37 crc kubenswrapper[4844]: I0126 14:57:37.099725 4844 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec83fc21-f976-4b12-bbde-779ffe7071fd-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 14:57:37 crc kubenswrapper[4844]: I0126 14:57:37.099765 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-28kpn\" (UniqueName: \"kubernetes.io/projected/ec83fc21-f976-4b12-bbde-779ffe7071fd-kube-api-access-28kpn\") on node \"crc\" DevicePath \"\"" Jan 26 14:57:37 crc kubenswrapper[4844]: I0126 14:57:37.099786 4844 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec83fc21-f976-4b12-bbde-779ffe7071fd-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 14:57:37 crc kubenswrapper[4844]: I0126 14:57:37.403926 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l5qbm" event={"ID":"ec83fc21-f976-4b12-bbde-779ffe7071fd","Type":"ContainerDied","Data":"6eb27d74841a06ddb97b63fb7d368b4a2fd78df7cb9bc9d96d13a624b618c06d"} Jan 26 14:57:37 crc kubenswrapper[4844]: I0126 14:57:37.403995 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-l5qbm" Jan 26 14:57:37 crc kubenswrapper[4844]: I0126 14:57:37.404011 4844 scope.go:117] "RemoveContainer" containerID="c9191f8e01b4542d2f85de6af30086a0b57d61ede58288b9136861519da5de9f" Jan 26 14:57:37 crc kubenswrapper[4844]: I0126 14:57:37.438894 4844 scope.go:117] "RemoveContainer" containerID="1fdf10602e78477afabdb9619e915f254e9157fcf3b4aadffc0b0ebf0fd458e6" Jan 26 14:57:37 crc kubenswrapper[4844]: I0126 14:57:37.443134 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-l5qbm"] Jan 26 14:57:37 crc kubenswrapper[4844]: I0126 14:57:37.452834 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-l5qbm"] Jan 26 14:57:37 crc kubenswrapper[4844]: I0126 14:57:37.466689 4844 scope.go:117] "RemoveContainer" containerID="aa5462899c2954abdab5c9a61d4c5b0346ad8425257105fc923ec3160f452535" Jan 26 14:57:39 crc kubenswrapper[4844]: I0126 14:57:39.324973 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec83fc21-f976-4b12-bbde-779ffe7071fd" path="/var/lib/kubelet/pods/ec83fc21-f976-4b12-bbde-779ffe7071fd/volumes" Jan 26 14:57:43 crc kubenswrapper[4844]: I0126 14:57:43.330632 4844 scope.go:117] "RemoveContainer" containerID="11b408fde86b2eff6057acce3a2882ce486e50f73c743f52ee4ee12015da3c9e" Jan 26 14:57:43 crc kubenswrapper[4844]: E0126 14:57:43.331548 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 14:57:57 crc kubenswrapper[4844]: I0126 14:57:57.314190 4844 scope.go:117] "RemoveContainer" containerID="11b408fde86b2eff6057acce3a2882ce486e50f73c743f52ee4ee12015da3c9e" Jan 26 14:57:57 crc kubenswrapper[4844]: E0126 14:57:57.315371 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 14:58:09 crc kubenswrapper[4844]: I0126 14:58:09.314492 4844 scope.go:117] "RemoveContainer" containerID="11b408fde86b2eff6057acce3a2882ce486e50f73c743f52ee4ee12015da3c9e" Jan 26 14:58:09 crc kubenswrapper[4844]: E0126 14:58:09.315594 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 14:58:21 crc kubenswrapper[4844]: I0126 14:58:21.314252 4844 scope.go:117] "RemoveContainer" containerID="11b408fde86b2eff6057acce3a2882ce486e50f73c743f52ee4ee12015da3c9e" Jan 26 14:58:21 crc kubenswrapper[4844]: E0126 14:58:21.315375 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 14:58:32 crc kubenswrapper[4844]: I0126 14:58:32.314076 4844 scope.go:117] "RemoveContainer" containerID="11b408fde86b2eff6057acce3a2882ce486e50f73c743f52ee4ee12015da3c9e" Jan 26 14:58:32 crc kubenswrapper[4844]: E0126 14:58:32.315654 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 14:58:44 crc kubenswrapper[4844]: I0126 14:58:44.314167 4844 scope.go:117] "RemoveContainer" containerID="11b408fde86b2eff6057acce3a2882ce486e50f73c743f52ee4ee12015da3c9e" Jan 26 14:58:44 crc kubenswrapper[4844]: E0126 14:58:44.314943 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 14:58:59 crc kubenswrapper[4844]: I0126 14:58:59.313622 4844 scope.go:117] "RemoveContainer" containerID="11b408fde86b2eff6057acce3a2882ce486e50f73c743f52ee4ee12015da3c9e" Jan 26 14:58:59 crc kubenswrapper[4844]: E0126 14:58:59.314588 4844 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j7r9j_openshift-machine-config-operator(e3602fc7-397b-4d73-ab0c-45acc047397b)\"" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" Jan 26 14:59:11 crc kubenswrapper[4844]: I0126 14:59:11.313282 4844 scope.go:117] "RemoveContainer" containerID="11b408fde86b2eff6057acce3a2882ce486e50f73c743f52ee4ee12015da3c9e" Jan 26 14:59:12 crc kubenswrapper[4844]: I0126 14:59:12.478198 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" event={"ID":"e3602fc7-397b-4d73-ab0c-45acc047397b","Type":"ContainerStarted","Data":"3b15fa872b79fad8ac9ccaa383c2445ac8e3af9e8dc6bfce21fac291ca22707b"} Jan 26 15:00:00 crc kubenswrapper[4844]: I0126 15:00:00.183183 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490660-fzvww"] Jan 26 15:00:00 crc kubenswrapper[4844]: E0126 15:00:00.183948 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec83fc21-f976-4b12-bbde-779ffe7071fd" containerName="registry-server" Jan 26 15:00:00 crc kubenswrapper[4844]: I0126 15:00:00.183960 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec83fc21-f976-4b12-bbde-779ffe7071fd" containerName="registry-server" Jan 26 15:00:00 crc kubenswrapper[4844]: E0126 15:00:00.183977 4844 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec83fc21-f976-4b12-bbde-779ffe7071fd" containerName="extract-content" Jan 26 15:00:00 crc kubenswrapper[4844]: I0126 15:00:00.183983 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec83fc21-f976-4b12-bbde-779ffe7071fd" containerName="extract-content" Jan 26 15:00:00 crc kubenswrapper[4844]: E0126 15:00:00.183997 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec83fc21-f976-4b12-bbde-779ffe7071fd" containerName="extract-utilities" Jan 26 15:00:00 crc kubenswrapper[4844]: I0126 15:00:00.184004 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec83fc21-f976-4b12-bbde-779ffe7071fd" containerName="extract-utilities" Jan 26 15:00:00 crc kubenswrapper[4844]: E0126 15:00:00.184024 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="761aab7f-549e-467c-be5d-28d4707f8146" containerName="extract-utilities" Jan 26 15:00:00 crc kubenswrapper[4844]: I0126 15:00:00.184030 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="761aab7f-549e-467c-be5d-28d4707f8146" containerName="extract-utilities" Jan 26 15:00:00 crc kubenswrapper[4844]: E0126 15:00:00.184043 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="761aab7f-549e-467c-be5d-28d4707f8146" containerName="extract-content" Jan 26 15:00:00 crc kubenswrapper[4844]: I0126 15:00:00.184049 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="761aab7f-549e-467c-be5d-28d4707f8146" containerName="extract-content" Jan 26 15:00:00 crc kubenswrapper[4844]: E0126 15:00:00.184057 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="761aab7f-549e-467c-be5d-28d4707f8146" containerName="registry-server" Jan 26 15:00:00 crc kubenswrapper[4844]: I0126 15:00:00.184063 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="761aab7f-549e-467c-be5d-28d4707f8146" containerName="registry-server" Jan 26 15:00:00 crc kubenswrapper[4844]: I0126 15:00:00.184240 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec83fc21-f976-4b12-bbde-779ffe7071fd" containerName="registry-server" Jan 26 15:00:00 crc kubenswrapper[4844]: I0126 15:00:00.184328 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="761aab7f-549e-467c-be5d-28d4707f8146" containerName="registry-server" Jan 26 15:00:00 crc kubenswrapper[4844]: I0126 15:00:00.184996 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490660-fzvww" Jan 26 15:00:00 crc kubenswrapper[4844]: I0126 15:00:00.187464 4844 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 26 15:00:00 crc kubenswrapper[4844]: I0126 15:00:00.187463 4844 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 26 15:00:00 crc kubenswrapper[4844]: I0126 15:00:00.201650 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490660-fzvww"] Jan 26 15:00:00 crc kubenswrapper[4844]: I0126 15:00:00.224023 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfndl\" (UniqueName: \"kubernetes.io/projected/685ace2b-3309-4802-aa05-e3e34327136d-kube-api-access-rfndl\") pod \"collect-profiles-29490660-fzvww\" (UID: \"685ace2b-3309-4802-aa05-e3e34327136d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490660-fzvww" Jan 26 15:00:00 crc kubenswrapper[4844]: I0126 15:00:00.224113 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/685ace2b-3309-4802-aa05-e3e34327136d-secret-volume\") pod \"collect-profiles-29490660-fzvww\" (UID: \"685ace2b-3309-4802-aa05-e3e34327136d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490660-fzvww" Jan 26 15:00:00 crc kubenswrapper[4844]: I0126 15:00:00.224341 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/685ace2b-3309-4802-aa05-e3e34327136d-config-volume\") pod \"collect-profiles-29490660-fzvww\" (UID: \"685ace2b-3309-4802-aa05-e3e34327136d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490660-fzvww" Jan 26 15:00:00 crc kubenswrapper[4844]: I0126 15:00:00.326132 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfndl\" (UniqueName: \"kubernetes.io/projected/685ace2b-3309-4802-aa05-e3e34327136d-kube-api-access-rfndl\") pod \"collect-profiles-29490660-fzvww\" (UID: \"685ace2b-3309-4802-aa05-e3e34327136d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490660-fzvww" Jan 26 15:00:00 crc kubenswrapper[4844]: I0126 15:00:00.326179 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/685ace2b-3309-4802-aa05-e3e34327136d-secret-volume\") pod \"collect-profiles-29490660-fzvww\" (UID: \"685ace2b-3309-4802-aa05-e3e34327136d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490660-fzvww" Jan 26 15:00:00 crc kubenswrapper[4844]: I0126 15:00:00.326369 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/685ace2b-3309-4802-aa05-e3e34327136d-config-volume\") pod \"collect-profiles-29490660-fzvww\" (UID: \"685ace2b-3309-4802-aa05-e3e34327136d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490660-fzvww" Jan 26 15:00:00 crc kubenswrapper[4844]: I0126 15:00:00.327873 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/685ace2b-3309-4802-aa05-e3e34327136d-config-volume\") pod 
\"collect-profiles-29490660-fzvww\" (UID: \"685ace2b-3309-4802-aa05-e3e34327136d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490660-fzvww" Jan 26 15:00:00 crc kubenswrapper[4844]: I0126 15:00:00.334456 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/685ace2b-3309-4802-aa05-e3e34327136d-secret-volume\") pod \"collect-profiles-29490660-fzvww\" (UID: \"685ace2b-3309-4802-aa05-e3e34327136d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490660-fzvww" Jan 26 15:00:00 crc kubenswrapper[4844]: I0126 15:00:00.344699 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfndl\" (UniqueName: \"kubernetes.io/projected/685ace2b-3309-4802-aa05-e3e34327136d-kube-api-access-rfndl\") pod \"collect-profiles-29490660-fzvww\" (UID: \"685ace2b-3309-4802-aa05-e3e34327136d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490660-fzvww" Jan 26 15:00:00 crc kubenswrapper[4844]: I0126 15:00:00.519055 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490660-fzvww" Jan 26 15:00:00 crc kubenswrapper[4844]: I0126 15:00:00.993203 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490660-fzvww"] Jan 26 15:00:01 crc kubenswrapper[4844]: I0126 15:00:01.944710 4844 generic.go:334] "Generic (PLEG): container finished" podID="685ace2b-3309-4802-aa05-e3e34327136d" containerID="8a63f4a2c43416318a2c1063f2a3d8e77d6fe2678d7fffeab989ced68c6da173" exitCode=0 Jan 26 15:00:01 crc kubenswrapper[4844]: I0126 15:00:01.944808 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490660-fzvww" event={"ID":"685ace2b-3309-4802-aa05-e3e34327136d","Type":"ContainerDied","Data":"8a63f4a2c43416318a2c1063f2a3d8e77d6fe2678d7fffeab989ced68c6da173"} Jan 26 15:00:01 crc kubenswrapper[4844]: I0126 15:00:01.945017 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490660-fzvww" event={"ID":"685ace2b-3309-4802-aa05-e3e34327136d","Type":"ContainerStarted","Data":"5a673ebfb226970bfeb8924cd93f943ab837d5e6266765db5afc87db8fb656d2"} Jan 26 15:00:03 crc kubenswrapper[4844]: I0126 15:00:03.482533 4844 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490660-fzvww" Jan 26 15:00:03 crc kubenswrapper[4844]: I0126 15:00:03.602370 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/685ace2b-3309-4802-aa05-e3e34327136d-secret-volume\") pod \"685ace2b-3309-4802-aa05-e3e34327136d\" (UID: \"685ace2b-3309-4802-aa05-e3e34327136d\") " Jan 26 15:00:03 crc kubenswrapper[4844]: I0126 15:00:03.602458 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rfndl\" (UniqueName: \"kubernetes.io/projected/685ace2b-3309-4802-aa05-e3e34327136d-kube-api-access-rfndl\") pod \"685ace2b-3309-4802-aa05-e3e34327136d\" (UID: \"685ace2b-3309-4802-aa05-e3e34327136d\") " Jan 26 15:00:03 crc kubenswrapper[4844]: I0126 15:00:03.602773 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/685ace2b-3309-4802-aa05-e3e34327136d-config-volume\") pod \"685ace2b-3309-4802-aa05-e3e34327136d\" (UID: \"685ace2b-3309-4802-aa05-e3e34327136d\") " Jan 26 15:00:03 crc kubenswrapper[4844]: I0126 15:00:03.603785 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/685ace2b-3309-4802-aa05-e3e34327136d-config-volume" (OuterVolumeSpecName: "config-volume") pod "685ace2b-3309-4802-aa05-e3e34327136d" (UID: "685ace2b-3309-4802-aa05-e3e34327136d"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:00:03 crc kubenswrapper[4844]: I0126 15:00:03.609158 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/685ace2b-3309-4802-aa05-e3e34327136d-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "685ace2b-3309-4802-aa05-e3e34327136d" (UID: "685ace2b-3309-4802-aa05-e3e34327136d"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:00:03 crc kubenswrapper[4844]: I0126 15:00:03.610761 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/685ace2b-3309-4802-aa05-e3e34327136d-kube-api-access-rfndl" (OuterVolumeSpecName: "kube-api-access-rfndl") pod "685ace2b-3309-4802-aa05-e3e34327136d" (UID: "685ace2b-3309-4802-aa05-e3e34327136d"). InnerVolumeSpecName "kube-api-access-rfndl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:00:03 crc kubenswrapper[4844]: I0126 15:00:03.704707 4844 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/685ace2b-3309-4802-aa05-e3e34327136d-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 15:00:03 crc kubenswrapper[4844]: I0126 15:00:03.704741 4844 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/685ace2b-3309-4802-aa05-e3e34327136d-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 15:00:03 crc kubenswrapper[4844]: I0126 15:00:03.704754 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rfndl\" (UniqueName: \"kubernetes.io/projected/685ace2b-3309-4802-aa05-e3e34327136d-kube-api-access-rfndl\") on node \"crc\" DevicePath \"\"" Jan 26 15:00:03 crc kubenswrapper[4844]: I0126 15:00:03.964072 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490660-fzvww" event={"ID":"685ace2b-3309-4802-aa05-e3e34327136d","Type":"ContainerDied","Data":"5a673ebfb226970bfeb8924cd93f943ab837d5e6266765db5afc87db8fb656d2"} Jan 26 15:00:03 crc kubenswrapper[4844]: I0126 15:00:03.964126 4844 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5a673ebfb226970bfeb8924cd93f943ab837d5e6266765db5afc87db8fb656d2" Jan 26 15:00:03 crc kubenswrapper[4844]: I0126 15:00:03.964125 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490660-fzvww" Jan 26 15:00:04 crc kubenswrapper[4844]: I0126 15:00:04.571160 4844 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490615-wrnkj"] Jan 26 15:00:04 crc kubenswrapper[4844]: I0126 15:00:04.581766 4844 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490615-wrnkj"] Jan 26 15:00:05 crc kubenswrapper[4844]: I0126 15:00:05.329159 4844 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c24428f-8915-4e8d-b054-14f7df0caa5b" path="/var/lib/kubelet/pods/0c24428f-8915-4e8d-b054-14f7df0caa5b/volumes" Jan 26 15:00:50 crc kubenswrapper[4844]: I0126 15:00:50.174265 4844 scope.go:117] "RemoveContainer" containerID="9901ab87665d38878089316bf456d790b44b9234e745133b6096d0af5eacf574" Jan 26 15:01:00 crc kubenswrapper[4844]: I0126 15:01:00.183701 4844 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29490661-m8cw8"] Jan 26 15:01:00 crc kubenswrapper[4844]: E0126 15:01:00.184933 4844 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="685ace2b-3309-4802-aa05-e3e34327136d" containerName="collect-profiles" Jan 26 15:01:00 crc kubenswrapper[4844]: I0126 15:01:00.184958 4844 state_mem.go:107] "Deleted CPUSet assignment" podUID="685ace2b-3309-4802-aa05-e3e34327136d" containerName="collect-profiles" Jan 26 15:01:00 crc kubenswrapper[4844]: I0126 15:01:00.185241 4844 memory_manager.go:354] "RemoveStaleState removing state" podUID="685ace2b-3309-4802-aa05-e3e34327136d" containerName="collect-profiles" Jan 26 15:01:00 crc kubenswrapper[4844]: I0126 15:01:00.186344 4844 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29490661-m8cw8" Jan 26 15:01:00 crc kubenswrapper[4844]: I0126 15:01:00.199332 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29490661-m8cw8"] Jan 26 15:01:00 crc kubenswrapper[4844]: I0126 15:01:00.318298 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6n5b\" (UniqueName: \"kubernetes.io/projected/2f8e728f-b5c3-4905-aa9e-90f4aba7f482-kube-api-access-q6n5b\") pod \"keystone-cron-29490661-m8cw8\" (UID: \"2f8e728f-b5c3-4905-aa9e-90f4aba7f482\") " pod="openstack/keystone-cron-29490661-m8cw8" Jan 26 15:01:00 crc kubenswrapper[4844]: I0126 15:01:00.318549 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2f8e728f-b5c3-4905-aa9e-90f4aba7f482-fernet-keys\") pod \"keystone-cron-29490661-m8cw8\" (UID: \"2f8e728f-b5c3-4905-aa9e-90f4aba7f482\") " pod="openstack/keystone-cron-29490661-m8cw8" Jan 26 15:01:00 crc kubenswrapper[4844]: I0126 15:01:00.318791 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f8e728f-b5c3-4905-aa9e-90f4aba7f482-combined-ca-bundle\") pod \"keystone-cron-29490661-m8cw8\" (UID: \"2f8e728f-b5c3-4905-aa9e-90f4aba7f482\") " pod="openstack/keystone-cron-29490661-m8cw8" Jan 26 15:01:00 crc kubenswrapper[4844]: I0126 15:01:00.318847 4844 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f8e728f-b5c3-4905-aa9e-90f4aba7f482-config-data\") pod \"keystone-cron-29490661-m8cw8\" (UID: \"2f8e728f-b5c3-4905-aa9e-90f4aba7f482\") " pod="openstack/keystone-cron-29490661-m8cw8" Jan 26 15:01:00 crc kubenswrapper[4844]: I0126 15:01:00.421482 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6n5b\" (UniqueName: \"kubernetes.io/projected/2f8e728f-b5c3-4905-aa9e-90f4aba7f482-kube-api-access-q6n5b\") pod \"keystone-cron-29490661-m8cw8\" (UID: \"2f8e728f-b5c3-4905-aa9e-90f4aba7f482\") " pod="openstack/keystone-cron-29490661-m8cw8" Jan 26 15:01:00 crc kubenswrapper[4844]: I0126 15:01:00.421642 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2f8e728f-b5c3-4905-aa9e-90f4aba7f482-fernet-keys\") pod \"keystone-cron-29490661-m8cw8\" (UID: \"2f8e728f-b5c3-4905-aa9e-90f4aba7f482\") " pod="openstack/keystone-cron-29490661-m8cw8" Jan 26 15:01:00 crc kubenswrapper[4844]: I0126 15:01:00.421767 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f8e728f-b5c3-4905-aa9e-90f4aba7f482-combined-ca-bundle\") pod \"keystone-cron-29490661-m8cw8\" (UID: \"2f8e728f-b5c3-4905-aa9e-90f4aba7f482\") " pod="openstack/keystone-cron-29490661-m8cw8" Jan 26 15:01:00 crc kubenswrapper[4844]: I0126 15:01:00.421809 4844 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f8e728f-b5c3-4905-aa9e-90f4aba7f482-config-data\") pod \"keystone-cron-29490661-m8cw8\" (UID: \"2f8e728f-b5c3-4905-aa9e-90f4aba7f482\") " pod="openstack/keystone-cron-29490661-m8cw8" Jan 26 15:01:00 crc kubenswrapper[4844]: I0126 15:01:00.429341 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f8e728f-b5c3-4905-aa9e-90f4aba7f482-config-data\") pod \"keystone-cron-29490661-m8cw8\" (UID: \"2f8e728f-b5c3-4905-aa9e-90f4aba7f482\") " pod="openstack/keystone-cron-29490661-m8cw8" Jan 26 15:01:00 crc kubenswrapper[4844]: I0126 15:01:00.429647 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f8e728f-b5c3-4905-aa9e-90f4aba7f482-combined-ca-bundle\") pod \"keystone-cron-29490661-m8cw8\" (UID: \"2f8e728f-b5c3-4905-aa9e-90f4aba7f482\") " pod="openstack/keystone-cron-29490661-m8cw8" Jan 26 15:01:00 crc kubenswrapper[4844]: I0126 15:01:00.430033 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2f8e728f-b5c3-4905-aa9e-90f4aba7f482-fernet-keys\") pod \"keystone-cron-29490661-m8cw8\" (UID: \"2f8e728f-b5c3-4905-aa9e-90f4aba7f482\") " pod="openstack/keystone-cron-29490661-m8cw8" Jan 26 15:01:00 crc kubenswrapper[4844]: I0126 15:01:00.443967 4844 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6n5b\" (UniqueName: \"kubernetes.io/projected/2f8e728f-b5c3-4905-aa9e-90f4aba7f482-kube-api-access-q6n5b\") pod \"keystone-cron-29490661-m8cw8\" (UID: \"2f8e728f-b5c3-4905-aa9e-90f4aba7f482\") " pod="openstack/keystone-cron-29490661-m8cw8" Jan 26 15:01:00 crc kubenswrapper[4844]: I0126 15:01:00.524465 4844 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29490661-m8cw8" Jan 26 15:01:00 crc kubenswrapper[4844]: I0126 15:01:00.967889 4844 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29490661-m8cw8"] Jan 26 15:01:00 crc kubenswrapper[4844]: W0126 15:01:00.970038 4844 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2f8e728f_b5c3_4905_aa9e_90f4aba7f482.slice/crio-08cc2623f8d02e2048427be5c55c4941d8c2d7ebced931cea30cc585ecacc6cb WatchSource:0}: Error finding container 08cc2623f8d02e2048427be5c55c4941d8c2d7ebced931cea30cc585ecacc6cb: Status 404 returned error can't find the container with id 08cc2623f8d02e2048427be5c55c4941d8c2d7ebced931cea30cc585ecacc6cb Jan 26 15:01:01 crc kubenswrapper[4844]: I0126 15:01:01.587577 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29490661-m8cw8" event={"ID":"2f8e728f-b5c3-4905-aa9e-90f4aba7f482","Type":"ContainerStarted","Data":"795201a561a2b16d74dda32ad4356144e3f215cced7484164559754d5a571be4"} Jan 26 15:01:01 crc kubenswrapper[4844]: I0126 15:01:01.587929 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29490661-m8cw8" event={"ID":"2f8e728f-b5c3-4905-aa9e-90f4aba7f482","Type":"ContainerStarted","Data":"08cc2623f8d02e2048427be5c55c4941d8c2d7ebced931cea30cc585ecacc6cb"} Jan 26 15:01:01 crc kubenswrapper[4844]: I0126 15:01:01.613566 4844 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29490661-m8cw8" podStartSLOduration=1.613543138 podStartE2EDuration="1.613543138s" podCreationTimestamp="2026-01-26 15:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:01:01.604920689 +0000 UTC m=+8238.538288301" watchObservedRunningTime="2026-01-26 15:01:01.613543138 +0000 UTC m=+8238.546910770" Jan 26 15:01:05 crc kubenswrapper[4844]: I0126 15:01:05.639466 4844 
generic.go:334] "Generic (PLEG): container finished" podID="2f8e728f-b5c3-4905-aa9e-90f4aba7f482" containerID="795201a561a2b16d74dda32ad4356144e3f215cced7484164559754d5a571be4" exitCode=0 Jan 26 15:01:05 crc kubenswrapper[4844]: I0126 15:01:05.639633 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29490661-m8cw8" event={"ID":"2f8e728f-b5c3-4905-aa9e-90f4aba7f482","Type":"ContainerDied","Data":"795201a561a2b16d74dda32ad4356144e3f215cced7484164559754d5a571be4"} Jan 26 15:01:07 crc kubenswrapper[4844]: I0126 15:01:07.146053 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29490661-m8cw8" Jan 26 15:01:07 crc kubenswrapper[4844]: I0126 15:01:07.281446 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f8e728f-b5c3-4905-aa9e-90f4aba7f482-config-data\") pod \"2f8e728f-b5c3-4905-aa9e-90f4aba7f482\" (UID: \"2f8e728f-b5c3-4905-aa9e-90f4aba7f482\") " Jan 26 15:01:07 crc kubenswrapper[4844]: I0126 15:01:07.281907 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q6n5b\" (UniqueName: \"kubernetes.io/projected/2f8e728f-b5c3-4905-aa9e-90f4aba7f482-kube-api-access-q6n5b\") pod \"2f8e728f-b5c3-4905-aa9e-90f4aba7f482\" (UID: \"2f8e728f-b5c3-4905-aa9e-90f4aba7f482\") " Jan 26 15:01:07 crc kubenswrapper[4844]: I0126 15:01:07.282205 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2f8e728f-b5c3-4905-aa9e-90f4aba7f482-fernet-keys\") pod \"2f8e728f-b5c3-4905-aa9e-90f4aba7f482\" (UID: \"2f8e728f-b5c3-4905-aa9e-90f4aba7f482\") " Jan 26 15:01:07 crc kubenswrapper[4844]: I0126 15:01:07.282453 4844 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f8e728f-b5c3-4905-aa9e-90f4aba7f482-combined-ca-bundle\") pod \"2f8e728f-b5c3-4905-aa9e-90f4aba7f482\" (UID: \"2f8e728f-b5c3-4905-aa9e-90f4aba7f482\") " Jan 26 15:01:07 crc kubenswrapper[4844]: I0126 15:01:07.288457 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f8e728f-b5c3-4905-aa9e-90f4aba7f482-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "2f8e728f-b5c3-4905-aa9e-90f4aba7f482" (UID: "2f8e728f-b5c3-4905-aa9e-90f4aba7f482"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:01:07 crc kubenswrapper[4844]: I0126 15:01:07.310213 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f8e728f-b5c3-4905-aa9e-90f4aba7f482-kube-api-access-q6n5b" (OuterVolumeSpecName: "kube-api-access-q6n5b") pod "2f8e728f-b5c3-4905-aa9e-90f4aba7f482" (UID: "2f8e728f-b5c3-4905-aa9e-90f4aba7f482"). InnerVolumeSpecName "kube-api-access-q6n5b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:01:07 crc kubenswrapper[4844]: I0126 15:01:07.333286 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f8e728f-b5c3-4905-aa9e-90f4aba7f482-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2f8e728f-b5c3-4905-aa9e-90f4aba7f482" (UID: "2f8e728f-b5c3-4905-aa9e-90f4aba7f482"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:01:07 crc kubenswrapper[4844]: I0126 15:01:07.353193 4844 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f8e728f-b5c3-4905-aa9e-90f4aba7f482-config-data" (OuterVolumeSpecName: "config-data") pod "2f8e728f-b5c3-4905-aa9e-90f4aba7f482" (UID: "2f8e728f-b5c3-4905-aa9e-90f4aba7f482"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:01:07 crc kubenswrapper[4844]: I0126 15:01:07.385229 4844 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2f8e728f-b5c3-4905-aa9e-90f4aba7f482-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 26 15:01:07 crc kubenswrapper[4844]: I0126 15:01:07.385272 4844 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f8e728f-b5c3-4905-aa9e-90f4aba7f482-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 15:01:07 crc kubenswrapper[4844]: I0126 15:01:07.385292 4844 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f8e728f-b5c3-4905-aa9e-90f4aba7f482-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 15:01:07 crc kubenswrapper[4844]: I0126 15:01:07.385308 4844 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q6n5b\" (UniqueName: \"kubernetes.io/projected/2f8e728f-b5c3-4905-aa9e-90f4aba7f482-kube-api-access-q6n5b\") on node \"crc\" DevicePath \"\"" Jan 26 15:01:07 crc kubenswrapper[4844]: I0126 15:01:07.659716 4844 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29490661-m8cw8" event={"ID":"2f8e728f-b5c3-4905-aa9e-90f4aba7f482","Type":"ContainerDied","Data":"08cc2623f8d02e2048427be5c55c4941d8c2d7ebced931cea30cc585ecacc6cb"} Jan 26 15:01:07 crc kubenswrapper[4844]: I0126 15:01:07.659771 4844 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="08cc2623f8d02e2048427be5c55c4941d8c2d7ebced931cea30cc585ecacc6cb" Jan 26 15:01:07 crc kubenswrapper[4844]: I0126 15:01:07.659831 4844 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29490661-m8cw8" Jan 26 15:01:36 crc kubenswrapper[4844]: I0126 15:01:36.365451 4844 patch_prober.go:28] interesting pod/machine-config-daemon-j7r9j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 15:01:36 crc kubenswrapper[4844]: I0126 15:01:36.366223 4844 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j7r9j" podUID="e3602fc7-397b-4d73-ab0c-45acc047397b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"